This application claims priority under 35 U.S.C. §119(a) to a Korean Patent Application filed in the Korean Intellectual Property Office on Dec. 26, 2014 and assigned Serial No. 10-2014-0190724, the content of which is incorporated herein by reference.
1. Field of the Disclosure
The present disclosure relates generally to a security system, and more particularly, to a method and an apparatus for operating a security system.
2. Description of the Related Art
The Internet, which is a human-centered connectivity network through which humans generate and consume information, is now evolving into the Internet of Things (IoT), in which distributed entities exchange and process information without human intervention. The Internet of Everything (IoE), which combines IoT technology with Big Data processing technology through a connection with a cloud server, has also been developed. Technology elements such as, for example, “sensing technology”, “wired/wireless communication and network infrastructure”, “service interface technology”, and “security technology” are required for IoT implementation. Accordingly, a sensor network, Machine-to-Machine (M2M) communication, and Machine Type Communication (MTC) have been researched.
An IoT environment may provide intelligent Internet technology services, which provide a new value by collecting and analyzing data generated among connected things. IoT may be applied to a variety of fields including, for example, smart home, smart building, smart city, smart car or connected cars, smart grid, health care, smart appliances, and advanced medical services, through the convergence and combination of existing Information Technology (IT) with various industrial applications.
Security systems, which generally use one or more security cameras, are configured to monitor a situation in a desired monitoring area. Multiple cameras, which are installed for security or crime prevention in each monitoring area, store recorded videos or output the recorded videos on a real-time basis. The multiple cameras may be installed in a monitoring area, such as, for example, in a building, on a street, at home, etc. Multiple cameras that are installed in a home are connected with a home network system that connects home devices installed in the home through a wired or wireless network, which enables control over the home devices.
In the security system, a camera senses occurrence of an intruder, that is, an object, and tracks and records the object. However, if the object falls beyond or deviates from a visible range of the camera, it may be impossible to track the object. For example, if a subject is occluded by an obstacle, if the subject moves beyond a view of the camera, or if recording becomes difficult to perform due to an increased distance between the subject and the camera, it may be impossible to perform tracking of the subject.
Thus, a technique has been developed in which, when an object is detected by a camera, the camera automatically starts recording, senses motion of the object, and automatically moves along with the object. Nonetheless, if the object moves beyond the visible range of the camera, the camera may not detect the object.
The present disclosure has been made to address at least the above problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present disclosure provides a method and an apparatus for providing a security service by using multiple cameras.
Another aspect of the present disclosure provides a method and an apparatus for monitoring a situation in a particular area by using multiple cameras.
Another aspect of the present disclosure provides a method and an apparatus for tracking and recording an object by using multiple cameras.
Another aspect of the present disclosure provides a method and an apparatus for providing a security service by using multiple sensors.
Another aspect of the present disclosure provides a method and an apparatus for sensing an abnormal situation in a monitoring area by using multiple cameras and multiple sensors.
According to an embodiment of the present disclosure, a camera in a security system is provided. The camera includes a video recording unit configured to record a video. The camera also includes a controller configured to identify a subject from the video, to predict a moving path of the subject, to discover at least one neighboring camera corresponding to the moving path, to select at least one target camera from among the at least one neighboring camera, and to generate a recording command including information about the subject and the moving path. The camera further includes a communication unit configured to transmit the recording command to the at least one target camera.
According to another embodiment of the present disclosure, a method for operating a camera in a security system is provided. A video is recorded. A subject from the video is identified. A moving path of the subject is predicted. At least one neighboring camera corresponding to the moving path is discovered. At least one target camera is selected from among the at least one neighboring camera. A recording command including information about the subject and the moving path is transmitted to the at least one target camera.
According to an additional embodiment of the present disclosure, an article of manufacture is provided for operating a camera in a security system. The article of manufacture includes a non-transitory machine readable medium containing one or more programs, which when executed implement the steps of: recording a video; identifying a subject from the video; predicting a moving path of the subject; discovering at least one neighboring camera corresponding to the moving path; selecting at least one target camera from among the at least one neighboring camera; and transmitting a recording command comprising information about the subject and the moving path to the at least one target camera.
The above and other aspects, features, and advantages of the present disclosure will be more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which:
Embodiments of the present disclosure are described in detail with reference to the accompanying drawings. The same or similar components may be designated by the same or similar reference numerals although they are illustrated in different drawings. Detailed descriptions of constructions or processes known in the art may be omitted to avoid obscuring the subject matter of the present disclosure.
It is to be noted that some components shown in the drawings are exaggerated, omitted, or schematically illustrated, and the drawn size of each component does not exactly reflect its real size.
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the non-transitory computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Furthermore, the respective block diagrams may illustrate parts of modules, segments, or codes including one or more executable instructions for performing specific logic function(s). Moreover, it should be noted that the functions of the blocks may be performed in a different order in several modifications. For example, two successive blocks may be performed substantially at the same time, or may be performed in reverse order according to their functions.
The term “unit”, as used herein, means, but is not limited to, a software or hardware component, such as a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks. A unit may advantageously be configured to reside on a non-transitory addressable storage medium and configured to be executed on one or more processors. Thus, a unit may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided in the components and units may be combined into fewer components and units or further separated into additional components and units. In addition, the components and units may be implemented such that they execute on one or more Central Processing Units (CPUs) in a device or a secure multimedia card.
Embodiments of the present disclosure are described with a focus on wireless communication systems based on Orthogonal Frequency Division Multiplexing (OFDM); however, the subject matter of the present disclosure may also be applied to other communication systems and services having similar technical backgrounds and channel forms without significantly departing from the scope of the present disclosure, as will be apparent to those of ordinary skill in the art.
Referring to
The cameras 102 and 104 are installed at designated locations in a monitoring area, and may be configured to perform recording at all times or may be configured to perform recording upon sensing motion. Accordingly, at least some of the cameras 102 and 104 may interwork with an adjacent motion sensor or may include a motion sensor.
Referring to
Referring to
A technique for discovering a camera in a moving path or route of a subject among multiple cameras, a technique for selecting and connecting to a camera that is suitable for recording the subject, and a technique for sending a recording command to the selected camera are described in greater detail below.
Referring to
In step 315, the camera determines a moving path of the subject. More specifically, the camera calculates a location of the subject while tracking movement of the subject. The location of the subject may be a relative location with respect to the camera. The camera then predicts a future location of the subject based on the movement of the subject.
In step 320, the camera searches for at least one neighboring camera. That is, the camera may discover at least one camera that is adjacent to the subject based on the calculated location of the subject. The search and discovery may be performed based on at least one of absolute locations of neighboring cameras, zones where the neighboring cameras are located, and relative locations of the neighboring cameras with respect to the camera. For example, the relative locations of the neighboring cameras may be measured using triangulation based on a directional signal and a signal strength.
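For purposes of illustration only, the relative-location estimation described above may be sketched as follows in Python. The log-distance path-loss model, the constants, and the function name are assumptions introduced for this sketch and are not defined by the present disclosure.

import math

def estimate_relative_location(aoa_degrees, rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    # Estimate distance from the received signal strength using a log-distance
    # path-loss model (an assumed model; the constants are illustrative only).
    distance_m = 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
    # Combine the angle of arrival of the directional signal with the estimated
    # distance to obtain x/y offsets relative to the discovering camera.
    angle_rad = math.radians(aoa_degrees)
    return distance_m * math.cos(angle_rad), distance_m * math.sin(angle_rad)

# Example: a response arriving from 90 degrees with an RSSI of -60 dBm.
dx, dy = estimate_relative_location(90, -60)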
In step 325, the camera selects at least one target camera capable of recording the subject based on the discovery result. Additionally, the camera may consider the capabilities of each neighboring camera, such as, for example, resolution, frame rate, brightness, a Pan/Tilt/Zoom (PTZ) function, and so forth, when selecting the at least one target camera.
In step 330, the camera sends, to the selected target camera, a recording command for setting up the target camera. The recording command may include at least one of information for identifying the subject, information regarding a motion pattern of the subject, location information necessary for continuously recording a moving path of the subject, a recording resolution, and a frame rate.
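The sequence of steps 315 through 330 may be sketched, for illustration only, as follows in Python. The camera methods named here (identify_subject, predict_moving_path, discover_neighbors, can_record, send_recording_command) and the fields of the recording command are names assumed for this sketch, not an interface defined by the present disclosure.

def handle_detected_subject(camera):
    # Steps 315-330 in sequence; the 'camera' methods are assumed driver hooks.
    subject = camera.identify_subject()            # identify the subject in the recorded video
    path = camera.predict_moving_path(subject)     # step 315: predict future locations
    neighbors = camera.discover_neighbors(path)    # step 320: search for neighboring cameras
    targets = [cam for cam in neighbors            # step 325: keep cameras able to record
               if cam.can_record(path)]            # along the predicted moving path
    command = {"subject": subject,                 # step 330: recording command contents
               "path": path,                       # (field names are illustrative)
               "resolution": "1920x1080",
               "frame_rate": 30}
    for target in targets:
        camera.send_recording_command(target, command)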
Referring to
The video recording unit 430 may include a camera driving unit and a camera module, and may perform a general camera function, such as, for example, capturing a still image and recording a video of the subject. The video recording unit 430 may also detect motion of the subject and report the detection to the controller 410, and move along with the subject under control of the controller 410. Accordingly, the video recording unit 430 may include a motion sensor.
The storing unit 420 stores a program code, data, and/or information necessary for operations of the controller 410. The storing unit 420 also receives a recorded video generated by the video recording unit 430 through the controller 410, and stores the received recorded video therein when necessary. The controller 410 stores recorded videos generated during a predetermined period in the storing unit 420. The storing unit 420 may further store additional information necessary for control over the camera 400, e.g., at least one of absolute/relative location information, capability information, and recording commands of the camera 400 and other cameras.
The communication unit 440 may interwork with another camera or another neighboring communication device using a short-range wireless communication means or a wired cable. According to an embodiment of the present disclosure, the communication unit 440 may be connected with another device through a wireless technique such as, for example, Bluetooth, Bluetooth Low Energy (BLE), ZigBee, infrared communication, Wireless Fidelity (Wi-Fi), Wi-Fi Direct, home Radio Frequency (RF), Digital Living Network Alliance (DLNA), or the like. The communication unit 440 may also be connected with another device through a wired technique such as, for example, a High-Definition Multimedia Interface (HDMI) cable, a Universal Serial Bus (USB) cable, a micro/mini USB cable, an Audio-Video (AV) cable, or the like. The communication unit 440 discovers neighboring cameras under the control of the controller 410 to provide locations and/or capability information of the neighboring cameras to the controller 410, and sends a recording command delivered from the controller 410 to a corresponding camera.
The UI 450 may include output modules such as, for example, a display, a speaker, an alarm lamp, and so forth, and input modules such as, for example, a touchscreen, a keypad, and so forth. The UI 450 may be used by a user in directly controlling the camera 400.
The location measuring unit 460 measures absolute information or relative information regarding a location in which the camera 400 is installed, and provides the measured absolute information or relative information to the controller 410. The location measuring unit 460 may be embodied as, for example, a Global Positioning System (GPS) module. The absolute information may be, for example, a latitude and a longitude measured by the GPS module. The relative information may be, for example, a relative location with respect to a predetermined reference (e.g., a gateway, a server, a control console, or the like).
The controller 410 may be embodied as a processor and may include a Central Processing Unit (CPU), a Read-Only Memory (ROM) storing a control program for control over the camera 400, and a Random Access Memory (RAM) used as a memory region for tasks performed in the camera 400. The controller 410 controls the video recording unit 430 by executing programs stored in the ROM or the RAM, or by executing application programs that may be stored in the storing unit 420. The controller communicates with neighboring cameras through the communication unit 440, and generates a recording command and sends the recording command to the neighboring cameras, or stores information collected from the neighboring cameras in the storing unit 420.
More specifically, the controller 410 collects location information that is measured by the location measuring unit 460, location information that is input by a user, or location information that is set at the time of manufacturing. The controller 410 identifies a subject based on a recorded video delivered from the video recording unit 430, and detects motion. In another embodiment of the present disclosure, the controller 410 receives a sensing result obtained by an adjacent motion sensor through the communication unit 440, and detects motion of the subject. The controller 410 also discovers neighboring cameras through the communication unit 440 in order to select a target camera to which a recording command is to be sent. The controller generates the recording command, and sends the generated recording command to the selected target camera through the communication unit 440.
Referring to
In step 520, the sensor 504 sends a recording command to the first camera 506 and the third camera 502, which respond to the discovery signal. In step 525, the first camera 506 and the third camera 502 begin recording in response to the recording command. The first camera 506 and the third camera 502 may begin recording after moving their view toward the sensor 504. The location of the sensor 504 may be known to the first camera 506 and the third camera 502, or may be delivered together with the recording command.
The first camera 506 identifies a subject as a target object to be tracked and recorded, in step 530, and tracks a moving path of the subject, in step 535. For example, the first camera 506 may identify whether the subject is a human and whether the subject is a resident or a non-resident in order to determine whether to track the moving path of the subject.
In step 540, the first camera 506 sends a discovery signal to the second camera 508, which neighbors the first camera 506 and is located on or near the moving path of the subject. In an embodiment of the present disclosure, the first camera 506 may send the discovery signal through a directional signal directed along the moving path. In another embodiment, the first camera 506 may select the second camera 508 near the moving path based on previously stored location information or zone information of the neighboring cameras, designate the second camera 508, and send the discovery signal to the designated second camera 508. The second camera 508 sends a response to the discovery signal to the first camera 506.
The first camera 506 selects the second camera 508, which responds to the discovery signal, as a target camera, in step 545, delivers target object information to the second camera 508, in step 550, and sends the recording command to the second camera 508, in step 555. In another embodiment of the present disclosure, the recording command may include target object information related to an identification and/or motion of the subject. The first camera 506 may continuously perform recording for a predetermined time after sending the recording command, or may continuously perform recording while motion of the subject is detected.
In step 560, the second camera 508 begins recording in response to the recording command. The second camera 508 may begin recording in a direction toward the location or moving path of the subject. Information about the location or moving path of the subject may be delivered together with the target object information or the recording command. In step 565, the second camera 508 identifies the subject as the target object to be tracked and recorded, and thereafter, similar operations are repeated.
As such, embodiments of the present disclosure may continuously track the subject through interworking between a sensor and/or cameras, without intervention of a CPU or a user, and may directly identify, track, and record an unspecified intruder. Moreover, when necessary, multiple cameras may record the subject from various angles.
Referring to
Referring to
Techniques for tracking a moving path of a subject using cameras are described in greater detail below.
Referring to
Referring to
M-SEARCH * HTTP/1.1
ST: urn:SmartHomeAlliance-org:device:Wi-Fi_Camera
MX: 5
MAN: "ssdp:discover"
HOST: 239.255.255.250:1900
In this example, the discovery signal 812 includes information Wi-Fi_Camera indicating that a target device to be discovered is a Wi-Fi camera and information identifying the first camera 802, e.g., an Internet Protocol (IP) address and a port number.
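The discovery signal 812 follows the form of a Simple Service Discovery Protocol (SSDP) M-SEARCH request. Assuming standard SSDP transport over User Datagram Protocol (UDP) multicast, a discovering camera might issue the request and collect responses as sketched below in Python; the transport details are an assumption, not a requirement of the present disclosure.

import socket

M_SEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 5\r\n"
    "ST: urn:SmartHomeAlliance-org:device:Wi-Fi_Camera\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.settimeout(5.0)  # wait up to the MX window for unicast responses
sock.sendto(M_SEARCH.encode("ascii"), ("239.255.255.250", 1900))

responses = []
try:
    while True:
        data, addr = sock.recvfrom(4096)  # addr carries the responder's IP address and port
        responses.append((addr, data.decode("ascii", errors="replace")))
except socket.timeout:
    pass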
The second camera 804 and the third camera 806 receive the discovery signal 812 and send respective response signals 814. The response signals 814 may be configured as set forth below.
HTTP/1.1 200 OK
ST: urn:SmartHomeAlliance-org:device:Wi-Fi_Camera
SERVER: Linux 1.01 SHP/2.0 CameraMaster/1.0
Positioning Type=absolute location
Position=latitude/longitude
The response signal 814 may include information Wi-Fi_Camera, indicating that a device sending the response signal 814 is a Wi-Fi camera, and location information. The location information may include a latitude and a longitude, for example, for an absolute location.
Referring to
At predetermined time intervals or upon sensing motion of a subject, the first camera 902 tracks and records the subject and calculates a moving path of the subject, in step 910. When the subject is predicted to deviate from a visible range of the first camera 902, the first camera 902 broadcasts a discovery signal 912 to discover at least one neighboring camera. As a result of predicting the motion of the subject, the first camera 902 determines that the subject enters a particular zone, e.g., a kitchen zone, and generates the discovery signal 912 for discovering a camera in the kitchen zone. For example, the discovery signal 912 may be configured as set forth below.
M-SEARCH * HTTP/1.1
ST: urn:SmartHomeAlliance-org:device:Wi-Fi_Camera_kitchen
MX: 5
MAN: "ssdp:discover"
HOST: 239.255.255.250:1900
The discovery signal 912 may include information Wi-Fi_Camera_kitchen, indicating that a target device to be discovered is a Wi-Fi camera located in the kitchen zone, and an IP address and a port number for identifying the first camera 902.
The second camera 904 and the third camera 906 receive the discovery signal 912 and send respective response signals 914 to the first camera 902. The response signals 914 may be configured as set forth below.
HTTP/1.1 200 OK
ST: urn:SmartHomeAlliance-org:device:Wi-Fi_Camera_kitchen
SERVER: Linux 1.01 SHP/2.0 CameraMaster/1.0
Positioning Type=relational location
Position=camera1/90 degree/5 m
The response signal 914 includes information Wi-Fi_Camera_kitchen, indicating that a device sending the response signal 914 is a Wi-Fi camera located in the kitchen zone, and location information. The location information may include a reference device camera1, a direction of 90 degrees, and a distance of 5 m, for example, for a relative location.
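For illustration only, the zone-qualified search target used in the discovery signal 912 may be built and matched as sketched below in Python; the "_<zone>" suffix convention is taken from the examples above, and the function names are assumptions.

def search_target(zone=None):
    # Build the ST header value; appending "_<zone>" narrows discovery to cameras
    # registered in that zone (e.g., "kitchen"), per the naming convention above.
    base = "urn:SmartHomeAlliance-org:device:Wi-Fi_Camera"
    return f"{base}_{zone}" if zone else base

def should_respond(own_search_target, requested_search_target):
    # A camera answers the discovery signal only when the requested search
    # target matches its own registered search target.
    return own_search_target == requested_search_target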
Referring to
At predetermined time intervals or upon sensing motion of a subject, the first camera 1002 tracks and records the subject and calculates a moving path of the subject, in step 1010. When the subject is predicted to deviate from a visible range of the first camera 1002, the first camera 1002 outputs a discovery signal 1012 as a directional signal to discover at least one neighboring camera, i.e., the third camera 1006. That is, as a result of predicting motion of the subject, the first camera 1002 determines that the subject enters a visible range of the third camera 1006, and forms a directional signal toward the third camera 1006. For example, the discovery signal 1012 may be configured as set forth below.
M-SEARCH * HTTP/1.1
ST: urn:SmartHomeAlliance-org:device:Wi-Fi_Camera
MX: 5
MAN: "ssdp:discover"
HOST: 239.255.255.250:1900
The discovery signal 1012 may include information Wi-Fi_Camera, indicating that a target device to be discovered is a Wi-Fi camera, and an IP address and a port number for identifying the first camera 1002.
The third camera 1006 receives the discovery signal 1012 and sends a response signal 1014. The response signal 1014 may be configured as set forth below.
HTTP/1.1 200 OK
ST: urn:SmartHomeAlliance-org:device:Wi-Fi_Camera
SERVER: Linux 1.01 SHP/2.0 CameraMaster/1.0
Positioning Type=relational position
Position=camera1/90 degree/5 m
The response signal 1014 includes information Wi-Fi_Camera, indicating that a device sending the response signal 1014 is a Wi-Fi camera, and location information. The location information may include a reference device camera1, a direction of 90 degrees, and a distance of 5 m, for example, for a relative location.
Another example of the response signals 814, 914, and 1014 may be configured as set forth below.
HTTP/1.1 200 OK
ST: urn:SmartHomeAlliance-org:device:Wi-Fi_Camera(_zone)
EXT:
USN:uuid:abc41940-1a01-4090-8677-abcdef123456
CACHE-CONTROL: max-age=1800
LOCATION: http://168.219.208.38:8888/Location
SERVER: Linux 1.01 SHP/2.0 CameraMaster/1.0
Location information about a responding camera may be expressed with an IP address and a port number.
Another example of the response signals 814, 914, and 1014 may be configured as set forth below.
HTTP/1.1 200 OK
ST: urn:SmartHomeAlliance-org:device:Wi-Fi_Camera(_zone)
EXT:
USN:uuid:abc41940-1a01-4090-8677-abcdef123456
CACHE-CONTROL: max-age=1800
LOCATION:
SERVER: Linux 1.01 SHP/2.0 CameraMaster/1.0
Positioning Type=xxx
Position=yyy
Location information about a responding camera may be expressed with a type (an absolute location, a relative location, or a zone) of the location information and a value indicating a location.
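For illustration only, a discovering camera might extract the location information from a response signal as sketched below in Python; the parsing rules and field splitting are assumptions based on the examples above.

def parse_location(response_text):
    # Collect "Positioning Type" and "Position" lines from the response signal.
    fields = {}
    for line in response_text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            fields[key.strip()] = value.strip()
    ptype = fields.get("Positioning Type", "")
    position = fields.get("Position", "")
    if "absolute" in ptype:
        latitude, longitude = position.split("/")      # e.g. "37.56/126.97"
        return {"type": "absolute", "lat": float(latitude), "lon": float(longitude)}
    if "relat" in ptype:                               # "relational"/"relative" location
        reference, direction, distance = position.split("/")  # e.g. "camera1/90 degree/5 m"
        return {"type": "relative", "reference": reference,
                "direction": direction, "distance": distance}
    return {"type": "zone", "zone": position}          # otherwise treat the value as a zone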
In
Referring to
The camera collects information about the discovered at least one neighboring camera, in step 1140. The camera determines whether there is a neighboring camera capable of recording the subject, in step 1145. The determination may be performed based on the calculated moving path and location and/or capability information regarding each neighboring camera. If there is a neighboring camera that is capable of recording the subject, the camera selects at least one target camera to which a recording command is to be sent, in step 1150. When selecting the target camera, the camera may select a neighboring camera located near the moving path of the subject, based on location information included in a response signal received from neighboring cameras. In another embodiment of the present disclosure, the camera may discover a neighboring camera located in front of the moving path of the subject by using a directional antenna. In another embodiment of the present disclosure, the camera may select a neighboring camera based on previously stored location information about neighboring cameras.
In step 1155, the camera generates a recording command for setting up the target camera. The recording command may include at least one of information about a motion pattern of the subject, location information necessary for continuously recording the moving path of the subject, a recording resolution, and a frame rate. In step 1160, the camera sends the recording command to the target camera to request the target camera to start recording.
If a neighboring camera capable of recording the subject does not exist, the camera selects all of the cameras discovered in step 1135 as target cameras, in step 1165, and sends the recording command to the target cameras to request that they start recording, in step 1170. In step 1175, the camera requests the target cameras to search for the subject by controlling pan, tilt, or the like. In another embodiment of the present disclosure, the camera may request recording of the subject through the recording command.
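For illustration only, the selection logic of steps 1145 through 1175 may be sketched as follows in Python; the dictionary layout of the discovery results and the field names are assumptions for this sketch.

import math

def select_target_cameras(discovered, predicted_point):
    # 'discovered' holds entries built from response signals, e.g.
    # {"id": "camera2", "x": 3.0, "y": 4.0, "can_record": True}.
    capable = [cam for cam in discovered if cam.get("can_record")]
    if capable:
        # Steps 1145-1150: pick the capable camera nearest to the predicted
        # location on the moving path of the subject.
        nearest = min(capable, key=lambda cam: math.hypot(cam["x"] - predicted_point[0],
                                                          cam["y"] - predicted_point[1]))
        return [nearest]
    # Steps 1165-1175: no capable camera exists, so every discovered camera is
    # selected and asked to search for the subject by panning and tilting.
    return list(discovered)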
Referring to
If the camera is the target camera, the camera sends a response signal to the neighboring camera in response to the discovery signal, in step 1220. The response signal may include at least one of location information and capability information regarding the camera. The camera receives a recording command instructing it to start recording, from the neighboring camera, in step 1225, and starts recording in a direction indicated by the recording command, in step 1230. For example, the recording command may include target object information related to an identification and/or a motion of the subject.
The camera searches for the subject through recording, in step 1235, and determines whether the subject is discovered, in step 1240. If the recording command indicates an identification of the subject, the camera may determine whether the subject indicated by the recording command is included in a recorded video, in step 1240. If the indicated subject or an arbitrary subject is discovered, the camera continues recording while tracking the subject, in step 1245. If the indicated subject or an arbitrary subject is not discovered, the camera terminates recording immediately or after a predetermined time, in step 1250.
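For illustration only, the behavior of the target camera in steps 1230 through 1250 may be sketched as follows; the camera methods used here are assumed driver hooks, not an interface defined by the present disclosure.

import time

def handle_recording_command(camera, command, search_timeout_s=30):
    camera.point_to(command.get("direction"))    # step 1230: record in the indicated direction
    camera.start_recording()
    deadline = time.monotonic() + search_timeout_s
    while time.monotonic() < deadline:
        frame = camera.capture_frame()
        # Steps 1235-1240: search the recorded video for the indicated subject.
        if camera.contains_subject(frame, command.get("subject")):
            camera.track(command.get("subject"))  # step 1245: continue recording while tracking
            return True
        time.sleep(0.1)
    camera.stop_recording()                       # step 1250: subject not discovered in time
    return False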
Referring to
An example of the recording command may be configured as set forth below.
The recording command may include adjustment values for tilt and pan with which a target camera is to initiate recording.
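The exact syntax of the recording command is not reproduced here. A purely hypothetical layout of a command carrying tilt and pan adjustment values, expressed as a Python dictionary for illustration only, might look as follows; every field name and value below is an assumption.

recording_command = {
    "target": "camera2",            # hypothetical target camera identifier
    "action": "start_recording",
    "pan_adjust_deg": 30,           # pan adjustment with which recording is to begin
    "tilt_adjust_deg": -10,         # tilt adjustment with which recording is to begin
    "subject": {"id": "intruder-01", "motion": "moving toward kitchen zone"},
    "resolution": "1920x1080",
    "frame_rate": 30,
}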
Another example of the recording command may be configured as set forth below.
The recording command may include information instructing the target camera to initiate recording of an object video.
Referring to
The third camera 1406 detects the subject and continues tracking the subject, in step 1414. If the subject is predicted to leave a visible range of the third camera 1406, the third camera 1406 broadcasts a recording command including information about the subject, in step 1416. The recording command is received at the first camera 1402 and the fourth camera 1408 located near the third camera 1406. The fourth camera 1408 begins recording in response to the recording command, in step 1418. The first camera 1402 ignores the recording command, in step 1418a, because it is already continuing to record the subject. The fourth camera 1408 detects the subject and continues tracking the subject.
An example of the recording command may be configured as set forth below.
The recording command may include minimum values for tilt and pan for an arbitrary camera.
Another example of the recording command may be configured as set forth below.
The recording command may include maximum values for tilt and pan for an arbitrary camera.
A description is provided below of embodiments in which an abnormal situation is sensed using multiple sensors for in-house security and safety management.
Referring to
Referring to
Referring to
Referring to
As shown in
Referring to
The sensor S2 senses generation of a new event of smoke, in step 1914, and the corresponding camera B detects occurrence of the new event of smoke according to a report from the sensor S2, in step 1916. The sensor S2 requests the sensors S1 and S4, which are registered in its cluster, to start the monitoring operation, in step 1918, such that the cameras A and D corresponding to the sensors S1 and S4 initiate recording.
As such, each sensor and each camera deliver occurrence of an event and initiation of a monitoring operation to other related devices, allowing an abnormal situation to be continuously monitored.
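For illustration only, the cluster-based propagation described above may be sketched as follows in Python, using the sensor and camera identifiers from the example; the table layout and the callback names are assumptions.

CLUSTER_MEMBERS = {"S2": ["S1", "S4"]}                 # sensors registered in the cluster of S2
SENSOR_TO_CAMERA = {"S1": "A", "S2": "B", "S4": "D"}   # each sensor's corresponding camera

def on_event(sensor_id, event, start_camera, notify_sensor):
    # The sensor that detects the event starts its corresponding camera, then asks
    # the sensors registered in its cluster to start their monitoring operations.
    start_camera(SENSOR_TO_CAMERA[sensor_id], reason=event)
    for member in CLUSTER_MEMBERS.get(sensor_id, []):
        notify_sensor(member, {"type": "start_monitoring",
                               "source": sensor_id, "event": event})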
Referring to
For example, the camera C may be a collar cam mounted on a pet. The camera C is grouped with neighboring sensors located around a house, and the camera C initiates recording based on the event. As the pet moves, the camera C may record an accurate video from various angles. In another example, the camera D may be a camera mounted on a toy robot. The camera D may move to a position where a sensor having sensed the event is located, to perform recording toward the event. In another example, the camera E may be a camera mounted on a robotic cleaner. The camera E may perform recording while moving.
As shown in
For the sensor S1 located outside the house, the sensors S3 and B1, the camera D on the toy robot, and the camera E on the robotic cleaner are managed as a family, and the sensor S1 has no neighbor. For the sensor S2, the camera C on the pet, the camera D on the toy robot, and the camera E on the robotic cleaner are managed as a family, and the sensor S3 is managed as a neighbor. For the sensor B1, other pets may be managed as a family, and the neighbor may be another sensor that is located near the sensor B1 as the pet moves.
Referring to
The server determines whether an abnormal situation occurs based on the sensing result from at least one of the sensors, in step 2215, and determines a corresponding operation, in step 2220, if the abnormal situation occurs. The abnormal situation and the corresponding operation may be determined, for example, based on Table 1.
For example, if a flood is sensed, the server automatically cuts off a water supply and cuts off an electricity supply to avoid a short-circuit. The server may further transmit a video of the situation to a user. If a gas leakage is sensed, the server automatically cuts off a gas supply and cuts off an electricity supply to avoid a fire. If motion is sensed, the server checks a lock of a door and locks the door if the door is unlocked. The server may further record a video through a web camera and save the recorded video. If a heart rate sensor senses an abnormal event, the server notifies registered family members of the event and notifies the nearest doctor or emergency service of the event. If a fall sensor senses an abnormal event, the server notifies registered family members of the event. If smoke is sensed, the server cuts off a power supply and blocks the movement of an elevator. The server may further lock a computer and call a fire department. If repeated failed attempts are made to open a door lock, the server disables the door lock from opening and notifies the user of the event. If a water leakage is sensed, the server automatically cuts off a water supply. If a window glass is broken, the server rotates cameras toward a corresponding zone and notifies the user and a security company of the event. The server may further notify registered neighbors of the event. If barking of a pet is sensed, the server collects video from the pet's collar camera. If other events are sensed, the server notifies the user of the events and transmits video to the user.
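For illustration only, the mapping of Table 1 may be held by the server as a rule table, sketched below in Python; the condition keys and action names are paraphrased from the examples above and are not an exhaustive or normative reproduction of Table 1.

RESPONSE_RULES = {
    "flood":               ["cut_water_supply", "cut_electricity", "send_video_to_user"],
    "gas_leak":            ["cut_gas_supply", "cut_electricity"],
    "motion":              ["check_door_lock", "lock_door_if_unlocked", "record_webcam_video"],
    "abnormal_heart_rate": ["notify_family", "notify_doctor_or_emergency_service"],
    "fall":                ["notify_family"],
    "smoke":               ["cut_power_supply", "stop_elevator", "lock_computer", "call_fire_department"],
    "door_lock_tampering": ["disable_door_lock", "notify_user"],
    "water_leak":          ["cut_water_supply"],
    "glass_break":         ["rotate_cameras_to_zone", "notify_user", "notify_security_company"],
    "pet_barking":         ["collect_collar_camera_video"],
}

def operations_for(event_type):
    # Default handling for events not listed above: notify the user and send video.
    return RESPONSE_RULES.get(event_type, ["notify_user", "send_video_to_user"])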
In step 2225, the server transmits a control command for controlling devices in the monitoring system according to the determined operation, or sends an emergency call to a registered user/service.
Referring to
Referring to
Referring to
Referring to
Various embodiments of the present disclosure may be embodied as computer readable codes on a computer readable recording medium. The computer readable recording medium is any data storage device that can store data, which can be thereafter read by a computer system. Examples of the computer readable recording medium include ROM, RAM, Compact Disc (CD)-ROMs, magnetic tapes, floppy disks, optical data storage devices, carrier waves, and data transmission through the Internet. The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. Also, functional programs, codes, and code segments for accomplishing embodiments of the present disclosure can be easily construed by programmers skilled in the art to which the present disclosure pertains.
Various embodiments of the present disclosure can be implemented in hardware or a combination of hardware and software. The software can be recorded to a volatile or non-volatile storage device, such as a ROM, whether erasable or rewritable, to a memory such as a RAM, a memory chip, a memory device, or an integrated circuit, or to a storage medium that is optically or magnetically recordable and readable by a machine (e.g., a computer), such as a CD, a Digital Versatile Disc (DVD), a magnetic disk, or a magnetic tape. Such a storage device or medium is an example of a machine-readable storage medium suitable for storing a program or programs including instructions to implement the embodiments of the present disclosure.
Accordingly, the present disclosure includes a program including code for implementing the apparatus or the method defined in the appended claims, and a machine-readable storage medium that stores the program. The program may be transferred electronically through any medium, such as a communication signal transmitted through a wired or wireless connection, and the present disclosure covers equivalents thereof.
The apparatus, according to various embodiments of the present disclosure, may receive a program from a program providing apparatus that is connected thereto by wire or wirelessly, and may thereafter store the program. The program providing apparatus may include a memory for storing a program including instructions allowing the apparatus to perform a preset content protection method and information required for the content protection method, a communication unit for performing wired/wireless communication with the apparatus, and a controller for transmitting a corresponding program to a transmitting and receiving apparatus either in response to a request from the apparatus or automatically.
While certain embodiments have been shown and described, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.
Foreign Application Priority Data: Serial No. 10-2014-0190724, filed Dec. 26, 2014, Republic of Korea (KR), national.