The present invention relates generally to detection of an object by a security system and, more particularly, to generating attributes related to the detected object.
In the present-day environment, video- or image-based security systems are used to capture images based on various triggers. In some implementations, captured images are stored in a data store for review and playback. In some implementations, captured images may be sent to a user in real time. In some implementations, the event of capture may be reported to a user.
In some examples, multiple devices may be capturing images. However, images captured by multiple devices may not be reviewed as a whole to develop a story or attributes about the detected object.
As more and more image capturing devices are deployed in a neighborhood, there is a need to piece together information gathered by a plurality of the image capturing devices to develop a story or attributes about a detected object of interest. It is with these needs in mind that this disclosure arises.
In one embodiment, a method for determining an object is disclosed. A security appliance with a processor and memory is provided. A plurality of security devices are deployed within a defined neighborhood. An image of an object is received by the security appliance, the image of the object captured by at least one security device located in the defined neighborhood. The image of the object is processed by the security appliance to generate a first plurality of attributes for the object. The object is associated as belonging to the defined neighborhood.
In yet another embodiment, a system to determine an object is disclosed. A security appliance with a processor and memory is provided. A plurality of security devices are deployed within a defined neighborhood. An image of an object is received by the security appliance, the image of the object captured by a first security device located in a first location. The image of the object is processed by the security appliance to generate a first plurality of attributes for the object. The object is associated as belonging to the defined neighborhood.
This brief summary has been provided so that the nature of the disclosure may be understood quickly. A more complete understanding of the disclosure can be obtained by reference to the following detailed description of the preferred embodiments thereof in connection with the attached drawings.
The foregoing and other features of several embodiments are now described with reference to the drawings. In the drawings, the same components have the same reference numerals. The illustrated embodiments are intended to illustrate but not limit the invention. The drawings include the following Figures:
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The embodiments herein disclose systems and methods for generating an address for an object, based on its location. Referring now to the drawings, where similar reference characters denote corresponding features consistently throughout the figures, various examples of this disclosure are described.
A security appliance 114 may be executed in computing resource 102A. Additionally, one or more application servers may be executed in computing resource 102B and 102C. As an example, application server A 116 is executed on computing resource 102B and application server B 118 is executed on computing resource 102C. As one skilled in the art appreciates, application servers may be configured to provide one or more services.
In some examples, application servers may be configured as a map server, configured to provide a map associated with a location. In some examples, application servers may be configured as an image processor, capable of identifying various images, based on detecting one or more attributes of the image. In some examples, application servers may be configured as authentication servers, configured to authenticate a user to provide access to various services and functions. For example, selective access to the security appliance may be granted to one or more users, based on verification of credentials of a user by the authentication server. As one skilled in the art appreciates, these are only some of the examples of functions and features of application servers and application servers may be configured to provide various other services.
Now, referring to
The processor engine 202 is configured to perform various arithmetic and logical functions of the security appliance 114. The memory 218 is used to store and retrieve various programs and sub-routines, including transient and permanent information used or created by the processor engine 202. The data store 208 is used to store and retrieve various information generated, received or used by the security appliance 114. In one example, the data store 208 may include a user data store 220 and a video data store 224.
The admin user interface 204 is configured to present an admin user interface to receive information from an admin user of the security appliance 114. As one skilled in the art appreciates, an admin user may have certain privileges that may be different from those of a subscriber user. The subscriber user interface 206 is configured to present a subscriber user interface to receive information from a subscriber user of the security appliance 114. As one skilled in the art appreciates, a subscriber user may have certain privileges that may be different from those of an admin user. In one example, various information received from a subscriber user may be stored in the user data store 220.
The video receiver 210 is configured to receive images or video into the security appliance. In one example, the images or video received may be stored in the video data store 224. Video or images may be received from various sources, for example, from a video capturing device 226, a video feed 228 or a smart video capturing device 230. In some examples, a video capturing device 226 may capture a plurality of images, as a video chunk for a short duration of time, sometimes in the range of 30 to 60 seconds.
In some examples, the video capturing device 226 may push the captured video chunk to the security appliance 114. In some examples, the video capturing device 226 may send the video chunk to a designated storage location for storage, for example, a storage location in a computing environment, as an example, a cloud storage device. In some examples, a link to the stored location may be sent to the security appliance 114 for retrieval. In some examples, the security appliance 114 may be configured to retrieve the details of the link and access the link to retrieve the stored video chunk at the storage location. In some examples, the security appliance 114 may periodically retrieve the stored video chunk from the storage location.
In yet another example, the storage device may be configured to periodically send video feeds 228 to the security appliance 114. In yet another example, a smart video capturing device 230 may send or push the video chunk to the security appliance.
As one skilled in the art appreciates, various video chunks received by the security appliance 114 may conform to different protocols. The video receiver 210 is configured to decipher various video chunks and attributes related to the video chunks. The video and corresponding attributes are stored in the video data store 224 for further processing. Some of the attributes may include the identification of the security device, the location of the security device, a time stamp indicating when the video chunk was recorded, and the like.
The application programming interface 212 provides an interface to communicate with various external services, for example, services provided by one or more application servers. In one example, the application programming interface 212 may provide an interface to communicate with an object detection server 232. In one example, the application programming interface 212 may provide an interface to communicate with a map server 234. In yet another example, the application programming interface may provide an interface to communicate with a social media server 236.
As one skilled in the art appreciates, the security appliance 114 may communicate with various external devices using one or more different communication protocols. The communication engine 214 is configured to communicate with external devices, using one or more protocols recognized by one or more external devices.
Having described an example security appliance 114 of this disclosure, now referring to
The login processor 304 is configured to receive various information, for example, user name and password for verification, verify the credentials and grant selective access to various functions and features of the security appliance 114 to a user. In one example, credential information may be received from a subscriber user, who interacts with the security appliance 114 using the subscriber user interface 206. In another example, credential information may be received from an admin user, who interacts with the security appliance 114 using the admin user interface 204.
The geo mapper engine 306 is configured to present a geo map of the location of the security device. For example, the geo mapper engine 306 may receive an address of the location of the security device from a user and based on the received address, retrieve a geo map of the location, for example, from a map server. The retrieved geo map of the location of the security device may be selectively presented to the user on a display device, by the security appliance 114.
The object detection engine 308 is configured to analyze the received video chunks and detect one or more objects present in the received video chunk. Once an object is identified, one or more attributes of the object are detected. The object along with one or more detected attributes is stored in the data store. In some examples, the attributes of the object may be referred to as meta data for the object. The object along with the meta data for the object is stored in a data store, along with a time stamp corresponding to the video chunk and the security device that produced the video chunk.
The database engine 310 is configured to communicate with one or more data stores. In some examples, the database engine 310 may be configured to retrieve stored video chunks in a data store. In some examples, the data store may be local or internal to the security appliance 114. In some examples, the data store may be external or remote to the security appliance 114, for example, accessible over the link 106. In some examples, the database engine 310 is configured to store the object and the meta data associated with the object in a data store. The data store may be internal to the security appliance 114, for example, data store 208 or a data store external to the security appliance 114. In some examples, the database engine 310 may associate one or more user information to the video chunk and the detected objects from the video chunk.
The analytics engine 312 is configured to analyze various detected objects and their attributes and develop a story regarding the detected object, for example, the movement of the object within the neighborhood. The analytics engine 312 may also develop statistics related to detected objects, incidents observed in a given neighborhood over time, and the like.
The AI engine 314 may be configured to analyze various activities detected by the security appliance for a given neighborhood, generate historic data of activities detected for a given neighborhood, and generate indicators of likely future events in the given neighborhood or other neighborhoods. For example, break-ins may happen in one neighborhood on certain days or times of the week and similar break-ins may happen in another neighborhood on certain other days or times. Based on the analysis of the historic information, the AI engine 314 may predict a likely future event in another neighborhood, based on activities in one neighborhood.
Now referring to
The camera controller 410 is configured to selectively control various functions of the camera 402. In one example, a sensor (not shown) may be disposed in the legacy security device 400, for example, in the housing of the camera 402, to detect any movement in the view of the camera 402 and send a trigger signal to the camera controller 410. Based on the received trigger signal, the camera controller 410 may selectively turn on the camera 402 to capture any images visible to the camera 402. The images captured by the camera 402 are processed by the video capture engine 412. The processed video is then stored in the data store 408 for further action. In one example, processing of the video may include one or more of encoding the video in a known or proprietary format, enhancing the video, compressing the video and encrypting the video. In one example, the communication interface 414 may be used to communicate with external devices. In some examples, an alert signal may be sent to a user, by the communication interface 414.
In some examples, the legacy security device 400 may be coupled to a digital video recorder (DVR) (not shown) which may be configured to communicate with the security device 400 and store one or more of the video images captured by the security device 400. In some examples, the legacy security device 400 may be configured to communicate over the internet and store one or more of the video images in a storage device accessible over the internet. In some examples, the DVR may be configured to communicate over the internet and store one or more of the video images in a storage device accessible over the internet.
In some examples, video images stored in the storage device accessible over the internet may be selectively retrieved by the security appliance 114. In some examples, the security appliance 114 may be provided with access information to access the stored video images from the DVR. In some examples, the stored video images in the legacy security device 400 may be accessible by the security appliance 114. In some examples, a user may selectively upload the stored video images to the security appliance 114.
Having described a legacy security device 400, a smart security device 420 will now be described with reference to
The smart security device 420 functions similarly to the legacy security device 400 in that video images are captured and stored in the data store 408. The video stream processor 422 retrieves the stored video images, identifies one or more objects in the video images and sends the objects for further processing by the object engine 424. The object engine 424 identifies various attributes of the object and creates meta data associated with the detected object. The detected object along with the meta data is stored in the data store 408. In one example, the object engine 424 classifies one or more detected objects as known objects, based on observing the presence of the detected object in multiple video streams over time. Detected objects that do not occur frequently may be classified as unknown objects. In one example, the unknown object and associated meta data may be sent to the security appliance, by the smart security device.
The SA interface 426 is configured to communicate with the security appliance 114. In one example, the security appliance 114 may send a command to the smart security device to initiate capture of the images by the camera 402. For example, if there are multiple smart security devices in a neighborhood, and one of the smart security devices sends a message to the security appliance 114 that an unknown object was detected, the security appliance 114 may selectively enable other smart security devices in the neighborhood to initiate capture of the video images by their cameras. Objects detected by these other smart security devices may be classified as known or unknown objects. If multiple smart security devices detect the same object, based on a time of detection and the location of each smart security device, a possible path the object moved in the neighborhood is determined.
In some examples, objects classified as unknown may be sent to a user to classify. In some examples, the smart security device 420 may send the objects classified as unknown to the user for classification. In some examples, security appliance 114 may send the objects classified as unknown to the user for classification. Based on user response, the objects classified as unknown may be reclassified as a known object. In some examples, updated classification of the object may be stored in the smart security device 420. In some examples, the updated classification of the object may be stored in the security appliance 114.
In some examples, the security appliance 114 may send one or more objects it has received from security devices in the neighborhood to the smart security device 420 for identification. The smart security device 420 may compare the received objects with objects stored in the data store 408. If a received object matches one or more of the stored objects that are classified as known, the smart security device 420 responds to the security appliance 114, indicating the received object is a known object.
Now referring to
Now, referring to table 502, some of the attributes of interest for a user are location 510, camera list 512, incident list 514, neighbor list 516 and familiar object list 518. Location 510 indicates the location of the user. In one example, the location of the user may include a street address. In one example, the street address may be mapped to a geo location or geo coordinates, like latitude and longitude. Camera list 512 indicates one or more cameras associated with the user. Incident list 514 indicates incidents reported by the user. Neighbor list 516 indicates neighbors associated with the user. As one skilled in the art appreciates, a neighbor will be another user, with associated user, camera, incident and video chunk tables. Familiar object list 518 corresponds to objects determined to be known objects based on classification of objects captured by one or more cameras associated with the user.
Now, referring to table 504, some of the attributes of interest for a camera are described. Streaming URL 520 corresponds to a storage location where video chunks from the camera are stored. In one example, the streaming URL 520 may be used by the security appliance to selectively retrieve stored video chunks. As previously described, in some examples, the storage location may be local to the security device and in some examples, the storage location may be external to the security device. Camera UID 522 corresponds to a unique identification for the camera. Camera privacy 524 corresponds to a privacy setting for the camera. A user may selectively assign a privacy setting for the camera, for example, either as a “public” camera or a “private” camera. If the privacy setting of a camera is set as “public”, then the video chunks captured by the camera may be accessible to the security appliance. If the privacy setting of the camera is set as “private”, then the video chunks captured by the camera may not be accessible to the security appliance. In some examples, further granularity in the privacy settings may be provided. One example of further granularity may include assigning a “protected” category, wherein the video cannot be shared with neighbors, but can be processed by the security appliance. Another example of further granularity may include assigning a “time-limited public” category, meaning the security appliance can retain the video only for a limited period of time. In one example, a user may be provided an option to select the time period. Camera type 526 may indicate the type of camera, for example, a legacy security device or a smart security device. Username 528 and password 530 are associated with the access control credentials to access the camera. Chunk list 532 corresponds to the list of video chunks created and stored by the camera.
Now referring to table 506, some of the attributes of interest for incident are described. Title 534 corresponds to a title of the incident. In one example, this is created by the user. Description 536 corresponds to a brief description of the incident. In one example, this is created by the user. Camera details 538 corresponds to the camera that captured the incident. Object list 540 corresponds to the list of objects observed in the video chunk that corresponds to the incident. Date from 542 and date to 544 correspond to a time window during which the incident took place. For example, if a package was delivered at 2:00 PM on Jan. 10, 2019 and was noticed missing at 6:00 PM on Jan. 10, 2019, the date from 542 would correspond to 2:00 PM on Jan. 10, 2019 and date to 544 would correspond to 6:00 PM on Jan. 10, 2019. Report 546 corresponds to the report associated with the incident.
Now, referring to table 508, some of the attributes of interest for a video chunk are described. Chunk path 548 refers to the path, address or URL to the stored video chunk. Timestamp 550 corresponds to the time associated with when the video chunk was captured. Objectlist 552 corresponds to the objects detected in the video chunk. Processed path 554 corresponds to the path, address or URL that links to the processed video chunk. As previously described, the video chunk is processed or analyzed for detecting objects, and detected objects along with the meta data associated with the detected object are stored. Status 556 corresponds to whether the video chunk has been processed or not. Chunk duration 558 corresponds to the duration of the video chunk. As previously described, the duration of a video chunk may be of the order of about 30 seconds to about 60 seconds.
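To make the video chunk record concrete, the following is a minimal sketch of how such a record might be represented; the class name, field types and defaults are illustrative assumptions and not part of this disclosure, though the field names mirror the attributes of table 508 described above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoChunkRecord:
    """Illustrative sketch of the video chunk attributes of table 508."""
    chunk_path: str                 # path, address or URL to the stored video chunk (548)
    timestamp: float                # when the video chunk was captured (550)
    object_list: List[str] = field(default_factory=list)  # objects detected in the chunk (552)
    processed_path: str = ""        # path, address or URL to the processed video chunk (554)
    status: str = "unprocessed"     # whether the chunk has been processed (556)
    chunk_duration: float = 30.0    # duration in seconds, of the order of 30 to 60 seconds (558)
```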
Now, referring to
Now, referring to
Object ID 582 may correspond to a unique identifier for the object. Object ID 582 may be numeric, alphabetic or a combination of the two. Object classification 584 identifies a class or group to which the object belongs. For example, based on the analysis of the video chunk, one or more objects may be identified. Each identified object is given an object ID. Next, the object is analyzed to determine which group it belongs to, for example, a person, a vehicle or an animal. Based on the analysis, object classification 584 is updated to indicate the group to which the object belongs. Timestamp 588 refers to the time at which the object was detected. Camera type 590 corresponds to the type of camera where the object was captured. Camera location 592 corresponds to the location details of the camera that captured the image of the object.
Object characteristics 594 corresponds to various characteristics of the object. In one example, the object characteristics 594 may be different, based on the object classification 584. Some of the object characteristics 594 may be whether the object is a friendly, unfriendly or unknown object. In the case of a vehicle, the object characteristics 594 may be the color of the vehicle, the license plate number and the like. In one example, objects that are friendly may be associated with the corresponding neighborhood table shown in
Linked cameras 596 corresponds to a list of other cameras that are linked or associated with the camera where the object was captured. For example, if there are multiple cameras in a location and an object was captured in one camera and later classified, it may be beneficial to associate the object with other cameras in the same location.
Object history 598 corresponds to history associated with the identified object, for example, whether the object was a subject or target of a prior incident analysis.
Object signature 598 corresponds to a signature created by analyzing various features of the object image. In some examples, the signature is created by using perceptual algorithms that permit representing an image in a small vector space, for example, an array of 128 floating point numbers known as embeddings. These embeddings have the characteristic that embeddings of the same or similar objects are mathematically close to each other. If a signature for a captured image is calculated as an embedding, it may be compared with signatures calculated for other images to determine if the captured image is similar to a previously captured image. This will be further described with reference to
Now, referring to
Now, referring to
In this example, the signatures are an array of 128 numbers. For example, referring to
Euclidean distance D = √((p1,1 − p2,1)² + (p1,2 − p2,2)² + … + (p1,128 − p2,128)²), where pi,k denotes the k-th element of the i-th signature (Equation 1)
In one example, the Euclidean distance between signature Sig-XYZ-OB1 and signature Sig-XYZ-OB2 is computed. In this example, a normalized Euclidean distance of 0.5889 is computed. This distance is below the threshold value of 0.8. So, based on this calculation, images Sig-XYZ-OB1 and Sig-XYZ-OB2 are declared as similar.
In one example, the Euclidean distance between signature Sig-XYZ-OB1 and signature Sig-XYZ-OB3 is computed. In this example, a normalized Euclidean distance of 0.7735 is computed. This distance is below the threshold value of 0.8. So, based on this calculation, images Sig-XYZ-OB1 and Sig-XYZ-OB3 are declared as similar.
In one example, the Euclidean distance between signature Sig-XYZ-OB1 and signature Sig-XYZ-OB4 is computed. In this example, a normalized Euclidean distance of 0.9264 is computed. This distance is above the threshold value of 0.8. So, based on this calculation, images Sig-XYZ-OB1 and Sig-XYZ-OB4 are declared as not similar. In one example, the Euclidean distance between signature Sig-XYZ-OB3 and signature Sig-XYZ-OB4 is computed. In this example, a normalized Euclidean distance of 0.7742 is computed. This distance is below the threshold value of 0.8. So, based on this calculation, images Sig-XYZ-OB3 and Sig-XYZ-OB4 are declared as similar.
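The comparisons above may be made concrete with a short sketch. This is a minimal illustration rather than part of the disclosure: it applies Equation 1 directly and assumes the embeddings are already normalized so that the computed distance corresponds to the normalized distances quoted above; the 0.8 threshold is taken from the examples.

```python
import math

SIMILARITY_THRESHOLD = 0.8  # threshold value used in the examples above

def euclidean_distance(sig_a, sig_b):
    """Equation 1: Euclidean distance between two 128-element signatures."""
    if len(sig_a) != len(sig_b):
        raise ValueError("signatures must be the same length")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

def are_similar(sig_a, sig_b, threshold=SIMILARITY_THRESHOLD):
    """Declare two object images similar when the distance between their
    signatures falls below the threshold, as in the examples above."""
    return euclidean_distance(sig_a, sig_b) < threshold
```

With signatures Sig-XYZ-OB1 and Sig-XYZ-OB2 loaded as lists of 128 floating point numbers, are_similar would return True for the 0.5889 example and False for the 0.9264 example.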
As one skilled in the art appreciates, by comparing signatures of multiple images captured for the same object, a more accurate determination can be made to recognize the object as a known object or an unknown object.
As one skilled in the art appreciates, by storing one or more signatures of a previously captured image and comparing the stored signatures with a signature for a newly captured image, the security appliance can determine if the newly captured image represents an object that was previously captured.
In some examples, the security device may be configured to generate a signature of a captured image and send the generated signature to the security appliance to verify against signatures of previously captured images to determine if the captured image is a known or an unknown image.
Further, as one skilled in the art appreciates, in some examples, the signatures created as embeddings are one-way in the sense that they may not be used to recreate the image. By sending only a signature of the captured image, the privacy of the person in the image and the like may be protected. In some examples, the list of signatures classified as “known” may be transmitted by a security device to all the other security devices. These security devices may independently conclude that a newly captured image is a known image, by generating a signature for the newly captured image and comparing the generated signature with the list of known signatures. In this way, only signatures for images that do not match the signatures of known images may be sent to the security appliance for further processing.
In some examples, every time a security device detects a new object, for example, a new face, it compares the image of the new object with its own list of known objects. If it is an unknown object, the security device generates a signature for the new object and sends it to the security appliance to verify if it is a known object at the security appliance. The security appliance compares the signature received from the security device with its list of known objects and, if there is a match, it is indicative that the new object is known to one or more security devices in its network of security devices. The security appliance may then send meta data associated with the recognized object to the security device. The security device updates its list of known objects with the received meta data from the security appliance.
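A hedged sketch of this exchange follows; the class and method names are illustrative assumptions and not part of this disclosure, but the flow mirrors the text: the device checks its own list first, forwards the signature to the appliance, and updates its local list with any meta data returned.

```python
import math

SIMILARITY_THRESHOLD = 0.8  # same threshold as in the signature examples

def _similar(sig_a, sig_b, threshold=SIMILARITY_THRESHOLD):
    # Euclidean distance per Equation 1, compared against the threshold.
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))
    return d < threshold

class SecurityAppliance:
    """Holds the network-wide list of known object signatures."""
    def __init__(self):
        self.known = {}  # object_id -> (signature, meta data dict)

    def lookup(self, signature):
        """Match a signature received from a security device against the
        appliance's known objects; return (object_id, meta data) on a match."""
        for object_id, (known_sig, meta) in self.known.items():
            if _similar(signature, known_sig):
                return object_id, meta
        return None

class SecurityDevice:
    """Checks locally first, then escalates to the appliance."""
    def __init__(self, appliance):
        self.appliance = appliance
        self.known = {}  # local object_id -> (signature, meta data dict)

    def handle_new_object(self, signature):
        for object_id, (known_sig, meta) in self.known.items():
            if _similar(signature, known_sig):
                return meta  # known locally; nothing is sent
        match = self.appliance.lookup(signature)
        if match is not None:
            object_id, meta = match
            self.known[object_id] = (signature, meta)  # update local known list
            return meta
        return None  # unknown across the network; subject to further processing
```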
As one skilled in the art appreciates, by using signatures, the security devices and security appliance can readily determine if an image of an object captured by the security device is known to other security devices in the network of security devices. Further, details of an object can be shared by a security device with the security appliance and other security devices, using the generated signature, without sharing any personally identifiable information of the object. In some examples, sharing a signature requires significantly less data transfer than sharing an image or a video clip.
In some examples, the object signature may be generated by the security appliance 114 as previously described. For example, the processor engine 202 may be configured to generate the object signature. In some examples, the object detection engine 308 of the processor engine 202 may be configured to generate the object signature. For example, a legacy security device 400 may not have the capability to process the image and generate the object signature and may send the image for processing by the security appliance.
In some examples, the object signature may be generated by a smart security device 420. For example, object engine 424 may be advantageously configured to generate the object signature. In such an example, the smart security device 420 may be selectively configured to send the object signature to the security appliance 114, instead of sending an image of the object.
In some examples, the security device may be configured to perform comparison of various object signatures. In some examples, the security appliance may be configured to perform comparison of various object signatures.
Now, referring to
Now, referring to
Now, referring to
The smart security device 420 sends a message to the security appliance 114 to create an incident (726). As one skilled in the art appreciates, the smart security device 420 can scan the video chunks and based on its analysis can initiate an incident, for example when an unknown or unfriendly object is identified. The message in one example includes camera ID, objects of interest and a timestamp.
The security appliance 114 updates the incident table in the database (728). The security appliance 114 then reads the incident details and identifies the user 702 associated with the smart security device 420 (730). The security appliance 114 then notifies the identified user about the new incident (732). In one example, the notification may include details of the incident and a request to the user 702 to upload any video images that may be relevant to the incident stored in legacy security devices, for example, based on the timestamp of the incident.
The security appliance 114 also notifies other subscribers or users in the neighborhood regarding the new incident (734). In one example, the notification may include details of the incident and a request to the users in the neighborhood to upload any video images that may be relevant to the incident stored in legacy security devices, for example, based on the timestamp of the incident.
The subscriber user and users in the neighborhood upload any video images of interest to the security appliance 114 (736). The security appliance 114 updates the database with the received video images (738).
In one example, the security appliance 114 also sends a message to other smart security devices in the neighborhood about the incident and requests upload of objects relevant to the incident (732a). Smart security devices in the neighborhood send objects relevant to the incident to the security appliance 114 (736a). Received information from the smart security devices in the neighborhood is stored in the data store (738a).
Users may upload any additional video images, when the incident is still active (740). The incident age may be set to be a predefined time period, for example, one week (742). Various video images and objects received by the security appliance 114 are analyzed by the security appliance and an incident report is generated (744). In one example, the security appliance identifies a set of objects detected by the user camera as well as cameras in the neighborhood. Based on the detected objects, a geospatial and temporal analysis is performed to determine movement of the detected object in the neighborhood, identification of unfamiliar objects and detection of anomalous behavior. Based on the analysis, the security appliance generates a report. In one example, the generated report may include one or more images, indicative of the evidence for the incident. The generated incident report is sent to the user associated with the smart security device 420 that initiated the incident (746).
In one example, the incident report may include time stamped video footage containing all objects of interest from the subscriber user's smart security device, time stamped video footage containing same objects of interest from the smart security devices in the neighborhood, time stamped video footage containing the same objects of interest from legacy security devices within the neighborhood, best fit track (or movement) of the object of interest across the neighborhood, and meta data pertaining to the object of interest (for example, license plate number, color/make/model of vehicle, build/height of a person and the like).
Now, referring to
Now, referring to flow diagram 800, an incident user 802 initiates an incident for processing by the security appliance 114 (804). In one example, the incident user 802 uploads a video image corresponding to the incident. The video image may be captured by a legacy security device. The security appliance 114 updates the incidents table in the database (806). The security appliance 114 reads the created incident and queries the subscriber table to identify the incident user (808). Then, the security appliance 114 determines the geographic area around the subscriber's neighborhood and retrieves a list of smart security devices in the subscriber's neighborhood.
The security appliance 114 then notifies all the subscribers or users in the neighborhood about the incident (810). In one example, the notification may include details of the incident and a request to the users in the neighborhood to upload any video images that may be relevant to the incident stored in legacy security devices, for example, based on the timestamp of the incident.
The users in the neighborhood upload any video images of interest to the security appliance 114 (812). The security appliance 114 updates the database with the received video images (814).
In one example, the security appliance 114 also sends a message to the smart security devices in the neighborhood about the incident and requests upload of objects relevant to the incident (810a). Smart security devices in the neighborhood send objects relevant to the incident to the security appliance 114 (812a). Received information from the smart security devices in the neighborhood is stored in the data store (814a).
Users may upload any additional video images, when the incident is still active (816). The incident age may be set to be a predefined time period, for example, one week (818). Various video images and objects received by the security appliance 114 are analyzed by the security appliance and an incident report is generated (820).
In one example, objects received by the security appliance 114 are compared with a list of objects that have been associated with incidents that were reported and analyzed previously. In some examples, the list of objects from earlier reported incidents may be referred to as a suspect object list. In some examples, the objects may be auto detected by the security appliance, without any human interaction. In some examples, the objects may have to be presented to a user to help characterize various attributes of the object and identify if the object is of interest in the reported incident. Once one or more objects of interest are identified, the security appliance 114 can check whether other security devices in the neighborhood have captured the same object.
As one skilled in the art appreciates, once an object has been classified, for example, as a person or a vehicle, further analysis may be performed. For example, if the object is a person, the face of the person can be extracted and features of the face (sometimes referred to as a “faceprint”) may be mapped and compared with other objects that were identified and classified in other security devices. In one example, there may be a known suspect person table, with extracted features of the face. This suspect person table may be searched for a possible match. In some examples, if there is a match, then the image is discarded and a reference identifier of the suspect person may be used. If no match is found, then the faceprint of the person may be stored for further classification. Over time, if the image of the person is captured at a given location on multiple occasions, the person may be classified as a known or friendly person, associated with that location. In one example, the person is also associated with the neighborhood.
If the object is classified as a vehicle, then meta data associated with the vehicle may be selectively extracted, by analyzing the object. Extracted meta data for the vehicle may be stored in the object attribute table. Over time, if the vehicle is captured at a given location on multiple occasions, the vehicle may be classified as a known or friendly vehicle, associated with that location. In one example, the vehicle is also associated with the neighborhood.
Based on geospatial and temporal analysis, the likely participation of the object of interest in the reported incident is determined. Thereafter, an incident report is generated.
The generated incident report is sent to the incident user that initiated the incident (822). In one example, the incident report may include time stamped video footage containing all objects of interest from the incident user's security device, time stamped video footage containing same objects of interest from the smart security devices in the neighborhood, time stamped video footage containing the same objects of interest from legacy security devices within the neighborhood, best fit track (or movement) of the object of interest across the neighborhood, and meta data pertaining to the object of interest (for example, license plate number, color/make/model of vehicle, build/height of a person and the like).
Now, referring to
In one example, when an object is detected by one camera, a signal may be sent to other neighborhood smart security devices to turn on their cameras and capture images. For example, when camera C1 detects an object or movement, it may send a signal to camera C2 and camera C6 to turn on their cameras and capture images. When camera C2 detects an object or a movement, camera C2 may send a signal to camera C3, camera C4 and camera C5 to turn on their cameras and capture images. In one example, the signal will turn on the camera and capture images for a defined period of time. In one example, the defined period of time may be based on the distance between the locations of the cameras and an estimated time it would take for a moving object to travel from one location to another location.
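As a minimal sketch of how the defined period of time might be derived, assuming a straight-line camera-to-camera distance and a nominal object speed (both of which, along with the safety margin, are illustrative assumptions not fixed by this disclosure):

```python
def capture_window_seconds(distance_m, expected_speed_mps, margin=1.5):
    """Estimate how long a triggered neighbor camera should keep recording:
    the travel time from the triggering camera's location, padded by a
    safety margin. Speed and margin values are assumptions."""
    if expected_speed_mps <= 0:
        raise ValueError("expected speed must be positive")
    return (distance_m / expected_speed_mps) * margin

# Example: camera C1 triggers camera C2 located 200 m away; an object
# moving at roughly 10 m/s should appear within about 20 s, so record ~30 s.
window = capture_window_seconds(200, 10)
```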
In one example, the owner of camera C1 reports an incident, for example, tampering of his mailbox, on the night of Feb. 2, 2019. Objects captured by the camera C1 are reviewed for a defined time range. For example, two objects, object ID 215142 and object ID 215143, are identified. Based on the analysis of stored objects, an object tracking table for each of the identified objects is created. For example, referring to
Now, referring to
Now, referring to
Now, a geospatial and temporal analysis of the data stored in the object tracking table 860 is performed. In one example, based on the address of the camera location, a corresponding geo location is retrieved. In one example, a request may be sent to a map server with the address, and the corresponding geo location of the camera is received. In one example, the geo location may be the latitude and longitude of the location of the camera. In one example, the distance between the cameras of interest may be calculated. In one example, a map server may be configured to provide a distance between various addresses. In some examples, the map server may be configured to provide the distance based on paths or roads that correspond to the addresses of the camera locations. Next, based on the permitted speed limit in a neighborhood, an estimate of the time to travel from one camera location to another camera location is calculated. Based on the time to travel from one camera location to another camera location, a possible route for the movement of the object is determined.
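A minimal sketch of the geo location step follows, assuming the map server returns latitude and longitude; the great-circle (haversine) distance is used here as a straight-line stand-in for the road distance the text says a map server may provide.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two camera geo locations;
    a straight-line approximation of the road distance between them."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def estimated_travel_seconds(distance_m, speed_limit_mps):
    """Estimate of the time to travel between two camera locations at the
    neighborhood's permitted speed limit."""
    return distance_m / speed_limit_mps
```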
Now, referring to
For example, there is a difference of one minute in timestamp between camera C1 and camera C2. Based on the distance of L1 and a time difference T1 of one minute, a likely speed of travel of the object ID 215143 is calculated. The likely speed of travel S1 may be calculated by dividing the distance L1 by the time difference T1. The calculated speed S1 is then compared with the permitted speed limit P1 for the neighborhood to see if the calculated speed S1 is within a threshold value Q1 of the permitted speed limit P1. In one example, the threshold value Q1 may be set to be within 10% of the permitted speed limit P1. In this example, the calculated speed S1 is within the threshold value Q1 of the speed limit P1. This conclusion validates that the object ID 215143 moved from the location of camera C1 to the location of camera C2.
Similarly, there is a difference of two minutes in timestamp between camera C2 and camera C5. Based on the distance of L2 and a time difference T2 of two minutes, a likely speed of travel of the object ID 215143 is calculated. The likely speed of travel S2 may be calculated by dividing the distance L2 by the time difference T2. The calculated speed S2 is then compared with the permitted speed limit P1 for the neighborhood to see if the calculated speed S2 is within a threshold value Q1 of the permitted speed limit P1. In one example, the threshold value Q1 may be set to be within 10% of the permitted speed limit P1. In this example, the calculated speed S2 is within the threshold value Q1 of the speed limit P1. This conclusion validates that the object ID 215143 moved from the location of camera C2 to the location of camera C5.
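The validation described for the C1 to C2 and C2 to C5 legs may be sketched as follows; reading “within 10% of the permitted speed limit” as an absolute-difference test is one plausible interpretation, since the disclosure does not pin down the exact comparison.

```python
def movement_validated(distance_m, time_s, speed_limit_mps, threshold=0.10):
    """Validate that an object plausibly moved between two camera locations:
    the computed speed (distance divided by time) must lie within the
    threshold fraction of the permitted speed limit."""
    computed_speed = distance_m / time_s  # speed = distance / time
    return abs(computed_speed - speed_limit_mps) <= threshold * speed_limit_mps

# Example: for the C1 to C2 leg, a distance L1 covered in a one-minute gap
# would be checked as movement_validated(L1, 60.0, P1).
```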
Referring back to
As one skilled in the art appreciates, in some examples, more than one likely path may be predicted based on the analysis of the object tracking table. In such a scenario, a plurality of likely paths may be identified for the object of interest.
In one example, one or more geo fences may be selectively defined within the neighborhood 830, for example, a first geo fence 831 and a second geo fence 833. In one example, one or more cameras may be selectively designated around an edge of the geo fence and referred to as edge cameras. The edge cameras may be configured to detect whether an object entered the geo fence or exited the geo fence. For example, cameras C1 832, C5 838, C6 844, C7 846 and C8 848 may be selectively designated as edge cameras for the first geo fence 831. Similarly, cameras C1 832, C7 846 and C9 849 may be selectively designated as edge cameras for the second geo fence 833.
As one skilled in the art appreciates, a table similar to table 885 may be used to store one or more attributes of the second geo fence 833. For example, the geo fence camera list for geo fence 833 may include cameras C1 832, C7 846, and C9 849.
In one example of a use case, if an object is detected by any one of the edge cameras and it is determined that the object entered the geo fence, for example, geo fence 831 or geo fence 833, one or more of the policies are checked by the security appliance for violation. If a policy is set to send an alert when an unknown object enters the geo fence, an alert is sent to a designated list of recipients. As one skilled in the art appreciates, using one or more cameras in the geo fence, a direction of motion of the object within the geo fence may be advantageously determined as previously described, to establish whether the object entered the geo fence or exited the geo fence. A similar analysis may be performed for an object exiting the geo fence and, if any of the policies require notification of the event, corresponding action may be taken.
In one example, the direction of motion of an object may be advantageously determined by analyzing a plurality of frames of the image of the object. Typically, a security device can be configured to record a video at a certain number of frames per second. The frame rate can vary from 1 frame per second (fps) to 30 fps. The higher the fps, the smoother the video appears to human eyes. In many applications, a security device is oriented in such a way as to observe movement of objects (for example, vehicles, people, etc.) in the field of view of a camera of the security device. For example, the camera may be positioned on a building with a field of view of a street. In some examples, the camera may be positioned above an entrance door to a building, with a field of view of visitors entering the building.
In many applications, it becomes very important to know the direction of movement of an object of interest. By knowing the exact location of the building, the orientation of the camera at that location, and the direction of movement of the object of interest, it is possible to map the movement on a geographic map, be it an indoor map or an outdoor map.
With a camera set to record at a relatively high frame rate, for example, greater than 15 fps, there would typically be multiple frames of an object of interest, from the time the object first came into the field of view of the camera to the time the object exited the field of view of the camera. By analyzing the same object in multiple frames over time against the backdrop of fixed objects in the field of view, the direction of movement of the object of interest can be inferred. Now, referring to
In one example, referring to
Furthermore, if the frame rate is 30 fps, the elapsed time between two frames 8002A and 8002B (assuming they are consecutive) is 1/30 seconds. The width of the frame can be calculated using geometric principles. From such a calculation, the position of the vehicle 8008 in frame 8002A and the position of the vehicle in frame 8002B can be calculated, and hence the distance travelled. Given the distance travelled and the time to travel (in this case, 1/30 seconds), the speed of movement of the vehicle 8008 can be advantageously calculated.
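A minimal sketch of this calculation follows, assuming the real-world width spanned by the frame at the vehicle's distance (frame_width_m) has already been derived from the geometric principles mentioned above:

```python
def speed_from_frames(x1_px, x2_px, frame_width_px, frame_width_m, fps=30):
    """Speed of an object that moved from pixel column x1_px to x2_px
    between two consecutive frames. frame_width_m, the real-world width
    the frame spans at the object's distance, is assumed to come from a
    separate geometric calculation."""
    meters_per_pixel = frame_width_m / frame_width_px
    distance_m = abs(x2_px - x1_px) * meters_per_pixel
    elapsed_s = 1.0 / fps  # e.g., 1/30 seconds between consecutive frames at 30 fps
    return distance_m / elapsed_s
```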
In another example, referring to
Furthermore, principles of physics and geometry can be applied to derive the speed of movement, given that the change in perspective of the object (for example, the object appearing bigger in relation to the fixed background) can be used to determine distance, and the frame rate of the video gives a measure of time.
In some examples, if the object of interest is captured by one or more security devices while moving across the field of view as described with reference to
In some examples, the frames of the captured video may be analyzed by the security appliance 114 as previously described. For example, the processor engine 202 may be configured to perform the analysis of various frames. In some examples, the object detection engine 308 of the processor engine 202 may be configured to perform the analysis of various frames. For example, a legacy security device 400 may not have the capability to process the image and perform the analysis of various frames and may send the image for processing by the security appliance.
In some examples, a smart security device 420 may perform the analysis of various frames. For example, object engine 424 may be advantageously configured to perform the analysis of various frames. In such an example, the smart security device 420 may be selectively configured to send the conclusion of the analysis to the security appliance 114, instead of sending frames of images of the object for analysis.
In some examples, the security device may be configured to perform the analysis of various frames. In some examples, the security appliance may be configured to perform the analysis of various frames.
Now, referring back to
In one example, a list of objects, for example, vehicles belonging to the neighborhood 830 may be advantageously created by tracking and analyzing various vehicles periodically entering or exiting the first geo fence 831 or second geo fence 833. This list may be presented to an administrator or a user of the security appliance to validate the classification of the objects, for example, as a known, unknown, resident, visitor and the like.
In one example, a list of objects present within the geo fence and their corresponding entry or exit time stamps may be created. For example, a list of objects present within a defined time frame may be advantageously retrieved. This list of objects present within a predefined time frame may be advantageously used to perform incident analysis for any reported incidents. As one skilled in the art appreciates, various meta data related to the entry of an object into a geo fence or the exit of an object from a geo fence may be selectively stored and retrieved for further analysis.
In one example, a list of objects belonging to the neighborhood along with their signatures (as previously described) may be advantageously stored in the security appliance. When one or more security devices detect an object entering or leaving the neighborhood, a corresponding object signature is generated. The generated object signature may be advantageously compared with the object signatures of the list of objects belonging to the neighborhood. Based on the comparison of the object signatures, a determination can be made whether the object corresponding to the generated object signature is a known object in the neighborhood. In one example, if the object is not a known object in the neighborhood, an alert may be sent to one or more users of the security appliance.
In some examples, various data collected from multiple geo fences may be combined to form a collective data for the defined neighborhood, for storage and further analysis.
Now, referring to
In block S904, an image of an object is received by the security appliance. The image is captured by a security device located in a first location. For example, an image captured by security device A is received by the security appliance, as described with reference to
In block S906, an image of another object is received by the security appliance. The image is captured by another security device located in a second location. For example, an image captured by security device B is received by the security appliance, as described with reference to
In block S908, the image of the object is processed by the security appliance to generate a first plurality of attributes for the object. For example, meta data of the object is created by analyzing the object. For example, the meta data of the object may be created as described with reference to
In block S910, the image of the another object is processed by the security appliance to generate a second plurality of attributes for the another object. For example, meta data of the another object is created by analyzing the another object. For example, the meta data of the another object may be created as described with reference to
In block S912, the first plurality of attributes for the object and the second plurality of attributes for the another object are compared and, based on the comparison, the object and the another object are determined to be the same. By comparing one or more attributes of the object and the another object in the corresponding object table, the object and the another object may be determined to be the same. For example, one or more object characteristics of the object may be compared to determine whether the object and the another object are the same.
The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing various functions of the security appliance. Various functions of the security appliance as described herein can be at least one of a hardware device, or a combination of a hardware device and a software module. In some examples, one or more functions described with reference to the security appliance may be performed in the security device. For example, in some examples, analysis of the objects may be performed in the security device, for example, a smart security device. Based on the analysis of the objects, the object attribute table may be generated by the security device. In some examples, the generated object attribute table may be selectively accessible to the security appliance.
The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof, e.g. one processor and two FPGAs. The device may also include means which could be e.g. hardware means like e.g. an ASIC, or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means are at least one hardware means, and at least one software means. The method embodiments described herein could be implemented in pure hardware or partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims as described herein.
This application is a continuation-in-part application of and claims priority to U.S. patent application Ser. No. 16/281,083 filed on Feb. 20, 2019 and entitled “SYSTEM AND METHOD FOR IMAGE ANALYSIS BASED SECURITY SYSTEM”. The disclosure of U.S. patent application Ser. No. 16/281,083 is incorporated herein by reference in its entirety, as if set out in full.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 16281083 | Feb 2019 | US |
| Child | 17329139 | | US |