Method and system for modeling image of interest to users

Information

  • Patent Grant
  • Patent Number
    10,733,231
  • Date Filed
    Tuesday, March 22, 2016
  • Date Issued
    Tuesday, August 4, 2020
Abstract
A system and method for modeling and distributing image data of interest to users is disclosed. Users on user devices such as mobile phones send request messages for image data captured by surveillance cameras of the system. The request messages include information for selecting the image data, such as camera number and time of recording of the image data, in examples. In response, an application server of the system collects the image data from the surveillance cameras, and supplies image data to the users based on a model that the application server creates and updates for each of the users. The model ranks image data of potential interest for each of the users, where the model is based on the information for selecting the image data provided by the users. Preferably, a machine learning application of the application server creates the model for each of the users.
Description
RELATED APPLICATIONS

This application is related to:


U.S. application Ser. No. 15/076,701 filed on Mar. 22, 2016, entitled “Method and system for surveillance camera arbitration of uplink consumption,” now U.S. Patent Publication No.: 2017/0278368 A1;


U.S. application Ser. No. 15/076,703 filed on Mar. 22, 2016, entitled “Method and system for pooled local storage by surveillance cameras,” now U.S. Patent Publication No.: 2017/0280102 A1;


U.S. application Ser. No. 15/076,704 filed on Mar. 22, 2016, entitled “System and method for designating surveillance camera regions of interest,” now U.S. Patent Publication No.: 2017/0277967 A1;


U.S. application Ser. No. 15/076,705 filed on Mar. 22, 2016, entitled “System and method for deadzone detection in surveillance camera network,” now U.S. Patent Publication No.: 2017/0278366 A1;


U.S. application Ser. No. 15/076,706 filed on Mar. 22, 2016, entitled “System and method for overlap detection in surveillance camera network,” now U.S. Patent Publication No.: 2017/0278367 A1;


U.S. application Ser. No. 15/076,708 filed on Mar. 22, 2016, entitled “System and method for retail customer tracking in surveillance camera network,” now U.S. Patent Publication No.: 2017/0278137 A1;


U.S. application Ser. No. 15/076,710 filed on Mar. 22, 2016, entitled “System and method for using mobile device of zone and correlated motion detection,” now U.S. Patent Publication No.: 2017/0280103 A1;


U.S. application Ser. No. 15/076,712 filed on Mar. 22, 2016, entitled “Method and system for conveying data from monitored scene via surveillance cameras,” now U.S. Patent Publication No.: 2017/0277947 A1;


U.S. application Ser. No. 15/076,713 filed on Mar. 22, 2016, entitled “System and method for configuring surveillance cameras using mobile computing devices,” now U.S. Patent Publication No.: 2017/0278365 A1;


and


U.S. application Ser. No. 15/076,717 filed on Mar. 22, 2016, entitled “System and method for controlling surveillance cameras,” now U.S. Patent Publication No.: 2017/0280043 A1.


All of the aforementioned applications are incorporated herein by this reference in their entirety.


BACKGROUND OF THE INVENTION

Traditionally, surveillance camera systems were often proprietary/closed-standards systems. Surveillance cameras of the systems captured image data of scenes within or around a premises, and the image data was compiled into a matrix view for real-time viewing, possibly at a security guard station or on a back office video monitor. The video storage system for storing the image data was typically a video cassette recorder (VCR) located in a network room or back office within the premises being monitored.


More recently, the surveillance camera systems have begun using open standards, which has enabled users to more easily access the image data of the surveillance cameras. The surveillance cameras and other components of the systems typically communicate over a data network, such as a local area network. On user devices such as laptops, computer workstations and mobile phones, users can access and select image data from specific surveillance cameras for real-time viewing on, and downloading to, the user devices. In addition, the users on the user devices can also access previously recorded image data stored on devices such as network video recorders on the network.


The surveillance camera systems can also include video analytics systems that analyze image data from the surveillance cameras. Often, the analytics systems will track moving objects against fixed background models. More sophisticated functions include object detection to determine the presence of an object or a type of the object. The analytics systems generate video primitives or metadata for the detected objects and determined events, which the analytics systems can further process or send over the data networks to other systems for storage and incorporation into the image data as metadata, for example.


SUMMARY OF THE INVENTION

It would be beneficial to anticipate image data of potential interest for users based on information from the users or by learning, for example, from their requests for image data. For this purpose, a surveillance camera system might receive requests for image data from users on user devices. The requests include information for selecting the image data, such as the camera number and time of day that the image data were recorded, in examples. An application server of the system receives the requests, and utilizes a machine learning application to build models for each of the users based on the information for selecting the image data in the requests.
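
By way of a non-limiting illustration, the following sketch shows one way such a request message might be structured; the Python field names are assumptions for illustration only and are not recited in this disclosure.

```python
# Minimal sketch of a request message carrying "information for selecting
# image data" (field names are illustrative assumptions, not a wire format).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ImageDataRequest:
    user_id: str                               # identifies the requesting user
    camera_number: str                         # e.g. "camera1"
    time_of_recording: Optional[str] = None    # e.g. "2016-03-22T14:00/18:00"
    camera_angle: Optional[float] = None       # degrees, if supplied
    zoom_setting: Optional[float] = None       # e.g. 0.5 for 50%
    voting: dict = field(default_factory=dict) # image-data id -> +1 / -1

# Example: a request for camera1 footage recorded between 2 pm and 6 pm
request = ImageDataRequest(user_id="60-2", camera_number="camera1",
                           time_of_recording="2016-03-22T14:00/18:00")
```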


These models could be used in a number of ways. The models could rank image data of potential interest for each of the users. An application server of the system applies the model for each user to the image data, and model-suggested image data for each user is created in response. The system then sends the requested image data along with the model-suggested image data to the users. In this way, subsequent selection of image data by the users, including selection of image data of the model-suggested image data sent to the users, can be used to update the model for each of the users.
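
As one non-limiting illustration of this ranking step, assuming the model exposes a per-item scoring function, the model-suggested image data could be obtained by scoring and sorting candidate items; the names below are assumptions for illustration.

```python
# Sketch: apply a per-user model to candidate image data and keep the
# top-ranked items as the model-suggested image data (names are assumptions).
from typing import Callable, Sequence

def suggest_image_data(model: Callable[[dict], float],
                       candidates: Sequence[dict],
                       limit: int = 4) -> list:
    """Rank candidate image data by the user's model and return the top items."""
    ranked = sorted(candidates, key=model, reverse=True)
    return list(ranked[:limit])

# Example with a trivial stand-in model that prefers footage from "camera2"
frames = [{"id": "250-1", "camera": "camera1"}, {"id": "250-3", "camera": "camera2"}]
print(suggest_image_data(lambda f: 1.0 if f["camera"] == "camera2" else 0.0, frames))
```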


It would also be beneficial to anticipate image data of potential interest for users based on information derived from the image data selected by the users. For this purpose, the system includes an analytics system that generates video primitives from the image data. The application server can then update the models for each of the users based on the video primitives, in one example.


When the users receive the model-suggested image data set at the user devices, yet another benefit that can be achieved is for the users to directly “vote” on which of the image data in the model-suggested image data are important (or not). For this purpose, the application server might additionally supply voting buttons associated with the model-suggested image data to the users on the user devices. Separate sets of voting buttons are associated with each specific frame and/or stream of image data within the model-suggested image data.


In response to the users selecting the voting buttons (e.g. “thumbs up” or “thumbs down”) associated with specific image data of the model-suggested image data, the voting buttons generate associated voting information. This voting information for the image data, along with information that identifies the image data, is included in the request messages that the users send to the application server; both are examples of information for selecting image data provided by the users. This provides a feedback mechanism to the machine learning capability of the application server, which can update the model for each of the users in response to the voting information provided by the users for selected image data at the user devices.
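
A non-limiting sketch of this feedback path follows, with the client-side vote capture and the server-side conversion of votes into labeled examples; the function and field names are assumptions and do not reflect a required interface.

```python
# Sketch of the voting feedback path: a "thumbs up"/"thumbs down" selection
# becomes voting information in the next request message, which the server
# can turn into labeled examples for the machine learning application.
# (Function and field names are assumptions, not the patented interfaces.)

def record_vote(request_voting: dict, image_data_id: str, thumbs_up: bool) -> None:
    """Called by the playback application when a voting button is selected."""
    request_voting[image_data_id] = 1 if thumbs_up else -1

def votes_to_training_examples(voting: dict, image_index: dict) -> list:
    """Server side: pair each vote with the features of the voted image data."""
    return [(image_index[image_id], label) for image_id, label in voting.items()
            if image_id in image_index]

# Example usage
voting = {}
record_vote(voting, "250-1", thumbs_up=True)    # user pressed a "thumbs up" button
record_vote(voting, "250-4", thumbs_up=False)   # user pressed a "thumbs down" button
index = {"250-1": {"camera": "camera1"}, "250-4": {"camera": "camera2", "zoom": 0.4}}
print(votes_to_training_examples(voting, index))
```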


The system also maintains video history data for each of the users. The video history data can include prior requests for image data and video primitives generated by the analytics system for the previously requested image data. The system can then update the models for each of the users based on the video history data.


In general, according to one aspect, the invention features a method for distributing image data to users. The method comprises collecting image data from surveillance cameras and supplying the image data to user devices of the users, based on models for each of the users that rank image data of potential interest for each of the users.


In examples, the models are created for each of the users based on information for selecting the image data provided by the users. The information for selecting the image data provided by each of the users might include a surveillance camera number from which the image data was generated, a camera angle, a zoom setting, a time of recording, and/or voting information and/or data from other sensors.


The models for each of the users can be updated based on voting information provided by the users and/or based on video history data maintained for each of the users. The history data could include image data selected for or provided by the users, and video primitives generated from image data selected by the users.


Supplying the image data to user devices of the users might comprise creating or tagging model-suggested image data for each of the users by applying the models for each of the users to the image data, and sending the model-suggested image data to the users. A display graphic could be built for presentation on the user devices that includes the model-suggested image data for each of the users and then sent.


Users are able to rank the image data supplied to each of the users by using voting buttons in association with the image data supplied to each of the users, in one case.


In general, according to another aspect, the invention features a method for ranking image data for distribution to users. This method comprises collecting image data from surveillance cameras and analyzing image data accessed by the users at user devices and building a model that ranks image data of potential interest for each of the users.


In general, according to still another aspect, the invention features a user based image data distribution system. The system comprises an application server that collects image data from one or more surveillance cameras and supplies image data to user devices of the users, based on models for each of the users that rank image data of potential interest for each of the users. The image data supplied to the user devices of the users preferably includes model-suggested image data which is based on a ranking of the image data of potential interest for each of the users.


The application server can additionally supply voting buttons associated with the model-suggested image data supplied to the user devices of the users, where selection of the voting buttons associated with the model-suggested image data enables the users to rank the image data supplied to each of the users. Then, in response to selection of the voting buttons associated with the model-suggested image data, the user devices of the users generate voting information for image data of the model-suggested image data and include the voting information in messages sent to the application server to rank the image data supplied to each of the users.


Applications running on the user devices of the users preferably receive the model-suggested image data and associated voting buttons supplied by the application server. The applications also send request messages to the application server, where the request messages include the information for selecting the image data provided by each of the users.


The above and other features of the invention including various novel details of construction and combinations of parts, and other advantages, will now be more particularly described with reference to the accompanying drawings and pointed out in the claims. It will be understood that the particular method and device embodying the invention are shown by way of illustration and not as a limitation of the invention. The principles and features of this invention may be employed in various and numerous embodiments without departing from the scope of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS

In the accompanying drawings, reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale; emphasis has instead been placed upon illustrating the principles of the invention. Of the drawings:



FIG. 1 is a schematic diagram of an image data distribution system for an exemplary premises including one or more surveillance cameras, where the system collects image data from the one or more surveillance cameras, and supplies image data for each of the users based on models that rank image data of potential interest for each of the users;



FIG. 2 is a schematic diagram of an embodiment of a surveillance camera that supports image data distribution to users in accordance with principles of the present invention;



FIG. 3 is a flow diagram showing a method for employing models for ranking image data and supplying the image data to each of the users; and



FIG. 4 is a schematic diagram of a video playback application running on a user device that includes model-suggested image data supplied to the users by the system, where the system creates the model-suggested image data according to the method of FIG. 3 by applying the model for the user to image data from the surveillance cameras.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The invention now will be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.


As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Further, the singular forms including the articles “a”, “an” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms: includes, comprises, including and/or comprising, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Further, it will be understood that when an element, including component or subsystem, is referred to and/or shown as being connected or coupled to another element, it can be directly connected or coupled to the other element or intervening elements may be present.



FIG. 1 shows an exemplary image data distribution system 10 to which the present invention is directed. The system 10 includes surveillance cameras 103, sensor devices such as motion sensor 79 and contact switch sensor 78 that communicate over a local network 210. The system 10 also includes an analytics system 222, application server 220, and a local image data storage system 212, in this illustrated example.


A user 60-2 holds a user device 400 for communicating with the application server 220. Each user device 400 includes a display screen 410 and one or more applications 412, or “apps.” The apps 412 display information to the user 60-2 via the display screen 410 of the user device 400. The apps 412 execute upon the operating systems of the user devices 400. The apps 412 communicate with the application server 220 over a network cloud 50 via a wireless connection 264. Examples of user devices 400 include smartphones, tablet computing devices, and laptop computers running operating systems such as Windows, Android, Linux, or iOS.


In the illustrated example, surveillance cameras 103 such as camera1 103-1 and camera2 103-2 are installed within a premises 52. Field of view 105-1 of camera1 103-1 captures individuals 60-1 as they enter or exit through a doorway 66 of the premises. Field of view 105-2 of camera2 103-2 captures activity near a safe 64. Image data 250 of the scene captured by the surveillance cameras 103 are stored either locally within the cameras 103 or sent by the cameras 103 over the network 210 for storage within the local image data storage system 212. In one example, the local image data storage system 212 is a network video recorder (NVR).


Motion sensor 79 detects motion of individuals 60-1 through doorway 66. In response to detecting motion, sensor 79 sends sensor message 254-1 including sensor data 253-1 for the motion detection events over the network 210 to the application server 220. In a similar vein, contact switch sensor 78 detects opening of a door of safe 64, and includes sensor data 253-2 associated with safe door opening events in sensor messages 254-2 sent over the network 210 to the application server 220.


The analytics system 222 receives requests from the application server 220 to analyze image data 250. In response, the analytics system 222 generates video primitives 296 that describe objects and/or events of interest determined from analyzing the image data 250. In examples, the video primitives may be a text description of some or all of the objects and observable features within a set of image data 250. These video primitives 296 also may include descriptions of the objects, their locations, velocities, shape, colors, location of body parts, etc.
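
As a non-limiting illustration, a video primitive 296 might be represented as a small record whose fields mirror the descriptions above; the layout below is an assumption for illustration only.

```python
# Sketch of a video primitive 296 as a text-oriented record describing an
# object detected in a set of image data (fields are illustrative assumptions).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VideoPrimitive:
    image_data_id: str
    object_type: str           # e.g. "person", "door", "safe"
    location: Tuple[int, int]  # pixel coordinates of the object
    velocity: Tuple[float, float] = (0.0, 0.0)
    color: str = ""
    description: str = ""      # free-text summary produced by the analytics system

    def as_text(self) -> str:
        return (f"{self.object_type} at {self.location} moving {self.velocity}; "
                f"{self.description}")

p = VideoPrimitive("250-1", "person", (320, 240),
                   velocity=(1.5, 0.0), description="individual entering doorway 66")
print(p.as_text())
```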


The application server 220 includes a machine learning application 306 and creates entries 272 for each of the users in a user database 298. Each entry 272 includes a model 304, video history data 302, and a suggested image data list 300. The application server 220 also builds an intelligent display graphic 286 for presentation on the user devices 400 of the users 60-2. In one implementation, the entries 272 can additionally include sensor data 253.
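
A non-limiting sketch of how an entry 272 in the user database 298 might be laid out in memory follows; the schema is an assumption for illustration and is not prescribed by this disclosure.

```python
# Sketch of a user database entry 272 holding the per-user model 304, video
# history data 302, suggested image data list 300 and optional sensor data 253.
# (A minimal in-memory layout; no particular schema is required.)
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class UserEntry:
    user_id: str
    model: Any = None                                          # model 304
    video_history: List[dict] = field(default_factory=list)    # video history data 302
    suggested_image_data: List[str] = field(default_factory=list)  # list 300
    sensor_data: List[dict] = field(default_factory=list)      # sensor data 253

user_database: Dict[str, UserEntry] = {}    # user database 298 keyed by user id
user_database["60-2"] = UserEntry(user_id="60-2")
```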


In one implementation, the analytics system 222 and/or the application server 220 is/are integrated within one or more of the surveillance cameras 103. Moreover, the image data 250 can additionally be stored locally within the surveillance cameras 103.



FIG. 2 shows some of the components of an exemplary surveillance camera 103. In the example, the surveillance camera 103 stores its image data 250 locally and includes an integrated application server 220 and analytics system (here, a camera analytics system 176) as discussed herein above, for one embodiment.


The camera 103 includes a processing unit (CPU) 138, an imager 140, a camera image data storage system 174 and a network interface 142. An operating system 136 runs on top of the CPU 138. A number of processes or applications are executed by the operating system 136. The processes include a control process 162, an application server 220 and a camera analytics system 176.


The camera 103 saves image data 250 captured by the imager 140 to the camera image data storage system 174. Each camera 103 can support one or more streams of image data 250. The application server 220 receives and sends messages 264 via its network interface 142. The application server 220 also stores the user database 298 within the camera image data storage system 174.


The control process 162 sends the image data 250 to the integrated camera analytics system 176 for analysis in some cases. The camera analytics system 176 analyzes the image data 250 and generates video primitives 296 in response to the analysis. The video primitives can also be stored to the camera image data storage system 174.


In some cases, the cameras 103 may also or alternatively stream image data 250 to the user device 400 or to an external analytics system 312, and these systems then analyze the image data 250.



FIG. 3 shows a preferred method of the camera-integrated or separate application server 220 for distributing image data 250 to users 60-2 on user devices 400. Via a video playback application 412 on the user device 400, a user 60-2 requests image data 250 from one or more of the surveillance cameras 103. In response, the application server 220 employs models 304 for ranking the requested image data 250 and supplies the image data 250 to each of the users 60-2.


In step 502, the application server 220 waits for request messages from users 60-2 for selecting image data 250 generated by one or more surveillance cameras 103. According to step 504, the application server 220 receives a request message for image data 250 from a user 60-2 logged onto a video playback application 412 on a user device 400, where the request message possibly includes information for selecting image data 250 for display on the user device 400, and where the information includes surveillance camera number, camera angle, zoom setting, and/or time of recording, and/or any voting information, in examples.


In step 506, the application server 220 determines if this is the first request from the user 60-2 to view image data 250. If this is the first request from the user 60-2, an entry for the user 60-2 is created in the user database 298 in step 508. Otherwise, the method transitions to step 514 to access the user entry 272 for the current user 60-2 in the user database 298.


Upon conclusion of step 508, in step 510, the application server 220 sends the information for selecting the image data 250 to the machine learning application 306. Then, in step 512, the application server 220 receives a baseline model 304 from the machine learning application 306 and stores the baseline model 304 to the user entry 272 for the user 60-2 in the user database 298. The model 304 for each of the users 60-2 ranks image data 250 of potential interest for each of the users 60-2. Preferably, the machine learning application 306 creates the model 304 for each of the users or classes of users based on information for selecting the image data 250 provided by the users 60-2.
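
The following non-limiting sketch illustrates steps 506 through 514, in which an entry and a baseline model 304 are created for a first-time user; the create_baseline interface is an assumed stand-in for the machine learning application 306.

```python
# Sketch of steps 506-514: on a first request, create a user entry and obtain a
# baseline model from the machine learning application; otherwise reuse the
# stored entry. The create_baseline call is an assumed interface.

def get_or_create_entry(user_database: dict, user_id: str,
                        selection_info: dict, machine_learning) -> dict:
    entry = user_database.get(user_id)
    if entry is None:                                            # step 506
        entry = {"model": machine_learning.create_baseline(selection_info),  # 508-512
                 "video_history": [], "suggested_image_data": []}
        user_database[user_id] = entry
    return entry                                                 # step 514

class _StubML:
    def create_baseline(self, info):
        # Trivial baseline: prefer footage from the camera named in the request.
        return lambda frame: 1.0 if frame.get("camera") == info.get("camera_number") else 0.0

entry = get_or_create_entry({}, "60-2", {"camera_number": "camera1"}, _StubML())
```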


Upon conclusion of both steps 514 and 512, the entry 272 for the user 60-2 that issued the request for image data 250 is further processed in step 516. According to step 516, the application server 220 appends the information for selecting the image data including voting information to video history data 302 of the entry 272 for the user 60-2.


In step 518, the application server 220 collects image data 250 from video storage, in response to the information for selecting the image data provided by the user 60-2. In embodiments, the video storage is the local image data storage system 212 that communicates over the local network 210 or the camera image data storage system 174 of the individual surveillance cameras 103. According to step 520, the application server 220 processes event data from sensor devices 78, 79 and/or requests video primitives 296 from the analytics system 222 for the collected image data 250, where the video primitives 296 include information associated with objects, motion of objects, and/or activities of interest determined from the selected image data 250, in examples.


According to step 522, the application server 220 appends the event data and/or the video primitives 296 received from the analytics system 222 to video history data 302 of the user entry 272 for the current user 60-2. In step 524, the application server 220 passes the video history data 302 and optional voting information provided by the users 60-2 to the machine learning application 306 for updating the model 304. In this way, the application server 220, via its machine learning application 306, can update the models for each of the users based on voting information provided by the users and/or based on the video history data 302 maintained for each of the users 60-2. In one implementation, the video history data 302 can also include video primitives 296 generated from image data 250 selected by the users.
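
A non-limiting sketch of steps 516 and 522 through 528 follows, in which the selection information, video primitives 296 and any voting information are appended to the video history data 302 and an updated model 304 is requested; the update_model interface is an assumed stand-in for the machine learning application 306, not the patented algorithm.

```python
# Sketch of steps 516 and 522-528: append selection information, primitives and
# voting information to the video history data, then request and store an
# updated per-user model. The update_model call is an assumed interface.

def update_user_model(entry: dict, selection_info: dict,
                      video_primitives: list, machine_learning) -> None:
    entry.setdefault("video_history", []).append({     # steps 516 and 522
        "selection_info": selection_info,
        "voting": selection_info.get("voting", {}),
        "video_primitives": video_primitives,
    })
    # Steps 524-528: request and store the updated per-user model.
    entry["model"] = machine_learning.update_model(entry.get("model"),
                                                   entry["video_history"])
```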


In step 526, the application server 220 receives an updated model 304 from the machine learning application 306 that ranks image data 250 of potential interest for the user 60-2 based on the video history data 302 and/or the voting information. The application server 220 saves the updated model 304 to the entry 272 for the user 60-2 in step 528.


In step 530, the application server 220 creates model-suggested image data 300 (or updates existing model-suggested image data) by applying the model 304 to the image data 250, and saves the model-suggested image data 300 to the user entry 272 for the user. In step 532, the application server 220 saves the user entry 272 to the user database 298.


According to step 534, the application server 220 then provides the requested image data 250 to the video playback application 412 of the user 60-2 on user device 400. Then, in step 536, the application server 220 builds an intelligent display graphic 286 that includes the model-suggested image data 300 and associated voting buttons 420/422, and sends the intelligent display graphic 286 to the video playback application 412 on the user device 400.
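
As a non-limiting illustration of step 536, the intelligent display graphic 286 might be assembled as a grid of panes 289, each pairing model-suggested image data 300 with voting buttons 420/422 and an image data selector 424; the plain-data layout below is an assumption for illustration, with rendering left to the video playback application 412.

```python
# Sketch of step 536: assemble an intelligent display graphic as a grid of
# panes, each pairing a frame of model-suggested image data with "thumbs up"/
# "thumbs down" voting buttons and an image data selector. (Illustrative only.)

def build_display_graphic(suggested_image_data: list) -> dict:
    panes = []
    for i, frame_id in enumerate(suggested_image_data, start=1):
        panes.append({
            "pane": f"289-{i}",
            "image_data": frame_id,
            "thumbs_up_button": f"420-{i}",
            "thumbs_down_button": f"422-{i}",
            "selector": f"424-{i}",
        })
    return {"display_graphic": "286", "panes": panes}

print(build_display_graphic(["250-1", "250-2", "250-3", "250-4"]))
```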


Upon conclusion of step 536, the method transitions back to the beginning of step 504 to receive additional request messages from users, where the request messages include the information for selecting image data provided by the users.



FIG. 4 shows a video playback application 412 running on a user device 400.


The video playback application 412 enables a user 60-2 to select image data 250 for display on the user device 400 via request messages sent from the video playback application 412 to the application server 220. The request messages include information provided by the user 60-2 for selecting the image data 250. In response to the selection, the application server 220 returns the requested image data 250, and builds models 304 for the user 60-2 based on the information provided by the user 60-2 for selecting the image data 250 in the request messages.


Then, the video playback application 412 also receives model-suggested image data 300. The application server 220 creates the model-suggested image data 300 for each of the users by applying the models 304 for each of the users 60-2 to the image data 250, and then sends the model-suggested image data 300 to the users 60-2.


The video playback application 412 provides a model-suggested image data selector 428. Via the model-suggested image data selector 428, the user 60-2 can control whether the model-suggested image data 300 provided by the application server 220 is displayed by the video playback application 412. Because the model-suggested image data selector 428 is currently selected by the user, the intelligent display graphic 286 built by the application server 220 is displayed on the display screen 410 of the user device 400. The intelligent display graphic 286 was built in accordance with the method of FIG. 3.


The intelligent display graphic 286, in turn, includes image data 250 of the model-suggested image data 300 arranged in panes 289 of a grid. In the specific example illustrated in FIG. 4, the model-suggested image data 300 includes four exemplary frames of image data 250-1 through 250-4 and are included within respective panes 289-1 through 289-4. In one implementation, the intelligent display graphic 286 includes thumbnails to represent the frames of image data 250 within the model-suggested image data 300.


The intelligent display graphic 286 also includes time frame selectors 427 and buttons 414. The time frame selectors 427 enable the user 60-2 to select time frames for viewing items within the model-suggested image data 300. In the example, separate time frame selectors 427-1 through 427-4 enable selection of different pre-selected time ranges within a 24 hour period for a given date selection 426, where the date selection is in [mm-dd-yyyy <timezone>] format, in one example. The date selection 426 can additionally support a date range. In the example, the user 60-2 has selected a time range of between 2 pm and 6 pm via selector 427-3. In response, the video playback application 412 will include the time frame selection 427-3 and date selection 426 in the information for selecting the image data 250 (here, the model-suggested image data 300) included within the request message sent to the application server 220.
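
A non-limiting sketch of how the time frame selection 427-3 and date selection 426 might be folded into the request message follows; the date parsing omits the timezone component, and the field names are assumptions for illustration.

```python
# Sketch: turn a date selection (mm-dd-yyyy) plus a pre-selected time range
# (here, 2 pm to 6 pm) into timestamps included in the request message.
# (Field names and helper are illustrative assumptions.)
from datetime import datetime, timedelta

def selection_to_time_range(date_selection: str, start_hour: int, end_hour: int):
    """Combine a date selection with a pre-selected time range."""
    day = datetime.strptime(date_selection, "%m-%d-%Y")
    return day + timedelta(hours=start_hour), day + timedelta(hours=end_hour)

start, end = selection_to_time_range("03-22-2016", 14, 18)   # time frame selector
request_info = {"camera_number": "camera2",
                "time_of_recording": (start.isoformat(), end.isoformat())}
print(request_info)
```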


Also included within panes 289-1 through 289-4 are image data selectors 424 and voting buttons 420/422. Separate sets of the image data selectors 424 and the voting buttons 420/422 are associated with the image data 250-1 through 250-4 and are described with respect to the contents provided within each pane 289 of the grid of the intelligent display graphic 286.


Pane 289-1 of the display graphic 286 includes image data 250-1, a “thumbs up” voting button 420-1, a “thumbs down” voting button 422-1, and image data selector 424-1 that prompts the user to select image data 250-1 for display. The image data 250-1 presents a scene that includes an individual 60 and was captured from surveillance camera 103-1, indicated by camera number 166-1 “camera1.” Selection of either the “thumbs up” voting button 420-1 or “thumbs down” voting button 422-1 causes associated voting information for the image data 250-1 to be included in the request message sent to the application server 220. The application server 220 can then adjust the model 304 for the user 60-2 based on the information in the request message (e.g. the voting information associated with the image data 250-1).


Pane 289-2 of the display graphic 286 includes image data 250-2, a “thumbs up” voting button 420-2, a “thumbs down” voting button 422-2, and image data selector 424-2 that prompts the user to select image data 250-2 for display. The image data 250-2 presents a scene including an individual 60 with a zoom setting of 50%, indicated by reference 170-2. The image data 250-2 was captured from surveillance camera 103-1, indicated by camera number 166-2 “camera1.” Selection of either the “thumbs up” voting button 420-2 or “thumbs down” voting button 422-2 causes associated voting information for the image data 250-2 to be included in the request message sent to the application server 220.


In a similar vein, pane 289-3 of the display graphic 286 includes image data 250-3, a “thumbs up” voting button 420-3, a “thumbs down” voting button 422-3, and image data selector 424-3 that prompts the user to select image data 250-3 for display. The image data 250-3 presents a scene that includes safe 64. The image data 250-3 was captured from surveillance camera 103-2, indicated by camera number 166-3 “camera2.” Selection of either the “thumbs up” voting button 420-3 or “thumbs down” voting button 422-3 causes associated voting information for the image data 250-3 to be included in the request message sent to the application server 220.


Finally, pane 289-4 of the display graphic 286 includes image data 250-4, a “thumbs up” voting button 420-4, a “thumbs down” voting button 422-4, and image data selector 424-4 that prompts the user to select image data 250-4 for display. The image data 250-4 presents a scene that includes safe 64 with a zoom setting of 40%, indicated by reference 170-4. The image data 250-4 was captured from surveillance camera 103-2, indicated by camera number 166-4 “camera2.” Selection of either the “thumbs up” voting button 420-4 or “thumbs down” voting button 422-4 causes associated voting information for the image data 250-4 to be included in the request message sent to the application server 220.


The user 60-2 can then select the “OK” button 414-1 or the “CANCEL” button 414-2. In response to selection of the “OK” button 414-1, the selections made by the user 60-2 are included in a new request message sent to the application server 220. This is also indicated by step 536 in FIG. 3. In this way, the selections made by users 60-2 upon the image data 250 of the model-suggested image data 300 also update the model 304-1 for the user 60-2.


While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.

Claims
  • 1. A method for distributing image data to users, the method comprising: maintaining a database of models, each of the models corresponding to different users; in response to requests from users for video image data, creating models for the users if no models exist for the users; collecting image data from surveillance cameras; supplying the image data to user devices of the users, based on the models for each of the users in the database, the models ranking image data of potential interest for each of the users; and analyzing image data accessed by the users at user devices and updating the models for the users based on the accessed image data by the users, the models ranking image data of potential interest for each of the users and also updating the models for each of the users based on voting information provided by the users which is collected when each of the users selects voting buttons of display graphics displayed in association with the image data supplied to each of the users on display screens of the user devices and updating the models for each of the users based on video history data maintained for each of the users, wherein the video history data includes information for selecting image data provided by the users, and video primitives generated from image data selected by the users, wherein the video primitives comprise a text description of some or all of the objects and observable features within the image data.
  • 2. The method of claim 1, further comprising creating the models for each of the users based on information for selecting the image data provided by the users.
  • 3. The method of claim 2, wherein the information for selecting the image data provided by each of the users includes a surveillance camera number from which the image data was generated, a camera angle, a zoom setting, a time of recording, and voting information.
  • 4. The method of claim 1, wherein supplying the image data to user devices of the users comprises creating a model-suggested image data set for each of the users by applying the models for each of the users to the image data, and sending the model-suggested image data set to the users.
  • 5. The method of claim 1, wherein supplying the image data to user devices of the users comprises: creating a model-suggested image data set for each of the users by applying the models for each of the users to the image data; building a display graphic for presentation on the user devices that includes the model-suggested image data set for each of the users; and sending the display graphic to the user devices of the users, which display the graphic, wherein the graphics include time frame selectors that enable the users to select time frames for viewing items within the model-suggested image data.
  • 6. The method of claim 1, further comprising different image data selectors associated with the different image data displayed on the display screens of the user devices for selecting the corresponding image data for display.
  • 7. The method of claim 1, further comprising displaying model-suggested image data selectors enabling the users to control whether model-suggested image data is displayed on the display screens.
  • 8. The method of claim 1, further comprising displaying time of day selectors for enabling the users to control time frames for the image data that is displayed on the display screens.
  • 9. A user based image data distribution system, the system comprising: user devices operated by users for displaying image data; an application server that collects image data from one or more surveillance cameras and maintains a database of models, each of the models corresponding to different users, wherein in response to requests from users for video image data, the application server creates models for the users if no models exist for the users, and supplies image data to the user devices of the users, based on the models for each of the users, the models for ranking image data of potential interest for each of the users; and an analytics system that generates video primitives for image data selected by the users, wherein the video primitives comprise a text description of some or all of the objects and observable features within the image data, and the application server updates the models for each of the users based on the video primitives, and wherein the application server additionally supplies voting buttons associated with the model-suggested image data supplied to the user devices of the users, and wherein selection of the voting buttons associated with the model-suggested image data displayed on display screens enables the users to rank the image data supplied to each of the users; and wherein in response to selection of the voting buttons, the user devices of the users generate voting information for image data of the model-suggested image data and include the voting information in messages sent to the application server to rank the image data supplied to each of the users wherein the models for each of the users are based on information for selecting the image data provided by the users, the information for selecting the image data provided by each of the users including a surveillance camera number from which the image data was generated, a camera angle, a zoom setting, a time of recording, and voting information.
  • 10. The system of claim 9, wherein the application server includes a machine learning application that creates the models for each of the users.
  • 11. The system of claim 9, wherein the image data supplied to the user devices of the users includes model-suggested image data for ranking the image data of potential interest for each of the users.
  • 12. The system of claim 9, wherein the models for each of the users are based on information for selecting the image data provided by the users.
  • 13. The system of claim 12, wherein the information for selecting the image data provided by each of the users includes a surveillance camera number from which the image data was generated, a camera angle, a zoom setting, a time of recording, and voting information.
  • 14. The system of claim 12, wherein the information for selecting the image data provided by each of the users is included within request messages sent from applications running on the user devices of the users.
  • 15. The system of claim 9, wherein the application server builds a display graphic that includes the image data for each of the users, and sends the display graphic for presentation on the user devices of the users in order to supply the image data to the user devices of the users.
  • 16. A method for distributing image data to users, the method comprising: maintaining a database of models, each of the models corresponding to different users; in response to requests from users for video image data, creating models for the users if no models exist for the users; collecting image data from surveillance cameras; supplying the image data to user devices of the users, based on the models for each of the users in the database, the models ranking image data of potential interest for each of the users; analyzing image data accessed by the users at user devices and updating the models for the users based on the accessed image data by the users, the models ranking image data of potential interest for each of the users and also updating the models for each of the users based on voting information provided by the users which is collected when each of the users selects voting buttons of display graphics displayed in association with the image data supplied to each of the users on display screens of the user devices and updating the models for each of the users based on video history data maintained for each of the users, wherein the video history data includes information for selecting image data provided by the users, and video primitives generated from image data selected by the users, wherein the video primitives comprise a text description of some or all of the objects and observable features within the image data; and displaying on the user devices intelligent display graphics that include time frame selectors that enable the users to select time frames for viewing items within the model-suggested image data with separate time frame selectors to enable selection of different pre-selected time ranges within a 24 hour period for a given date selection.
US Referenced Citations (150)
Number Name Date Kind
3217098 Oswald Nov 1965 A
4940925 Wand et al. Jul 1990 A
5164827 Paff Nov 1992 A
5204536 Vardi Apr 1993 A
5317394 Hale et al. May 1994 A
5729471 Jain et al. Mar 1998 A
5850352 Moezzi et al. Dec 1998 A
5940538 Spiegel et al. Aug 1999 A
5951695 Kolovson Sep 1999 A
5969755 Courtney Oct 1999 A
6341183 Goldberg Jan 2002 B1
6359647 Sengupta et al. Mar 2002 B1
6581000 Hills et al. Jun 2003 B2
6643795 Sicola et al. Nov 2003 B1
6724421 Glatt Apr 2004 B1
6812835 Ito et al. Nov 2004 B2
6970083 Venetianer et al. Nov 2005 B2
7091949 Hansen Aug 2006 B2
7242423 Lin Jul 2007 B2
7286157 Buehler Oct 2007 B2
7342489 Milinusic et al. Mar 2008 B1
7382244 Donovan et al. Jun 2008 B1
7409076 Brown et al. Aug 2008 B2
7428002 Monroe Sep 2008 B2
7450735 Shah et al. Nov 2008 B1
7456596 Goodall et al. Nov 2008 B2
7460149 Donovan et al. Dec 2008 B1
7529388 Brown et al. May 2009 B2
7623152 Kaplinsky Nov 2009 B1
7623676 Zhao et al. Nov 2009 B2
7733375 Mahowald Jun 2010 B2
7996718 Ou et al. Aug 2011 B1
8249301 Brown et al. Aug 2012 B2
8300102 Nam et al. Oct 2012 B2
8325979 Taborowski et al. Dec 2012 B2
8482609 Mishra et al. Jul 2013 B1
8483490 Brown et al. Jul 2013 B2
8502868 Buehler et al. Aug 2013 B2
8528019 Dimitrova Sep 2013 B1
8558907 Goh et al. Oct 2013 B2
8594482 Fan et al. Nov 2013 B2
8675074 Salgar et al. Mar 2014 B2
8723952 Rozenboim May 2014 B1
8849764 Long et al. Sep 2014 B1
8995712 Huang et al. Mar 2015 B2
9015167 Ballou et al. Apr 2015 B1
9058520 Xie et al. Jun 2015 B2
9094615 Aman et al. Jul 2015 B2
9129179 Wong Sep 2015 B1
9158975 Lipton et al. Oct 2015 B2
9168882 Mirza et al. Oct 2015 B1
9197861 Saptharishi et al. Nov 2015 B2
9280833 Brown et al. Mar 2016 B2
9412269 Saptharishi et al. Aug 2016 B2
9495614 Boman et al. Nov 2016 B1
9594963 Bobbitt et al. Mar 2017 B2
9641763 Bernal et al. May 2017 B2
9674458 Teich et al. Jun 2017 B2
9785898 Hofman et al. Oct 2017 B2
9860554 Samuelsson et al. Jan 2018 B2
9967446 Park May 2018 B2
20020104098 Zustak et al. Aug 2002 A1
20030093580 Thomas May 2003 A1
20030093794 Thomas May 2003 A1
20030101104 Dimitrova May 2003 A1
20030107592 Li Jun 2003 A1
20030107649 Flickner et al. Jun 2003 A1
20030163816 Gutta Aug 2003 A1
20030169337 Wilson et al. Sep 2003 A1
20040025180 Begeja Feb 2004 A1
20050012817 Hampapur et al. Jan 2005 A1
20050057653 Maruya Mar 2005 A1
20060001742 Park Jan 2006 A1
20060165379 Agnihotri Jul 2006 A1
20060173856 Jackson et al. Aug 2006 A1
20060181612 Lee et al. Aug 2006 A1
20060239645 Curtner et al. Oct 2006 A1
20060243798 Kundu et al. Nov 2006 A1
20070178823 Aronstam et al. Aug 2007 A1
20070182818 Buehler Aug 2007 A1
20070245379 Agnihortri Oct 2007 A1
20070279494 Aman et al. Dec 2007 A1
20070294207 Brown et al. Dec 2007 A1
20080004036 Bhuta et al. Jan 2008 A1
20080101789 Sharma May 2008 A1
20080114477 Wu May 2008 A1
20080158336 Benson et al. Jul 2008 A1
20080180537 Weinberg et al. Jul 2008 A1
20090006368 Mei Jan 2009 A1
20090237508 Arpa et al. Sep 2009 A1
20090268033 Ukita Oct 2009 A1
20090273663 Yoshida Nov 2009 A1
20090284601 Eledath et al. Nov 2009 A1
20100013917 Hanna et al. Jan 2010 A1
20100038417 Blankitny Feb 2010 A1
20100110212 Kuwahara et al. May 2010 A1
20100153182 Quinn et al. Jun 2010 A1
20100232288 Coatney et al. Sep 2010 A1
20110043631 Marman et al. Feb 2011 A1
20110128384 Tiscareno et al. Jun 2011 A1
20110246626 Peterson et al. Oct 2011 A1
20110289119 Hu et al. Nov 2011 A1
20110289417 Schaefer et al. Nov 2011 A1
20110320861 Bayer et al. Dec 2011 A1
20120017057 Higuchi Jan 2012 A1
20120072420 Moganti et al. Mar 2012 A1
20120098969 Wengrovitz et al. Apr 2012 A1
20120206605 Buehler Aug 2012 A1
20120226526 Donovan et al. Sep 2012 A1
20130106977 Chu et al. May 2013 A1
20130115879 Wilson et al. May 2013 A1
20130166711 Wang et al. Jun 2013 A1
20130169801 Martin et al. Jul 2013 A1
20130223625 de Waal et al. Aug 2013 A1
20130278780 Cazier et al. Oct 2013 A1
20130343731 Pashkevich et al. Dec 2013 A1
20140085480 Saptharishi Mar 2014 A1
20140172627 Levy et al. Jun 2014 A1
20140211018 de Lima et al. Jul 2014 A1
20140218520 Teich et al. Aug 2014 A1
20140282991 Watanabe et al. Sep 2014 A1
20140330729 Colangelo Nov 2014 A1
20140362223 LaCroix et al. Dec 2014 A1
20140375982 Jovicic et al. Dec 2014 A1
20150039458 Reid Feb 2015 A1
20150092052 Shin et al. Apr 2015 A1
20150121470 Rongo et al. Apr 2015 A1
20150208040 Chen et al. Jul 2015 A1
20150215583 Chang Jul 2015 A1
20150244992 Buehler Aug 2015 A1
20150249496 Muijs et al. Sep 2015 A1
20150294119 Gundam et al. Oct 2015 A1
20150358576 Hirose et al. Dec 2015 A1
20150379729 Datta et al. Dec 2015 A1
20150381946 Renkis Dec 2015 A1
20160014381 Rolf et al. Jan 2016 A1
20160065615 Scanzano et al. Mar 2016 A1
20160224430 Long et al. Aug 2016 A1
20160225121 Gupta et al. Aug 2016 A1
20160269631 Jiang et al. Sep 2016 A1
20160357648 Keremane et al. Dec 2016 A1
20160379074 Nielsen et al. Dec 2016 A1
20170193673 Heidemann et al. Jul 2017 A1
20170278365 Madar et al. Sep 2017 A1
20170278367 Burke et al. Sep 2017 A1
20170278368 Burke Sep 2017 A1
20170280043 Burke et al. Sep 2017 A1
20170280102 Burke Sep 2017 A1
20170280103 Burke et al. Sep 2017 A1
20180076892 Brilman et al. Mar 2018 A1
Foreign Referenced Citations (7)
Number Date Country
2 164 003 Mar 2010 EP
2 538 672 Dec 2012 EP
2003151048 May 2003 JP
2010074382 Apr 2010 JP
2007030168 Mar 2007 WO
2013141742 Sep 2013 WO
2014114754 Jul 2014 WO
Non-Patent Literature Citations (11)
Entry
Lu Weilin & Gan Keng Hoon, “Personalization of Trending Tweets using Like-Dislike Category Model”, 60 Procedia Comp. Sci. 236-245 (Dec. 2015) (Year: 2015).
International Search Report and the Written Opinion of the International Searching Authority, dated May 31, 2017, from International Application No. PCT/US2017/023430, filed Mar. 21, 2017. Fourteen pages.
International Search Report and the Written Opinion of the International Searching Authority, dated Jun. 12, 2017, from International Application No. PCT/US2017/023440, filed on Mar. 21, 2017. Fourteen pages.
International Search Report and the Written Opinion of the International Searching Authority, dated Jun. 19, 2017, from International Application No. PCT/US2017/023436, filed on Mar. 21, 2017. Fourteen pages.
International Search Report and the Written Opinion of the International Searching Authority, dated Jun. 21, 2017, from International Application No. PCT/US2017/023444, filed on Mar. 21, 2017. Thirteen pages.
International Search Report and the Written Opinion of the International Searching Authority, dated Jun. 28, 2017, from International Application No. PCT/US2017/023434, filed on Mar. 21, 2017. Thirteen pages.
International Preliminary Report on Patentability, dated Oct. 4, 2018, from International Application No. PCT/US2017/023440, filed on Mar. 21, 2017. Eight pages.
International Preliminary Report on Patentability, dated Oct. 4, 2018, from International Application No. PCT/US2017/023434, filed on Mar. 21, 2017. Eight pages.
International Preliminary Report on Patentability, dated Oct. 4, 2018, from International Application No. PCT/US2017/023430, filed Mar. 21, 2017. Eight pages.
International Preliminary Report on Patentability, dated Oct. 4, 2018, from International Application No. PCT/US2017/023436, filed on Mar. 21, 2017. Eight pages.
International Preliminary Report on Patentability, dated Oct. 4, 2018, from International Application No. PCT/US2017/023444, filed on Mar. 21, 2017. Seven pages.
Related Publications (1)
Number Date Country
20170277785 A1 Sep 2017 US