The present invention relates to characterizing crowds of people, and more particularly to characterizing a crowd of people by at least one of population, dwell time and opportunity to see (OTS).
The purpose of advertising is to influence people by changing or reinforcing behavior. In order to produce maximum effect using minimum resources, promoters aim to tailor the message to the target audience and to target message delivery to the appropriate audience. Characterization of crowds of people within regions, e.g., buildings, can inform how advertising is targeted to those regions.
According to an aspect of the present invention, a method is provided for characterizing crowds. In one embodiment, the computer-implemented method for characterizing the crowd includes recording a video stream of individuals at a location having at least one reference point for viewing, and extracting the individuals from frames of the video streams. The method can further include assigning tracking identification values to the individuals that have been extracted from the video streams; and measuring at least one type classification from the individuals having the tracking identification values. In one embodiment, the method further generates a crowd designation further characterizing the individuals having the tracking identification values in the location. The crowd designation can include at least one measurement of probability that the individuals having the tracking identification values in the location view the at least one reference point for viewing.
According to another aspect of the present invention, a system is provided for characterizing a crowd. The system may include a hardware processor; and a memory that stores a computer program product. The computer program product, when executed by the hardware processor, causes the hardware processor to record a video stream of individuals at a location having at least one reference point for viewing; and extract the individuals from frames of the video streams. In some embodiments, the hardware processor also assigns tracking identification values to the individuals that have been extracted from the video streams; and measures at least one type classification from the individuals having the tracking identification values. The hardware processor can further generate a crowd designation further characterizing the individuals having the tracking identification values in the location. The crowd designation can include at least one measurement of probability that the individuals having the tracking identification values in the location view the at least one reference point for viewing.
According to yet another embodiment of the present invention, a computer program product for characterizing a crowd is described. The computer program product includes a computer readable storage medium having computer readable program code embodied therewith. The program instructions are executable by a processor to cause the processor to record a video stream of individuals at a location having at least one reference point for viewing. The program instructions can also include to extract, using the processor, the individuals from frames of the video streams; and assign, using the processor, tracking identification values to the individuals that have been extracted from the video streams. In some embodiments, the program instructions can also include to measure, using the processor, at least one type classification from the individuals having the tracking identification values; and to generate, using the processor, a crowd designation further characterizing the individuals having the tracking identification values in the location. The crowd designation includes at least one measurement of probability that the individuals having the tracking identification values in the location view the at least one reference point for viewing.
These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
The disclosure will provide details in the following description of preferred embodiments with reference to the following figures wherein:
In accordance with embodiments of the present invention, systems and methods are provided for a real-time analytic system for characterizing crowds, which can support multiple applications including crowd counting, dwell time and OTS (Opportunity to See). “Crowd counting” is defined as the number of persons in a location. “Dwell time” is defined as the time duration that a person stays in a location. OTS is used to measure the success of advertising by monitoring the behavior of viewers in real time, and needs additional information such as the position of the individuals being analyzed relative to locations having advertisements to be observed, e.g., the opportunity to see (OTS) may consider the angle of each person facing advertisements.
Referring now to
The methods, systems and computer program products measure not only the number of people within the monitored environment 100, e.g., a population density, but can also provide a characterization of how long individuals stay within the monitored environment, and provide a measurement of the opportunity that a person would see the advertisement, e.g., by looking at the point of reference. The methods, systems and computer program products can employ a series of cameras 105 to record individuals 102, 103, 104 in the monitored environment 100. The recorded videos are extracted into frames by the system for characterizing the crowds 200. In some embodiments, the system for characterizing crowds 200 can be in communication with the cameras 105 across a network 50. The network 50 may be any appropriate network, for example a local area network. In some examples, the network 50 may be a wireless network, such as a mesh network.
The characterization can include crowd density, e.g., how many individuals 102, 103, 104 are in the monitored environment 100. The characterization can also include the dwell time for the people within the monitored environment, e.g., how long the individuals 102, 103, 104 are present within the monitored environment 100. The characterization can also include a measurement of the opportunity to see (OTS) for the individuals 102, 103, 104. The characterization of the crowds may also include a measurement of the crowd type. This type of characterization may include data on the gender and age of the individuals 102, 103, 104. All of the aforementioned data is obtained from analyzing the video camera feeds and tracking the individuals 102, 103, 104.
For example, an observing individual 102 may be positioned within the monitored environment 100 having a posture placing their attention on the at least one point of reference 101, while two other non-observing individuals 103, 104 do not have a posture that would place their attention on the point of reference 101. The non-observing individuals 103, 104 may not be facing the point of reference, or they may be traveling in a direction that is not conducive to viewing the point of reference 101. The observing individual 102 may have a pose indicative of viewing the point of reference 101.
The ability to identify individuals from video frames, and to determine the position of the individuals, as well as the pose of the individuals relative to the point of reference 101 can be provided by computer vision methods.
The observing individual 102 and the non-observing individuals 103, 104 can all be measured and included in the population for the crowd. Although the example depicted in
The system for characterizing the crowds 200 can assign tracking identifications to the individuals in the monitored space. By tracking the individuals in the monitored environment, a dwell time can be provided for each of the individuals 102, 103, 104. Tracking can employ individual identification from the video frames, facial recognition, identification tagging of individuals matched to the facial images measured by the facial recognition, and time tracking. In the example depicted in
Taking into account the number of individuals in the crowd, the dwell time for the individuals, and the positioning and posing of the individuals, e.g., opportunity to see, the crowd characterizing system can designate which monitored spaces 100 have a point of reference 101 that is best for targeted advertising.
Further, in some embodiments, the crowd characterization system, using the cameras 105, can also measure at least one type classification from the individuals 102, 103, 104. The type classification can be by at least one of gender and age. The gender and age of the individuals 102, 103, 104 can be measured using the cameras 105 and computer vision.
Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding. Computer vision can be provided by digital systems that can process, analyze, and make sense of visual data, e.g., data from frames from the video of the individuals taken by the cameras 105. In some embodiments, machines attempt to retrieve visual information, handle it, and interpret results through software algorithms. In some embodiments, the software algorithms employ pattern recognition and can be configured to provide age estimates and genders for the individuals in the video. For example, referring to
Using the crowd characterization for the population of the crowd, the dwell time for the individuals in the crowd, and the opportunity to see (OTS) values, as well as the characterization types, e.g., gender and age, of the individuals in the crowd, the crowd characterization system 200 can provide at least one measurement of probability that the individuals in the location being monitored will view the at least one reference point 101. More specifically, using the likelihood that the individuals will view the reference point 101 of the location being monitored, and the type characterization, e.g., gender and/or age, of the individuals being tracked, the characterization system 200 can launch targeted advertising to the crowd. More specifically, the characterization system 200 can launch advertising having a subject that matches the age and gender of the individuals at the reference point of a location being monitored that has a high likelihood of being viewed by individuals. The characterization system 200 may include an interface for communicating over the network 50 to an application that displays advertising at the point of reference. The application, using the signaled measurement of probability that the individuals in the location being monitored will view the at least one reference point 101, and the signaled characterization of the type characteristics of the individuals in the crowd, e.g., age and gender, will transmit the appropriate advertising subjects to be displayed at the at least one reference point 101. This can be done in real time while the crowd is being recorded by the cameras 105.
Referring now to
The crowd characterization system 200 also includes an interface 211 for receiving a video input 210 from the cameras 105. The interface 211 provides communications from the cameras 105 to the crowd characterization system 200, which includes feeding the video feed to the identity extractor 300 of the crowd characterization system 200. The interface 211 of the crowd characterization system 200 also includes an output 212 for the measurement of probability that the individuals in the location being monitored will view the at least one reference point 101, and the signaled characterization of the type characteristics of the individuals in the crowd. The output 212 can be in communication with an application that, responsive to the measurement of probability that the individuals in the location being monitored will view the at least one reference point 101, and the signaled characterization of the type characteristics of the individuals in the crowd, transmits the appropriate advertising subjects to be displayed at the at least one reference point 101. The crowd characterization system 200 is a real-time analytic system that can support multiple applications including crowd counting, dwell time and OTS (Opportunity to See).
The crowd characterization system 200 may also include at least one hardware processor 209 and at least one memory device 208. As employed herein, the term “hardware processor subsystem” or “hardware processor” can refer to a processor, memory, software or combinations thereof that cooperate to perform one or more specific tasks. In useful embodiments, the hardware processor subsystem can include one or more data processing elements (e.g., logic circuits, processing circuits, instruction execution devices, etc.). The one or more data processing elements can be included in a central processing unit, a graphics processing unit, and/or a separate processor- or computing element-based controller (e.g., logic gates, etc.). The hardware processor subsystem can include one or more on-board memories (e.g., caches, dedicated memory arrays, read only memory, etc.). In some embodiments, the hardware processor subsystem can include one or more memories that can be on or off board or that can be dedicated for use by the hardware processor subsystem (e.g., ROM, RAM, basic input/output system (BIOS), etc.).
In some embodiments, the hardware processor subsystem can include and execute one or more software elements. The one or more software elements can include an operating system and/or one or more applications and/or specific code to achieve a specified result.
In other embodiments, the hardware processor subsystem can include dedicated, specialized circuitry that performs one or more electronic processing functions to achieve a specified result. Such circuitry can include one or more application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or programmable logic arrays (PLAs).
Still referring to
In one example, the system for characterizing a crowd 200 includes the hardware processor 209, and the memory device 208 that stores a computer program product, which, when executed by the hardware processor 209, causes the hardware processor to record a video stream of individuals 102, 103, 104 at a location 100 having at least one reference point 101 for viewing. In some embodiments, the memory 208 and hardware processor 209 in combination with the identity extractor 300 extract the individuals from frames of the video streams, and assign tracking identification values to the individuals that have been extracted from the video streams. In some embodiments, the feature extractor 400 with the hardware processor 209 and memory 208 measures at least one type classification from the individuals having the tracking identification values. In some embodiments, the crowd characterization designator generates a crowd designation further characterizing the individuals having the tracking identification values in the location, the crowd designation including at least one measurement of probability that the individuals having the tracking identification values in the location view the at least one reference point for viewing.
Referring to
As illustrated in
The person detector 311 can perform person detection on the video frames. In one example, the person detector 311 can perform person detection using a neural network-based machine learning system that recognizes the presence of a person-shaped object within a video frame and that provides a location within the video frame, for example as a bounding box. The person detector 311 can output the location of each person (individual) in a frame.
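For illustration, the following is a minimal sketch of per-frame person detection. It uses OpenCV's HOG-based pedestrian detector as a stand-in for the neural network-based detector described above; the function name is illustrative.

```python
import cv2

# Stand-in detector: OpenCV's HOG descriptor with its default
# pedestrian-detection SVM. A production system would use the
# neural-network detector described above.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    """Return one (x, y, w, h) bounding box per detected person."""
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return [tuple(box) for box in boxes]
```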
The output of the person detector 311, e.g., the locations of detected people within the frames, is provided to the tracker to assign identification 312. Person tracking tracks the occurrence of particular individuals across sequences of images. The tracker 312 tracks each person (individual) so that each person has a unique track identification (id). The track identification is an anonymous label. It is not the actual identity of the individual being tracked.
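One hedged sketch of the tracker's role follows, matching each frame's boxes to existing tracks by intersection-over-union (IoU). The class, threshold, and the choice to drop unmatched tracks are simplifying assumptions, not the claimed tracker.

```python
import itertools

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

class SimpleTracker:
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}              # track_id -> last seen box
        self._ids = itertools.count()

    def update(self, boxes):
        """Match this frame's boxes to tracks; return {track_id: box}."""
        assigned = {}
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in self.tracks.items():
                if tid not in assigned and iou(box, prev) > best_iou:
                    best_id, best_iou = tid, iou(box, prev)
            if best_id is None:
                # New anonymous track ID; not the person's real identity.
                best_id = next(self._ids)
            assigned[best_id] = box
        self.tracks = assigned  # sketch simplification: unmatched tracks drop
        return assigned
```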
The face detector 313 can perform facial recognition on the video frames. Facial detection may be performed using, e.g., a neural network-based machine learning system that recognizes the presence of a face within a video frame and that provides a location within the video frame, for example as a bounding box. Face recognition may include filtering a region of interest within a received video frame, discarding unwanted portions of the frame, and generating a transformed frame that includes only the region of interest (e.g., a region with a face in it). Face detection can furthermore be performed on the transformed frame either serially or in parallel. In some embodiments, for example when processing video frames that include multiple regions, the different regions of interest can be processed serially, or in parallel, to identify faces. The face detector 313 can provide the locations of all faces in a frame.
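As an illustrative stand-in for the neural network-based face detector, a minimal face-detection sketch using OpenCV's bundled Haar cascade:

```python
import cv2

# Stand-in face detector: OpenCV's pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return (x, y, w, h) bounding boxes for all faces found in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return [tuple(box) for box in cascade.detectMultiScale(gray, 1.1, 5)]
```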
The outputs of the face detector 313 and the person detector 311 provide the input to the connector 314, which assigns the persons having assigned identification to facial images. In some embodiments, the connector 314, using the locations of faces and persons, connects each person to that person's face. For example, if a face location, which is a bounding box, is within and on the top part of a person's location (bounding box), then the face is connected to the person.
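A minimal sketch of that containment rule follows; the top_fraction parameter is an illustrative assumption, since the source states only that the face box must fall within the top part of the person box.

```python
def face_belongs_to_person(face_box, person_box, top_fraction=0.5):
    """Connect a face to a person when the face bounding box lies inside
    the upper part of the person bounding box. Boxes are (x, y, w, h)."""
    fx, fy, fw, fh = face_box
    px, py, pw, ph = person_box
    inside = (fx >= px and fy >= py and
              fx + fw <= px + pw and fy + fh <= py + ph)
    in_top_part = (fy + fh) <= (py + top_fraction * ph)
    return inside and in_top_part
```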
Referring to
The age detector 420 can use deep learning to extract the age number from the images, e.g., face images, of the person provided by the input 410. The posture, geometry, pattern, and facial wrinkles are all elements that facilitate the prediction of the person's age. Using the above noted characteristics from imaging, the age of the individuals can be estimated using artificial intelligence. The artificial intelligence may employ deep-learning architectures, such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks applied to computer vision to provide an age estimate.
The gender detector 430 can use deep learning to extract the gender, e.g., male or female, from the images, e.g., facial images, of the person provided by the input 410. The posture, geometry, pattern, and facial features are all elements that facilitate the prediction of the person's gender. Using the above noted characteristics from imaging, the gender of the individuals can be estimated using artificial intelligence. The artificial intelligence may employ deep-learning architectures, such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks applied to computer vision to provide a gender characterization.
The position detector 440 can characterize the position of the individuals 102, 103, 104 relative to the point of reference 101. For example, the position detector 440 can employ computer vision techniques to extract from the images provided by the input 410 the angle at which the individuals face the cameras 105. For example, considering the positioning of a camera 105 mounted proximate to the point of reference 101, e.g., a camera 105 installed on top of the monitor at the point of reference 101 used to show advertisements, the angle at which an individual faces the camera is considered to be the angle at which the individual faces the advertisements, and therefore can indicate whether the person is watching the point of reference including the advertisements or not.
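In code, that test reduces to a threshold comparison on the face angle. A minimal sketch follows; the 30-degree value is an illustrative assumption, as the source states only that the angle is compared against an impression threshold.

```python
# Illustrative impression threshold in degrees (assumption; the source
# only says the face angle is compared against a threshold).
IMPRESSION_THRESHOLD_DEG = 30.0

def is_watching(face_angle_deg, threshold=IMPRESSION_THRESHOLD_DEG):
    """Treat a face angle below the threshold as facing the advertisement."""
    return abs(face_angle_deg) < threshold
```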
Still referring to
The tracking of the tagged individuals having the identification provided by the person detector can be provided by a crowd identification (ID) manager 520. The crowd ID manager 520 includes a database of historical individuals having a tagged identification (ID). The input 510 to the crowd characterizing designator 500 is from the output of the feature extractor 400, and includes tracked persons with type characterization features. The input 510 is provided to the crowd identification (ID) manager 520. The input 510 includes all persons in the current frames, and each person has a track ID (id), face and person locations, age and gender information, and face pose angle.
The crowd ID manager 520 includes storage for historical identifications (IDs) 521. The storage for historical identifications (IDs) 521 may be a table that includes all currently active persons, where each person has a start time, end time, and impression time. The start time is the time the person was first seen by the camera, the end time is the time the person was last seen by the camera, and the impression time is the number of seconds during which the person's face angle to the camera is less than a threshold, referred to as the impression threshold. For each person from the input 510 having a track ID, the crowd ID manager 520 checks whether the person's track id exists in the storage for historical identifications (IDs) 521, e.g., exists in the history table or not.
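One row of that history table can be modeled as follows; this is a minimal sketch, with names chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrackRecord:
    """One row of the history table for an active tracked person."""
    start_time: float       # time the person was first seen by the camera
    end_time: float         # time the person was last seen by the camera
    impression_time: float  # seconds with face angle under the impression threshold

# History table keyed by the anonymous track ID.
history = {}
```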
The crowd ID manager 520 also includes a crowd updater 522. The crowd updater 522 updates existing track IDs in the storage for historical identifications (IDs) 521, adds new track IDs to the storage for historical identifications (IDs) 521, and deletes track IDs from the storage for historical identifications (IDs) 521. The update, addition and deletion functions performed by the crowd updater 522 can be dependent upon the new data received by the crowd characterization system 200, originating from the video being taken by the cameras 105.
If the crowd ID manager 520 receives a person having an existing track ID in the storage for historical identifications (IDs) 521, the crowd updater 522 updates the last seen time (end time) of the person in the table of the storage 521 to the current time, and if the face angle is less than the impression threshold, the updater 522 can increase the person's impression time by an increment, e.g., 1 second. In some examples, for impression time, the updater 522 can check once every second, because the updater 522 adds 1 second with each increase of the impression time.
If the crowd ID manager 520 receives a person having a new track ID that does not exist in the storage for historical identifications (IDs) 521, the crowd updater 522 adds the new track ID to the storage 521, e.g., adds the new track ID to the table within the storage 521, and sets the start time and end time to the current time. In this instance, the impression time is initialized to 0.
The crowd ID manager may also include a function for removing track IDs that are no longer relevant. For example, the crowd updater 522 removes a track ID from the storage 521, e.g., removes it from the table within the storage 521, if its end time is not updated for a while, for example, 3 seconds.
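The three scenarios can be combined into a single update step. The sketch below builds on the TrackRecord and is_watching sketches above; the 3-second staleness window follows the example in the text, while the once-per-second call cadence is an assumption.

```python
import time

STALE_SECONDS = 3.0  # drop a track whose end time has not updated for this long

def update_history(history, frame_tracks, now=None):
    """Apply the crowd updater's three scenarios.

    `frame_tracks` maps track_id -> face angle (degrees) for the persons
    seen in the current frame; call roughly once per second."""
    now = time.time() if now is None else now
    for track_id, face_angle in frame_tracks.items():
        record = history.get(track_id)
        if record is None:
            # New track ID: start and end times set to now, impression time 0.
            history[track_id] = TrackRecord(start_time=now, end_time=now,
                                            impression_time=0.0)
        else:
            # Existing track ID: refresh last-seen time; accrue impression time.
            record.end_time = now
            if is_watching(face_angle):
                record.impression_time += 1.0
    # Stale track IDs: remove entries not seen for STALE_SECONDS.
    for track_id in [t for t, r in history.items()
                     if now - r.end_time > STALE_SECONDS]:
        del history[track_id]
```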
Using the three scenarios, the crowd ID manager 520 provides an updated history table, which is stored in the storage for historical identifications (IDs) 521. For all persons in the updated history table in the storage for historical identifications (IDs) 521, the crowd ID manager 520 verifies them against regions of interest, e.g., the different monitored locations 100. Each region of interest, e.g., within the monitored locations 100 including the point of interest 101, is a bounding box in the video frame, and it defines a part of the video frame that is of particular interest for use as a point of reference 101 for targeted advertising. The location of each person is used to determine whether this person belongs to a region of interest or not. The whole frame can function as the default region of interest, unless customers specify otherwise. The crowd ID manager employs the tracked IDs and timing information correlated to the different video cameras 105 (which can be specific to the monitored locations 100) to determine which regions the people being tracked are present in, and can determine the times at which the persons being tracked are within the regions being monitored.
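A minimal sketch of that membership test, assuming the person's location is summarized by the bottom-center of the person's bounding box (an illustrative choice; the source says only that the person's location is compared against the region):

```python
def in_region(person_box, region_box):
    """Return True when the person's bottom-center point falls inside the
    region-of-interest bounding box. Boxes are (x, y, w, h)."""
    px, py, pw, ph = person_box
    rx, ry, rw, rh = region_box
    cx, cy = px + pw / 2.0, py + ph  # bottom-center of the person box
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh
```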
For each region of interest, e.g., monitored location 100 (such as the point of reference 101), the crowd characterizing designator 500 outputs results for three applications using the tracking information, e.g., the tracking ID for each region of interest, and the associated type characteristics. The three applications include a crowd counter 530, a dwell time timer 540, and an opportunity to see calculator 550.
The crowd characterizing designator 500 uses a crowd counter 530 that totals the number of persons within each region of interest, e.g., monitored location 100 (such as the point of reference 101). The crowd counter 530 counts the number of persons using the number of track identifications (ids).
The crowd characterizing designator 500 also employs a dwell time timer 540. The dwell time timer 540 performs dwell time calculations. The dwell time timer 540 measures the time difference between the end time and the start time as the duration of a person, e.g., a person having a track identification (id), staying before the camera 105.
The crowd characterizing designator 500 also employs an opportunity to see (OTS) calculator 550. The OTS results include the dwell time and the impression time. The OTS calculator 550 also incorporates the demographic information, e.g., age, gender and position (e.g., the angle of the person facing the point of interest 101). This information is obtained from the type characteristics that have been tied to the track identification. The outputs from the crowd counter 530, dwell time timer 540 and the OTS calculator 550 can all be automatically launched from the crowd characterizing designator 500 to an application that matches this information to advertising content. The matched advertising content is displayed at the point of reference. This provides advertising targeted to the type characteristics, e.g., gender and age, of the individuals being tracked, at the appropriate monitored locations and times.
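Given the history table sketched above, the three outputs reduce to a few lines. A hedged sketch follows; the demographics mapping is an illustrative stand-in for the type characteristics tied to each track ID.

```python
def crowd_count(history):
    """Crowd counting: the number of distinct active track IDs."""
    return len(history)

def dwell_time(record):
    """Dwell time: end time minus start time for one tracked person."""
    return record.end_time - record.start_time

def ots_report(history, demographics):
    """OTS: pair dwell time and impression time with the type
    characteristics (e.g., age and gender) tied to each track ID."""
    return {track_id: {"dwell_time": dwell_time(record),
                       "impression_time": record.impression_time,
                       "demographics": demographics.get(track_id)}
            for track_id, record in history.items()}
```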
Depending on the user's needs for the crowd characterization system 200, the system supports different configurations for different applications. For example, if a customer only needs the crowd counting function provided by the crowd counter 530, the configuration of the system 200 can enable only the person detector 311, and disable the other detectors, including the tracker 312, face detector 313, age detector 420, gender detector 430, and position detector 440. In this manner, the hardware cost of the system can be reduced significantly. For dwell time, only the person detector 311 and tracker 312 need to be enabled, without any face-related detectors, such as the face detector 313, age detector 420, gender detector 430, and position detector 440. In some embodiments, the system 200 employs all detectors for calculating the opportunity to see (OTS), which would include the person detector 311, the tracker 312, the face detector 313, age detector 420, gender detector 430, and position detector 440.
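One way to express those configurations is a per-application table of enabled detectors; the following sketch uses illustrative key names.

```python
# Per-application detector configuration (illustrative names).
CONFIGS = {
    "crowd_counting": {"person_detector"},
    "dwell_time": {"person_detector", "tracker"},
    "ots": {"person_detector", "tracker", "face_detector",
            "age_detector", "gender_detector", "position_detector"},
}

def enabled_detectors(application):
    """Return the set of detectors to enable for the given application."""
    return CONFIGS[application]
```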
The processing system 700 includes a set of processing units (e.g., CPUs) 701, a set of GPUs 702, a set of memory devices 703, a set of communication devices 704, and a set of peripherals 705. The CPUs 701 can be single or multi-core CPUs. The GPUs 702 can be single or multi-core GPUs. The one or more memory devices 703 can include caches, RAMs, ROMs, and other memories (flash, optical, magnetic, etc.). The communication devices 704 can include wireless and/or wired communication devices (e.g., network (e.g., WIFI, etc.) adapters, etc.). The peripherals 705 can include a display device, a user input device, a printer, an imaging device, and so forth. Elements of processing system 700 are connected by one or more buses or networks (collectively denoted by the figure reference numeral 710). The crowd characterization system 200 may be in communication with the bus 710.
In an embodiment, memory devices 703 can store specially programmed software modules to transform the computer processing system into a special purpose computer configured to implement various aspects of the present invention. In an embodiment, special purpose hardware (e.g., Application Specific Integrated Circuits, Field Programmable Gate Arrays (FPGAs), and so forth) can be used to implement various aspects of the present invention.
Of course, the processing system 700 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices and/or output devices can be included in processing system 700, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used. Moreover, additional processors, controllers, memories, and so forth, in various configurations can also be utilized. These and other variations of the processing system 700 are readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein.
Moreover, it is to be appreciated that the various elements and steps relating to the present invention, as described with respect to the various figures, may be implemented, in whole or in part, by one or more of the elements of system 700.
Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements. In a preferred embodiment, the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. A computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. The medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) may be coupled to the system either directly or through intervening I/O controllers.
Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
Referring now to
At block 2, the method continues with extracting individuals from the frames of the video streams, and with extracting facial features from the frames of the video streams. Facial feature detection can be provided using computer vision and pattern recognition. In some embodiments, facial feature detection is provided by the face detector 313 of the identity extractor 300 that is described with reference to
Person detection can be applied to the frames of the video stream in real time, to detect people and faces. A track id is also added to each detected person and face to connect persons across different frames as noted in block 3 of
Referring to
The method may continue to block 5 of
The person's track id is checked to see if it exists in the history table. Referring to block 7, this can include checking the time of the video frame for the tracked ID and comparing the time of the new video frame with the existing timing measurements in the track IDs for each monitored location.
Referring to block 8, an entry in the history table can be removed if its end time is not updated for a set period of time, which designates that the individual being tracked is no longer within a location being monitored, for example, a time period of 3 seconds without being captured in a video frame.
At block 9, for each person having a currently tracked ID, the method can check whether the person's track ID already exists in the history table or not. If yes, e.g., the track ID exists in the stored history, the method can update the last seen time (end time) of the person in the table to the current time at block 10. In some embodiments, if the face angle is less than the impression threshold, the method can increase the impression time for the person being tracked by a set time period, e.g., 1 second. For impression time, in one example, the method may check once every second, since the time period for increasing the impression time is 1 second with each increase.
Referring back to block 9, if the track ID is a new one for the history table, e.g., the track ID does not match an existing track ID in the history table, the method may add the input track ID to the existing historical IDs that are stored in the history table at block 11. In this example, the method can set the start time and end time to the current time. In this example, the method may also initialize the impression time to 0.
At block 12 of
For each region of interest, the method can output results for three applications. In some embodiments, the crowd characterizing designator 500 includes three applications, e.g., a crowd counter 530, dwell time timer 540, and opportunity to see (OTS) calculator 550.
Block 13 shows the output for crowd counting. In some embodiments, the method can count the number of persons using the number of track IDs. In one example, crowd counting can be performed by a crowd counter 530. Further details regarding the crowd counter 530 are provided in the description of the crowd characterizing designator 500 in
Block 14 shows the output of a time calculation. The method may employ the time difference between the end time and the start time as the duration of a person 102, 103, 104 staying before a camera 105. The time calculation may be performed by a dwell time timer 540. Further details regarding the dwell time timer 540 are provided in the description of the crowd characterizing designator 500 in
Block 15 shows the output of an opportunity to see (OTS) calculation. The OTS results can include the dwell time and the impression time. The OTS calculation can also incorporate the demographic information, e.g., age, gender and position (e.g., the angle of the person facing the point of interest 101). This information is obtained from the type characteristics that have been tied to the track identification.
The outputs from the crowd counting, the time calculation (dwell time timer 540) and the OTS calculation 550 can all be automatically launched from the crowd characterizing designator 500 to an application that matches this information to advertising content. The matched advertising content is displayed at the point of reference. This provides advertising targeted to the type characteristics, e.g., gender and age, of the individuals being tracked at the appropriate monitored locations and times. The advertising application can play content at the at least one point of reference 101 for viewing that matches the type classification of the viewers 102, 103, 104 in the region being monitored 100. The advertising application can play the content at the point of reference when the measurement of the probability of viewing exceeds a threshold value. The threshold value may be a preset value that indicates that enough of the viewership would be interested in the subject matter of the advertising.
In some embodiments, the methods, systems and computer program products that have been described above with reference to
In some embodiments, the methods, systems and computer program products can connect persons across frames with history information. For example, a tracker 312 can be employed to track persons, and faces are connected to persons using location information and a connector 314 to assign persons with tracking IDs to facial images.
In some embodiments, the methods, systems and computer program products can provide a method to calculate dwell time and a method to calculate impression time.
The methods, systems and computer program products that have been described above with reference to
Reference in the specification to “one embodiment” or “an embodiment” of the present invention, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment. However, it is to be appreciated that features of one or more embodiments can be combined given the teachings of the present invention provided herein.
It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended for as many items listed.
The foregoing is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by Letters Patent is set forth in the appended claims.
This application claims priority to U.S. Provisional Patent Application No. 62/994,928, filed on Mar. 26, 2020, incorporated herein by reference in its entirety.