SYSTEM FOR DETECTING OBJECTS USING MODULATED PULSES AND COUNTING DETECTED OBJECTS

Information

  • Patent Application: 20250147167
  • Publication Number: 20250147167
  • Date Filed: January 09, 2025
  • Date Published: May 08, 2025
Abstract
Systems, methods, and computer program products for processing data from object detection devices are provided. An example method includes receiving at least one detection data packet from a sensor. The example method also includes determining an estimated object count based on the at least one detection data packet. The estimated object count indicates a number of distinct objects in an area monitored by the sensor. The example method further includes comparing the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level. The example method still further includes determining a display object count for the area based on the comparison of the estimated object count to the object count cap. The display object count is less than or equal to the object count cap. The example method also includes causing a rendering of the display object count to a user interface.
Description
FIELD

An example embodiment relates generally to systems for detecting object(s), and more particularly to systems for detecting object(s) and determining a count of detected objects.


BACKGROUND

The count and accuracy reported by radar-based object counters can be difficult to convey. For many radar-based object counters, the accuracy of the count decreases as more objects are detected. As such, radar-based object-counter systems are currently limited in how they express their count and accuracy to users. Therefore, there exists a need for a system that can express its reported count and accuracy in a way that is easier for users to understand.


SUMMARY

The following paragraphs present a summary of various embodiments of the present disclosure and are merely examples of potential embodiments. As such, the summary is not meant to limit the subject matter or variations of various embodiments discussed herein.


In some aspects, the techniques described herein relate to a system for detecting objects including: a sensor for detecting at least one of a phase shift or an amplitude change in modulated pulses from a pulse generator, wherein the at least one of a phase shift or an amplitude change indicates one or more objects in an area; and a count determination device, wherein the count determination device includes at least one non-transitory storage device; and at least one processing device coupled to the at least one non-transitory storage device, wherein the at least one processing device is configured to: receive at least one detection data packet from the sensor, wherein the at least one detection data packet includes the at least one of the phase shift or the amplitude change; determine an estimated object count based on the at least one detection data packet, wherein the estimated object count indicates a number of distinct objects in the area; compare the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level of the system; based on the comparison of the estimated object count to the object count cap, determine a display object count for the area, wherein the display object count is less than or equal to the object count cap; and cause a rendering of the display object count to a user interface.


In some aspects, the techniques described herein relate to a system, wherein the object count cap is at least two objects.


In some aspects, the techniques described herein relate to a system, wherein the at least one processing device is also configured to determine an occupancy type of the area based on the display object count, wherein the occupancy type is unoccupied in an instance in which the display object count is zero, wherein the occupancy type is single occupancy in an instance in which the display object count is one, and wherein the occupancy type is multiple occupancy in an instance in which the display object count is above one.


In some aspects, the techniques described herein relate to a system, wherein the count determination device is further configured to: determine, based on the comparison of the estimated object count to the object count cap, a display accuracy; and cause a rendering of the display accuracy to the user interface.


In some aspects, the techniques described herein relate to a system, wherein each of the one or more objects corresponds to an individual person.


In some aspects, the techniques described herein relate to a system, wherein the display object count is rendered by displaying the display object count followed by a ‘+’.


In some aspects, the techniques described herein relate to a system, wherein the user interface is a mobile device.


In some aspects, the techniques described herein relate to a system, wherein the display object count is one less than the estimated object count.


In some aspects, the techniques described herein relate to a method of detecting objects including: receiving at least one detection data packet from a sensor, wherein the sensor is configured to detect at least one of a phase shift or an amplitude change in modulated pulses from a pulse generator, and wherein the at least one of the phase shift or the amplitude change indicates one or more objects in an area; determining an estimated object count based on the at least one detection data packet, wherein the estimated object count indicates a number of distinct objects in the area; comparing the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level; determining a display object count for the area based on the comparison of the estimated object count to the object count cap, wherein the display object count is less than or equal to the object count cap; and causing a rendering of the display object count to a user interface.


In some aspects, the techniques described herein relate to a method, wherein the object count cap is at least two objects.


In some aspects, the techniques described herein relate to a method, further including determining an occupancy type of the area based on the display object count, wherein the occupancy type is unoccupied in an instance in which the display object count is zero, wherein the occupancy type is single occupancy in an instance in which the display object count is one, and wherein the occupancy type is multiple occupancy in an instance in which the display object count is above one.


In some aspects, the techniques described herein relate to a method, further including determining, based on the comparison of the estimated object count to the object count cap, a display accuracy; and causing a rendering of the display accuracy to the user interface.


In some aspects, the techniques described herein relate to a method, wherein each of the one or more objects corresponds to an individual person.


In some aspects, the techniques described herein relate to a method, wherein the display object count is rendered by displaying the display object count followed by a ‘+’.


In some aspects, the techniques described herein relate to a computer program product for processing data from object detection devices, the computer program product including at least one non-transitory computer-readable medium having one or more computer-readable program code portions embodied therein, the one or more computer-readable program code portions including at least one executable portion configured to: receive at least one detection data packet from a sensor, wherein the sensor is configured to detect at least one of a phase shift or an amplitude change in modulated pulses from a pulse generator, and wherein the at least one of the phase shift or the amplitude change indicates one or more objects in an area; determine an estimated object count based on the at least one detection data packet, wherein the estimated object count indicates a number of distinct objects in the area; compare the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level; determine a display object count for the area based on the comparison of the estimated object count to the object count cap, wherein the display object count is less than or equal to the object count cap; and cause a rendering of the display object count to a user interface.


In some aspects, the techniques described herein relate to a computer program product, wherein the at least one executable portion is further configured to: determine, based on the comparison of the estimated object count to the object count cap, a display accuracy; and cause a rendering of the display accuracy to the user interface.


In some aspects, the techniques described herein relate to a computer program product, wherein the object count cap is at least two objects.


In some aspects, the techniques described herein relate to a computer program product, wherein the at least one executable portion is further configured to determine an occupancy type of the area based on the display object count, wherein the occupancy type is unoccupied in an instance in which the display object count is zero, wherein the occupancy type is single occupancy in an instance in which the display object count is one, and wherein the occupancy type is multiple occupancy in an instance in which the display object count is above one.


In some aspects, the techniques described herein relate to a computer program product, wherein each of the one or more objects corresponds to an individual person.


In some aspects, the techniques described herein relate to a computer program product, wherein the display object count is rendered by displaying the display object count followed by a ‘+’.
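
Purely as a non-limiting, editorial illustration of the count-capping and rendering behavior summarized in the aspects above, the following Python sketch shows one way such logic could be expressed; the function and variable names are assumptions chosen for readability and are not part of this disclosure:

    def determine_display_count(estimated_count: int, object_count_cap: int) -> tuple[int, str]:
        """Cap the estimated object count at the confidence-based cap.

        Returns the display object count and the string rendered to the user
        interface (a '+' suffix indicates the cap was reached).
        """
        if estimated_count <= object_count_cap:
            display_count = estimated_count
            rendered = str(display_count)
        else:
            # Above the cap, confidence in the exact count is reduced, so only
            # the cap itself is displayed, followed by a '+'.
            display_count = object_count_cap
            rendered = f"{display_count}+"
        return display_count, rendered

    def occupancy_type(display_count: int) -> str:
        """Map a display object count to an occupancy type."""
        if display_count == 0:
            return "unoccupied"
        if display_count == 1:
            return "single occupancy"
        return "multiple occupancy"

    # Example: with a cap of 3, an estimate of 5 is rendered as "3+".
    print(determine_display_count(5, 3))   # (3, '3+')
    print(occupancy_type(3))               # 'multiple occupancy'

In this sketch, an estimate above the cap is rendered as the cap followed by a '+', reflecting the reduced confidence in exact counts above the cap.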





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features of the present disclosure, its nature, and various advantages will be more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which:



FIG. 1A is a block diagram of a positional sensing system in which a positional sensing device can be used, in accordance with various embodiments of the present disclosure;



FIG. 1B depicts the placement of a positional sensing device, in accordance with various embodiments of the present disclosure;



FIG. 2A depicts the placement of a plurality of positional sensing devices, in accordance with various embodiments of the present disclosure;



FIG. 2B depicts a cross-sectional area monitored by a positional sensing device, in accordance with various embodiments of the present disclosure;



FIG. 3 is a diagram of certain component parts of a positional sensing device and remote server, in accordance with various embodiments of the present disclosure;



FIG. 4 depicts a stored map and positions of a plurality of positional sensing devices, in accordance with various embodiments of the present disclosure;



FIG. 5 depicts a plurality of positional sensing devices in which at least a portion of the monitored area is occluded, in accordance with various embodiments of the present disclosure;



FIG. 6 depicts a plurality of positional sensing devices in which at least a portion of the monitored area is occluded from a first positional sensing device, but not a second positional sensing device, in accordance with various embodiments of the present disclosure;



FIG. 7 depicts exemplary tracking data generated by the positional sensing system, in accordance with various embodiments of the present disclosure;



FIG. 8 depicts data flow for generating tracking data by the positional sensing system, in accordance with various embodiments of the present disclosure;



FIG. 9A is a diagram of a positional sensing device, in accordance with various embodiments of the present disclosure;



FIG. 9B is a diagram of a depth sensing device, in accordance with some embodiments of the present disclosure;



FIG. 10 is a flow chart that details a method of detecting objects, in accordance with various embodiments of the present disclosure;



FIG. 11 is a flow chart that details a method of detecting objects, in accordance with various embodiments of the present disclosure;



FIG. 12A is a sensing device for detecting objects, in accordance with various embodiments of the present disclosure;



FIG. 12B illustrates the sensing device of FIG. 12A connected to a network and a server, in accordance with various embodiments of the present disclosure; and



FIGS. 13A-13D illustrate various positioning locations of a sensing device, in accordance with various embodiments of the present disclosure.





The use of the same reference numbers in different figures indicates similar or identical items or features. Moreover, multiple instances of the same part are designated by a common prefix separated from the instance number by a dash. The drawings are not to scale.


DETAILED DESCRIPTION

The presently disclosed subject matter now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the presently disclosed subject matter are shown. Like numbers refer to like elements throughout. The presently disclosed subject matter may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements.


Indeed, many modifications and other embodiments of the presently disclosed subject matter set forth herein will come to mind to one skilled in the art to which the presently disclosed subject matter pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the presently disclosed subject matter is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims.


Throughout this specification and the claims, the terms “comprise,” “comprises,” and “comprising” are used in a non-exclusive sense, except where the context requires otherwise. Likewise, the term “includes” and its grammatical variants are intended to be non-limiting, such that recitation of items in a list is not to the exclusion of other like items that can be substituted or added to the listed items.


I. Example Use Case

The count and accuracy reported by radar-based object counters can be difficult to convey. For many radar-based object counters, the accuracy of the count decreases as more objects are detected. As such, radar-based object-counter systems are currently limited in how they express their count and accuracy to users. Therefore, there exists a need for a system that can express its reported count and accuracy in a way that is easier for users to understand.


Existing people counting solutions are insufficient to address this conflict. Human-performed, manual solutions, such as observational studies or tally-counting (with a clicker), require a dedicated human observer, cannot be performed at all times, and may be prone to error. Therefore, those solutions lack accuracy and scalability. Solutions implemented through other types of existing technology are similarly inadequate. While increased counting accuracy can be obtained through, e.g., the use of optical cameras or badge/fob data (typically RFID), such methods of data collection create or rely upon repositories of personally-identifiable information, thereby sacrificing anonymity. Some technical solutions may offer increased privacy through the use of, e.g., thermal cameras, motion sensors (passive infrared), break-beam sensors, and the like, but once again sacrifice accuracy of results. For example, those existing anonymous solutions may have a limited range of detection or may be unable to classify or identify objects as human (as compared to, e.g., animals or inorganic objects), leading to false positives. In some cases, these solutions may suffer from problems relating to depth of field, occlusion, and/or stereoscopic vision. Solutions implemented by third-party proxies, such as the aggregation of point-of-sale data, energy consumption tracking, or Wi-Fi MAC address tracking, may be insufficiently precise, as they track only data tangential to people count and may also collect personally-identifiable information (device data). Further, solutions such as Wi-Fi MAC address tracking may be rendered inaccurate by MAC address randomization or other privacy-protecting efforts used by device vendors.


Therefore, additional solutions to provide anonymous, accurate, real-time people counting and trajectory tracking are generally desired.


II. With Reference to the FIGS

Reference will now be made in detail to aspects of the disclosure, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description do not represent all implementations consistent with the disclosure. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the disclosure as recited in the appended claims. Particular aspects of the present disclosure are described in greater detail below. The terms and definitions provided herein control, if in conflict with terms and/or definitions incorporated by reference.


Systems, methods, and apparatuses are described herein which relate generally to processing and rendering large-scale data from disparate sources. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the present disclosure. It will be evident, however, to one skilled in the art that the present disclosure may be practiced without these specific details and/or with any combination of these details.


A depth sensing device may be used to recognize the movement of objects through a doorway or within an open space. In one embodiment, a plurality of devices are positioned throughout a floorplan of a building. Each device can be configured to emit pulses, such as Doppler pulses. The pulses reflect off the various surfaces of the environment, and a phase shift of the emitted pulses is detected by sensors of each device. The changes in the phase shift data over time can be used to generate privacy-friendly positional data for moving objects within the environment. The sensors of each device can also detect a change in amplitude in the emitted pulses reflected off the various surfaces of the environment. The plurality of sensors can collect timestamp data that identifies the time that each data point of the phase shift and/or amplitude change data was collected. The phase data and the timestamp data from the plurality of devices can be sent to a server to generate tracking data that identifies the trajectory of objects within the environment over time. Typically, the tracking data is accompanied by information sufficient to uniquely identify the device, such as a device name or ID, a network ID, a MAC address, or the like. This tracking data can additionally be used to determine occupancy within the environment in real time.
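
As a rough, editorial sketch of the kind of per-capture record described above (the field names below are assumptions and not a specification of the actual packet or message format), a detection data packet might be modeled as:

    from dataclasses import dataclass

    @dataclass
    class DetectionDataPacket:
        """Hypothetical per-capture record sent from a sensing device to the server."""
        device_id: str                 # uniquely identifies the device (e.g., name, ID, or MAC)
        timestamp: float               # capture time, seconds since epoch (fractional)
        phase_shift: list[float]       # per-channel phase shift of the reflected pulses
        amplitude_change: list[float]  # per-channel amplitude change of the reflected pulses

    packet = DetectionDataPacket(
        device_id="sensor-10A",
        timestamp=1715140800.125,
        phase_shift=[0.42, -0.13, 0.08],
        amplitude_change=[0.9, 1.1, 0.95],
    )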


In some embodiments, moving objects identified in the positional data may be classified as one or more human subjects, while retaining the anonymity of the subjects' identity. In some embodiments, additional sensors may be used, such as the depth sensor previously described in U.S. Non-Provisional application Ser. Nos. 17/551,560 and 16/844,749 to identify when objects cross a threshold, such as a doorway of an environment.


In some embodiments, the positional data from the plurality of sensors is aggregated by the server. The server can include a first module that is configured to cluster the positional data into one or more clusters for each point of time as indicated by the timestamp data. Each cluster can be identified as a unique object within the environment by the first module of the server. The server can also include a second module. The second module can include logic that is configured to generate what is referred to herein as “tracklets” that track the change in position of the clusters over time based on the positional data. The server can also include a third module that includes logic that is configured to determine a trajectory for each of the one or more detected objects indicated by the clusters by connecting tracklets together that are associated with the same detected object.
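
The three-module pipeline described above can be pictured with the following skeleton; the function bodies are deliberately simplified placeholders (the actual clustering and linking logic is described in the surrounding paragraphs), and all names are editorial assumptions:

    from collections import defaultdict

    def cluster_positions(positional_data):
        """Module 1 (sketch): group (timestamp, x, y) points by timestamp.

        A real implementation would also spatially cluster the points within
        each timestamp so that each cluster corresponds to one object.
        """
        frames = defaultdict(list)
        for t, x, y in positional_data:
            frames[t].append((x, y))
        return dict(frames)

    def build_tracklets(frames):
        """Module 2 (sketch): link clusters in consecutive frames into tracklets."""
        tracklets = []
        for t in sorted(frames):
            for position in frames[t]:
                tracklets.append([(t, position)])  # placeholder: one entry per cluster
        return tracklets

    def connect_tracklets(tracklets):
        """Module 3 (sketch): join tracklets belonging to the same detected object.

        The selection would be driven by a reward function, as illustrated in a
        later snippet.
        """
        return tracklets

    trajectories = connect_tracklets(build_tracklets(cluster_positions(
        [(0.0, 1.0, 1.0), (0.1, 1.1, 1.0), (0.2, 1.2, 1.1)])))
    print(len(trajectories), "tracklet(s)")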


In some embodiments, a processor of the server may utilize one or more algorithms to determine which clusters of positional data to connect together to form trajectories for unique objects detected within the environment. For example, the processor may include logic configured to connect tracklets together based on a reward function that selects the tracklets that are most likely associated with the same object.


In some embodiments, the server may utilize different algorithms for tracking trajectories of objects depending on the object's speed of travel. For example, the server can utilize a first reward function that is optimized for tracking moving objects and a second reward function optimized for tracking relatively static objects. Accordingly, the second module may utilize the first reward function to generate tracklets, and the third module may utilize the second reward function to connect tracklets that the server determines are associated with the same object.
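
Purely to illustrate the idea of separate reward functions for moving and relatively static objects (and not the actual functions used by the server), a candidate link between the end of one tracklet and the start of another could be scored as follows; the functional forms and values are assumptions:

    import math

    def reward_moving(end_pos, end_vel, start_pos, dt):
        """Reward for linking tracklets of a moving object: high when the
        constant-velocity prediction lands near the next tracklet's start."""
        predicted = (end_pos[0] + end_vel[0] * dt, end_pos[1] + end_vel[1] * dt)
        return math.exp(-math.dist(predicted, start_pos))

    def reward_static(end_pos, start_pos, dt):
        """Reward for linking tracklets of a (nearly) stationary object: high
        when the object has barely moved, regardless of elapsed time."""
        return math.exp(-math.dist(end_pos, start_pos))

    # Choose the hypothesis whose reward is highest for this candidate link.
    candidates = {
        "moving": reward_moving((0.0, 0.0), (1.0, 0.0), (2.1, 0.1), dt=2.0),
        "static": reward_static((0.0, 0.0), (2.1, 0.1), dt=2.0),
    }
    print(max(candidates, key=candidates.get))  # 'moving' for this example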


In some embodiments, the server may be configured to store a map of the environment and the one or more positional sensing devices within the environment. For example, each of the one or more positional sensing devices may be assigned respective coordinates within the map of the environment. Based upon the coordinates, the server may include logic that merges the captured positional data from each of the plurality of positional sensing devices to form the tracklets that track the trajectory of objects within the environment.
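
One way to picture the merge described above is that each detection reported in a sensor's local frame is transformed into the shared map frame using that sensor's stored coordinates and orientation; the following is a minimal editorial sketch under that assumption, not the disclosed map format:

    import math

    def local_to_map(detection_xy, sensor_xy, sensor_heading_rad):
        """Rotate a detection from a sensor's local frame by the sensor's heading,
        then translate by the sensor's position on the stored map."""
        x, y = detection_xy
        c, s = math.cos(sensor_heading_rad), math.sin(sensor_heading_rad)
        return (sensor_xy[0] + c * x - s * y,
                sensor_xy[1] + s * x + c * y)

    # A detection 1 m ahead of a sensor at map position (4, 2) facing +90 degrees
    # lands at roughly (4, 3) in map coordinates.
    print(local_to_map((1.0, 0.0), (4.0, 2.0), math.radians(90)))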


In some embodiments, the server may be configured to generate trajectories for the detected objects in substantially real-time. In other embodiments, the server may be configured to use historical data in order to increase the accuracy of the determined trajectories of objects within the environment. For example, the server may aggregate positional data and associated timestamp data. The positional data can be chunked into discrete time portions, which can be on the order of several seconds, a minute, several minutes, an hour, etc. The reward function logic can be configured to select for trajectories that align the known positions of an object across the discrete time portions to increase the accuracy of the determined trajectories.
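
A minimal sketch of the time-chunking step described above is shown below; the window length is an arbitrary example value and the sample format is an assumption:

    from collections import defaultdict

    def chunk_by_window(samples, window_s=60.0):
        """Group (timestamp, x, y) samples into discrete time portions.

        Each sample is assigned to the window containing its timestamp, so a
        trajectory algorithm can align an object's known positions across
        consecutive windows.
        """
        chunks = defaultdict(list)
        for t, x, y in samples:
            chunks[int(t // window_s)].append((t, x, y))
        return dict(chunks)

    samples = [(0.5, 1.0, 1.0), (59.9, 1.2, 1.1), (61.0, 1.3, 1.2)]
    print(chunk_by_window(samples))  # samples split across windows 0 and 1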


In some embodiments, the trajectories may be used by the system to determine occupancy metrics. The occupancy metrics are made available for inspection through an API. As described above, data from several devices, positioned at different locations may be aggregated together to determine an accurate people count within the environment.


In another embodiment, in addition to positional data, the positional sensing device may collect and transmit data about the health or status of the device. In some embodiments, the device may also collect external ambient data. For example, the device may include an accelerometer that tracks vibrations (such as door slams) even where no visual effect can be seen. In another embodiment, the device may include an ambient light sensor to track lighting within or of the space. The various collected information may be provided to an external server for analysis.


In one embodiment, the positional data is processed by the server so as to be analyzed at various granularities of physical and logical space. These may be understood as virtual spaces that exist within a hierarchy of perception, such that positions of objects (e.g., people) may be tracked within a nested set of geographic spaces, such as a room, a floor, a building, or a campus, and/or logical spaces, such as an organizational grouping (e.g., a department or set of people) or a non-contiguous subset of rooms or geographic spaces. In one embodiment, the count data is distributed to one or more users via an API so as to be accessible from a mobile or other computing device, and may be filtered upon or otherwise manipulated at the level of different virtual spaces.
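
The nested virtual spaces described above can be pictured as a simple tree in which each space's occupancy rolls up from its children; the structure and names below are illustrative assumptions rather than the disclosed data model:

    from dataclasses import dataclass, field

    @dataclass
    class VirtualSpace:
        name: str
        direct_count: int = 0                      # people counted directly in this space
        children: list["VirtualSpace"] = field(default_factory=list)

        def occupancy(self) -> int:
            """Occupancy rolls up through the hierarchy (room -> floor -> building -> campus)."""
            return self.direct_count + sum(c.occupancy() for c in self.children)

    campus = VirtualSpace("campus", children=[
        VirtualSpace("building-1", children=[
            VirtualSpace("floor-1", direct_count=12),
            VirtualSpace("floor-2", direct_count=7),
        ]),
        VirtualSpace("building-2", direct_count=3),
    ])
    print(campus.occupancy())  # 22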



FIG. 1A depicts an illustrative block diagram of a positional sensing system 1 in accordance with some embodiments. As illustrated, a positional sensing system 1 includes one or more positional sensing devices (e.g., sensing device(s) 10), each monitoring a separate or overlapping physical space, one or more remote servers 20, a network 30, and one or more mobile devices (such as a mobile phone or iPad) or alternate computing devices (such as a PC) 25. As used herein, a positional sensing device and a sensing device may be used interchangeably unless otherwise noted. Network 30 may comprise one or more network types, e.g., a wide area network (such as the Internet), a local area network (such as an intranet), a cellular network or another type of wireless network, such as Wi-Fi, Bluetooth, Bluetooth Low Energy, and/or other close-range wireless communications, a wired network, such as fiber optics and Ethernet, or any other such network or any combination thereof. In some embodiments, the network 30 may be the Internet and information may be communicated between system components in an encrypted format via a transport layer security (TLS) or secure socket layer (SSL) protocol. In addition, when the network 30 is the Internet, the components of the positional sensing system 1 may use the transmission control protocol/Internet protocol (TCP/IP) for communication. In an exemplary embodiment, remote server 20 may include one or more servers operated and managed by a single entity; however, in other embodiments, processing may be distributed between multiple entities at different geographic locations. In still other embodiments, remote server 20 need not actually be physically or logically “remote” to the positional sensing devices 10, and the processing may instead be performed at a server or share (whether dedicated or not) connected by a local area network.


In an exemplary embodiment, the components of the positional sensing system 1 facilitate the collection of positional data based on a Doppler shift of microwave radiation that is emitted from a pulse emitter of each positional sensing device 10 and reflected from objects within the environment. The positional data is then provided to the remote server 20, and remote server 20 aggregates the positional data from all the positional sensing devices and converts the positional data into trajectory data for one or more objects within the environment. In some examples, the server identifies which of the one or more detected objects are associated with humans within the environment prior to converting the positional data into trajectory data. The trajectory data can be used to determine anonymous people count data over time within the environment. The components of the positional sensing system may also facilitate the access and display of the trajectory data and anonymous people count data by mobile device 25.



FIG. 1B is a diagram showing the placement of an exemplary depth sensing device 10. Device 10 is typically configured for indoor installation; however, in alternate implementations, the device may be used in outdoor or partially outdoor conditions (e.g., over a gate or open-air tent). In general, however, the device 10 may be understood to be situated at regular intervals within an indoor space such as rooms, offices, cafes, bars, restaurants, bathrooms, retail entrances, book stores, fitness centers, libraries, museums, and churches, among many other types of facilities. The positioning of the device 10 may be selected to balance the need for a sufficiently large field of view 40 while allowing for some overlap within the field of view 40 of adjacent devices 10. As illustrated in FIG. 1B, device 10 is situated so as to have a field of view 40 with a maximum width X and a maximum length Y. In some embodiments, one or more devices 10 may be mounted to the ceiling (or another elevated point) within an open space of an environment to track the change in position of moving objects (e.g., people) within the bounds of a set physical space. As one example, at a retail space or convention space, a device 10 could be positioned above an area of interest (e.g., where a new product is located) to gauge interest by the number of people who enter the bounded area, though of course many other applications are possible.


The sensing device 10 of FIGS. 1A and 1B may be replaced with a sensing device of various other embodiments (e.g., the sensing device 1200 of FIG. 12A, the phased array sensor 230 of FIG. 3, etc.). In various embodiments, any number of different sensing devices may be used in one or more areas being monitored. FIGS. 13A-13D also illustrate various potential installation locations for a sensing device, such as the sensing device 1200. As noted herein, any number of different sensing devices that are capable of performing the operations discussed herein may be used.



FIG. 2A depicts the placement of a plurality of positional sensing devices in accordance with some embodiments of the present disclosure. FIG. 2A shows a plurality of positional sensing devices 10A, 10B, and 10C placed with overlapping fields of view 40 such that the field of view 40 of positional sensing device 10A overlaps with that of positional sensing device 10B and the field of view 40 of positional sensing device 10B overlaps with the field of view 40 of positional sensing device 10C. FIG. 2A also shows a depth sensing device 11 as described in related U.S. patent application Ser. Nos. 17/551,560 and 16/844,749 (both applications hereby incorporated by reference), which may be used in tandem with positional sensing devices 10 to improve the accuracy of people count data determined by the positional sensing system 1. In some embodiments, the depth sensing device 11 may be replaced with another positional sensing device 10 which can be configured to track the movement of objects through doorways or thresholds. As shown, the field of view 40 of positional sensing device 10A can include a cross-sectional area 12 that is monitored by positional sensing device 10A. The cross-sectional area 12 is better seen in FIG. 2B, which shows that the cross-sectional area 12, as viewed from the perspective of the positional sensing device 10A, can take the form of an ellipse or circular shape. In this respect, a positional sensing device can accurately monitor the movement of an object anywhere within the cross-sectional area 12 depicted in FIG. 2B. While each positional sensing device 10 can monitor objects anywhere within its field of view 40, tracking is most accurate within the cross-sectional area 12. For this reason, in certain embodiments, the positional sensing devices 10 are placed such that the fields of view 40 are at least partially overlapping, as shown in FIG. 2A.



FIG. 3 illustrates an example schematic diagram of components of an exemplary positional sensing device 10 and server 20. Positional sensing device 10 includes a phased array system 210 and a phased array sensor 230. The server can include a compute module 340. The various components of modules 210, 230, and 340 may be interconnected and may communicate with and/or drive other modules via one or more local interfaces (not shown), which may include at least one communication bus. However, any practical configuration of the illustrated components may be used, and the components need not fall into the particular logical groupings illustrated in FIG. 3. Further, it will be understood that the architectures described below and illustrated in FIG. 3 are not limited to the components discussed herein and may include other hardware and software components; rather, for ease of explanation, only the components and functionalities most relevant to the subject systems and methods are discussed herein. Moreover, the architecture described in reference to FIG. 3 may be used to carry out the operations discussed in reference to FIG. 10.


Device 10 and server 20 can include a number of processors that may execute instructions stored in a corresponding memory to control the functionalities of the respective device or server. Typically, these processors (positional processor 234, sensor app processor 236, application processor 342, and AI processor 344, described below) may include, for example, one or more of central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or microprocessors programmed with software or firmware, or other types of circuits for performing the described functionalities (described further herein), or any combination thereof. Any of the processors, alone or in combination, described herein may be used to carry out the operations of the various embodiments, including the method of FIG. 10. As used herein, memory may refer to any suitable storage medium such as disks, thumb drives, etc., both volatile and non-volatile. Examples of such media include RAM, ROM, EEPROM, SRAM, flash memory, or any other tangible or non-transitory medium that stores information that is accessible by a processor. Different embodiments may have components with differing capabilities, so long as the amount of RAM is sufficient to support reading sensor data, running the analysis algorithms described herein, and running all necessary supporting software.



FIG. 3 illustrates four phased arrays, labeled as elements 210-1, 210-2, 210-3, and 210-N, which make up an exemplary phased array system 210. These phased arrays are, in an exemplary embodiment, phased Doppler radar arrays that monitor an area within the environment. Although only four phased arrays are shown, it should be understood that each positional sensing device 10 can have any number of phased arrays, up to an arbitrary number “N” of arrays within the phased array system 210. The phased array system 210 can be configured such that each element 210-1, 210-2, 210-3, etc. is spaced apart at regular intervals such that the pulses emitted by these elements superimpose to form beams that increase power radiated in desired directions and suppress radiation in undesired directions. In certain embodiments, the phased array system 210 can be implemented as a sparse phased array, in which certain phased array elements are omitted from the phased array system 210. For example, element 210-2 may be omitted such that the spacing between element 210-1 and element 210-3 is approximately twice the spacing between element 210-3 and element 210-4. Sparse phased arrays may be used in order to reduce the cost and/or energy requirements of the phased array system 210. As the beams transmitted from these phased arrays are reflected, the reflections are sensed by sensor 232. Sensor 232, in some embodiments, may be a Doppler radar sensor. Phased array sensor 230 may also include a positional processor 234 and a sensor app processor 236 that generate positional data from the sensed data and may perform a variety of processing, such as noise reduction. These processing elements may execute instructions stored in, and may variously read/write to, a memory 238, which may include a combination of temporary storage (for sensed data) and permanent storage (for operational software and the like).
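
For background on how regularly spaced elements superimpose their emissions into steered beams, the following snippet computes the per-element phase offsets of a uniform linear array. This is textbook beamforming arithmetic offered for context only, not the configuration of phased array system 210; the element count, spacing, and carrier frequency are arbitrary example values:

    import math

    def steering_phases(num_elements, spacing_m, wavelength_m, steer_deg):
        """Per-element phase offsets (radians) that steer a uniform linear
        array's main beam toward steer_deg off broadside."""
        k = 2 * math.pi / wavelength_m                   # wavenumber
        delta = k * spacing_m * math.sin(math.radians(steer_deg))
        return [n * delta for n in range(num_elements)]  # progressive phase taper

    # Example: 4 elements at half-wavelength spacing, beam steered 20 degrees.
    wavelength = 3e8 / 60e9            # ~5 mm at an assumed 60 GHz carrier
    print(steering_phases(4, wavelength / 2, wavelength, 20.0))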


Compute module 340 of server 20 generally handles the processing of data generated by the phased array sensor 230. In addition to the application processor 342, compute module 340 includes an AI processor 344 for analysis and classification of the positional data, and a map processor 346 for storing map data associated with the environment of the positional sensing devices 10 and the respective positions of each positional sensing device 10 within positional sensing system 1. The processors 342, 344, and 346 may respectively execute instructions stored in, and read/write to, memories 343, 345, and 347, which may include a combination of temporary storage and permanent storage.


While the terms “positional sensor” and “positional sensing” are used in this disclosure, the devices 10 are not meant to be so limited, and other embodiments may exist where a device 10 uses sensing methods other than positional sensing to determine the movement of objects through a monitored space. For instance, in alternative embodiments, device 10 may have one or more other types of sensors capable of imaging or monitoring an area within an enclosed space, these sensors being used in addition to or as an alternative to phased array sensor 230 and/or the positional sensor. By way of example, in some embodiments, device 10 may utilize a LIDAR sensor and/or any other known type of sensor(s) so long as the sensor(s) are capable of fitting and operating within the device 10. The sensed data from these various sensors may, in various embodiments, be collected additionally or alternatively to the data from the phased array sensor 230. The general principles described herein are agnostic to the particular technique used to collect data about the monitored area. While embodiments may exist where device 10 does not collect (or is not limited to collecting) positional data or convert sensed positional data into trajectories of objects, an exemplary device may still be referred to herein as a “positional sensing device” for ease of explanation.


Positional sensing device 10 may additionally include a communication interface 256 with one or more interfaces for wireless communication (e.g., Wi-Fi or Bluetooth antennas) and/or a wired communication interface. In addition, the device may have a power supply 254 providing a physical connection to AC power or DC power (including power conversion circuitry). While FIG. 3 illustrates communication interface 256 and power supply 254 as two separate components, in exemplary embodiments, a single Power Over Ethernet (POE) system 258 (e.g., PoE 802.3af-2003 or 802.3at-2009, though any appropriate standard may be used) may be used to provide both functions, the PoE system receiving electric power along with data through a physical interface, such as an RJ45 connector or similar. Device 10 is designed to be constantly powered, so as to always be in a ready state to recognize and capture event data, and therefore, the device relies primarily on a continuous power connection (PoE). However, in some embodiments, device 10 may have a battery (not specifically shown) to provide backup power, in the case of a power outage or mechanical failure of the PoE/power supply component. Device 10 may include a USB interface into which a peripheral (e.g., to provide Bluetooth or network connection or other functionality) may be inserted. In addition, the device may include one or more LEDs (not specifically shown) that indicate a power and/or functional status of the device. Communication through the communication interface 256 may be managed by the sensor app processor 236.


Similarly as described with respect to positional sensing device 10, server 20 can include a communications interface 356 and power supply 354. The communication interface 356 can include one or more interfaces for wireless communication (e.g., Wi-Fi or Bluetooth antennas) and/or a wired communication interface. Power supply 354 may provide a physical connection to AC power or DC power (including power conversion circuitry).


Device 10 may also include a variety of components configured to capture operation and/or telemetry data about the device 10. The device 10 may include one or more temperature sensors 251 capable of sensing an internal temperature and/or an internal humidity measurement of the device to ensure that such conditions are within functional bounds. In addition, the device 10 may include a clock component 252 that may be used to measure a time (timestamp) of data capture and may also be used in the scheduling of operations by the positional processor 234, e.g., reporting, resetting, and/or data capture operations. In an exemplary embodiment, a timestamp of data capture is collected with a high degree of specificity, typically a fraction of a second.


Similarly as described with respect to positional sensing device 10, server 20 can include a clock component 360 that may be used to measure a time (timestamp) of data capture and may also be used by the server when aggregating positional data received from one or more positional sensing devices 10 and determining trajectories of objects detected by the one or more positional sensing devices 10.


While FIG. 3 illustrates application processor 342, AI processor 344, and map processor 346 as being separate processing elements, in alternative embodiments, the functionalities of these processors may be implemented within a single processing element or distributed over several components. Similarly, positional processor 234 and sensor app processor 236 may be implemented as separate processors or within one or more processing elements, in any appropriate configuration. Processors 234, 236, 342, 344, and 346 may be respectively implemented by any type of suitable processor and may include hardware, software, memory, and circuitry (or any combination thereof). In one embodiment, these processors may be implemented as two logical parts on a single chip, such as a custom ASIC or field-programmable gate array (FPGA). In one exemplary implementation, positional processor 234 is a Doppler ASIC, while the remaining processors are implemented as one or more CPUs. In some embodiments, application processor 342 is a high-level OS processor (e.g., Linux). Other configurations are possible in other embodiments. Positional sensing device 10 may have other types of suitable processor(s) and may include hardware, software, memory, and circuitry (or any combination thereof) as is necessary to perform and control the functions of positional sensing device 10. In some embodiments, positional sensing device 10 may have multiple independent processing units, for example a multi-core processor or other similar component.


As described above, devices 10 can be installed at regular intervals throughout the environment such that the fields of view 40 of the devices 10 in aggregate cover all or nearly all of the desired area to be monitored, although in some embodiments, devices 10 can be placed at irregular intervals so long as at least one device 10 has a field of view 40 that covers the desired area to be monitored. Each device 10 can include a phased array system 210, which can be a Doppler array, and a phased array sensor 230. The phased array sensor 230 is configured to detect pulses that are reflected off the environment and objects, such as humans, moving throughout the environment. Using the reflected pulses, the positional processor 234 can determine, for each point in time, phase data identifying features of objects within the environment.


The sensor 232 passes its collected data to positional processor 234. Positional processor 234 uses Doppler technology to measure, from the collected data, the phase shift and/or the amplitude change of modulated Doppler pulses reflected from the object back to the sensor 232. The process from generating pulses to the generation of positional data is referred to herein as the data capture, with each data capture resulting in a single frame of data. Scheduling of a data capture is controlled by the positional processor 234. Once the positional processor 234 initiates a data capture and the Doppler pulse is reflected back to the sensor 232, the positional processor 234 collects the captured data from the sensor, correlates the collected data to the timing of the capture (with reference to clock component 252), and calculates the positional data. The positional data is then transmitted to the compute module 340, which aggregates positional data from each positional sensing device 10 for detected objects throughout the entire environment being monitored. While, in the exemplary embodiment of FIG. 3, the hardware of the positional processor 234 performs the positional data calculations, other embodiments may offload such computation to a dedicated processor or chip (not shown), or to server 20.
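
As a rough sketch of the kind of measurement described above (and not the positional processor's actual computation), the phase of complex baseband (I/Q) samples can be recovered and the frame-to-frame phase change of a reflection converted to a radial velocity using the standard Doppler relation v = wavelength * delta_phi / (4 * pi * delta_t); the carrier frequency below is an arbitrary example value:

    import numpy as np

    def radial_velocity(iq_prev: complex, iq_curr: complex,
                        wavelength_m: float, frame_dt_s: float) -> float:
        """Estimate the radial velocity of a reflector from the phase change of
        its echo between two captures."""
        dphi = np.angle(iq_curr * np.conj(iq_prev))   # wrapped phase difference
        return wavelength_m * dphi / (4 * np.pi * frame_dt_s)

    # Example: an echo whose phase advances 0.3 rad between frames 20 ms apart.
    wavelength = 3e8 / 60e9   # assumed 60 GHz carrier, ~5 mm wavelength
    print(radial_velocity(1.0 + 0.0j, np.exp(1j * 0.3), wavelength, 0.02))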


Sensor app processor 236 may be, in one embodiment, implemented as a microprocessor. The sensor app processor 236 performs a variety of tasks in support of the positional processor 234 as well as the entire device 10. Initially, the sensor app processor may control power management and firmware/software management (e.g., firmware updates). In addition, sensor app processor 236 may convert the phase data generated by the positional processor 234 so that it may be further processed in the compute module 340. For instance, the Doppler data (phase data) transmitted from the positional processor 234 may be converted from a low-voltage differential signal (LVDS) to a wireless signal or USB signal, and additional processing may be done to, e.g., reduce noise in the data. The sensor app processor 236 may also be configured to transmit the positional data generated based on the phase data to the compute module 340 for further processing.


In some embodiments, the sensor app processor 236 may control management of the environmental conditions of the device 10, to ensure that the temperature and power conditions of the device are within the boundaries of acceptable limitations. Temperature sensor(s) 251 may be used to measure ambient and operating temperatures. In an exemplary embodiment, the operating temperatures may range, e.g., from approximately 0° C. to 35° C. (ambient), 0° C. to 60° C. (maximum at enclosure case), 0° C. to 85° C. (internal), and −10° C. to 85° C. (storage), though other ranges may be possible in other embodiments. Similarly, humidity conditions may range, in one embodiment, from approximately 5% to 95% non-condensing and from 20% to 80% non-condensing at minimum.


The positional data and associated timestamp data is sent by the sensor app processor 236 to the compute module 340, and particularly, to an application processor 342. Compute module 340 may include application processor 342, AI processor 344, and map processor 346. In general, the compute module converts the positional data to a cluster of points associated with features of an object, herein referred to as “point cloud data.” For example, the point cloud data could be associated with the motion of an arm while a human subject is sitting within a chair, or the motion of the legs or torso of a human as the human walks through the environment. Points that appear close in coordinate (X, Y) space and in time can be associated with the same point cloud. Point clouds can be understood to be associated with a particular object (e.g., human) moving throughout the environment. Rather than continuous capture (as a video or timed image capture would do), the phase data may be captured asynchronously by sensor 232 as objects are sensed. Put another way, only “movement data” of the person or object is tracked. While different frame rates and/or resolutions may be used in different embodiments, it will be generally understood that the frame rate should be fast enough to allow tracking of a person or object, rather than a single frame in which their direction of movement cannot be determined. Alternatively, in some embodiments, data streamed from the sensor to the application processor 342 can take the form of a 3-D point cloud with a time integration component, such that the 3-D point cloud is streamed over time. The streamed point cloud data may be considered cumulatively, with a time constant for integration of data across frames or sub-frames. Data streamed to the AI processor 344 for classification may include a point cloud stream along with Doppler data and/or other signs-of-life metrics. The AI processor can implement an AI model based on one or more sets of point cloud image training data. The output of the AI processor 344 may, in one embodiment, be fused with output from a Bayesian inference engine by a Kalman filter.
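
A minimal sketch of the “points close in (X, Y) space and in time belong to the same point cloud” step is shown below, using DBSCAN (a generic density-based clustering algorithm from the third-party scikit-learn library) as a stand-in for whatever clustering the compute module actually applies; the time scaling and thresholds are arbitrary assumptions:

    import numpy as np
    from sklearn.cluster import DBSCAN  # third-party dependency

    def cluster_point_clouds(points, time_scale=0.5, eps=0.75, min_samples=3):
        """Group detections (t, x, y) into point clouds.

        Time is scaled into the same units as distance so that points close in
        both space and time fall into the same cluster; detections labeled -1
        would be treated as noise.
        """
        pts = np.asarray(points, dtype=float)
        features = np.column_stack([pts[:, 0] * time_scale, pts[:, 1], pts[:, 2]])
        return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

    points = [(0.0, 1.0, 1.0), (0.1, 1.1, 0.9), (0.2, 1.0, 1.1),   # one object's motion
              (0.0, 5.0, 5.0), (0.1, 5.1, 5.0), (0.2, 5.0, 4.9)]   # another object's motion
    print(cluster_point_clouds(points))   # e.g., [0 0 0 1 1 1]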


Compute module 340 can aggregate point cloud data from each positional sensing device 10 in order to determine positions of objects for a given point in time, as indicated by the timestamp data. As objects move throughout the environment, compute module 340 can utilize AI algorithms that determine the trajectory of each object by connecting point clouds together over time (e.g., for each collection of point cloud data associated with a respective timestamp), which may be referred to herein as determining a trajectory for an object. For example, the compute module 340 can utilize one or more reward functions that can determine which point clouds across different timestamps are associated with a particular object. In some embodiments, a first reward function can be configured to connect point clouds that are moving over time (e.g., a walking person) and a second reward function can be configured to connect point clouds that remain relatively stationary over time (e.g., a person sitting down at a desk). The process of connecting point cloud data together will be discussed in more detail with respect to FIG. 7, below. Compute module 340 can also use the determined trajectories to determine count data that represents the number of people within a given area or space being monitored within the environment (e.g., such as a specific room, a collection of rooms, or a predefined area of interest). This data retains anonymity of identity, as it is not personally identifiable to any person, and is instead directed merely to their movement into or out of a space.


Application processor 342 receives the positional data of the monitored area from the sensor app processor 236 and converts that data to point cloud data. The conversion of positional data to point cloud data may be done through any known calculation. Application processor 342 then sends that generated point cloud data to the AI processor 344. The AI processor 344 algorithmically discerns people, and their direction of movement, from other objects in the depth data. In one embodiment, the AI processor uses an on-board machine learning algorithm to classify objects within a frame as human. By combining different clusters of points, each with respective heights, AI processor 344 is able to identify the shape of a detected object and can classify these objects as people. In one embodiment, the algorithm implemented by the AI processor may recognize a cluster of points as a head or shoulders. By tracking the movement of that group of pixels within a sequence of frames, the AI processor may track the position of the human subject. In other embodiments, the AI processor may be additionally or alternately capable of identifying other objects, such as animals, furniture or barriers, or other organic and non-organic movement. The AI processor 344 also includes logic for connecting point clouds together over time into a trajectory for a detected moving object. In some examples, the AI processor 344 can include a first reward function that can be configured to connect point clouds that are moving over time (e.g., a walking person) and a second reward function that can be configured to connect point clouds that remain relatively stationary over time (e.g., a person sitting down at a desk). As such, the AI processor 344 is able to determine when an object remains relatively stationary for long periods of time within a monitored area and when the same object transitions to moving across the monitored area by using both the first reward function for detecting moving objects and the second reward function for detecting relatively stationary objects.


In the exemplary embodiment, the identification of humans is performed on top of the generated point cloud data, and is not based on image classification from an optical camera (e.g., facial recognition), thermal camera, or other similar means. However, in alternative embodiments, data from optical/thermal cameras, RFID, other sensors, and/or other techniques for detecting humans may be considered in addition to, or alternate to, the depth data in the identification of people. In some embodiments, the AI processor 344 improves the point cloud data before classification, for example by processing the point cloud data to improve the signal to noise ratio. In other embodiments, these activities may be performed by the application processor 342, or not at all. In some embodiments, the classification of objects is split between the application processor 342 and the AI processor 344. This may be most useful in embodiments where one of the processors is configured to be particularly efficient at a certain type of task. As one example, AI processor 344 may be structured to expediently perform matrix multiplication, while application processor 342 may expediently perform tracking of a shape. The strengths of the relative components of compute module 340 are therefore exploited through distribution of processing to enhance the speed of computation and reduce latency in generating count data. Because the phase data, positional data, and point cloud data do not reveal the identity of people being monitored, no personally-identifiable data is captured or stored by the positional sensing system 1.


The positional data generated by the AI processor 344 is aggregated for each positional sensing device 10 by the map processor 346, which stores a map 400 (discussed below) that includes data regarding the XY coordinate position of each positional sensing device 10, as well as features within the environment, such as objects that may occlude a tracked object from being monitored by a respective positional sensing device 10. The positional data may be correlated to XY coordinates which are associated with the map 400 stored by the map processor 346. Additionally, features stored in map 400 may be utilized in order to filter out spurious positional data, for example when a positional sensor incorrectly detects an object that is attributable to a reflection of a pulse from an occluding wall (as described in more detail with respect to FIGS. 5 and 6). In an environment utilizing multiple positional sensing devices 10, map processor 346 may be configured to aggregate and reconcile data collected from each of the multiple positional sensing devices. That is, for an environment with multiple positional sensing devices 10, the server 20 will consolidate information from the multiple positional sensing devices 10 to generate aggregated positional data that accurately identifies each object within the monitored area. This aggregation and reconciliation is performed by map processor 346 at the remote server 20. It is generally noted that while the server 20 is referred to as a “remote” server, the functions of the server 20 need not necessarily be performed on a system physically remote to the device 10. Rather, in alternative embodiments, the functions described herein with regard to the server 20 may be performed locally by one or more devices 10 or by another device within a local network that includes one or more devices 10.
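
To illustrate the occlusion-based filtering described above, the following sketch discards a detection when the straight line from the sensor to the detected point crosses a wall segment stored in the map; the segment-intersection test is generic geometry, and the wall-segment map representation is assumed for this example only:

    def _ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

    def segments_intersect(p1, p2, p3, p4):
        """True if segment p1-p2 properly intersects segment p3-p4."""
        return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)) and (_ccw(p1, p2, p3) != _ccw(p1, p2, p4))

    def filter_occluded(sensor_xy, detections, walls):
        """Drop detections whose line of sight from the sensor crosses a stored wall."""
        visible = []
        for det in detections:
            blocked = any(segments_intersect(sensor_xy, det, w0, w1) for w0, w1 in walls)
            if not blocked:
                visible.append(det)
        return visible

    walls = [((2.0, -1.0), (2.0, 1.0))]                                   # one wall segment on the map
    print(filter_occluded((0.0, 0.0), [(3.0, 0.0), (1.0, 0.5)], walls))   # keeps only (1.0, 0.5)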


In some embodiments, remote server 20 may contain logic to analyze point cloud data at various granularities of space. This concept may be understood as a virtual space—a digital representation of a physical space—with different virtual spaces existing within a hierarchy of perception. To illustrate, trajectories of objects or people within any of a number of geographic spaces may be determined, such as a campus, a building, a floor, a room, or a cubicle, each subsequent space being a subset of the larger physical space before it so as to fit within it. Additionally, based on the determined trajectories, an occupancy count for each defined virtual space can be determined. A virtual space may be defined for each of these physical spaces, creating a set of “nested” virtual spaces. A user (such as a business owner) interested in tracking occupancy and trajectories through any or all of those geographical spaces may then be able to access real-time data thereof by selecting the corresponding virtual space, after which the trajectories of objects and associated timestamps are displayed/transmitted. If desired, the user may also display the occupancy count for a given virtual space in a similar manner. Similarly, in addition to particular physical spaces, remote server 20 may contain logic to generate occupancy and trajectory data within defined logical spaces, such as an organizational grouping of offices/cubicles (e.g., a department or team space), or a subset of rooms not necessarily contiguous or located within a single physical space. In one embodiment, the data is distributed by the remote server 20 via an API so as to be accessible from a mobile or other computing device 25. Any given device 10 is typically not aware of any grouping or classification it may belong to, and meaningful grouping of any of devices 10 may be performed by the remote server 20.


The aggregated count data and/or trajectory data may be presented, with low latency (e.g., typically less than a few seconds of latency), to a user via an API so as to be accessible via an application, software, or other user interface. The information may be presented to a user interface at various hierarchical slices of virtual spaces. In some embodiments, a user of device 25 may request, from server 20, aggregated count data for a particular virtual space for a defined period of time (e.g., one day, one week, one month) and may receive, in response, an interface displaying a total count for the defined period of time. Similarly, a user of device 25 may request, from server 20, trajectory data for a particular virtual space for a defined period of time (e.g., one day, one week, one month) and may receive, in response, an interface displaying each identified object and its associated trajectory for the defined period of time (for example, as shown in FIG. 7).


In some embodiments, the user may obtain from the server trending or hierarchical people count statistics. For example, a user may be able to access a trend of occupancy data over the course of a day on an hourly basis. In one embodiment, the server 20 may have one or more repositories of historical occupancy data collected for one or more devices 10 from which analysis and/or reporting may be done in response to a user request.


Remote server 20 may in some embodiments communicate bi-directionally with one or more devices 10. For instance, remote server 20 may receive periodic updates from a device 10 with status information, such as a MAC address (or other network information) or other information regarding the device's health and connectivity. The remote server 20 may respond thereto, and may also be capable of querying a device 10 as to that same type of data, or providing operational instructions such as, e.g., instructions to reboot, to update its software, to perform a network commissioning process (e.g., blink a light or communicate its network information via Bluetooth or wireless communication), or to start or stop data capture operations.


As described above, data capture can be performed asynchronously, with event data being captured and processed at cyclical or irregular times. For instance, in retail establishments, there may be little or no data captured after closing hours of the business or when the doors are locked. As a result, there may be predictable times of day at which the computing capabilities of the device 10 are expected to be unused or underutilized. In this regard, application processor 342, AI processor 344, and map processor 346 may only have processing tasks to perform when phase data is being captured by the sensor 232. Accordingly, in one embodiment, spare computing resources of the device 10 and server 20 are identified, and during periods of relative inactivity, the spare computing resources are used for tasks unrelated to the capture and processing of depth data. For example, the spare computing resources of application processor 342, AI processor 344, and map processor 346 may be used as additional compute for training of the machine learning elements of the AI processor 344, or for the update of related algorithms and/or software/firmware. Additionally, spare resources may be used for wholly unrelated tasks to serve the needs of other devices connected to the wireless network. In support of these functions, cached data may be stored, for example, in any of memories 343, 345, and 347. By these means, all components of positional sensing system 1 are network-enabled and may be taken together or separately to act as a data center. This may reduce bandwidth and latency requirements for other devices, and may improve security where data processing performed by devices other than the positional sensing devices 10 and server 20 should be restricted to the premises on which device 10 is located.


In some embodiments, in addition to the phase data, the sensor app processor 236 may also transmit telemetry data to the application processor 342, including, e.g., temperature data, CPU/memory/disk status, commands executed, and the like. In some embodiments, the telemetry data is sent at periodic intervals (e.g., every 10 seconds); however, in other embodiments, it may be sent only upon request from the server 20, or with every instance of data capture.



FIG. 4 depicts a stored map 400 and positions of a plurality of positional sensing devices, in accordance with some embodiments of the present disclosure. The primary function of device 10 is to monitor the positions of people as they travel through the environment being monitored. This includes the ability to determine directionality of movement, differentiate between multiple moving objects and correctly determine trajectories of each object, and to disregard non-human subjects, such as animals, objects, door swings, shadows, and the like. The margin of error in performing these tasks should be relatively low. In some embodiments, the margin of error may be less than 1%. In various embodiments, the margin of error may be managed via the operations discussed in reference to FIG. 10 below. For example, the margin of error may be managed by using an object count cap to avoid reporting an object count for which the error exceeds a desired margin of error.


As shown in FIG. 4, stored map 400 may represent the environment which the position sensing system 1 is configured to monitor. The map 400 may include data such as the position (e.g., using X, Y coordinates) of features of the environment, such as doors, walls, and other objects that may occlude the monitored area of one or more positional sensing devices 10. Additionally, the map 400 may include positional data (e.g., X, Y coordinates) for each positional sensing device 10. Using the coordinates of each positional sensing device 10 stored in map 400, the server 20 may aggregate the positional data captured by each positional sensing device 10 to track objects that move between the fields of view of various positional sensing devices 10. Additionally, server 20 may be configured to identify the position and trajectory of an object even when it is occluded from a positional sensing device 10 by a feature of the environment by utilizing data captured from an adjacent positional sensing device 10 and/or the data stored on map 400. For example, if an adjacent positional sensing device 10 has an overlapping field of view that captures the object, the server 20 can continue to identify the position and trajectory of such an object as it passes the occluding feature. In some embodiments, even when no positional sensing device 10 can detect an object behind an occluding feature, the server can use AI techniques to infer the position and trajectory of the object using the stored position of the occluding feature within map 400 and by using a reward function to connect the position and trajectory of the object prior to becoming occluded and after the object is no longer occluded by the feature. Similarly, data stored in map 400 can be used to filter out spurious signals. For example, map 400 can store the position of walls within the environment, and if a respective positional sensor identifies an object that is beyond an occluding wall, such data can be ignored (e.g., filtered out) by server 20 when aggregating and clustering positional data from the various positional sensing devices 10.



FIG. 5 depicts a plurality of positional sensing devices 10 in which at least a portion of the monitored area is occluded, in accordance with some embodiments of the present disclosure. As shown, positional sensing devices 10A, 10B, and 10C are deployed to monitor an area and have respective fields of view 40A, 40B, and 40C. An occluding feature 502 partially occludes field of view 40A of positional sensor 10A. Note that field of view 40A and field of view 40B have an overlapping area 506. Similarly, field of view 40B and field of view 40C have a similar overlapping area 506. Each of the positional sensing devices 10 is connected to server 20 via network 30. In the case that positional sensing device 10A generates positional data that indicates a position within occluding area 504, the server 20 may filter out such positional data. Positional data may be transmitted to server 20 with included XY coordinates, which can be cross-referenced with coordinates of occluding feature 502 stored by map processor 346. If map processor 346 determines that the XY coordinates of the occluding feature are located between the XY coordinates of the positional sensing device 10A and the XY coordinates associated with the positional data, the map processor 346 may determine that such positional data is spurious and may filter it out from the aggregated positional data. In other words, such positional data may not be used to determine aggregated positional data of all monitored objects within the environment.
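A minimal geometric sketch of the spurious-data filter described above, assuming the occluding feature is represented in map 400 as a two-dimensional wall segment; the function names and the orientation-based intersection test are assumptions, not the disclosed implementation.

```python
from typing import Tuple

Point = Tuple[float, float]

def _ccw(a: Point, b: Point, c: Point) -> bool:
    # True if the points a, b, c are arranged counter-clockwise
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1: Point, p2: Point, q1: Point, q2: Point) -> bool:
    # Standard orientation test for proper intersection of segments p1-p2 and q1-q2
    return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2)) and (_ccw(p1, p2, q1) != _ccw(p1, p2, q2))

def is_spurious(sensor_xy: Point, detection_xy: Point, wall: Tuple[Point, Point]) -> bool:
    """Treat a detection as spurious if the wall lies between the sensor and the detection."""
    return segments_intersect(sensor_xy, detection_xy, wall[0], wall[1])

# Example: a wall from (5, 0) to (5, 10) sits between the sensor and a reported detection
print(is_spurious((0.0, 5.0), (8.0, 5.0), ((5.0, 0.0), (5.0, 10.0))))  # True -> filter out
```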



FIG. 6 depicts a plurality of positional sensing devices in which at least a portion of the monitored area is occluded from a first positional sensing device, but not a second positional sensing device, in accordance with some embodiments of the present disclosure. As shown, positional sensing devices 10A, 10B, and 10C are deployed to monitor an area and have respective fields of view 40A, 40B, and 40C. An occluding feature 602 partially occludes field of view 40A of positional sensor 10A and field of view 40B of positional sensor 10B. Within the overlapping area between field of view 40A and 40B, there is a partially occluded area 604 which is not hidden from field of view 40A but is hidden from field of view 40B. Conversely, partially occluded area 606 is not hidden from field of view 40B but is hidden from field of view 40A. In one example, a person 608 can be positioned within partially occluded area 604 such that person 608 is visible to positional sensor 10A, but not positional sensor 10B. Accordingly, positional data associated with person 608 can be measured by positional sensing device 10A and person 608 can be tracked even though positional sensor 10B is occluded from observing person 608. In another example, map data stored by map processor 346 can be used to enhance positional data determined by one or more positional sensing devices 10. Map 400 can include XY coordinates for doorways between a first room and a second room of a monitored environment. Should positional data be generated that shows person 608 moving between the first and second room, but not through a doorway, the map processor 346 may adjust the XY coordinates associated with the positional data of person 608 such that person 608 is determined to be moving between the first room and the second room through XY coordinates associated with the doorway stored by map processor 346. Additionally, overlapping area 610 may be visible via the field of view 40B and the field of view 40C. In various embodiments, in an instance in which the overlapping area 610 is visible via both the field of view 40B and the field of view 40C, no object may be present within the overlapping area 610.



FIG. 7 depicts exemplary tracking data generated by the positional sensing system, in accordance with some embodiments of the present disclosure. FIG. 7 depicts an XYZ coordinate system that includes positional data over time for objects monitored by the positional sensor system 1. The X and Y axes correlate to XY coordinates stored as part of map 400 associated with the area being monitored. The Z axis represents time, such that the XYZ coordinate system shows the change in position of objects being monitored over time.


Positional data from the positional sensing devices 10 is aggregated by the compute module 340 of server 20. For each timestamp, the compute module 340 determines whether positional data along the Z axis is associated with the same object. Compute module 340 utilizes a first algorithm (e.g., a reward function) to connect point clouds together that are associated with a respective object to form tracklets 702. Tracklets 702 are associated with a respective object (e.g., a person) moving throughout the monitored environment. For example, the tracklet 702 represents an object (person) moving along the Y dimension over time, as measured by the Z axis. The first reward function can be optimized to identify moving objects. For example, the first reward function can be configured to identify point clouds having a threshold number of associated points. It should be understood that point clouds include more associated points when the monitored object is in motion. Therefore, the first reward function can be configured to identify point clouds of a sufficient size, which are correlated to objects in motion.
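The following sketch illustrates the size-threshold idea behind the first reward function: point clouds with at least a minimum number of points are treated as detections of moving objects, and detections that are close together at consecutive timestamps are chained into tracklets. The threshold values, distance metric, and names are assumptions rather than the disclosed algorithm.

```python
import math
from typing import Dict, List, Tuple

# point_clouds_by_time: for each timestamp, a list of (centroid_xy, num_points) tuples
def form_tracklets(
    point_clouds_by_time: Dict[int, List[Tuple[Tuple[float, float], int]]],
    min_points: int = 20,        # size threshold: larger clouds correlate with motion
    max_jump: float = 1.5,       # max distance (meters) an object is assumed to move per frame
) -> List[List[Tuple[int, Tuple[float, float]]]]:
    tracklets: List[List[Tuple[int, Tuple[float, float]]]] = []
    for t in sorted(point_clouds_by_time):
        # Keep only clouds large enough to be treated as moving objects
        centroids = [c for c, n in point_clouds_by_time[t] if n >= min_points]
        for c in centroids:
            # Attach the detection to the nearest tracklet updated at the previous frame,
            # otherwise start a new tracklet
            best = None
            best_dist = max_jump
            for tr in tracklets:
                last_t, last_c = tr[-1]
                if last_t != t - 1:
                    continue
                d = math.dist(last_c, c)
                if d <= best_dist:
                    best, best_dist = tr, d
            if best is not None:
                best.append((t, c))
            else:
                tracklets.append([(t, c)])
    return tracklets
```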


However, in certain situations, a monitored person may temporarily cease moving. For example, a person may travel to a conference room and subsequently take a seat within the conference room for a meeting. After a specified time sitting, the same person may stand and leave the conference room for another location within the environment. The first reward function that is optimized to identify point clouds with more than a threshold number of points may not be effective in monitoring the position of a relatively stationary person, such as a person sitting down in a conference room. Accordingly, compute module 340 may utilize a second algorithm (e.g., reward function) which may be optimized to identify relatively stationary objects. The second reward function may be optimized for point clouds with less than a threshold number of points within the point cloud, which represent objects that are associated with little to no movement. For example, a phased array sensor may only detect small movements of a person's arms while the person remains sitting/relatively stationary, and the second reward function may be optimized to detect such small movements, which are correlated to point clouds having less than a threshold number of points. Lines 704 can represent objects identified by the second reward function. As shown in FIG. 7, lines 704 can connect tracklets 702, thereby forming full trajectories for each detected object within the monitored area over time. It should be understood that the second algorithm (e.g., reward function) is also configured to connect tracklets when there are no detected points within the point cloud. For example, a person sitting still or standing in place with little to no movement may result in no positional data being generated corresponding to the position and trajectory of that person. The second reward function can connect tracklets associated with a respective object generated by the first reward function even when separated by periods of no positional data being detected for the non-moving object.
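Similarly, a simplified sketch of the bridging behavior attributed to the second reward function is shown below: tracklets whose end and start points are close in space are joined even when separated by a period with little or no positional data. The gap and drift tolerances, as well as the names, are hypothetical.

```python
import math
from typing import List, Tuple

Tracklet = List[Tuple[int, Tuple[float, float]]]  # list of (timestamp, xy)

def connect_tracklets(
    tracklets: List[Tracklet],
    max_gap: int = 600,       # frames an object may sit still with no point-cloud data
    max_drift: float = 1.0,   # meters the object may appear to drift while stationary
) -> List[Tracklet]:
    """Join tracklets whose end/start are close in space even when separated in time."""
    ordered = sorted(tracklets, key=lambda tr: tr[0][0])  # by start timestamp
    trajectories: List[Tracklet] = []
    for tr in ordered:
        start_t, start_xy = tr[0]
        joined = False
        for traj in trajectories:
            end_t, end_xy = traj[-1]
            if 0 < start_t - end_t <= max_gap and math.dist(end_xy, start_xy) <= max_drift:
                traj.extend(tr)      # bridge the stationary period
                joined = True
                break
        if not joined:
            trajectories.append(list(tr))
    return trajectories
```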



FIG. 8 depicts data flow for generating tracking data by the positional sensing system, in accordance with some embodiments of the present disclosure. As shown, server 20 may be configured to generate tracking data using either real time pipeline 810 or historical pipeline 820. Real time pipeline 810 allows a user of device 25 to request tracking data collected from the positional sensing devices 10 in substantially real time. As used herein, substantially real time means that approximately 5 minutes or less after the data is collected, the tracking data becomes available for review and can be transmitted to the device 25 via an API request to the server 20. The real time pipeline 810 may not provide tracking data that is as accurate as the data that can be provided via historical pipeline 820. In some embodiments, real time pipeline 810 may utilize the first reward function but not the second reward function in order to provide tracking data (e.g., positional data over time) in substantially real time, at the cost of being less optimized to connect tracklets for a respective object when separated by periods of the object having relatively little to no motion.


In an exemplary embodiment, tracking data from historical pipeline 820 can become available to be transmitted to device 25 approximately 1 hour after being collected. In another example embodiment, the tracking data from historical pipeline 820 can become available to device 25 approximately 24 hours after being collected. In contrast to tracking data generated as part of real time pipeline 810, tracking data that is generated as part of historical pipeline 820 can have improved accuracy, because positional data is chunked into discrete time portions. Accordingly, the positional data associated with any given time portion can be compared to the immediately previous time portion and the immediately subsequent time portion, and compute module 340 may utilize one or more algorithms to efficiently match the positions of identified objects such that their trajectories are continuous over the chunked time portions. In this manner, tracking data generated via the historical pipeline 820 can have a greater accuracy than the tracking data generated by the real time pipeline 810, at the cost of delayed availability for transmission to a device 25 for review by a user of the positional sensing system 1. In some embodiments, historical pipeline 820 can utilize both the first reward function for tracking moving objects and the second reward function for tracking relatively static objects, which increases the accuracy of the historical pipeline 820 relative to real time pipeline 810. For example, the historical pipeline 820 is able to clearly detect an object (e.g., a person) entering a space and then, at a later time, detect the object leaving the space. The historical pipeline 820 can connect the tracklets for the object across the time the object remained relatively static in the space with high confidence even though there is little to no positional data for that object during the time the object remains relatively static.
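As a rough sketch of the chunked matching idea (parameter values and names are hypothetical), object positions at the end of one time portion could be matched to detections at the start of the next portion by nearest distance so that trajectories remain continuous across chunk boundaries.

```python
import math
from typing import Dict, List, Tuple

def match_across_chunks(
    end_positions: Dict[str, Tuple[float, float]],    # object_id -> last xy in chunk N
    start_positions: List[Tuple[float, float]],       # unlabeled xy at start of chunk N+1
    max_match_dist: float = 2.0,
) -> Dict[int, str]:
    """Greedy nearest-neighbor matching of chunk N+1 detections to chunk N objects."""
    assignments: Dict[int, str] = {}
    unused = dict(end_positions)
    for i, xy in enumerate(start_positions):
        if not unused:
            break
        obj_id, d = min(
            ((oid, math.dist(pxy, xy)) for oid, pxy in unused.items()),
            key=lambda pair: pair[1],
        )
        if d <= max_match_dist:
            assignments[i] = obj_id
            del unused[obj_id]
    return assignments
```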



FIG. 9A is a diagram of a positional sensing device 10 and FIG. 9B is a diagram of a depth sensing device 11. In contrast with the depth sensing device 11, the positional sensing device 10 may be positioned on a ceiling, parallel to the ceiling surface. In some embodiments, the depth sensing device 11 can be configured to be positioned at an angle relative to the wall against which it is mounted, e.g., at a 5° angle or a value approximate thereto, though other angles may be used as appropriate. In contrast to the positional sensing device 10, the depth sensing device 11 can be placed on a wall surface proximate an entryway or threshold. In some embodiments, the depth sensing device can be placed at a distance of at least 110 mm from the wall, although other distances are possible in other embodiments.


Referring now to FIG. 10, a flowchart 1000 is provided for detecting objects in accordance with various embodiments. In various embodiments, the method of FIG. 10 may be carried out using processing device(s), such as any of the processors discussed herein. As discussed herein, the processing device(s) may be referred to as a counting processor. However, the counting processor does not have to be a singular processor and does not necessarily have to be distinct from other processors in the system. As such, the operations herein may be carried out by any of the embodiments herein unless otherwise stated. Unless otherwise stated, the operations of FIG. 10 may be carried out by the same system, such as the systems of various embodiments discussed herein. The operations discussed herein may be carried out in part or completely via any of the sensing devices shown in FIGS. 12A-13D. Additionally, FIGS. 13A-13D illustrate example positioning locations for a sensing device for use in various areas.


Referring now to Block 1010 of FIG. 10, the method includes receiving at least one detection data packet from the sensor. The at least one detection data packet may include data obtained via a sensing device (e.g., the positional sensing device 10, the depth sensing device 11, etc.). The at least one detection data packet may include the numerical, graphical, and/or other data associated with an area being monitored by the given sensing device (e.g., analog readings from the sensing device, an integer variable holding the number of determined detections if the detection determination has already been made by the sensing device, the point cloud data, the tracklets 702, the determined trajectories for the detected objects, and/or the like). The data in the detection data packet may be based on a modulated pulse received by the sensor. The modulated pulse received by the sensor may be at least one of the pulses previously described (e.g., Doppler pulses). As such, the detection data packet may be processed herein for phase shifts and/or amplitude changes.
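The disclosure does not define the detection data packet field by field; the dataclass below is only one hypothetical way to carry the kinds of content listed above (per-pulse phase and amplitude readings, point cloud data, and an optional pre-computed count), and every field name is an assumption.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class DetectionDataPacket:
    sensor_id: str
    timestamp: float
    phase_shifts: List[float] = field(default_factory=list)       # per-pulse phase readings
    amplitude_changes: List[float] = field(default_factory=list)  # per-pulse amplitude readings
    point_cloud: List[Tuple[float, float, float]] = field(default_factory=list)
    precomputed_count: Optional[int] = None   # present if the sensing device already counted

packet = DetectionDataPacket(sensor_id="device-10A", timestamp=1712345678.0,
                             phase_shifts=[0.12, 0.09], amplitude_changes=[0.4, 0.3])
```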


In various embodiments, a detection data packet may be sent from the sensing device via the network 30. The detection data packet may also be sent from a sensor (e.g., the sensor 232) in an instance in which the sensor is separate from the rest of the positional sensing device and connected to the network 30. A counting processor for processing the detection data packet may be part of the server 20, the sensing device, the sensor 232, or part of any of the other computing devices described above. The detection data packet may contain data that needs to be processed before a detection is determined (e.g., analog readings from the sensing device, the point cloud data, the tracklets 702, etc.). Alternatively, a detection determination may have already been made, and the detection data packet may contain information relating to the detection determination, such as an initial estimated object count. As such, the detection data packet may include the raw data obtained via sensing device(s) and/or the detection data packet may include partially and/or fully processed data (e.g., the raw data obtained via sensing device(s) may be processed or otherwise analyzed to determine an estimated object count, determine a quality of the data, and/or the like).


While the operations of FIG. 10 are discussed in reference to a sensing device, in various embodiments, multiple sensing devices may be used for the same area and/or different areas. For example, in an instance in which more than one sensing device is used in the same area, the system may use the data from each sensing device to confirm the data obtained (e.g., a reading obtained by a first sensing device may be compared to a reading of a second sensing device in the same area to confirm the data). In various embodiments, a detection data packet may include data from a single sensing device (e.g., the system may receive a detection data packet from each sensing device). Alternatively, a detection data packet may include data from multiple sensing devices (e.g., the data may be collected from multiple sensing devices and sent as a single detection data packet).


Referring now to Block 1020 of FIG. 10, the method includes determining an estimated object count based on the at least one detection data packet. Based on the form of data within the detection data packet (e.g., analog readings from the sensing device, an integer variable holding the number of determined detections, the point cloud data, etc.), the system may process the data within the detection data packet to determine if there is a detection and subsequently the number of objects estimated (e.g., the estimated object count). In various embodiments, the estimated object count may be based on phase shifts and/or amplitude changes of the data in the detection data packet(s).


The system may determine, based on the detection data packet(s), an estimated object count. The estimated object count may indicate the number of distinct objects (e.g., people, animals, etc.) that are present within an area. In various embodiments, the detection data packet may indicate whether any objects are thought to be present (e.g., a sensing device may have a threshold reading of motion in an area, such that any detection data packet likely includes at least one object). The system may confirm the presence of an object and then determine an estimated object count therein. Alternatively, the detection data packet may not indicate whether any objects are present (e.g., a sensing device may take periodic readings that are then transmitted as detection data packets regardless of actual activity in the given area). As such, the system may determine whether any objects are in an area and the estimated object count. In various embodiments, the determination of whether any objects are in an area and the estimated object count may be combined (e.g., the estimated object count may indicate that zero objects are present in the given area).


The system may determine that one or more objects are in an area being monitored using pulse readings contained in the detection data packets (e.g., detecting certain variations in amplitude and frequency in pulse signals received by the sensing device, using a machine learning module to spot patterns consistent with the object to be detected in the data collected by the sensing device, connecting point clouds together that are associated with a respective object to form tracklets, converting the positional data into trajectory data, and/or the like). As such, the pulse readings may indicate whether any objects are present in an area and an estimated object count within the area.
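As one illustrative sketch of turning pulse-derived positional readings into an estimated object count, detections could be grouped into spatial clusters and the number of clusters taken as the count; the greedy clustering approach and the radius value are assumptions, not the disclosed method.

```python
import math
from typing import List, Tuple

def estimate_object_count(points: List[Tuple[float, float]], cluster_radius: float = 0.75) -> int:
    """Greedy clustering: each point joins the first cluster within cluster_radius, else starts one."""
    clusters: List[List[Tuple[float, float]]] = []
    for p in points:
        for cluster in clusters:
            if any(math.dist(p, q) <= cluster_radius for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return len(clusters)

print(estimate_object_count([(0.0, 0.0), (0.3, 0.1), (4.0, 4.0)]))  # 2 distinct objects
```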


The estimated object count may be determined via one or more sensor readings (e.g., one or more detection data packets). For example, the estimated object count may be determined from a first detection data packet that includes data from a first sensing device at a first time. Additionally or alternatively, the estimated object count may use a second detection data packet that includes data from the first sensing device at a second time. For example, the sensing device may be positioned to capture an instance in which an object is entering and/or exiting the given area and a data point is registered each time the sensing device detects activity.


In various embodiments, the system may be able to determine an object type for one or more objects detected. For example, the system may determine that a sensor reading indicates a human or an animal. As such, the estimated object count may be for a specific object type (e.g., number of people detected) and/or for any objects detected (e.g., the estimated object count may indicate the number of objects, regardless of object type in a given area). The system may include separate estimated object counts for each object type. For example, a person estimated object count may be determined for people in an area and an animal estimated object count may be determined for animals in the area.


Referring now to Block 1030 of FIG. 10, the method includes comparing the estimated object count to an object count cap. The object count cap may be a predetermined cap that indicates the number of objects that the system may detect within a specific confidence level. The object count cap may be based on the type of object being detected (e.g., people, animals, etc.), the type of area being monitored (e.g., it may be easier to monitor objects in a first area than in a second area due to area characteristics, such as area size, background color, amount of objects expected in an area, etc.), and/or the like. Generally speaking, the more objects in an area, the lower the confidence level that the object count is correct. As such, the estimated object count may be less accurate as the number of objects increases.


In various embodiments, the object count cap may be based on a desired confidence level. The detection accuracy of the sensing device may vary based on a variety of factors (e.g., size of detection zone, type of object being detected, number of objects being detected at one time). To better reflect the detection accuracy of the sensing device for a particular scenario, a confidence level may be set based on the performance of the sensing device in a particular scenario.


In various embodiments, the system may have an estimated confidence level for different object counts for an area (e.g., a generic confidence level that may be independent of area conditions). Alternatively, the confidence level for different object counts of an area may be determined using area characteristics and/or historical information (e.g., the system may have a testing period in which estimated object counts are compared to actual object counts provided to the system). For example, the system may have a testing period in which the system receives input(s) from a user indicating the number of objects in an area. The input(s) may be compared to the estimated object count to determine an accuracy level of the system. The accuracy level of the system may be used as the confidence level to determine an object count cap. The confidence level at which an object count cap is set may depend on the use case of the system. For example, in an instance in which the object count is important to a high degree of certainty (e.g., monitoring for fire code capacity), the confidence level may be higher (e.g., resulting in a lower object count cap), while in an instance in which a higher degree of uncertainty is allowed, the confidence level may be lower (e.g., resulting in a higher object count cap).


For example, an analysis may be performed to determine how accurate the estimated object count is as more objects are detected in a particular area. In such an example, the confidence level may be determined based on the number of times the system is correct that the amount of objects is above a certain number (e.g., a potential object count cap). For example, a system may determine the confidence level for an object count cap of zero (e.g., 1+ objects may be detected with 100% accuracy in an example embodiment), an object count cap of one (e.g., 1+ objects are correctly detected with 98% accuracy in an example embodiment), an object count cap of two (e.g., 2+ objects are correctly detected with 94% accuracy in an example embodiment), an object count cap of three (e.g., 3+ objects are correctly detected with 91% accuracy in an example embodiment), an object count cap of four (e.g., 4+ objects are correctly detected with 80% accuracy in an example embodiment), and an object count cap of five (e.g., 5+ objects are correctly detected with 65% accuracy in an example embodiment). The number of different object count caps tested may differ based on results (e.g., the system may test until a confidence level is below a threshold value). As such, the object count cap used for the system may be based on the confidence level measured (e.g., the number of times reported correctly). The desired confidence level may differ based on use case.
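Using the illustrative accuracy figures above, selecting an object count cap for a desired confidence level could be expressed as the following sketch; the accuracy values are the example numbers from the preceding paragraph, not measured data, and the function name is hypothetical.

```python
def select_object_count_cap(accuracy_by_cap: dict, desired_confidence: float) -> int:
    """Return the largest cap whose measured accuracy still meets the desired confidence."""
    qualifying = [cap for cap, acc in accuracy_by_cap.items() if acc >= desired_confidence]
    return max(qualifying) if qualifying else 0

# Example accuracies from the text: cap -> fraction of correct "cap+" detections
accuracy_by_cap = {0: 1.00, 1: 0.98, 2: 0.94, 3: 0.91, 4: 0.80, 5: 0.65}

print(select_object_count_cap(accuracy_by_cap, desired_confidence=0.90))  # 3
```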


In various embodiments, the desired confidence level may be adjusted, by the system and/or manually by a user. For example, a system may monitor performance during operation and change the object count cap based on the monitored performance (e.g., the system may determine the accuracy of the initial testing using actual data during operation).


The comparison of the estimated object count to an object count cap may determine whether the estimated object count is less than, equal to, or greater than the object count cap. In an instance in which the estimated object count is greater than the object count cap, the estimated object count would have a confidence level that is less than desired. As detailed in reference to Block 1040, the comparison of the estimated object count to the object count cap affects the display object count. Namely, the display object count is always less than or equal to the object count cap. For example, in an instance in which the estimated object count is four and the object count cap is three, the display object count may be three.


Referring now to Block 1040 of FIG. 10, the method includes determining a display object count for the area based on the comparison of the estimated object count to the object count cap. In various embodiments, the display object count may be either less than or equal to the object count cap. For example, in an instance in which the estimated object count is four and the object count cap is three, the display object count would be three. As such, in an instance in which the estimated object count is less than or equal to the object count cap, the display object count may be the same as the estimated object count. Alternatively, in an instance in which the estimated object count is greater than the object count cap, the display object count may be the same as the object count cap. As discussed herein, a “+” may be added to the display object count to indicate that the estimated object count is greater than or equal to the object count cap. Continuing the previous example in which the confidence threshold was set to 90% and the object count cap was set to three, the display object count may be zero, one, two, or three (with the three displayed as 3+) based on the estimated object count.


Referring now to optional Block 1050 of FIG. 10, the method includes determining an occupancy type of the area based on the display object count. In various embodiments, an occupancy type may be determined based on the display object count. For example, in an instance in which the display object count is zero, the occupancy type could be unoccupied; in an instance in which the display count is one, the occupancy type could be single occupancy; and in an instance in which the display count is above one, the occupancy type could be multiple occupancy.
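A small sketch of the occupancy-type mapping just described (the function name and label strings are hypothetical):

```python
def occupancy_type(display_object_count: int) -> str:
    """Map the display object count to an occupancy type label."""
    if display_object_count == 0:
        return "unoccupied"
    if display_object_count == 1:
        return "single occupancy"
    return "multiple occupancy"

print([occupancy_type(n) for n in (0, 1, 4)])
# ['unoccupied', 'single occupancy', 'multiple occupancy']
```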


The occupancy type may provide additional information based on the use of the area being monitored. For example, in an instance in which a conference room is being monitored, a single occupancy may indicate that a person is squatting in the conference room, while a multiple occupancy may indicate that a meeting is being conducted. As such, the occupancy type may be used to determine a usage type of the area being monitored. In another example in which a hotel or rental property is being monitored, the occupancy type may indicate whether any potential issues may be occurring (e.g., the occupancy type may indicate that a party is occurring or more than expected people are present in the area).


In various embodiments, the system may determine only the occupancy type, which may be displayed in place of the display object count. For example, the occupancy type may be based on the display object count and provided to users instead of the display object count. In the conference room example, the system may notify users that a meeting is in process in an instance in which the display object count is greater than one.


Referring now to optional Block 1060 of FIG. 10, the method includes determining, based on the comparison of the estimated object count to the object count cap, a display accuracy. In various embodiments, the display accuracy may be determined based on the estimated object count and the object count cap. The display accuracy can be a fraction, ratio, or percentage. The display accuracy can be predetermined for each value of the estimated object count (e.g., 98% for one detection, 94% for two detections, and 91% for three detections, as discussed above in reference to the analysis performed to determine how accurate the estimated object count is as more objects are detected in a particular area).
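Continuing the illustrative accuracy figures, the display accuracy could be a simple lookup keyed on the display object count, as in the following sketch (the table values repeat the example numbers above and the names are hypothetical):

```python
# Hypothetical per-count accuracies taken from the example figures above
DISPLAY_ACCURACY = {1: 0.98, 2: 0.94, 3: 0.91}

def display_accuracy(display_object_count: int) -> str:
    """Return the predetermined accuracy for the display object count as a percentage string."""
    accuracy = DISPLAY_ACCURACY.get(display_object_count)
    return f"{accuracy:.0%}" if accuracy is not None else "n/a"

print(display_accuracy(2))  # 94%
```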


Referring now to Block 1070 of FIG. 10, the method includes causing a rendering of the display object count to the user interface. The rendering may be to a computing device associated with the area being monitored. The rendering may be part of an application or other viewable dashboard that provides information to a user. The rendering may include the display object count, the display accuracy, the object count cap, the occupancy type, and/or the like. The rendering via a user interface may also provide the user with the ability to update the desired confidence level (e.g., the system may receive an input via the computing device associated with the area being monitored).


In various embodiments, the display object count may be displayed on one or more user interfaces. The one or more user interfaces may be part of the mobile device 25 or any device capable of displaying a rendering (e.g., displaying text and/or visual representations). The display object count may be distributed to the one or more user interfaces via the API previously described.


In various embodiments, the display object count can be combined with a ‘+’ symbol when displayed on the user interface. Continuing the previous example where the confidence threshold was set to 90% and the object count cap was set to 3, the + symbol may appear only after a fourth detection is made (e.g., the display shows ‘2’ at 2 detections, ‘3’ at 3 detections, and ‘3+’ for the fourth detection and for any detection after that).
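The comparison and rendering rules of Blocks 1030-1070 reduce to a clamp, as in the sketch below; this sketch follows the variant described in this paragraph, in which the '+' appears only once the estimated object count exceeds the object count cap, and the function name is hypothetical.

```python
def determine_display_object_count(estimated_count: int, object_count_cap: int) -> str:
    """Clamp the estimated count at the cap; append '+' once the cap is exceeded."""
    if estimated_count > object_count_cap:
        return f"{object_count_cap}+"
    return str(estimated_count)

# With an object count cap of three (e.g., a 90% desired confidence level):
for estimated in range(6):
    print(estimated, "->", determine_display_object_count(estimated, 3))
# 0 -> 0, 1 -> 1, 2 -> 2, 3 -> 3, 4 -> 3+, 5 -> 3+
```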


Referring now to FIG. 11, a flowchart 1100 is provided for detecting objects in accordance with various embodiments. In various embodiments, the method of FIG. 11 may be carried out using processing device(s), such as any of the processors discussed herein. As discussed herein, the processing device(s) may be referred to as a counting processor. However, the counting processor does not have to be a singular processor and does not necessarily have to be distinct from other processors in the system. As such, the operations herein may be carried out by any of the embodiments herein unless otherwise stated. Unless otherwise stated, the operations of FIG. 11 may be carried out by the same system, such as the systems of various embodiments discussed herein. Additionally, the operations of FIG. 11 may also be part of the operations discussed in reference to FIG. 10 and are not mutually exclusive, unless specifically noted.


Referring now to Block 1110 and Block 1120 of FIG. 11, the method includes receiving at least one first detection data packet from a first sensing device and receiving at least one second detection data packet from a second sensing device. As discussed herein, a monitored area may have multiple sensing devices (also referred to as sensors) positioned to detect objects. The operations of receiving the at least one first detection data packet from a first sensing device and receiving the at least one second detection data packet from a second sensing device may be the same as discussed in reference to Block 1010 of FIG. 10.


While the operations of FIG. 11 include a first sensing device and a second sensing device, any number of sensing devices may be used with the operations discussed herein. The first sensing device and the second sensing device may be positioned to monitor the same area. Alternatively, the first sensing device and the second sensing device may be positioned to monitor different areas (e.g., the first sensing device and the second sensing device may be positioned to monitor adjacent areas). The first sensing device and the second sensing device may be any of the sensing devices or sensors discussed herein (e.g., the sensing device 10, the sensing device 1200, the phased array sensor 230, etc.).


Referring now to Block 1130 of FIG. 11, the method includes determining an estimated object count based on the first detection data packet. The operations of determining an estimated object count may be the same as discussed in reference to Block 1020 of FIG. 10. As such, the system may determine, based on the first detection data packet(s), an estimated object count. The estimated object count may indicate the number of distinct objects (e.g., people, animals, etc.) that are present within an area.


Referring now to Block 1140 of FIG. 11, the method includes verifying the estimated object count based on the at least one second detection data packet. In various embodiments, the sensor readings of the first sensing device (e.g., within the first detection data packet(s)) are compared to the sensor readings of the second sensing device (e.g., within the second detection data packet(s)) to verify an accurate reading from the first sensing device.


In an instance in which the first sensing device and the second sensing device are monitoring the same area, an estimated object count may also be determined based on the at least one second detection data packet and said estimated object count may be compared to the estimated object count determined based on the at least one first detection data packet. In such an example, in an instance in which the estimated object count determined based on the at least one second detection data packet matches (or is within a margin of error of) the estimated object count determined based on the at least one first detection data packet, the estimated object count determined in Block 1130 may be verified. In an instance in which the estimated object count determined based on the at least one second detection data packet does not match (or is not within a margin of error of) the estimated object count determined based on the at least one first detection data packet, the estimated object count determined in Block 1130 may be flagged (e.g., flagged as potentially inaccurate). In an instance in which an estimated object count is flagged, additional operations may be completed to ensure an accurate object count. For example, additional first detection data packet(s) and/or second detection data packet(s) may be requested and/or otherwise obtained to determine a more accurate estimated object count.
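One way to express the verification step in code, assuming both devices monitor the same area and a simple tolerance serves as the margin of error; the tolerance value and the function and variable names are assumptions.

```python
from typing import Tuple

def verify_estimated_count(
    first_count: int, second_count: int, margin_of_error: int = 1
) -> Tuple[int, bool]:
    """Return the count from the first device and whether the second device corroborates it."""
    verified = abs(first_count - second_count) <= margin_of_error
    return first_count, verified

count, verified = verify_estimated_count(first_count=4, second_count=5)
if not verified:
    # Flag for follow-up: e.g., request additional detection data packets
    print(f"Estimated count {count} flagged as potentially inaccurate")
else:
    print(f"Estimated count {count} verified")
```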


In an instance in which the first sensing device and the second sensing device are monitoring adjacent areas, the at least one second detection data packet may be used to determine whether any objects in the first area (monitored by the first sensing device) have left the first area (and entered the second area monitored by the second sensing device).


In some embodiments in which a first sensing device and a second sensing device are monitoring adjacent areas, the first detection data packet(s) and the second detection data packet(s) may be combined to determine an estimated object count for the combined area. As such, the second detection data packet(s) may not be used to verify the estimated object count, but instead used in conjunction with the first detection data packet(s).


Additional sensing devices may be used for a space to provide an estimated object count for a combined space. For example, the sensing devices 10A, 10B, and 10C of FIG. 5 are positioned to monitor adjacent areas (adjacent areas may be distinct or slightly overlapping, as shown in FIG. 5). In such an example, the detection data packet(s) from each of the three sensing devices may be used to determine the estimated object count for the combined area. Additionally or alternatively, the detection data packet(s) from different sensing devices may be used to determine clustering within an area. For example, a room with a video screen on one side of the room may result in clustering near the video screen. As such, the estimated object count may be divided within an area (e.g., among sub-areas within a larger area).


Referring now to Block 1150 of FIG. 11, the method includes comparing the estimated object count to an object count cap. The comparison may be the same as the operations discussed in reference to Block 1030 of FIG. 10. In various embodiments, the number of sensing devices and the positioning of the sensing device(s) may affect the object count cap. Additionally, the verification discussed in reference to Block 1140 may increase the confidence level (thereby increasing the object count cap). For example, the object count cap may be higher in an instance in which the estimated object count is verified, as opposed to a lower object count cap in an instance in which the estimated object count is not verified.


Referring now to Block 1160 of FIG. 11, the method includes determining a display object count for the area based on the comparison of the estimated object count to the object count cap. The determination of the display object count may be the same as the operations discussed in reference to Block 1040 of FIG. 10.


Referring now to Block 1170 of FIG. 11, the method includes causing a rendering of the display object count to a user interface. The causing a rendering of the display object count to a user interface may be the same as the operations discussed in reference to Block 1070 of FIG. 10.


Referring now to FIG. 12A, an example sensing device 1200 (or positional sensing device) is provided. The sensing device 1200 may include the sensing capabilities discussed herein in reference to any of the sensing devices. For example, the sensing device 1200 may use lasers, radar, and/or the like to detect objects. The objects may be detected without gathering any personal information (e.g., the sensing device is not a camera and only detects objects based on the sensing capabilities of the sensing device).


As shown, the sensing device 1200 may be a self-contained product that allows for placement within an area to be monitored (e.g., such as an area discussed above in reference to FIG. 10). The sensing device 1200 may be positioned within a housing 1205 and have one or more sensing apertures 1210. The sensing device 1200 may be structured to emit one or more pulses (e.g., microwave radiation pulses) through the one or more sensing apertures 1210 via a pulse emitter within the housing 1205. As such, the system may monitor for a Doppler shift of microwave radiation that is emitted from a pulse emitter of each positional sensing device and reflected from objects within the environment.


Referring now to FIG. 12B, the sensing device 1200 may be connected to a network 30 and subsequently to a server, such as the remote server 20 discussed herein. In various embodiments, the remote server 20 may carry out any number of operations discussed herein, such as the operations of FIGS. 10 and/or 11. Additionally or alternatively, some or all of the operations may be performed by the sensing device 1200 (e.g., the sensing device 1200 may have processing device(s) and/or other components necessary to carry out the operations herein).


The sensing device 1200 may include various different communication interfaces capable of communicating with the network 30. For example, the sensing device 1200 may have connectivity via Wi-Fi (e.g., Wi-Fi 6: 2.4 GHz and/or 5 GHz), Bluetooth, and/or cellular communication. The sensing device 1200 may be powered via an internal battery and/or an external power supply (e.g., the sensing device 1200 may have a plug that connects the sensing device 1200 to electricity).


As shown in the various use cases of FIGS. 13A-13D, the sensing device 1200 may be positioned at various locations within an area to detect objects within the area. For example, the sensing device 1200 may be wall mounted (e.g., as shown in FIG. 13A). Alternatively, the sensing device 1200 may have a stand to sit on a flat surface (e.g., as shown in FIGS. 13B and 13C). Moreover, the sensing device 1200 may have a mounting to attach to a surface, such as a television shown in FIG. 13D. In various embodiments, the sensing device 1200 may be structured to fit into one or more different mountings and/or stands to allow for different installations (e.g., as shown in FIGS. 13A-13D). Additional mounting locations may be contemplated.


Referring now to FIGS. 13A-13D, various areas are shown with a sensing device (e.g., sensing device 1200) installed. For example, FIG. 13A illustrates a sensing device 1200 positioned within a workspace 1300, FIG. 13B illustrates a sensing device 1200 positioned on a flat surface (e.g., a cabinet 1310), FIG. 13C illustrates a sensing device 1200 positioned on another flat surface (e.g., a desk 1320), and FIG. 13D illustrates a sensing device 1200 positioned within a conference room 1330 (e.g., mounted to a television 1335).


While the sensing device 1200 of FIG. 12A is shown in FIGS. 13A-13D, any number of different sensing devices discussed herein may be used in the use cases shown. Additionally, the sensing device(s) may be used in different environments, as discussed herein.


In various embodiments, the position of the sensing device 1200 may be based on the expected location of objects within an area. For example, the sensing device 1200 shown in FIG. 13D is positioned adjacent to a television screen that is likely to attract people during a conference call. An installer may select an area in which the best reading will be obtained. Additionally or alternatively, the sensing device may be positioned to avoid any unintended objects. For example, in a house, the sensing device 1200 may be positioned at a height that will not pick up objects shorter than a certain height. For instance, a sensing device 1200 may be positioned at a height tall enough to detect people and not animals (i.e., animals, such as dogs or cats, may not be detected when moving around a space due to the height of the sensing device).


By means of the methods and systems described above, a real-time, accurate, and highly-scalable solution for tracking people's trajectories and determining occupancy counts can be implemented, while still remaining conscious of privacy and retaining anonymity of the people it monitors. Unlike optical cameras that collect images that must later be processed and/or anonymized, the systems and methods herein are anonymized from the start, as they do not store personally-identifiable information. The positional sensing devices track objects within their fields of view in an anonymous manner such that stored data cannot be correlated to the identity of any specific person being monitored. The system gathers anonymous data, meaning the system has no way to determine the identity, gender, facial features, or other recognizable information of individual people. Accordingly, accurate and anonymous trajectories of people can be provided and made accessible via a cloud-based interface. Businesses and customers may have access to real-time, historical trajectory data, which can also be used to determine occupancy of monitored areas, which may allow businesses and customers to optimize their management and schedules in view of that data. Further, the data can be viewed at different levels of granularity, providing for highly-flexible analysis thereof.


The foregoing is merely illustrative of the principles of this disclosure and various modifications may be made by those skilled in the art without departing from the scope of this disclosure. The above described embodiments are presented for purposes of illustration and not of limitation. The present disclosure also can take many forms other than those explicitly described herein. Accordingly, it is emphasized that this disclosure is not limited to the explicitly disclosed methods, systems, and apparatuses, but is intended to include variations to and modifications thereof, which are within the spirit of the following claims.


As a further example, variations of apparatus or process parameters (e.g., dimensions, configurations, components, process step order, etc.) may be made to further optimize the provided structures, devices and methods, as shown and described herein. In any event, the structures and devices, as well as the associated methods, described herein have many applications. Therefore, the disclosed subject matter should not be limited to any single embodiment described herein, but rather should be construed in breadth and scope in accordance with the appended claims.


III. Claim Clauses

Clause 1. A system for detecting objects comprising: a sensor for detecting at least one of a phase shift or an amplitude change in modulated pulses from a pulse generator, wherein the at least one of a phase shift or an amplitude change indicates one or more objects in an area; and a count determination device, wherein the count determination device comprises at least one non-transitory storage device; and at least one processing device coupled to the at least one non-transitory storage device, wherein the at least one processing device is configured to: receive at least one detection data packet from the sensor, wherein the at least one detection data packet comprises the at least one of the phase shift or the amplitude change; determine an estimated object count based on the at least one detection data packet, wherein the estimated object count indicates a number of distinct objects in the area; compare the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level of the system; based on the comparison of the estimated object count to the object count cap, determine a display object count for the area, wherein the display object count is less than or equal to the object count cap; and cause a rendering of the display object count to a user interface.


Clause 2. The system of Clause 1, wherein the object count cap is at least two objects.


Clause 3. The system of Clause 2, wherein the at least one processing device is also configured to determine an occupancy type of the area based on the display object count, wherein the occupancy type is unoccupied in an instance in which the display object count is zero, wherein the occupancy type is single occupancy in an instance in which the display object count is one, and wherein the occupancy type is multiple occupancy in an instance in which the display object count is above one.


Clause 4. The system of Clause 1, wherein the count determination device is further configured to: determine, based on the comparison of the estimated object count to the object count cap, a display accuracy; and cause a rendering of the display accuracy to the user interface.


Clause 5. The system of Clause 1, wherein each of the at least one object corresponds to an individual person.


Clause 6. The system of Clause 1, wherein the display object count is rendered by displaying the display object count followed by a ‘+’.


Clause 7. The system of Clause 1, wherein the user interface is a mobile device.


Clause 8. The system of Clause 1, wherein the display object count is one less than the estimated object count.


Clause 9. A method of detecting objects comprising: receiving at least one detection data packet from a sensor, wherein the sensor is configured to detect at least one of a phase shift or an amplitude change in modulated pulses from a pulse generator, and wherein the at least one of the phase shift or the amplitude change indicates one or more objects in an area; determining an estimated object count based on the at least one detection data packet, wherein the estimated object count indicates a number of distinct objects in the area; comparing the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level; determining a display object count for the area based on the comparison of the estimated object count to the object count cap, wherein the display object count is less than or equal to the object count cap; and causing a rendering of the display object count to a user interface.


Clause 10. The method of Clause 9, wherein the object count cap is at least two objects.


Clause 11. The method of Clause 10, further comprising determining an occupancy type of the area based on the display object count, wherein the occupancy type is unoccupied in an instance in which the display object count is zero, wherein the occupancy type is single occupancy in an instance in which the display object count is one, and wherein the occupancy type is multiple occupancy in an instance in which the display object count is above one.


Clause 12. The method of Clause 9, further comprising determining, based on the comparison of the estimated object count to the object count cap, a display accuracy; and causing a rendering of the display accuracy to the user interface.


Clause 13. The method of Clause 9, wherein each of the at least one object corresponds to an individual person.


Clause 14. The method of Clause 9, wherein the display object count is rendered by displaying the display object count followed by a ‘+’.


Clause 15. A computer program product for processing data from object detection devices, the computer program product comprising at least one non-transitory computer-readable medium having one or more computer-readable program code portions embodied therein, the one or more computer-readable program code portions comprising at least one executable portion configured to: receive at least one detection data packet from a sensor, wherein the sensor is configured to detect at least one of a phase shift or an amplitude change in modulated pulses from a pulse generator, and wherein the at least one of the phase shift or the amplitude change indicates one or more objects in an area; determine an estimated object count based on the at least one detection data packet, wherein the estimated object count indicates a number of distinct objects in the area; compare the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level; determine a display object count for the area based on the comparison of the estimated object count to the object count cap, wherein the display object count is less than or equal to the object count cap; and cause a rendering of the display object count to a user interface.


Clause 16. The computer program product of Clause 15, wherein the at least one executable portion is further configured to: determine, based on the comparison of the estimated object count to the object count cap, a display accuracy; and cause a rendering of the display accuracy to the user interface.


Clause 17. The computer program product of Clause 15, wherein the object count cap is at least two objects.


Clause 18. The computer program product of Clause 17, wherein the at least one executable portion is further configured to determine an occupancy type of the area based on the display object count, wherein the occupancy type is unoccupied in an instance in which the display object count is zero, wherein the occupancy type is single occupancy in an instance in which the display object count is one, and wherein the occupancy type is multiple occupancy in an instance in which the display object count is above one.


Clause 19. The computer program product of Clause 15, wherein each of the at least one object corresponds to an individual person.


Clause 20. The computer program product of Clause 15, wherein the display object count is rendered by displaying the display object count followed by a ‘+’.

Claims
  • 1. A system for detecting objects, the system comprising: a sensor for detecting at least one of a phase shift or an amplitude change in modulated pulses from a pulse generator, wherein the at least one of a phase shift or an amplitude change indicates one or more objects in an area; and a count determination device, wherein the count determination device comprises at least one non-transitory storage device; and at least one processing device coupled to the at least one non-transitory storage device, wherein the at least one processing device is configured to: receive at least one detection data packet from the sensor, wherein the at least one detection data packet comprises the at least one of the phase shift or the amplitude change; determine an estimated object count based on the at least one detection data packet, wherein the estimated object count indicates a number of distinct objects in the area; compare the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level of the system; based on the comparison of the estimated object count to the object count cap, determine a display object count for the area, wherein the display object count is less than or equal to the object count cap; and cause a rendering of the display object count to a user interface.
  • 2. The system of claim 1, wherein the object count cap is at least two objects.
  • 3. The system of claim 2, wherein the at least one processing device is also configured to determine an occupancy type of the area based on the display object count, wherein the occupancy type is unoccupied in an instance in which the display object count is zero, wherein the occupancy type is single occupancy in an instance in which the display object count is one, and wherein the occupancy type is multiple occupancy in an instance in which the display object count is above one.
  • 4. The system of claim 1, wherein the count determination device is further configured to: determine, based on the comparison of the estimated object count to the object count cap, a display accuracy; and cause a rendering of the display accuracy to the user interface.
  • 5. The system of claim 1, wherein each of the at least one object corresponds to an individual person.
  • 6. The system of claim 1, wherein the display object count is rendered by displaying the display object count followed by a ‘+’.
  • 7. The system of claim 1, wherein the user interface is a mobile device.
  • 8. The system of claim 1, wherein the display object count is one less than the estimated object count.
  • 9. A method of detecting objects, the method comprising: receiving at least one detection data packet from a sensor, wherein the sensor is configured to detect at least one of a phase shift or an amplitude change in modulated pulses from a pulse generator, and wherein the at least one of the phase shift or the amplitude change indicates one or more objects in an area; determining an estimated object count based on the at least one detection data packet, wherein the estimated object count indicates a number of distinct objects in the area; comparing the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level; determining a display object count for the area based on the comparison of the estimated object count to the object count cap, wherein the display object count is less than or equal to the object count cap; and causing a rendering of the display object count to a user interface.
  • 10. The method of claim 9, wherein the object count cap is at least two objects.
  • 11. The method of claim 10, further comprising determining an occupancy type of the area based on the display object count, wherein the occupancy type is unoccupied in an instance in which the display object count is zero, wherein the occupancy type is single occupancy in an instance in which the display object count is one, and wherein the occupancy type is multiple occupancy in an instance in which the display object count is above one.
  • 12. The method of claim 9, further comprising determining, based on the comparison of the estimated object count to the object count cap, a display accuracy; and causing a rendering of the display accuracy to the user interface.
  • 13. The method of claim 9, wherein each of the at least one object corresponds to an individual person.
  • 14. The method of claim 9, wherein the display object count is rendered by displaying the display object count followed by a ‘+’.
  • 15. A computer program product for processing data from object detection devices, the computer program product comprising at least one non-transitory computer-readable medium having one or more computer-readable program code portions embodied therein, the one or more computer-readable program code portions comprising at least one executable portion configured to: receive at least one detection data packet from a sensor, wherein the sensor is configured to detect at least one of a phase shift or an amplitude change in modulated pulses from a pulse generator, and wherein the at least one of the phase shift or the amplitude change indicates one or more objects in an area; determine an estimated object count based on the at least one detection data packet, wherein the estimated object count indicates a number of distinct objects in the area; compare the estimated object count to an object count cap, wherein the object count cap is based on a desired confidence level; determine a display object count for the area based on the comparison of the estimated object count to the object count cap, wherein the display object count is less than or equal to the object count cap; and cause a rendering of the display object count to a user interface.
  • 16. The computer program product of claim 15, wherein the at least one executable portion is further configured to: determine, based on the comparison of the estimated object count to the object count cap, a display accuracy; and cause a rendering of the display accuracy to the user interface.
  • 17. The computer program product of claim 15, wherein the object count cap is at least two objects.
  • 18. The computer program product of claim 17, wherein the at least one executable portion is further configured to determine an occupancy type of the area based on the display object count, wherein the occupancy type is unoccupied in an instance in which the display object count is zero, wherein the occupancy type is single occupancy in an instance in which the display object count is one, and wherein the occupancy type is multiple occupancy in an instance in which the display object count is above one.
  • 19. The computer program product of claim 15, wherein each of the at least one object corresponds to an individual person.
  • 20. The computer program product of claim 15, wherein the display object count is rendered by displaying the display object count followed by a ‘+’.
RELATED APPLICATIONS

This application is a continuation-in-part of U.S. Non-Provisional application Ser. No. 18/656,521, entitled “TRAJECTORY DETERMINATION SYSTEM USING POSITIONAL SENSING TO DETERMINE THE MOVEMENT OF PEOPLE OR OBJECTS” and filed on May 6, 2024, which is a continuation of U.S. Non-Provisional application Ser. No. 18/503,060, entitled “TRAJECTORY DETERMINATION SYSTEM USING POSITIONAL SENSING TO DETERMINE THE MOVEMENT OF PEOPLE OR OBJECTS” and filed on Nov. 6, 2023, which is a continuation of U.S. Non-Provisional application Ser. No. 18/365,823, entitled “TRAJECTORY DETERMINATION SYSTEM USING POSITIONAL SENSING TO DETERMINE THE MOVEMENT OF PEOPLE OR OBJECTS” and filed on Aug. 4, 2023, the contents of all of which are incorporated herein by reference.

Continuations (3)
Number Date Country
Parent 18656521 May 2024 US
Child 19014487 US
Parent 18503060 Nov 2023 US
Child 18656521 US
Parent 18365823 Aug 2023 US
Child 18503060 US