Room labeling drawing interface for activity tracking and detection

Information

  • Patent Grant
  • Patent Number
    12,125,137
  • Date Filed
    Tuesday, May 11, 2021
  • Date Issued
    Tuesday, October 22, 2024
Abstract
Exemplary embodiments include an intelligent secure networked architecture configured by at least one processor to execute instructions stored in memory, the architecture comprising a data retention system and a machine learning system, a web services layer providing access to the data retention and machine learning systems, an application server layer that provides a user-facing application that accesses the data retention and machine learning systems through the web services layer and performs processing based on user interaction with an interactive graphical user interface provided by the user-facing application, the user-facing application configured to execute instructions for a method for room labeling for activity tracking and detection, the method including making a 2D sketch of a first room on an interactive graphical user interface, and using machine learning to turn the 2D sketch of the first room into a 3D model of the first room.
Description
FIELD OF TECHNOLOGY

Exemplary systems and methods create a three-dimensional (3D) model of a dwelling from a two-dimensional (2D) sketch and real-time user feedback, producing an accurate map with location tagging. In particular, but not by way of limitation, exemplary embodiments provide the ability for a 3D map to be created and to have its location automatically communicated through internet connectivity or cellular network access.


SUMMARY OF EXEMPLARY EMBODIMENTS

Exemplary embodiments include an intelligent secure networked architecture configured by at least one processor to execute instructions stored in memory, the architecture comprising a data retention system and a machine learning system, a web services layer providing access to the data retention and machine learning systems, an application server layer that provides a user-facing application that accesses the data retention and machine learning systems through the web services layer and performs processing based on user interaction with an interactive graphical user interface provided by the user-facing application, the user-facing application configured to execute instructions for a method for room labeling for activity tracking and detection, the method including making a 2D sketch of a first room on an interactive graphical user interface, and using machine learning to turn the 2D sketch of the first room into a 3D model of the first room.


Additionally, exemplary methods include transmitting the 2D sketch of the first room using an internet or cellular network to a series of cloud-based services, using input data from the 2D sketch of the first room to generate the 3D model of the first room with an estimated dimension, making a 2D sketch of a second room on an interactive graphical user interface, using machine learning to turn the 2D sketch of the second room into a 3D model of the second room, using machine learning to combine the 3D model of the first room and the 3D model of the second room, updating a dimension of the 3D model of the first room and a dimension of the 3D model of the second room and using machine learning to create a 3D model of a dwelling.


Various exemplary methods include placing a device having an interactive graphical user interface, an integrated camera and a geolocator in one or more rooms of the dwelling, associating a physical address with the dwelling, tracking activity in the one or more rooms of the dwelling, and transmitting the tracking activity in the one or more rooms of the dwelling using an internet or cellular network to a series of cloud-based services.


Further exemplary methods include the machine learning utilizing a convolutional neural network and using backpropagation to train the convolutional neural network. The 2D sketch of the first room may be received by an input layer of the trained convolutional neural network, the 2D sketch of the first room being processed through an additional layer of the trained convolutional neural network, and the 2D sketch of the first room being processed through an output layer of the trained convolutional neural network, resulting in the 3D model of the first room.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed disclosure, and explain various principles and advantages of those embodiments.


The methods and systems disclosed herein have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.


Summary of the figures: FIG. 1 shows the process of a user interacting with the touchscreen device by using their finger to draw a square that represents a room in their dwelling. FIG. 2 then shows the 2D images being run through ML processes to be transformed into a 3D model that can be labeled and displayed back to the user, as displayed in FIG. 3.



FIG. 1—Touchscreen display of the user interface, where a user makes a 2D sketch.



FIG. 2—Algorithm for input data using Machine Learning to turn the 2D sketch into a 3D model.



FIG. 3—Finished 3D model of the environment from the user's collective 2D sketches.



FIG. 4—Exemplary architecture that can be used to practice aspects of the present technology.





DETAILED DESCRIPTION

While the present technology is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the present technology and is not intended to limit the technology to the embodiments illustrated.


Various exemplary embodiments allow for quick creation of a 3D model of a dwelling that would immediately communicate its location upon creation. This allows for a number of uses, including, but not limited to, the ability to track patients as they move about a residence. Tracking the location of a person within their dwelling is often necessary as part of a plan of care for patients with diseases such as dementia. Having an accurate layout and location of a place of residence and an ability to track movement within the residence allows for quick response if emergency services need to be sent, or if the patient wanders out of the house.


Provided herein are exemplary systems for creating a three-dimensional (3D) model of a dwelling using a touch interface with an integrated camera, a two-dimensional (2D) sketch, and a labeling system, in order to immediately communicate the location of the model through internet connectivity or cellular access. This system would also allow for remote tracking of location and activity within the dwelling. Because the user draws on a device located in the home, the location and the map would be collected at the same time, allowing for geo-tagging within a small environment. The device would inherently know its location within a room based on the drawing that the user provides. This will then give users and authorized personnel access to real-time feedback of activity within the dwelling.


Using a touch interface, the user will provide a sketch of their residence through guided interaction. The interface will prompt the user to draw a 2D template of the room they are currently occupying. The interface will then prompt the user to draw adjacent and successive rooms throughout the residence in order to create a floor plan of the dwelling. Machine Learning (ML) models will then interpret the drawings and adjust for real-world dimensions in order to produce a more accurate model of the residence. Given a multi-device interface mesh system, the interface will prompt the user to identify any other rooms in the 3D model in which additional devices reside. As the model is created, it will also be uploaded through internet connectivity or cellular networks in order to broadcast the location. This provides a way for the integrated cameras within the devices to remotely monitor activity and traffic throughout the dwelling in order to respond to any emergencies that might arise.
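
The guided interaction described above can be pictured as a simple prompt-and-collect loop. The following Python sketch is only an illustration; the RoomSketch structure, the prompt wording, and the capture_outline, more_rooms, and upload_model callbacks are hypothetical placeholders, not the actual interface.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RoomSketch:
    # Hypothetical container for one room's 2D sketch (label + traced outline).
    label: str
    outline: List[Tuple[float, float]]  # (x, y) touch points from the touchscreen

def guided_sketch_session(capture_outline, more_rooms, upload_model):
    """Prompt the user room by room, as described above (illustrative only).

    capture_outline(prompt) -> list of (x, y) points drawn on the touchscreen
    more_rooms()            -> True while the user wants to add adjacent rooms
    upload_model(rooms)     -> transmits the growing floor plan over the network
    """
    rooms = []
    prompt = "Draw the outline of the room you are currently in."
    while True:
        outline = capture_outline(prompt)
        rooms.append(RoomSketch(label=f"room-{len(rooms) + 1}", outline=outline))
        upload_model(rooms)  # broadcast the location/model as it is created
        if not more_rooms():
            break
        prompt = "Move to an adjacent room and draw its outline."
    return rooms
```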



FIG. 1 demonstrates the user interacting with the touchscreen device. In this figure, the user is drawing an initial sketch of the room they are in. The device maps this sketch into 3D and does so for each successive room while it is located in the room being mapped.


Referring to FIG. 1, various exemplary embodiments use a touchscreen interface (101) located within a user's (102) dwelling to collect a two-dimensional drawing (103) representing the dwelling. After the user completes 103, the data are transmitted from 101 using either the internet or a cellular network to a series of cloud-based services. Throughout the process, all data are secured using AES 256-bit encryption, whether the data are in transit or at rest.
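
For the data-in-transit protection just mentioned, one common way to apply AES with a 256-bit key in Python is the AESGCM primitive from the cryptography package. The sketch below is a minimal illustration of encrypting a serialized sketch payload before transmission; the payload contents and the key-management details are assumptions, not part of the disclosure.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_payload(payload: bytes, key: bytes):
    """Encrypt a serialized 2D-sketch payload with AES-256-GCM.

    Returns (nonce, ciphertext); the nonce must accompany the ciphertext
    so the cloud service can decrypt it.
    """
    aesgcm = AESGCM(key)          # a 32-byte key gives AES-256
    nonce = os.urandom(12)        # 96-bit nonce, unique per message
    ciphertext = aesgcm.encrypt(nonce, payload, None)
    return nonce, ciphertext

# Example usage (key distribution and storage are out of scope here):
key = AESGCM.generate_key(bit_length=256)
nonce, ct = encrypt_payload(b'{"room": "kitchen", "points": [[0,0],[4,0],[4,3],[0,3]]}', key)
plaintext = AESGCM(key).decrypt(nonce, ct, None)
```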


In FIG. 2, input data from the user's 2D sketch on the interface become a 3D model of the first room with estimated dimensions. Then the user continues to the second room and draws a 2D sketch of the second room. From there, ML algorithms add this room to the room previously sketched and modeled, with updated dimensions for both.


In some exemplary embodiments, a convolutional neural network (CNN) may be used as a machine learning model. Additionally, in some exemplary embodiments, backpropagation may be used to train the convolutional neural network. In fitting a neural network, backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input-output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually. This efficiency makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss; gradient descent, or variants such as stochastic gradient descent, are commonly used. The backpropagation algorithm computes the gradient of the loss function with respect to each weight by the chain rule, computing the gradient one layer at a time and iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule.
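
The layer-by-layer chain-rule computation described above can be made concrete with a tiny fully connected network. The NumPy sketch below trains one hidden layer by backpropagation with plain gradient descent; it illustrates the general algorithm only and is not the network used in the disclosure (the layer sizes, data, and learning rate are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((8, 4))                 # 8 examples, 4 input features
y = rng.random((8, 1))                 # target values

W1, b1 = rng.standard_normal((4, 6)) * 0.1, np.zeros(6)
W2, b2 = rng.standard_normal((6, 1)) * 0.1, np.zeros(1)
lr = 0.1

for step in range(200):
    # Forward pass
    h = np.tanh(x @ W1 + b1)           # hidden activations
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)   # mean squared error

    # Backward pass: apply the chain rule one layer at a time,
    # starting from the output layer and reusing intermediate gradients.
    d_yhat = 2 * (y_hat - y) / len(x)
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T
    d_hpre = d_h * (1 - h ** 2)        # derivative of tanh
    dW1 = x.T @ d_hpre
    db1 = d_hpre.sum(axis=0)

    # Gradient-descent update of the weights to minimize the loss
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```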


A trained convolutional neural network is used to pass the 2D sketch, represented as a matrix of numbers where each number represents the value of an individual pixel, through an input layer. The specifications of the input layer depend on the format and number of channels contained within the initial sketch. For instance, a grayscale image would consist of a matrix of two dimensions (width and height), whereas a color image would consist of three dimensions (width, height, and channel). The data are then processed through a number of convolutional layers, eventually leading to an output layer wherein the 3D model is provided in the form of a matrix of numbers representing the dimensions of the dwelling. As the rooms are mapped, machine learning models are used to continuously adjust the 3D specifications of the rooms to better match the learned representations, acquired during model training, of how rooms connect to each other.
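
As a concrete illustration of the pipeline just described (pixel matrix in, dimension estimates out), the following PyTorch sketch stacks a few convolutional layers and ends in a small output layer. The layer sizes, the 64x64 grayscale input, and the three-number output (width, depth, height) are assumptions chosen for illustration; the patent does not specify a particular network topology.

```python
import torch
import torch.nn as nn

class SketchTo3D(nn.Module):
    """Toy CNN: a 64x64 grayscale sketch in, three estimated dimensions out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # input layer: 1 channel (grayscale)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 3),                  # output layer: width, depth, height
        )

    def forward(self, sketch: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(sketch))

model = SketchTo3D()
sketch = torch.rand(1, 1, 64, 64)       # batch of one sketch as a pixel matrix
estimated_dims = model(sketch)          # tensor of shape (1, 3)
```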


As the user sketches each room, they see a 3D view in real time. This includes the first room, although the ML algorithm improves the real-world dimensions of each room as successive rooms are added to the map. Illustrated here as an example, once the user sketches the second room, the length of the first room has increased. Then, by the time the user has sketched the third room, the first two rooms mapped are shorter in height and have decreased in length. This process continues until the last room (the nth room) is mapped.
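
One simple way to picture the consecutive adjustment described above is to re-estimate each previously mapped room whenever a newly sketched room contributes fresh evidence about a shared wall. The averaging rule in this sketch is a hypothetical stand-in for the learned adjustment, shown only to make the idea of revising earlier rooms concrete.

```python
def refine_rooms(rooms, shared_wall_estimates):
    """Blend each room's current dimensions with new estimates from neighbors.

    rooms:                 {name: {"length": float, "width": float}}
    shared_wall_estimates: {name: {"length": float, "width": float}} implied by
                           the most recently sketched room (hypothetical input).
    """
    for name, new_dims in shared_wall_estimates.items():
        if name in rooms:
            for axis, new_value in new_dims.items():
                # Stand-in for the learned update: move halfway toward the
                # newly implied dimension of the previously mapped room.
                rooms[name][axis] = 0.5 * (rooms[name][axis] + new_value)
    return rooms

rooms = {"room-1": {"length": 4.0, "width": 3.0}}
rooms = refine_rooms(rooms, {"room-1": {"length": 4.6, "width": 3.0}})
# room-1's length becomes 4.3 after the second room is sketched
```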


In FIG. 2, 103 is then processed through Machine Learning (ML) (104) to create a three-dimensional model (105) of the dwelling. The physical location of the dwelling is then tied to 105, so that tracking within the model can be accomplished through the use of cameras attached to 101. 105 is then displayed on 101 for verification by 102, as seen in FIG. 3. Using 101, 102 can correct any errors immediately, with those updated data being transmitted through the internet or cellular network simultaneously. 102 can then select any segments of 105 that contain 101, allowing for tracking throughout multiple rooms. This tracking can then be transmitted by 105 to the internet or cellular network and communicated to any third parties previously selected by 102.



FIG. 3 is a complete 3D model from the 2D sketches made by the user. At this point, the user has made 2D sketches of rooms one through “n”, where n is the last room, and the real-world dimensions of the environment have become more accurate.



FIG. 4 is a schematic diagram of an exemplary intelligent secure networked architecture (hereinafter architecture 400) for practicing aspects of the present disclosure. The architecture 400 comprises batch engine 405, data retention and machine learning systems 435, a web services layer 425, message bus 430, network communication link 345, security token, cookie, internet service provider (“ISP”), geolocator on web browser and/or computing device 410, cloud 440, and an application server layer 415.


In some embodiments, the data retention and machine learning systems 435 are in secure isolation from a remainder of the intelligent secure networked architecture 400 through a security protocol or layer. The data retention and machine learning systems 435 can also provide (in addition to machine learning) additional services such as logic, data analysis, risk model analysis, security, data privacy controls, data access controls, disaster recovery for data and web services—just to name a few.


The web services layer 425 generally provides access to the data retention and machine learning systems 435. According to some embodiments, the application server layer 415 is configured to provide a user-facing application with an interactive graphical user interface (also called user-facing application) 420 that accesses the data retention and machine learning systems 435 through the web services layer 425. In some embodiments, the user-facing application with an interactive graphical user interface 420 is secured through use of a security token and/or cookie cached on the user-facing application with an interactive graphical user interface 420.


In one or more embodiments, the application server layer 415 performs asynchronous processing based on user interaction with the user-facing application and/or the interactive graphical user interface. The user-facing application may reside and execute on the application server layer 415. In other embodiments, the user-facing application may reside with the data retention and machine learning systems 435. In another embodiment, the user-facing application can be a client-side, downloadable application.


The architecture of the present disclosure implements security features that involve the use of multiple security tokens to provide security in the architecture 400. Security tokens are used between the web services layer 425 and the application server layer 415. In some embodiments, security features are not continuous to the web browser 410. Thus, a second security layer or link is established between the web browser 410 and the application server layer 415. In one or more embodiments, a first security token is cached in the application server layer 415 between the web browser 410 and the application server layer 415.


In some embodiments, the architecture 400 implements an architected message bus 430. In an example usage, a user requests a refresh of their data and interactive graphical user interface 420 through their web browser 410. Rather than performing the refresh, which could involve data intensive and/or compute or operational intensive procedures by the architecture 400, the message bus 430 allows the request for refresh to be processed asynchronously by a batching process and provides a means for allowing the web browser 410 to continue to display a user-facing application 420 to the user, allowing the user to continue to access data without waiting on the architecture 400 to complete its refresh.
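
The asynchronous refresh behavior can be sketched with an in-process queue standing in for the message bus: the refresh request is placed on the bus and handled in the background while the interface keeps serving cached data. The queue, the worker thread, and the callback names here are illustrative stand-ins, not the architecture's actual components.

```python
import queue
import threading

message_bus = queue.Queue()           # stand-in for the architected message bus 430

def batch_worker(perform_refresh, publish_ack):
    """Background consumer: processes refresh requests asynchronously."""
    while True:
        request = message_bus.get()
        if request is None:           # sentinel to stop the worker
            break
        result = perform_refresh(request)   # data-intensive work happens here
        publish_ack(request, result)        # acknowledgement the browser listens for
        message_bus.task_done()

def request_refresh(user_id, cached_view):
    """Called from the user-facing application; returns immediately."""
    message_bus.put({"type": "refresh", "user": user_id})
    return cached_view                # the user keeps seeing cached data meanwhile

worker = threading.Thread(
    target=batch_worker,
    args=(lambda req: {"status": "refreshed"}, lambda req, res: None),
    daemon=True,
)
worker.start()
```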


In some exemplary embodiments, latency may be remediated at the user-facing application based on the manner in which the user-facing application is created and how the data displayed through the user-facing application are stored and updated. For example, data displayed on the user-facing application that changes frequently can cause frequent and unwanted refreshing of the entire user-facing application and Graphical User Interfaces (“GUIs”). The present disclosure provides a solution to this issue by separating what is displayed on the GUI from the actual underlying data. The underlying data displayed on the GUI of the user-facing application 420 can be updated, as needed, on a segment-by-segment basis (where a segment could be defined as a zone of pixels on the display) at a granular level, rather than updating the entire GUI. That is, the GUI that renders the underlying data is programmatically separate from the underlying data cached by the client (e.g., the device rendering the GUIs of the user-facing application). Due to this separation, when data being displayed on the GUI changes, re-rendering of the data is performed at a granular level, rather than at the page level. This process represents another example solution that remedies latency and improves user experience with the user-facing application.
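
The segment-by-segment re-rendering can be illustrated by keeping the cached data separate from the rendered segments and redrawing only the segments whose underlying values changed. The segment identifiers and the render callback below are hypothetical; the point is the granular diff, not any particular UI toolkit.

```python
def update_gui(cached_data, new_data, render_segment):
    """Re-render only the GUI segments whose underlying data changed.

    cached_data, new_data: {segment_id: value} snapshots of the underlying data
    render_segment(segment_id, value): redraws one zone of pixels (hypothetical)
    """
    for segment_id, value in new_data.items():
        if cached_data.get(segment_id) != value:
            render_segment(segment_id, value)   # granular redraw, not page-level
            cached_data[segment_id] = value
    return cached_data

cache = {"activity-feed": "idle", "room-map": "v1"}
cache = update_gui(cache,
                   {"activity-feed": "motion in kitchen", "room-map": "v1"},
                   lambda seg, val: print(f"redraw {seg}: {val}"))
```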


To facilitate these features, the web browser 410 will listen on the message bus 430 for an acknowledgement or other confirmation that the background processes updating the user account and/or the user-facing application have been completed by the application server layer 415. The user-facing application (or even part thereof) is updated as the architecture 400 completes its processing. This allows the user-facing application 420 provided through the web browser 410 to remain usable while the heavy lifting is done transparently to the user by the application server layer 415. In sum, these features prevent or reduce latency issues even when an application provided through the web browser 410 is “busy.” For example, a re-balance request is executed transparently by the application server layer 415 and the batch engine 405. This type of transparent computing behavior by the architecture 400 allows for asynchronous operation (initiated from the application server layer 415 or the message bus 430).


In some embodiments, a batch engine 405 is included in the architecture 400 and works in the background to process re-balance requests and to coordinate a number of services. The batch engine 405 will transparently orchestrate the necessary operations required by the application server layer 415 to obtain data.


According to some embodiments, the batch engine 405 is configured to process requests transparently to a user so that the user can continue to use the user-facing application 420 without disruption. For example, this transparent processing can occur when the application server layer 415 transmits a request to the web services layer 425 for data, and a time required for updating or retrieving the data meets or exceeds a threshold. For example, the threshold might specify that if the request will take more than five seconds to complete, then the batch engine 405 can process the request transparently. The selected threshold can be system configured.
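
The threshold behavior can be expressed as a small dispatch rule: if the estimated time to satisfy a data request meets or exceeds the configured threshold, the request is handed to the batch engine; otherwise it is served inline. The five-second figure mirrors the example above, and the duration estimator and batch-engine hooks are assumptions for illustration.

```python
REFRESH_THRESHOLD_SECONDS = 5.0   # system-configured threshold, per the example above

def dispatch_request(request, estimate_duration, serve_inline, enqueue_to_batch_engine):
    """Serve short requests directly; route long ones to the batch engine.

    estimate_duration(request)       -> estimated seconds to update/retrieve the data
    serve_inline(request)            -> synchronous handling in the application server layer
    enqueue_to_batch_engine(request) -> transparent background processing
    """
    if estimate_duration(request) >= REFRESH_THRESHOLD_SECONDS:
        enqueue_to_batch_engine(request)      # user keeps working without disruption
        return {"status": "processing in background"}
    return serve_inline(request)
```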


In some embodiments, security of data transmission through the architecture 400 is improved by use of multiple security tokens. In one embodiment, a security token cached on the web browser 410 is different from a security protocol or security token utilized between the application server layer 415 and the web services layer 425.
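
Using distinct tokens on the two links can be as simple as issuing them from independent secrets, so that compromise of the browser-side token does not expose the service-to-service credential. The following is a minimal sketch using Python's secrets module; token lifetimes, rotation, and validation are omitted and the variable names are illustrative.

```python
import secrets

def issue_tokens():
    """Issue two independent security tokens for the two links in the architecture."""
    browser_token = secrets.token_urlsafe(32)   # cached on the web browser 410
    service_token = secrets.token_urlsafe(32)   # used between layers 415 and 425
    return browser_token, service_token

browser_token, service_token = issue_tokens()
assert browser_token != service_token           # the two links never share a token
```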


The architecture 400 may communicatively couple with the user facing application with interactive graphical user interface 420 (or client) via a public or private network, such as network. Suitable networks may include or interface with any one or more of, for instance, a local intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a MAN (Metropolitan Area Network), a virtual private network (VPN), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a dial-up port such as a V.90, V.34 or V.34bis analog modem connection, a cable modem, an ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection. Furthermore, communications may also include links to any of a variety of wireless networks, including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM (Global System for Mobile Communication), CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access), cellular phone networks, GPS (Global Positioning System), CDPD (cellular digital packet data), RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network. The network can further include or interface with any one or more of an RS-232 serial connection, an IEEE-1394 (Firewire) connection, a Fiber Channel connection, an IrDA (infrared) port, a SCSI (Small Computer Systems Interface) connection, a USB (Universal Serial Bus) connection or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking.


It will be understood that the functionalities described herein, which are attributed to the architecture and the user-facing application, may also be executed within the client. That is, the client may be programmed to execute the functionalities described herein. In other instances, the architecture and the client may cooperate to provide the functionalities described herein, with the client provided a client-side application that interacts with the architecture so that the architecture and the client operate in a client/server relationship. Complex computational features may be executed by the architecture, while simple operations that require fewer computational resources, such as data gathering and data display, may be executed by the client.


In general, a user interface module may be executed by the architecture to provide various graphical user interfaces (GUIs) that allow users to interact with the architecture. In some instances, GUIs are generated by execution of the user facing application itself. Users may interact with the architecture using, for example, a client. The architecture may generate web-based interfaces for the client.


In the description, for purposes of explanation and not limitation, specific details are set forth, such as particular embodiments, procedures, techniques, etc. in order to provide a thorough understanding of the present technology. However, it will be apparent to one skilled in the art that the present technology may be practiced in other embodiments that depart from these specific details.


While specific embodiments of, and examples for, the system are described above for illustrative purposes, various equivalent modifications are possible within the scope of the system, as those skilled in the relevant art will recognize. For example, while processes or steps are presented in a given order, alternative embodiments may perform routines having steps in a different order, and some processes or steps may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or sub-combinations. Each of these processes or steps may be implemented in a variety of different ways. Also, while processes or steps are at times shown as being performed in series, these processes or steps may instead be performed in parallel, or may be performed at different times.


While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the present technology to the particular forms set forth herein. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the present technology as appreciated by one of ordinary skill in the art. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments.

Claims
  • 1. An intelligent secure networked architecture configured by at least one processor to execute instructions stored in memory, the architecture comprising: a data retention system and a machine learning system; a web services layer providing access to the data retention and the machine learning systems; and an application server layer that: provides a user-facing application that accesses the data retention and the machine learning systems through the web services layer; and performs processing based on user interaction with an interactive graphical user interface provided by the user-facing application, the user-facing application configured to execute instructions for a method for room labeling for activity tracking and detection, the method comprising: making a 2D sketch of a first room on the interactive graphical user interface, comprising: prompting a user, via guided interaction, to draw the 2D sketch of the first room; using machine learning to turn the 2D sketch of the first room represented as a matrix of numbers where each number represents a value of an individual pixel into a 3D model in a form of a matrix of numbers representing dimensions of the first room; correcting an error within the 3D model; performing the processing transparently to the user without disruption with an exceeded threshold triggering the processing transparently; and tracking a location and movement of a dementia patient within the 3D model in real-time for an emergency response.
  • 2. The intelligent secure networked architecture of claim 1, the method further comprising transmitting the 2D sketch of the first room using an internet or cellular network to a series of cloud-based services.
  • 3. The intelligent secure networked architecture of claim 1, the method further comprising using input data from the 2D sketch of the first room to generate the 3D model of the first room with an estimated dimension.
  • 4. The intelligent secure networked architecture of claim 1, the method further comprising making a 2D sketch of a second room on the interactive graphical user interface.
  • 5. The intelligent secure networked architecture of claim 4, the method further comprising using the machine learning to turn the 2D sketch of the second room into a 3D model of the second room.
  • 6. The intelligent secure networked architecture of claim 5, the method further comprising using the machine learning to combine the 3D model of the first room and the 3D model of the second room.
  • 7. The intelligent secure networked architecture of claim 6, the method further comprising updating a dimension of the 3D model of the first room and a dimension of the 3D model of the second room.
  • 8. The intelligent secure networked architecture of claim 7, the method further comprising using the machine learning to create a 3D model of a dwelling.
  • 9. The intelligent secure networked architecture of claim 8, the method further comprising placing a device having the interactive graphical user interface, an integrated camera and a geolocator in each room of the dwelling.
  • 10. The intelligent secure networked architecture of claim 9, the method further comprising associating a physical address with the dwelling.
  • 11. The intelligent secure networked architecture of claim 10, the method further comprising tracking activity in each room of the dwelling.
  • 12. The intelligent secure networked architecture of claim 11, the method further comprising transmitting the tracking activity in each room of the dwelling using an internet or cellular network to a series of cloud-based services.
  • 13. The intelligent secure networked architecture of claim 1, wherein the machine learning utilizes a convolutional neural network.
  • 14. The intelligent secure networked architecture of claim 13, further comprising using backpropagation to train the convolutional neural network.
  • 15. The intelligent secure networked architecture of claim 14, wherein the 2D sketch of the first room is received by an input layer of the trained convolutional neural network.
  • 16. The intelligent secure networked architecture of claim 15, further comprising the 2D sketch of the first room being processed through an additional layer of the trained convolutional neural network.
  • 17. The intelligent secure networked architecture of claim 16, further comprising the 2D sketch of the first room being processed through an output layer of the trained convolutional neural network, resulting in the 3D model of the first room.
  • 18. The intelligent secure networked architecture of claim 1, further comprising using multiple security tokens.
  • 19. The intelligent secure networked architecture of claim 1, further comprising using a security token cached on a web browser.
  • 20. The intelligent secure networked architecture of claim 1, further comprising using a security token between the application server layer and the web services layer.
CROSS REFERENCE TO RELATED APPLICATIONS

The present application claims the priority benefit of U.S. Provisional Patent Application Ser. No. 63/024,375 filed on May 13, 2020 and titled “Room Labeling Drawing Interface for Activity Tracking and Detection,” which is hereby incorporated by reference in its entirety.

US Referenced Citations (230)
Number Name Date Kind
5211642 Clendenning May 1993 A
5475953 Greenfield Dec 1995 A
6665647 Haudenschild Dec 2003 B1
7233872 Shibasaki et al. Jun 2007 B2
7445086 Sizemore Nov 2008 B1
7612681 Azzaro et al. Nov 2009 B2
7971141 Quinn et al. Jun 2011 B1
8206325 Najafi et al. Jun 2012 B1
8771206 Gettelman et al. Jul 2014 B2
9317916 Hanina et al. Apr 2016 B1
9591996 Chang et al. Mar 2017 B2
9972187 Srinivasan et al. May 2018 B1
10073612 Hale Sep 2018 B1
10078956 Kusens Sep 2018 B1
10147184 Kusens Dec 2018 B2
10225522 Kusens Mar 2019 B1
10347052 Hemani Jul 2019 B2
10387963 Leise et al. Aug 2019 B1
10628635 Carpenter, II et al. Apr 2020 B1
10761691 Anzures et al. Sep 2020 B2
10769848 Wang Sep 2020 B1
10813572 Dohrmann et al. Oct 2020 B2
11113943 Wright et al. Sep 2021 B2
11213224 Dohrmann et al. Jan 2022 B2
11410362 Soltani Aug 2022 B1
11514633 Cetintas et al. Nov 2022 B1
11733861 Tadros Aug 2023 B2
11823308 Yun Nov 2023 B2
20020062342 Sidles May 2002 A1
20020196944 Davis et al. Dec 2002 A1
20040109470 Derechin et al. Jun 2004 A1
20050035862 Wildman et al. Feb 2005 A1
20050055942 Maelzer et al. Mar 2005 A1
20050264416 Maurer Dec 2005 A1
20070032929 Yoshioka Feb 2007 A1
20070040692 Smith Feb 2007 A1
20070136102 Rodgers Jun 2007 A1
20070194939 Alvarez Aug 2007 A1
20070238936 Becker Oct 2007 A1
20080010293 Zpevak et al. Jan 2008 A1
20080021731 Rodgers Jan 2008 A1
20080186189 Azzaro et al. Aug 2008 A1
20090094285 Mackle et al. Apr 2009 A1
20090138113 Hoguet May 2009 A1
20090160856 Hoguet Jun 2009 A1
20100124737 Panzer May 2010 A1
20100217565 Wayne Aug 2010 A1
20100217639 Wayne Aug 2010 A1
20110126207 Wipfel et al. May 2011 A1
20110145018 Fotsch et al. Jun 2011 A1
20110208541 Wilson Aug 2011 A1
20110232708 Kemp Sep 2011 A1
20120025989 Cuddihy et al. Feb 2012 A1
20120026308 Johnson Feb 2012 A1
20120075464 Derenne et al. Mar 2012 A1
20120120184 Fornell et al. May 2012 A1
20120121849 Nojima May 2012 A1
20120154582 Johnson et al. Jun 2012 A1
20120165618 Algoo et al. Jun 2012 A1
20120179067 Wekell Jul 2012 A1
20120179916 Staker et al. Jul 2012 A1
20120229634 Laett et al. Sep 2012 A1
20120253233 Greene et al. Oct 2012 A1
20120323090 Bechtel Dec 2012 A1
20130000228 Ovaert Jan 2013 A1
20130016126 Wang Jan 2013 A1
20130060167 Dracup Feb 2013 A1
20130127620 Siebers May 2013 A1
20130145449 Busser et al. Jun 2013 A1
20130167025 Patri et al. Jun 2013 A1
20130204545 Solinsky Aug 2013 A1
20130212501 Anderson et al. Aug 2013 A1
20130237395 Hjelt et al. Sep 2013 A1
20130289449 Stone et al. Oct 2013 A1
20130303860 Bender et al. Nov 2013 A1
20140128691 Olivier May 2014 A1
20140148733 Stone et al. May 2014 A1
20140171039 Bjontegard Jun 2014 A1
20140171834 DeGoede et al. Jun 2014 A1
20140180725 Ton-That Jun 2014 A1
20140184592 Belcher Jul 2014 A1
20140232600 Larose et al. Aug 2014 A1
20140243686 Kimmel Aug 2014 A1
20140257852 Walker et al. Sep 2014 A1
20140267582 Beutter et al. Sep 2014 A1
20140278605 Borucki et al. Sep 2014 A1
20140330172 Jovanov et al. Nov 2014 A1
20140337048 Brown et al. Nov 2014 A1
20140358828 Phillipps et al. Dec 2014 A1
20140368601 deCharms Dec 2014 A1
20150019250 Goodman et al. Jan 2015 A1
20150109442 Derenne et al. Apr 2015 A1
20150169835 Hamdan et al. Jun 2015 A1
20150359467 Tran Dec 2015 A1
20160026354 McIntosh et al. Jan 2016 A1
20160117470 Welsh et al. Apr 2016 A1
20160117484 Hanina et al. Apr 2016 A1
20160154977 Jagadish et al. Jun 2016 A1
20160180668 Kusens Jun 2016 A1
20160183864 Kusens Jun 2016 A1
20160217264 Sanford Jul 2016 A1
20160249241 Barmettler Aug 2016 A1
20160253890 Rabinowitz et al. Sep 2016 A1
20160267327 Franz et al. Sep 2016 A1
20160284178 Cushwa, Jr. Sep 2016 A1
20160314255 Cook et al. Oct 2016 A1
20170000387 Forth et al. Jan 2017 A1
20170000422 Moturu et al. Jan 2017 A1
20170024531 Malaviya Jan 2017 A1
20170055917 Stone et al. Mar 2017 A1
20170068419 Sundermeyer Mar 2017 A1
20170091885 Randolph Mar 2017 A1
20170116484 Johnson Apr 2017 A1
20170140631 Pietrocola et al. May 2017 A1
20170147154 Steiner et al. May 2017 A1
20170185278 Sundermeyer Jun 2017 A1
20170192950 Gaither et al. Jul 2017 A1
20170193163 Melle et al. Jul 2017 A1
20170195637 Kusens Jul 2017 A1
20170197115 Cook et al. Jul 2017 A1
20170212661 Ito Jul 2017 A1
20170213145 Pathak et al. Jul 2017 A1
20170263034 Kenoff Sep 2017 A1
20170273601 Wang et al. Sep 2017 A1
20170337274 Ly et al. Nov 2017 A1
20170344706 Torres et al. Nov 2017 A1
20170344832 Leung et al. Nov 2017 A1
20170358195 Bobda Dec 2017 A1
20180005448 Choukroun et al. Jan 2018 A1
20180018057 Bushnell Jan 2018 A1
20180033279 Chong Feb 2018 A1
20180075558 Hill, Sr. et al. Mar 2018 A1
20180154514 Angle et al. Jun 2018 A1
20180165938 Honda et al. Jun 2018 A1
20180182472 Preston et al. Jun 2018 A1
20180189756 Purves et al. Jul 2018 A1
20180225885 Dishno Aug 2018 A1
20180322405 Fadell et al. Nov 2018 A1
20180342081 Kim Nov 2018 A1
20180360349 Dohrmann et al. Dec 2018 A9
20180368780 Bruno et al. Dec 2018 A1
20190013960 Sadwick Jan 2019 A1
20190029900 Walton et al. Jan 2019 A1
20190042700 Alotaibi Feb 2019 A1
20190057320 Docherty et al. Feb 2019 A1
20190090786 Kim Mar 2019 A1
20190116212 Spinella-Mamo Apr 2019 A1
20190130110 Lee et al. May 2019 A1
20190164015 Jones, Jr. et al. May 2019 A1
20190164340 Pejic May 2019 A1
20190196888 Anderson et al. Jun 2019 A1
20190205630 Kusens Jul 2019 A1
20190214146 Dunias Jul 2019 A1
20190220727 Dohrmann et al. Jul 2019 A1
20190221315 Weffers-Albu Jul 2019 A1
20190228866 Weffers-Albu Jul 2019 A1
20190243928 Rejeb Sfar Aug 2019 A1
20190259475 Dohrmann Aug 2019 A1
20190282130 Dohrmann et al. Sep 2019 A1
20190286942 Abhiram et al. Sep 2019 A1
20190307405 Terry Oct 2019 A1
20190311792 Dohrmann et al. Oct 2019 A1
20190318165 Shah et al. Oct 2019 A1
20190323823 Atchison Oct 2019 A1
20190362545 Pejic Nov 2019 A1
20190370617 Singh Dec 2019 A1
20190385749 Dohrmann Dec 2019 A1
20200004237 Kim Jan 2020 A1
20200057824 Yeh Feb 2020 A1
20200066415 Hettig Feb 2020 A1
20200085382 Taerum Mar 2020 A1
20200101969 Natroshvili et al. Apr 2020 A1
20200151923 Bergin May 2020 A1
20200175889 Delson Jun 2020 A1
20200251220 Chasko Aug 2020 A1
20200279364 Sarkisian Sep 2020 A1
20200327261 Sawaguchi Oct 2020 A1
20200357256 Wright et al. Nov 2020 A1
20200357511 Sanford Nov 2020 A1
20200394058 Delson Dec 2020 A1
20200402245 Keraudren Dec 2020 A1
20210007631 Dohrmann et al. Jan 2021 A1
20210035468 Das Feb 2021 A1
20210049812 Ganihar Feb 2021 A1
20210052757 Baarman Feb 2021 A1
20210073449 Segev Mar 2021 A1
20210150088 Gallo May 2021 A1
20210165561 Wei Jun 2021 A1
20210217236 Bavastro Jul 2021 A1
20210232719 Ganihar Jul 2021 A1
20210256177 Voss Aug 2021 A1
20210273962 Dohrmann et al. Sep 2021 A1
20210333554 Ohno Oct 2021 A1
20210358202 Tveito et al. Nov 2021 A1
20210365602 Gifford Nov 2021 A1
20210398410 Wright et al. Dec 2021 A1
20220022760 Salcido et al. Jan 2022 A1
20220026920 Ebrahimi Afrouzi Jan 2022 A1
20220036656 Garcia Feb 2022 A1
20220058865 Beltrand Feb 2022 A1
20220058866 Beltrand Feb 2022 A1
20220108561 Groß Apr 2022 A1
20220117515 Dohrmann et al. Apr 2022 A1
20220156428 Myers May 2022 A1
20220164097 Tadros May 2022 A1
20220207198 Mantraratnam Jun 2022 A1
20220274019 Delmonico Sep 2022 A1
20220277518 Bavastro Sep 2022 A1
20220292230 Murphy Sep 2022 A1
20220292421 Murphy Sep 2022 A1
20220329973 Karmanov Oct 2022 A1
20220331028 Sternitzke Oct 2022 A1
20220337972 Karmanov Oct 2022 A1
20220351470 Enthed Nov 2022 A1
20220358258 Sica Nov 2022 A1
20220358739 Enthed Nov 2022 A1
20220366813 Shaw Nov 2022 A1
20220383572 Hu Dec 2022 A1
20220399113 Dohrmann Dec 2022 A1
20230014580 Zhu Jan 2023 A1
20230093571 Aoki Mar 2023 A1
20230162413 Batra May 2023 A1
20230162704 Li May 2023 A1
20230222887 Muhsin Jul 2023 A1
20230243649 Pershing Aug 2023 A1
20230301522 Tan Sep 2023 A1
20230368900 Dontsova Nov 2023 A1
20230376639 Lambourne Nov 2023 A1
20230410452 Beltrand Dec 2023 A1
20240049991 Chronis Feb 2024 A1
Foreign Referenced Citations (44)
Number Date Country
2019240484 Nov 2021 AU
2949449 Nov 2015 CA
104361321 Feb 2015 CN
106056035 Oct 2016 CN
107411515 Dec 2017 CN
111801645 Oct 2020 CN
111801939 Oct 2020 CN
111867467 Oct 2020 CN
113795808 Dec 2021 CN
3740856 Nov 2020 EP
3756344 Dec 2020 EP
3768164 Jan 2021 EP
3773174 Feb 2021 EP
3815108 May 2021 EP
3920797 Dec 2021 EP
3944258 Jan 2022 EP
3966657 Mar 2022 EP
202027033318 Oct 2020 IN
202027035634 Oct 2020 IN
202127033278 Aug 2022 IN
2002304362 Oct 2002 JP
2005228305 Aug 2005 JP
2010172481 Aug 2010 JP
2012232652 Nov 2012 JP
2016137226 Aug 2016 JP
2016525383 Aug 2016 JP
2021510881 Apr 2021 JP
2021524075 Sep 2021 JP
2022519283 Mar 2022 JP
1020160040078 Apr 2016 KR
1020200105519 Sep 2020 KR
1020200121832 Oct 2020 KR
1020200130713 Nov 2020 KR
WO2000005639 Feb 2000 WO
WO2014043757 Mar 2014 WO
WO2017118908 Jul 2017 WO
WO2018032089 Feb 2018 WO
WO2019143397 Jul 2019 WO
WO2019164585 Aug 2019 WO
WO2019182792 Sep 2019 WO
WO2019199549 Oct 2019 WO
WO2019245713 Dec 2019 WO
WO2020163180 Aug 2020 WO
WO2020227303 Nov 2020 WO
Non-Patent Literature Citations (57)
Entry
C. Sardianos et al., “A model for predicting room occupancy based on motion sensor data,” 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2020, pp. 394-399, doi: 10.1109/ICIoT48696.2020.9089624. (Year: 2020).
S. Kuwabara, R. Ohbuchi and T. Furuya, “Query by Partially-Drawn Sketches for 3D Shape Retrieval,” 2019 International Conference on Cyberworlds (CW), Kyoto, Japan, 2019, pp. 69-76, doi: 10.1109/CW.2019.00020. (Year: 2019).
Wanchao Su, Dong Du, Xin Yang, Shizhe Zhou, and Hongbo Fu. 2018. Interactive Sketch-Based Normal Map Generation with Deep Neural Networks. Proc. ACM Comput. Graph. Interact. Tech. 1, 1, Article 22 (Jul. 2018), 17 pages. https://doi.org/10.1145/3203186 (Year: 2018).
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks” 2018, arXiv:1611.07004 [cs.CV] (or arXiv:1611.07004v3 [cs.CV] for this version) https://doi.org/10.48550/arXiv.1611.07004 (Year: 2018).
Mathias Eitz, Ronald Richter, Tamy Boubekeur, Kristian Hildebrand, and Marc Alexa. 2012. Sketch-based shape retrieval. ACM Trans. Graph. 31, 4, Article 31 (Jul. 2012), 10 pages. https://doi.org/10.1145/2185520.2185527 (Year: 2012).
T. Furuya and R. Ohbuchi, “Ranking on Cross-Domain Manifold for Sketch-Based 3D Model Retrieval,” 2013 International Conference on Cyberworlds, Yokohama, Japan, 2013, pp. 274-281, doi: 10.1109/CW.2013.60. (Year: 2013).
Fang Wang, Le Kang and Yi Li, “Sketch-based 3D shape retrieval using Convolutional Neural Networks,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 1875-1883, doi: 10.1109/CVPR.2015.7298797. (Year: 2015).
“Extended European Search Report”, European Patent Application No. 19772545.0, Nov. 16, 2021, 8 pages.
“Office Action”, India Patent Application No. 202027033318, Nov. 18, 2021, 6 pages.
“Office Action”, Australia Patent Application No. 2018409860, Nov. 30, 2021, 4 pages.
“Office Action”, Australia Patent Application No. 2018403182, Dec. 1, 2021, 3 pages.
“Office Action”, Korea Patent Application No. 10-2020-7028606, Oct. 29, 2021, 7 pages [14 pages with translation].
“Office Action”, Japan Patent Application No. 2020-543924, Nov. 24, 2021, 3 pages [6 pages with translation].
“Extended European Search Report”, European Patent Application No. EP19785057, Dec. 6, 2021, 8 pages.
“Office Action”, Australia Patent Application No. 2020218172, Dec. 21, 2021, 4 pages.
“Extended European Search Report”, European Patent Application No. 21187314.6, Dec. 10, 2021, 10 pages.
“Notice of Allowance”, Australia Patent Application No. 2018403182, Jan. 20, 2022, 4 pages.
“Office Action”, Australia Patent Application No. 2018409860, Jan. 24, 2022, 5 pages.
“Office Action”, China Patent Application No. 201880089608.2, Feb. 8, 2022, 6 pages (15 pages with translation).
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2021/056060, Jan. 28, 2022, 8 pages.
“Extended European Search Report”, European Patent Application No. 19822930.4, Feb. 15, 2022, 9 pages.
“Office Action”, Japan Patent Application No. 2020-550657, Feb. 8, 2022, 8 pages.
“Office Action”, Singapore Patent Application No. 11202008201P, Apr. 4, 2022, 200 pages.
“Office Action”, India Patent Application No. 202127033278, Apr. 20, 2022, 7 pages.
“Office Action”, Canada Patent Application No. 3088396, May 6, 2022, 4 pages.
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2018/057814, Jan. 11, 2019, 9 pages.
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2018/068210, Apr. 12, 2019, 9 pages.
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2019/021678, May 24, 2019, 12 pages.
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2019/025652, Jul. 18, 2019, 11 pages.
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2019/034206, Aug. 1, 2019, 11 pages.
Rosen et al., “Slipping and Tripping: Fall Injuries in Adults Associated with Rugs and Carpets,” Journal of Injury & Violence Research, 5(1), (2013), pp. 61-69.
Bajaj, Prateek, “Reinforcement Learning”, GeeksForGeeks.org [online], [retrieved on Mar. 4, 2020], Retrieved from the Internet:<URL:https://www.geeksforgeeks.org/what-is-reinforcement-learning/>, 7 pages.
Kung-Hsiang, Huang (Steeve), “Introduction to Various RL Algorithms. Part I (Q-Learning, SARSA, DQN, DDPG)”, Towards Data Science, [online], [retrieved on Mar. 4, 2020], Retrieved from the Internet:<URL:https://towardsdatascience.com/introduction-to-various-reinforcement-learning-algorithms-i-q-learning-sarsa-dqn-ddpg-72a5e0cb6287>, 5 pages.
Bellemare et al., “A Distributional Perspective on Reinforcement Learning”, Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, Jul. 21, 2017, 19 pages.
Friston et al., “Reinforcement Learning or Active Inference?” Jul. 29, 2009, [online], [retrieved on Mar. 4, 2020], Retrieved from the Internet:<URL:https://doi.org/10.1371/journal.pone.0006421 PLOS ONE 4(7): e6421>, 13 pages.
Zhang et al., “DQ Scheduler: Deep Reinforcement Learning Based Controller Synchronization in Distributed SDN” ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, doi: 10.1109/ICC.2019.8761183, pp. 1-7.
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2020/031486, Aug. 3, 2020, 7 pages.
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2020/016248, May 11, 2020, 7 pages.
“Office Action”, Australia Patent Application No. 2019240484, Nov. 13, 2020, 4 pages.
“Office Action”, Australia Patent Application No. 2018403182, Feb. 5, 2021, 5 pages.
“Office Action”, Australia Patent Application No. 2018409860, Feb. 10, 2021, 4 pages.
Leber, Jessica, “The Avatar Will See You Now”, MIT Technology Review, Sep. 17, 2013, 4 pages.
“Office Action”, India Patent Application No. 202027035634, Jun. 30, 2021, 10 pages.
“Office Action”, India Patent Application No. 202027033121, Jul. 29, 2021, 7 pages.
“Office Action”, Canada Patent Application No. 3088396, Aug. 6, 2021, 7 pages.
“Office Action”, China Patent Application No. 201880089608.2, Aug. 3, 2021, 8 pages [17 pages with translation].
“Office Action”, Japan Patent Application No. 2020-543924, Jul. 27, 2021, 3 pages [6 pages with translation].
“Office Action”, Australia Patent Application No. 2019240484, Aug. 2, 2021, 3 pages.
“Office Action”, Canada Patent Application No. 3089312, Aug. 19, 2021, 3 pages.
“Extended European Search Report”, European Patent Application No. 18901139.8, Sep. 9, 2021, 6 pages.
“Office Action”, Canada Patent Application No. 3091957, Sep. 14, 2021, 4 pages.
“Office Action”, Japan Patent Application No. 2020-540382, Aug. 24, 2021, 7 pages [13 pages with translation].
“Extended European Search Report”, European Patent Application No. 18907032.9, Oct. 15, 2021, 12 pages.
Marston et al., “The design of a purpose-built exergame for fall prediction and prevention for older people”, European Review of Aging and Physical Activity 12:13, <URL:https://eurapa.biomedcentral.com/track/pdf/10.1186/s11556-015-0157-4.pdf>, Dec. 8, 2015, 12 pages.
Ejupi et al., “Kinect-Based Five-Times-Sit-to-Stand Test for Clinical and In-Home Assessment of Fall Risk in Older People”, Gerontology (vol. 62), (May 28, 2015), <URL:https://www.karger.com/Article/PDF/381804>, May 28, 2015, 7 pages.
Festl et al., “iStoppFalls: A Tutorial Concept and prototype Contents”, <URL:https://hcisiegen.de/wp-uploads/2014/05/isCtutoriaLdoku.pdf>, Mar. 30, 2013, 36 pages.
“Notice of Allowance”, Australia Patent Application No. 2019240484, Oct. 27, 2021, 4 pages.
Related Publications (1)
Number Date Country
20210358202 A1 Nov 2021 US
Provisional Applications (1)
Number Date Country
63024375 May 2020 US