Computing devices with improved interactive animated conversational interface systems

Information

  • Patent Grant
  • Patent Number
    12,265,900
  • Date Filed
    Wednesday, October 24, 2018
  • Date Issued
    Tuesday, April 1, 2025
Abstract
A conversational interface system including an interactive virtual avatar, and a method for completing and updating fillable forms and database entries. The conversational interface provides a user with the option of inputting data in either text or voice form, logs user response data, and populates fields within form documents. As the user progresses through the system, instructions and guidance are consistently provided via the interactive avatar presented within the system web browser. Without exiting the system, the conversational interface validates user input data types and values while updating entries within cloud-based databases.
Description
FIELD OF THE TECHNOLOGY

Embodiments of the disclosure relate to computing devices with improved interactive animated conversational interface systems.


SUMMARY

Provided herein are exemplary systems and methods including an interactive, text-based conversational interface (ECG_Forms) and a three-dimensional Electronic Caregiver Image (ECI) avatar that allow a user to complete various forms using voice conversation and cloud-based talk-to-text technology. Through the system, the ECI avatar may communicate in multiple languages. The system provides a user with the option of selecting a method for data input comprising either traditional typed data entry or voice-based data entry. Following the user's input of data, the system uses cloud-based database connectivity to review the input and provide redundancy against data input errors. When errors are discovered by the system, feedback is provided to the user for correction of the errors. To assess data for accuracy in real time, the system utilizes a catalogue of inputs to determine whether a data type input by the user matches a defined catalogue data type. As such, through the use of cloud-based applications, the system completes data assessment, executes the continuation decision process, and provides a response to the user in less than 1.0 second. Once data has been assessed for accuracy and all user data have been entered into the system, the system encrypts the user input data and transmits the data to a cloud-based primary key design database for storage. The system also provides a company web browser comprising the three-dimensional Electronic Caregiver Image (ECI) avatar for interactive communication with the user. The ECI avatar provides the user with an interactive experience during which the user is guided through completion of the process. As the process is completed by the user, the ECI avatar provides real-time feedback in conversational form in an effort to simplify and streamline form completion by the user.
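
By way of a non-limiting illustration, the following sketch shows how validated responses might be encrypted before transmission to cloud storage, as described above. The Summary does not name an encryption scheme or database product; Fernet symmetric encryption (from the Python `cryptography` package) and the `store_record` callable are assumptions introduced here solely for illustration.

```python
import json

from cryptography.fernet import Fernet  # assumed library; the patent does not name an encryption scheme


def encrypt_and_store(responses, user_key, store_record):
    """Encrypt validated form responses and hand them to a cloud storage writer.

    `store_record` is a hypothetical callable standing in for the cloud-based
    primary key design database described above; `user_key` is the record's primary key.
    """
    key = Fernet.generate_key()  # in practice the key would come from a managed key service
    cipher = Fernet(key)
    ciphertext = cipher.encrypt(json.dumps(responses).encode("utf-8"))
    store_record(primary_key=user_key, payload=ciphertext)
```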





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed disclosure, and explain various principles and advantages of those embodiments.



FIG. 1 details the connectivity and processes associated with an ECG_Forms text-based conversational interface.



FIG. 2A depicts successful completion of a data form.



FIG. 2B shows an exemplary specific, structured interactive animated conversational graphical interface including an avatar, depicting the result of the input of invalid data.



FIG. 3 depicts an exemplary architecture for further validating user input.



FIG. 4 shows an exemplary architecture for the conversion of input from a user to speech configured for an ECI Avatar.



FIGS. 5-19 show exemplary specific, structured interactive animated conversational graphical interfaces with the ECI avatar.





DETAILED DESCRIPTION

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. It will be apparent, however, to one skilled in the art, that the disclosure may be practiced without these specific details. In other instances, structures and devices may be shown in block diagram form only in order to avoid obscuring the disclosure.


Various exemplary embodiments described and illustrated herein relate to a computing device comprising a display screen, the computing device being configured to dynamically display a specific, structured interactive animated conversational graphical interface paired with a prescribed functionality directly related to the interactive animated conversational graphical user interface's structure. Accordingly, a user is provided with an interactive conversational interface comprising the Electronic Caregiver Forms (ECG_Forms), text-based conversation, and the Electronic Caregiver Image (ECI), which comprises a three-dimensional avatar paired with voice-driven interaction, all of which may be presented within a web browser.


User data input into document fields is typically tedious for the user and is highly prone to human error. As such, text-based conversational “chatbots” have become an increasingly popular interactive replacement for simple keystroke-based text entry by the user.


As chatbot programs have developed in recent years, they have been used to effectively simulate logical conversation during human/computer interaction. These chatbots have been implemented via textual and/or auditory methods, providing human users with practical functionality for information acquisition activities. In most cases today, chatbots function simply to provide a conversational experience while obtaining data from a user.


As chatbot programs have progressed, the knowledge bases associated with their capabilities have become increasingly complex, but the ability to validate user responses in real time remains limited. Additionally, the capability of chatbot programs to be functionally incorporated across vast networks is significantly lacking. As such, most chatbot programs cannot be incorporated across multiple systems in a manner that allows them to collect user data while simultaneously verifying the type of data input by the user, transmit that data to various storage sites for further validation, store the data offsite in cloud-based storage solutions, and overwrite existing stored data based on new user inputs, all while providing a virtual avatar that guides the user through the data entry process.



FIG. 1 illustrates an exemplary system in which a user 2 utilizes a computing device or connected device 3 to connect to the internet 4 to access relevant services necessary to complete various forms. Upon connection to the internet 4, the user 2 is provided access to cloud-based applications 5 which comprise conversational decision trees providing the capability of communicating through both voice and text as illustrated by ECG_Forms Conversational Interface 6. According to various exemplary embodiments, this allows for conversational speech communication to be carried out between user 2 and computing device 3.


In FIG. 1, according to various exemplary embodiments, ECG_Forms Conversational Interface 6 functions to request data input from user 2. Following this request, ECG_Forms Conversational Interface 6 waits for a response from user 2. Upon receiving a response, Data Intake System 7 intakes this data into the system. Once data from user 2 is taken into ECG_Forms Conversational Interface 6, the system compares the data type (for example, words, numbers, email, etc.) of the input to the data found in Defined Data Input Type 8 to assess the validity of user input types. Once the type of user input is determined to be valid, Progression Decision Program 9 is activated, resulting in ECG_Forms Conversational Interface 6 moving on to the next item to be inquired of user 2.
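
The type check and progression decision described for FIG. 1 can be summarized in the following minimal sketch. The catalogue entries, field names, and the `ask`/`tell` callables are illustrative assumptions; the patent does not specify a programming language or concrete data formats.

```python
import re

# Assumed catalogue of defined data input types (Defined Data Input Type 8 in FIG. 1);
# the example patterns are illustrative and not taken from the patent.
DEFINED_INPUT_TYPES = {
    "words": re.compile(r"^[A-Za-z][A-Za-z ,.'-]*$"),
    "number": re.compile(r"^\d+(\.\d+)?$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}


def intake_and_validate(expected_type, user_response):
    """Data Intake System 7: accept a response and check it against the defined input type."""
    pattern = DEFINED_INPUT_TYPES[expected_type]
    return bool(pattern.match(user_response.strip()))


def run_form(form_fields, ask, tell):
    """Illustrative ask/validate loop: Progression Decision Program 9 advances only on valid input.

    `form_fields` is a list of (field_name, expected_type) pairs; `ask` and `tell`
    are hypothetical interface hooks for prompting and messaging the user.
    """
    answers = {}
    for field_name, expected_type in form_fields:
        while True:
            response = ask(f"Please enter your {field_name}.")
            if intake_and_validate(expected_type, response):
                answers[field_name] = response
                break  # "yes" decision: progress to the next item
            tell(f"That does not appear to be a valid {expected_type}; please try again.")
    return answers


# Example usage with console I/O:
# answers = run_form([("name", "words"), ("email", "email")], ask=input, tell=print)
```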



FIG. 2A shows the result of the successful completion of a form.



FIG. 2B shows an exemplary specific, structured interactive animated conversational graphical interface including an avatar, depicting what occurs when data input by user 2 is deemed invalid by Defined Data Input Type 8 (FIG. 1), resulting in a “no” decision from Progression Decision Program 9 (FIG. 1).



FIG. 3 depicts an exemplary architecture for further validating user input. This is achieved as user 2 inputs data into computing device 3. This data is transmitted to ECG_Forms Conversational Interface 6. Cloud-Based Applications 5 are communicatively coupled to Database Storage Solutions 10, which comprises defined data specifications and previously stored inputs from user 2. As ECG_Forms Conversational Interface 6 processes data input into the system by user 2, it also compares the data to data stored in Database Storage Solutions 10 for validation. Upon ECG_Forms Conversational Interface 6 determining that the input data from user 2 is valid and that the entirety of the form has been completed, Compute 11 functions (housed within Cloud-Based Applications 5) are called, resulting in ECG_Forms Conversational Interface 6 transmitting the completed data form to Database Storage Solutions 10 for storage.
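
As a non-limiting sketch, the cross-check against previously stored data and the final transmission of the completed form might look as follows. The `fetch_stored` and `save_form` callables are hypothetical stand-ins for the Database Storage Solutions 10 accessor and the Compute 11 write path; neither is named in the patent.

```python
def cross_check_with_stored_data(user_id, field_name, new_value, fetch_stored):
    """Compare a new entry against data already stored for the user (FIG. 3 validation).

    `fetch_stored` is a hypothetical accessor over Database Storage Solutions 10;
    the value passes when no prior entry exists or the prior entry agrees with it.
    """
    stored = fetch_stored(user_id) or {}
    previous = stored.get(field_name)
    return previous is None or previous == new_value


def submit_completed_form(user_id, answers, save_form):
    """Once every field has passed both checks, transmit the whole form for storage.

    `save_form` stands in for the Compute 11 function that writes the completed
    form back to Database Storage Solutions 10.
    """
    save_form(user_id, answers)
```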



FIG. 4 shows an exemplary architecture for the conversion of input from user 2 to speech configured for an ECI Avatar (FIG. 5). This occurs through Cloud-Based Applications 5 (FIG. 3). The data input by user 2 into Computing Device 3 and transmitted to ECG_Forms Conversational Interface 6 is further processed by a cloud-based text-to-speech application and converted into an audio file. Once the conversion to audio has been completed, the newly created audio file is transmitted to Database Storage Solutions 10 for storage and for recall by the ECI Avatar (FIG. 5) when needed.
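
For illustration only, one possible realization of this text-to-speech path is sketched below using AWS Polly and S3; the patent does not identify a particular cloud provider, and the bucket name, key scheme, and voice are assumptions.

```python
import boto3  # assumption: AWS is used here only as an example cloud provider

polly = boto3.client("polly")
s3 = boto3.client("s3")


def synthesize_and_store(text, audio_key, bucket="eci-avatar-audio"):
    """Convert interface text to speech and store the audio for later recall by the ECI Avatar.

    The bucket name, object key, and voice are hypothetical placeholders.
    """
    response = polly.synthesize_speech(Text=text, OutputFormat="mp3", VoiceId="Joanna")
    audio_bytes = response["AudioStream"].read()
    s3.put_object(Bucket=bucket, Key=audio_key, Body=audio_bytes)
    return f"s3://{bucket}/{audio_key}"
```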



FIGS. 5-19 show exemplary specific, structured interactive animated conversational graphical interfaces with the ECI avatar.


According to various exemplary embodiments, a three-dimensional Electronic Caregiver Image (ECI) avatar as depicted in FIG. 5 functions to guide the user (such as user 2 in FIGS. 1, 3 and 4) through the data entry process in an effort to reduce user errors in completing documents. This is achieved through the utilization of multiple cloud-based resources (such as Cloud-Based Applications 5 in FIG. 3) connected to the conversational interface system. For the provision of ECI responses from the avatar to user inquiries, either Speech Synthesis Markup Language (SSML) or basic text files are read into the system and an audio file is produced in response. As such, the aspects of the avatar's response settings such as voice, pitch and speed are controlled to provide unique voice characteristics associated with the avatar during its response to user inquiries.
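
A brief sketch of how SSML can control such response characteristics is given below. The prosody values and the commented synthesis call (reusing the assumed Polly client from the earlier sketch) are illustrative only.

```python
def build_avatar_ssml(message, pitch="+5%", rate="95%"):
    """Wrap a response in SSML so the avatar's voice pitch and speed can be controlled.

    The prosody values are illustrative; SSML's standard <prosody> element
    accepts relative pitch and rate adjustments such as these.
    """
    return (
        "<speak>"
        f'<prosody pitch="{pitch}" rate="{rate}">{message}</prosody>'
        "</speak>"
    )


# Example: synthesize with SSML input instead of plain text (assumed Polly client from the prior sketch).
# polly.synthesize_speech(Text=build_avatar_ssml("Please enter your date of birth."),
#                         TextType="ssml", OutputFormat="mp3", VoiceId="Joanna")
```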


While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the technology to the particular forms set forth herein. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments. It should be understood that the above description is illustrative and not restrictive. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the technology as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.

Claims
  • 1. A computing device comprising: a display screen configured to dynamically display within a web browser of the computing device a specific, structured interactive animated conversational graphical interface paired with a prescribed functionality directly related to the interactive animated conversational graphical interface's structure, the specific, structured interactive animated conversational graphical interface configured to:
    request response data from a human user by the computing device;
    receive the response data from the human user by the computing device;
    encrypt the response data prior to transmission to a cloud-based storage;
    convert the response data into an audio file using a cloud-based text-to-speech application capable of being integrated into a web browser-based avatar, the web browser-based avatar being displayed on the display screen within the web browser of the computing device as a three-dimensional electronic image of a human health caregiver for the human user;
    transmit the response data to a cloud-based storage for validation, retention, and processing, the validation, the retention, and the processing being performed by a cloud-based application communicatively coupled to the cloud-based storage; and
    produce a responsive audio file by the processing of the response data, the processing comprising applying a conversational decision tree to the response data to produce output data, reading in the output data from the cloud-based application in Speech Synthesis Markup Language to produce the responsive audio file, and further comprising using the Speech Synthesis Markup Language to control at least one aspect of the responsive audio file as delivered by the web browser-based avatar;
    further comprising the three-dimensional electronic image of the human health caregiver providing step-by-step verbal health care instructions to the human user;
    the three-dimensional electronic image of the human health caregiver providing the human user with a survey comprising questions directed to eliciting information regarding assessing falls risk; and
    the three-dimensional electronic image of the human health caregiver receiving the elicited information comprising answers to the questions that are indicative of a risk of falling of the human user.
  • 2. The computing device of claim 1, being any form of computing device, including a personal computer, laptop, tablet, or mobile device.
  • 3. The computing device of claim 1, wherein the display screen is further configured to display a plurality of data entry options to the human user, the plurality of data entry options comprising two or more of: voice, type, touch, or combinations thereof.
  • 4. The computing device of claim 1, wherein the display screen is further configured to display a real-time updated user interface comprising a plurality of received entries from the human user within a web browser-based dialogue box.
  • 5. The computing device of claim 1, the validation based on characteristics defined within the specific, structured interactive animated conversational graphical interface.
  • 6. The computing device of claim 1, the validation comprising comparing the response data provided by the human user with external data stored in a cloud-based database.
  • 7. The computing device of claim 1, further configured to receive data from the human user, determine that the received data is valid, and display a progression to a next item within a form.
  • 8. The computing device of claim 1, further configured to receive data from the human user, determine that the received data is invalid, and determine that a progression to a next item within a form is not warranted.
  • 9. The computing device of claim 1, wherein the specific, structured interactive animated conversational graphical interface is further configured to complete and update a database entry in a database that is in communication with the computing device.
  • 10. The computing device of claim 1, wherein the display screen is further configured to display at least one form within the web browser, the at least one form immediately transmitted by the computing device to the cloud-based storage upon completion of all requested data input by the human user.
  • 11. The computing device of claim 10, wherein any previous entries to the at least one form are updated within the cloud-based storage based on new data input by the human user.
  • 12. The computing device of claim 1, wherein the specific, structured interactive animated conversational graphical interface is further configured to convert text data received from the human user into voice data for storage and also for use in conversation with the human user.
  • 13. The computing device of claim 1, wherein the web browser-based avatar is configured to provide guidance and feedback to assist the human user during utilization of the specific, structured interactive animated conversational graphical interface.
  • 14. The computing device of claim 1, wherein the step-by-step verbal health care instructions from the three-dimensional electronic image of the human health caregiver are converted into text that is displayed within the web browser in realtime.
  • 15. The computing device of claim 1, wherein the three-dimensional electronic image of the human health caregiver is configured to receive at least one inquiry from the human user regarding at least one form presented on the display screen to the human user.
  • 16. The computing device of claim 1, wherein the three-dimensional electronic image of the human health caregiver is further configured to converse with the human user in at least two conversational languages.
  • 17. The computing device of claim 1, wherein the validation is further based on a determination that the response data provided by the human user matches a predefined expected data type.
  • 18. The computing device of claim 1, the at least one aspect of the responsive audio file being any one of: a voice, a pitch, and a speed of the audio file as delivered by the web browser-based avatar.
  • 19. The computing device of claim 1, further comprising reading a basic text file into the cloud-based application to further control the at least one aspect of the responsive audio file.
  • 20. The computing device of claim 1, the specific, structured interactive animated conversational graphical interface configured to overwrite existing stored data based on new user inputs.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the priority benefit of U.S. Provisional Patent Application Ser. No. 62/618,550 filed on Jan. 17, 2018 and titled “Interactive Animated Conversational Interface System,” which is hereby incorporated by reference in its entirety.

Related Publications (1)
Number Date Country
20190220727 A1 Jul 2019 US
Provisional Applications (1)
Number Date Country
62618550 Jan 2018 US