This disclosure generally relates to virtual assistants that can listen for commands or sounds and perform various control functions within an environment such as a building. More specifically, this disclosure relates to mechanisms that automatically coordinate responses and activities of multiple virtual assistant software agents being used together in or on the same premises.
A virtual assistant can be accessed through a software agent in a smart device. Examples of virtual assistants include Google Assistant™, Apple's Siri™, Amazon's Alexa™, and Microsoft's Cortana™. Deployment of multiple agents for virtual assistants, each with listening capability, is becoming increasingly common in homes and businesses. In some instances, these agents simultaneously record and attempt to respond to user requests.
In one example, a system includes a wireless communication interface and a processor communicatively coupled to the wireless communication interface, wherein the processor is configured to perform operations. The operations include identifying at least one audio request received through two or more agents in a network and determining at which of the agents to actively process a selected audio request of the at least one audio request using at least one of location context, person interaction context, or secondary trait analysis. The audio request(s) include simultaneous audio requests received through at least two of the agents, at least two differing audio requests received from different requesters, or both.
In an additional example, a method includes identifying, by a processor, at least one audio request received through at least two agents in a network and determining, by the processor, at which of the agents to actively process a selected audio request of the at least one audio request using at least one of location context or secondary trait analysis. The audio request includes simultaneous audio requests received through at least two agents, at least two differing audio requests received from different requesters, or both.
In a further example, a non-transitory computer-readable medium includes instructions that are executable by a computing device for causing the computing device to perform operations for multi-agent input coordination. The operations include identifying at least one audio request received through two or more agents in a network and determining at which of the agents to actively process a selected audio request of the at least one audio request using at least one of location context or secondary trait analysis. The audio request includes simultaneous audio requests received through at least two agents, at least two differing audio requests received from different requesters, or both.
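The core operation recited above can be illustrated with a minimal Python sketch. All names, the `AgentObservation` structure, and the "strongest signal from the nearest agent" policy are assumptions for illustration only; the disclosure contemplates richer decision signals (person interaction context, secondary trait analysis) than this stand-in.

```python
from dataclasses import dataclass

@dataclass
class AgentObservation:
    """One agent's capture of an audio request (hypothetical structure)."""
    agent_id: str
    request_text: str
    signal_strength: float   # e.g., normalized SNR of the captured audio
    speaker_distance: float  # estimated meters to the requester (location context)

def choose_active_agent(observations):
    """Group simultaneous captures of the same request, then pick, per
    request, the single agent that should actively process it."""
    by_request = {}
    for obs in observations:
        by_request.setdefault(obs.request_text, []).append(obs)
    decisions = {}
    for request, captures in by_request.items():
        # Illustrative policy: favor strong signal captured close to the speaker.
        best = max(captures,
                   key=lambda o: o.signal_strength / (1.0 + o.speaker_distance))
        decisions[request] = best.agent_id
    return decisions
```

With two agents hearing the same "turn on lights" request, only the agent with the better capture would actively process it; the other capture is discarded rather than producing a duplicate response.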
These and other features, aspects, and advantages of the present disclosure are better understood when the following Detailed Description is read with reference to the accompanying drawings.
Certain aspects of this disclosure relate to acoustic collaboration of multiple listening agents deployed in smart devices on a premises. This acoustic collaboration can reduce or avoid instances of inaccurate recognition of appropriate actions and user frustration with command execution that can result when agents simultaneously record and attempt to respond to user requests. Certain aspects of this disclosure relate to improving the accuracy of identifying requests and specifying where each request should be actively processed, improving quality of detection and providing better understanding of user commands and user intent throughout the premises.
In one example, a processor carries out operations including identifying at least one audio request received through at least two agents in a network and determining at which of the agents to actively process a selected audio request using at least one of location context or secondary trait analysis. The audio request can include a simultaneous audio request received through at least two agents, at least two differing audio requests received from different requesters, or both.
In some aspects, determinations are made using secondary trait analysis including, as examples, footstep recognition, non-language-sound cadence, habit pattern analysis, or tonal context. In some aspects, determinations are made using location context, including, as examples, localization, movement, or spatial usage restrictions.
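One plausible way to combine location context with the listed secondary traits is a weighted score per agent. This is a sketch under assumed conventions: the weights, the linear form, and the trait names as dictionary keys are illustrative and not specified by the disclosure.

```python
def score_agent(location_score, trait_scores, weights=None):
    """Combine location context with secondary-trait evidence
    (footstep recognition, non-language-sound cadence, habit pattern
    analysis, tonal context) into one confidence value for an agent.

    location_score: 0..1 confidence from localization/movement analysis.
    trait_scores: mapping of trait name -> 0..1 confidence.
    """
    weights = weights or {"location": 0.5, "footsteps": 0.2,
                          "cadence": 0.1, "habit": 0.1, "tone": 0.1}
    score = weights["location"] * location_score
    for trait, value in trait_scores.items():
        score += weights.get(trait, 0.0) * value
    return score
```

The agent with the highest score would be selected to actively process the request; a system could also threshold the score to reject requests no agent heard convincingly.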
In some aspects, the processor activates an attention token at the agent at which the selected audio request is to be honored and displays an indication of the attention token. In some aspects, the processor automatically sorts ambient sounds into sound categories, and uses the sound categories to provide location context for determining where to honor the selected audio request. In some aspects, actions to be taken by an agent are determined at least in part by a state machine to take into account previous audio requests and actions.
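The attention token and the state machine mentioned above can be sketched as follows. The class names, the string indication format, and the specific transitions are hypothetical; the point is only that one token marks the honoring agent, and that the action chosen for a request depends on prior requests.

```python
class AttentionToken:
    """A single token marking which agent currently holds attention."""
    def __init__(self):
        self.holder = None

    def activate(self, agent_id):
        self.holder = agent_id
        # A string the honoring agent could display as its indication.
        return f"attention:{agent_id}"

class AgentStateMachine:
    """Minimal state machine so an agent's next action can depend on
    previous audio requests (e.g., 'stop' only makes sense while playing)."""
    def __init__(self):
        self.state = "idle"

    def handle(self, request):
        transitions = {
            ("idle", "play music"): ("playing", "start playback"),
            ("playing", "stop"): ("idle", "stop playback"),
            ("playing", "louder"): ("playing", "raise volume"),
        }
        self.state, action = transitions.get((self.state, request),
                                             (self.state, "ignore"))
        return action
```

For example, "stop" received while idle is ignored, whereas the same words received after "play music" stop playback: identical audio, different action, because the state machine retains the earlier request.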
Detailed descriptions of certain examples are discussed below. These illustrative examples are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts. The following sections describe various additional aspects and examples with reference to the drawings in which like numerals indicate like elements, and directional descriptions are used to describe the illustrative examples but, like the illustrative examples, should not be used to limit the present disclosure.
Audio cleanup includes improving signal quality through acoustic collaboration of multiple listeners. A MICE agent can, in some aspects, identify simultaneous audio requests and specify where each request should be honored for high-quality understanding. The MICE agent can use passive observation to record a feed and cancel out noise on a secondary feed. Alternatively, strong signals in some areas may cause the agent to always reject commands (e.g., no agent should listen to sounds that originate in a bedroom).
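The two behaviors above can be sketched in a few lines of Python. This is one plausible reading, not the disclosure's implementation: the noise cancellation is a crude time-domain subtraction stand-in (a real system would align feeds and work in the frequency domain), and the restricted-area set is an assumed configuration.

```python
def clean_primary_feed(primary, secondary, alpha=0.5):
    """Subtract a scaled copy of the secondary (noise) feed from the
    primary feed, sample by sample. Stand-in for real noise cancellation."""
    return [p - alpha * s for p, s in zip(primary, secondary)]

# Illustrative spatial restriction: rooms whose sounds are never acted on.
RESTRICTED_ORIGINS = {"bedroom"}

def should_reject(origin_room):
    """Always reject commands whose sound originates in a restricted area."""
    return origin_room in RESTRICTED_ORIGINS
```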
[Drawing-based description truncated in this excerpt; it discussed state machine 306 of agent 250, external network 324, and block 412.]
In certain aspects, the system can enforce spatial usage restrictions per agent and optionally restrict functions to specific smart device positioning. For example, the system can prevent a user from starting a dishwasher unless the user is near the dishwasher. States or actions within a single space can expire or be restricted. For example, banking requests can be permitted only at a desk, and the permission to issue valid banking requests can be set to expire. States or actions can also be limited by the presence of certain individuals. The presence or absence of a specified individual or individuals in a location can be referred to as person interaction context. Actions can be taken or restricted (restriction itself can be considered an action) based on person interaction context, either alone or in combination with other factors. For example, a child can be prevented from turning on a television unless a parent is nearby.
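The three kinds of restriction in this paragraph can be expressed as a single policy gate. The policy table below encodes the disclosure's own examples (dishwasher proximity, banking at a desk with expiry, parent presence), but the rule structure, field names, and timestamps are assumed for illustration.

```python
DEFAULT_POLICIES = {
    "start dishwasher": {"room": "kitchen"},           # spatial restriction
    "banking": {"room": "office", "expires": 1000.0},  # expiring permission
    "turn on tv": {"requires_nearby": "parent"},       # person interaction context
}

def permit(action, room, nearby_people, now, policies=DEFAULT_POLICIES):
    """Return True if the requested action is allowed in this context."""
    rule = policies.get(action)
    if rule is None:
        return True  # unrestricted action
    if "room" in rule and room != rule["room"]:
        return False  # requester is not in the required space
    if "expires" in rule and now > rule["expires"]:
        return False  # per-space permission has lapsed
    if "requires_nearby" in rule and rule["requires_nearby"] not in nearby_people:
        return False  # required individual is absent
    return True
```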
In certain aspects, the system can allow privacy in different areas by removing conflicts between devices. Optionally, the users present near each agent can influence the personal assistant's behavior at that agent. For example, if children are near the agent, the personal assistant can adopt a friendlier or slower speaking voice than when the space is occupied solely by adults. In some aspects, the need for passwords or access tokens can be eliminated by the system using user classification and ranking to allow secure access to electronic files or to other computer resources.
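Both behaviors in this paragraph lend themselves to simple sketches. The age threshold, persona fields, and the numeric ranking scheme are hypothetical; the disclosure specifies neither how listeners are classified nor how ranks map to resources.

```python
def pick_persona(listeners):
    """Choose a speaking style from who is near the agent.
    listeners: list of dicts with an 'age' key (assumed classifier output)."""
    if any(person["age"] < 13 for person in listeners):
        return {"voice": "friendly", "rate_wpm": 120}  # slower, friendlier
    return {"voice": "neutral", "rate_wpm": 160}

def can_access(resource_min_rank, user_rank):
    """Token-free access check via user classification and ranking:
    the recognized user's rank must meet the resource's minimum."""
    return user_rank >= resource_min_rank
```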
In certain aspects, agents can cooperate to accomplish system-wide updates of changing characteristics of users (visual, audio, etc.) to naturally progress the identification of a user to account for aging, growth, etc. Each spatialized agent can adopt different behaviors as appropriate. Smart devices with appropriate smart assistant agents can be carried on the person or embedded in clothing, and can join the local network mesh as users walk down corridors.
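One assumed mechanism for naturally progressing a user's identification is an exponential moving average over a stored voiceprint (or visual) embedding, with the updated profile then propagated to the other agents. The EMA scheme and the list-of-floats embedding are illustrative choices, not drawn from the disclosure.

```python
def update_profile(profile, new_embedding, rate=0.05):
    """Blend a fresh observation into the stored profile so that
    identification tracks gradual change (aging, growth) without
    discarding the accumulated history."""
    return [(1 - rate) * old + rate * new
            for old, new in zip(profile, new_embedding)]
```

A small `rate` makes the profile drift slowly, so a single atypical observation (a user with a cold, say) cannot displace the established identity.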
Unless specifically stated otherwise, throughout this specification terms such as “processing,” “computing,” “determining,” “identifying,” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provides a result conditioned on one or more inputs. Suitable computing devices include multipurpose microprocessor-based computing systems accessing stored software that programs or configures the computing system from a general-purpose computing apparatus to a specialized computing apparatus implementing one or more aspects of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
Aspects of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied—for example, blocks can be re-ordered, combined, or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
The foregoing description of the examples, including illustrated examples, of the subject matter has been presented only for the purpose of illustration and description and is not intended to be exhaustive or to limit the subject matter to the precise forms disclosed. Numerous modifications, adaptations, and uses thereof will be apparent to those skilled in the art without departing from the scope of this subject matter. The illustrative examples described above are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts.
Number | Name | Date | Kind |
---|---|---|---|
6519561 | Farrell | Feb 2003 | B1 |
7139716 | Gaziz | Nov 2006 | B1 |
8340975 | Rosenberger | Dec 2012 | B1 |
8825020 | Mozer et al. | Sep 2014 | B2 |
9536540 | Avendano et al. | Jan 2017 | B2 |
9749583 | Fineberg | Aug 2017 | B1 |
9818061 | Shams et al. | Nov 2017 | B1 |
9866308 | Bultan et al. | Jan 2018 | B1 |
10074364 | Wightman et al. | Sep 2018 | B1 |
10079012 | Klimanis | Sep 2018 | B2 |
10102857 | Mixter et al. | Oct 2018 | B2 |
10147425 | Yang | Dec 2018 | B2 |
10152969 | Reilly et al. | Dec 2018 | B2 |
10650829 | Kline | May 2020 | B2 |
20120232886 | Capuozzo | Sep 2012 | A1 |
20130147600 | Murray | Jun 2013 | A1 |
20150162006 | Kummer | Jun 2015 | A1 |
20150222757 | Cheatham, III | Aug 2015 | A1 |
20160155443 | Khan | Jun 2016 | A1 |
20160219416 | Cromack | Jul 2016 | A1 |
20160329051 | Rajapakse | Nov 2016 | A1 |
20160364002 | Gates | Dec 2016 | A1 |
20170025124 | Mixter | Jan 2017 | A1 |
20170116986 | Weng et al. | Apr 2017 | A1 |
20170345420 | Barnett, Jr. | Nov 2017 | A1 |
20180024811 | De Vaan et al. | Jan 2018 | A1 |
20180096696 | Mixter | Apr 2018 | A1 |
20180133900 | Breazeal | May 2018 | A1 |
20180228006 | Baker et al. | Aug 2018 | A1 |
20180240454 | Raj et al. | Aug 2018 | A1 |
20180277107 | Kim | Sep 2018 | A1 |
20180286391 | Carey et al. | Oct 2018 | A1 |
20180293981 | Ni et al. | Oct 2018 | A1 |
20190206395 | Aoki | Jul 2019 | A1 |
20190206396 | Chen | Jul 2019 | A1 |
20200211544 | Mikhailov | Jul 2020 | A1 |
Number | Date | Country |
---|---|---|
106601248 | Apr 2017 | CN |
2018009397 | Jan 2018 | WO |
Entry |
---|
Shijia Pan, Tong Yu, Mostafa Mirshekari, Jonathon Fagert, Amelie Bonde, Ole J. Mengshoel, Hae Young Noh, and Pei Zhang, “FootprintID: Indoor Pedestrian Identification through Ambient Structural Vibration Sensing,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 1, No. 3, Article 89, Sep. 2017 (Year: 2017). |
Antonini et al., Smart Audio Sensors in the Internet of Things Edge for Anomaly Detection, IEEE Access, vol. 6, 2018, pp. 67594-67610. |
Dokmanic, Raking the Cocktail Party, IEEE Journal of Selected Topics in Signal Processing, vol. 9, No. 5, Aug. 2015, pp. 825-836. |
Moutinho et al., Indoor Global Localisation in Anchor-Based Systems Using Audio Signals, The Journal of Navigation, vol. 69, No. 5, Sep. 2016, pp. 1024-1040. |
Navarro et al., Real-Time Distributed Architecture for Remote Acoustic Elderly Monitoring in Residential-Scale Ambient Assisted Living Scenarios, Sensors, vol. 18, No. 8, Aug. 1, 2018, pp. 1-22. |
Quintana-Suarez et al., A Low Cost Wireless Acoustic Sensor for Ambient Assisted Living Systems, Applied Science, vol. 7, No. 9, Aug. 2017, pp. 1-15. |
Shin et al., Home IoT Device Certification Through Speaker Recognition, Advanced Communication Technology (ICACT), 17th International Conference, IEEE, Jul. 1-3, 2015, pp. 600-603. |
Number | Date | Country | |
---|---|---|---|
20200335106 A1 | Oct 2020 | US |