An augmented reality computing device may display virtual objects overlaid on an environment that includes real objects. As such, a user of an augmented reality computing device may interact with both real and virtual objects during an augmented reality experience. In some instances, different objects may have different associated computing device actions that may be performed when interacting with the objects. Further, the commands available for a given object may change depending upon a state of the object. As such, a user may have to learn a relatively large and complex set of interactions to navigate the augmented reality experience.
Embodiments for operating a user interface on an augmented reality computing device comprising a see-through display system are disclosed. For example, one disclosed embodiment provides a method including receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
A see-through display system, e.g., a head-mounted display (HMD) device or other near-eye display device, may be used to present an augmented reality view of a background scene to a user. Such an augmented reality environment may include various virtual and real objects with which a user may interact via user inputs, such as speech inputs, gesture inputs, eye tracking inputs, motion inputs, and/or any other suitable input type. As a more specific example, a user may utilize a speech input to perform a command associated with a selected object in the augmented reality environment.
As different objects may have different associated commands, and as the commands associated with an object may change with changes in a state of the object, developing knowledge of user interface commands available for various objects may take time and effort.
Accordingly, embodiments are disclosed herein that relate to facilitating interactions with objects in an augmented reality computing device environment. Briefly, the disclosed embodiments relate to presenting, to a user, representations of commands that are available for an object based upon an object identity, and also potentially based on a state of the object. The representations are configured to present just those commands that are currently available so that a user can clearly see the available interactions at that time.
As will be described in greater detail below, display device 104 may comprise one or more outwardly facing image sensors (e.g., two-dimensional cameras and/or depth cameras) configured to acquire image data (e.g. color/grayscale images, depth images/point cloud data, etc.) representing user environment 100 as the user navigates the environment. This image data may be used to obtain information regarding the layout of the environment (e.g., three-dimensional surface map, etc.) and objects contained therein, such as bookcase 108, sofa 114, desk 117, telephony device 113, and music or media device 115.
Display device 104 may overlay one or more virtual images or objects over real objects in the user field of view 102. Example virtual objects depicted in
A user, e.g., user 106, of an augmented reality user interface may be able to perform any suitable actions on real and virtual objects in an augmented reality environment. However, as mentioned above, the interactions available to the user may depend on the object type and/or object state. For example, if the selected object is a virtual television device 111, then commands available for virtual television device 111 may depend on a state of television device 111. As a more specific example, if virtual television device 111 is playing a movie, then commands such as “pause,” “forward,” and “rewind” may be available for the user to select in that state, whereas commands such as “play” may not be available. Likewise, if virtual television device 111 is not playing a movie when selected, then commands such as “play,” “forward,” and “rewind” may be available for the user to select whereas other commands, such as “pause,” may not be available for selection.
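The state-dependent availability of commands described above may be illustrated with a brief sketch. The following Python example is not taken from the disclosed embodiments; the state names and command strings are assumptions chosen for illustration only.

```python
from enum import Enum, auto


class TvState(Enum):
    """Possible playback states for a hypothetical virtual television object."""
    STOPPED = auto()
    PLAYING = auto()


# Commands operable in each state; "play" is hidden while playing,
# and "pause" is hidden while stopped, as described above.
TV_COMMANDS_BY_STATE = {
    TvState.STOPPED: {"play", "forward", "rewind"},
    TvState.PLAYING: {"pause", "forward", "rewind"},
}


def available_commands(state: TvState) -> set[str]:
    """Return the group of commands currently operable for the TV object."""
    return TV_COMMANDS_BY_STATE[state]


if __name__ == "__main__":
    print(available_commands(TvState.PLAYING))   # {'pause', 'forward', 'rewind'}
    print(available_commands(TvState.STOPPED))   # {'play', 'forward', 'rewind'}
```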
Since the list of actions may vary between objects and also over time as an object changes state, it may be difficult for a user to know which actions are available in a current context of a selected object. Thus, according to embodiments disclosed herein, augmented reality images displayed through display device 104, potentially in combination with information regarding use environment 100, may be used to present representations of commands associated with user-selected objects on display device 104. For example, upon selection by user 106 of one or more real or virtual objects in the augmented environment, display device 104 may present one or more augmented reality features that indicate possible actions which may be performed on the selected objects.
User 106 may select an object for interaction in any suitable manner detectable by display device 104. For example, user 106 may select an object by gazing at the object, wherein a gaze direction may be detected by one or more eye tracking sensors, potentially for a threshold amount of time. In another example, user 106 may select an object by moving his or her head, hand, entire body, etc., to or towards the object, as detected by inertial motion sensors and/or image sensors. A user also may issue voice commands detected by one or more microphones. Additional details regarding detecting user input commands via sensors of display device 104 are discussed below with respect to
As remarked above, since commands available for selected objects may depend on the type of objects selected, the state of the objects selected, and the number of objects selected, it may be difficult for a user to remember or keep track of possible available actions which may be performed on selected objects. Thus, as described in more detail below with regard to
A see-through display device according to the present disclosure may take any suitable form, including but not limited to near-eye devices such as the head-mounted display device 104 of
Display system 300 comprises one or more lenses 302 that form a part of a display subsystem 304, such that images may be displayed via lenses 302 (e.g. via projection onto lenses 302, waveguide system(s) incorporated into lenses 302, and/or in any other suitable manner). Display system 300 further comprises one or more outward-facing image sensors 306 configured to acquire images of a background scene and/or physical space being viewed by a user, and may include one or more microphones 308 configured to detect sounds, such as voice commands from a user. Outward-facing image sensors 306 may include one or more depth sensors and/or one or more two-dimensional image sensors.
Display system 300 may further comprise a gaze detection subsystem 310 configured to detect a direction of gaze of each eye of a user, as described above. Gaze detection subsystem 310 may be configured to determine gaze directions of each of a user's eyes in any suitable manner. For example, in the depicted embodiment, gaze detection subsystem 310 comprises one or more glint sources 312, such as infrared light sources, configured to cause a glint of light to reflect from each eyeball of a user, and one or more image sensors 314, such as inward-facing sensors, configured to capture an image of each eyeball of the user. Changes in the glints from the user's eyeballs and/or a location of a user's pupil as determined from image data gathered via the image sensor(s) 314 may be used to determine a direction of gaze. Further, a location at which gaze lines projected from the user's eyes intersect the external display may be used to determine an object at which the user is gazing (e.g. a displayed virtual object and/or real background object). Gaze detection subsystem 310 may have any suitable number and arrangement of light sources and image sensors.
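One illustrative way to resolve which object a gaze falls on, once a gaze direction has been estimated, is to cast the gaze as a ray and test it against object bounds. The geometry and names below are assumptions for the sake of a sketch, not a description of the disclosed gaze detection subsystem.

```python
import numpy as np


def gazed_object(eye_pos, gaze_dir, objects):
    """Return the nearest object whose bounding sphere the gaze ray hits.

    eye_pos: (3,) ray origin; gaze_dir: (3,) gaze direction;
    objects: iterable of (name, center (3,), radius) bounding spheres.
    """
    eye_pos = np.asarray(eye_pos, dtype=float)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)

    best_name, best_t = None, np.inf
    for name, center, radius in objects:
        to_center = np.asarray(center, dtype=float) - eye_pos
        t = float(np.dot(to_center, gaze_dir))          # closest approach along the ray
        if t < 0:
            continue                                     # object is behind the user
        miss_sq = np.dot(to_center, to_center) - t * t   # squared ray-to-center distance
        if miss_sq <= radius * radius and t < best_t:
            best_name, best_t = name, t
    return best_name


if __name__ == "__main__":
    scene = [("virtual_tv", (0.0, 0.0, 3.0), 0.5), ("bookcase", (2.0, 0.0, 3.0), 0.8)]
    print(gazed_object((0, 0, 0), (0, 0, 1), scene))  # virtual_tv
```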
Display system 300 may further comprise additional sensors. For example, display system 300 may comprise a global positioning (GPS) subsystem 316 to allow a location of the display system 300 to be determined. This may allow a user of display system 300 to be presented different groups of commands associated with selected objects at different locations, for example.
Display system 300 may further include one or more motion sensors 318 to detect movements of a user's head when the user is wearing display system 300. Motion data may be used, potentially along with eye-tracking glint data and outward-facing image data, for gaze detection, as well as for image stabilization to help correct for blur in images from the outward-facing image sensor(s) 306. The use of motion data may allow changes in gaze location to be tracked even if image data from outward-facing image sensor(s) 306 cannot be resolved. Likewise, motion sensors 318, as well as microphone(s) 308 and gaze detection subsystem 310, also may be employed as user input devices, such that a user may interact with the display system 300 via gestures of the eye, neck and/or head, as well as via verbal commands. It will be understood that sensors illustrated in
Display system 300 further comprises a controller 320 having a logic subsystem 322 and a data holding subsystem 324 (which also may be referred to as a storage system) in communication with the sensors, gaze detection subsystem 310, and display subsystem 304. Data holding subsystem 324 comprises instructions stored thereon that are executable by logic subsystem 322, for example, to receive and interpret inputs from the sensors, to identify movements of a user, to detect user selection of one or more objects, and to perform actions or commands on selected objects, among other tasks.
It will be appreciated that the depicted display devices 104 and 300 are described for the purpose of example, and thus are not meant to be limiting. It is to be understood that the display device may include additional and/or alternative sensors, cameras, microphones, input devices, output devices, etc. than those shown without departing from the scope of this disclosure. Further, the physical configuration of a display device and its various sensors and subcomponents may take a variety of different forms without departing from the scope of this disclosure.
At 402,
First object 410 may be a television device, such as virtual television device 111 shown in
As another example, first object 410 may represent a telephone device, e.g., telephony device 113 shown in
After the first object 410 is selected by the user and/or after the target region 412 is gazed upon by the user, visibility of representations of commands in a first group of commands representing currently available commands may be triggered to display to the user a current set of available actions associated with the selected object. An example of such representations is shown at 404 in
This presentation of available commands for a selected object may be triggered in a variety of ways. For example, the presentation of commands may be triggered in response to user 106 gazing or focusing on the selected object for a threshold time duration, e.g., at least one second. As another example, a menu of available commands may be activated and presented to the user in response to a degree of user stability as detected via eye tracking sensors and/or motion sensors. For example, if user 106 stands still and looks at first object 410 with minimal changes to body position and view direction for a threshold time duration, then menu visibility may be enabled. As another example, a command menu may be activated in response to user 106 gazing or focusing on a secondary embedded focus target, e.g., target region 412. For example, user 106 may look at an icon or glowing particle cluster overlaid on the object, enabling menu visibility. As another example, presentation of commands associated with a selected object may be activated by speech input received from user 106. For example, user 106 may speak a key phrase, such as “help me” to activate a display of commands associated with the selected object.
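Two of the triggering conditions described above, gaze dwell time and user stability, may be sketched as follows. The threshold values and helper names are assumptions for illustration, not parameters of the disclosed embodiments.

```python
import time


class MenuTrigger:
    """Tracks gaze dwell and user stability to decide when to show a command menu."""

    def __init__(self, dwell_threshold_s=1.0, movement_threshold=0.05):
        self.dwell_threshold_s = dwell_threshold_s
        self.movement_threshold = movement_threshold
        self._gaze_start = None
        self._gazed_object = None

    def update(self, gazed_object, movement_magnitude, now=None):
        """Return True when the menu for `gazed_object` should become visible."""
        now = time.monotonic() if now is None else now

        # Restart the dwell timer whenever gaze moves to a different object
        # or the user moves more than the stability threshold allows.
        if gazed_object != self._gazed_object or movement_magnitude > self.movement_threshold:
            self._gazed_object = gazed_object
            self._gaze_start = now
            return False

        return (gazed_object is not None
                and now - self._gaze_start >= self.dwell_threshold_s)


if __name__ == "__main__":
    trigger = MenuTrigger()
    trigger.update("virtual_tv", movement_magnitude=0.01, now=0.0)
    print(trigger.update("virtual_tv", movement_magnitude=0.01, now=1.2))  # True
```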
As still another example, presentation of commands associated with the selected object may be activated by object-to-object proximity. For example, a presentation of a group of commands for an object may be activated in response to a change in distance between the selected object and another object in the environment. As a more specific example, an “Actions” holographic object may be floating in space or overlaid on the environment. User 106 may look at first object 410, which may for example be a floating TV object, and move it closer to the “Actions” object. As a result of this movement, a menu may emerge from the “Actions” object that enables the user to change the channel or perform other TV-specific actions.
As another example, as shown in
For example, as shown in
Various presentation methods, both non-interactive and interactive, may be used to present a group of commands associated with a selected object to the user. In some examples, presenting the group of commands to the user may include displaying, via the see-through device 104, representations of commands at a location adjacent to the selected object 410 as shown at 422 in
The representations of commands displayed to user 106 may take any suitable form. For example, the representations may include icons, floating three-dimensional signs, lists of available command options, and other indications of available commands. As a more specific example, a text menu of available actions may be wrapped onto the surfaces of real-world walls, e.g., as discovered with surface reconstruction technology. Available actions for a selected object also may be displayed on selected objects directly, and may use standardized iconography. For example, a frame around a floating TV object might contain a speaker-image icon that lights up when audio-related actions are available to the user. As another example, audio representations may be presented to the user indicating available commands for a selected object. Further, a speech audio output may be presented to the user describing which actions are available for a selected object in its current state.
After being presented the first group of available commands for selected object 410, user 106 may then provide user input selecting one of the commands from the first group of commands. Examples of user input for selecting a command include but are not limited to speech, touch, gaze, gesture, posture, or any combination of these. Where users may select commands via speech, a plurality of key-phrases may be associated with each command so that, in response to a user speech input of a key-phrase associated with a command, the command corresponding to the key-phrase is performed on the selected object. This is described in more detail below with regard to
In some examples, performing the command on the first object 410 may change the state of the first object, e.g., from a first state to a second state. For example, at 404,
When the first object changes state, a second group of commands may be provided to the user, as shown by the difference between user interfaces 422 and 426 in
In some examples, user 106 may select multiple objects, e.g., two or more objects, in an environment and perform actions on the selected objects. For example, user 106 may select both first object 410 and second object 414 via one or more inputs such as gazing at the objects, speech input, gesture input, combinations thereof, and/or any other suitable input. In this instance, specific groups of commands for multiple objects may be different than groups of commands available when one object is selected. For example, if first object 410 is a television device playing a video and second object 414 is a media device playing music, and user 106 selects both the first and second objects, then the group of commands presented to the user may include volume control commands to globally adjust the volume of both devices. Such groups of commands related to global behaviors and/or multiple selected objects may be activated at a higher level, e.g., via an operating system of the augmented reality computing device or at an application level of an application running on the augmented reality computing device. In contrast, groups of commands specialized to per-object behaviors may be activated within each object during state transitions of the object. For example, per-object groups of commands may change along with the object as its properties and capabilities change over time.
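A minimal sketch of the multiple-selection case follows. The grouping rule (present only global commands supported by every selected object) and the object identifiers are assumptions chosen to illustrate the idea of application-level handling of global commands.

```python
# Per-object command groups, keyed by an illustrative object identifier.
PER_OBJECT_COMMANDS = {
    "virtual_tv":   {"pause", "forward", "rewind", "volume up", "volume down"},
    "media_player": {"pause", "skip", "volume up", "volume down"},
}

# Commands promoted to a global group when more than one object is selected.
GLOBAL_COMMANDS = {"volume up", "volume down"}


def commands_for_selection(selected_ids):
    """Return the command group to present for the current selection."""
    if len(selected_ids) == 1:
        # Single selection: the object's own, state-dependent group.
        return set(PER_OBJECT_COMMANDS[selected_ids[0]])
    # Multiple selection: only global commands supported by every selected object,
    # handled at the application or operating-system level.
    shared = set.intersection(*(PER_OBJECT_COMMANDS[i] for i in selected_ids))
    return shared & GLOBAL_COMMANDS


if __name__ == "__main__":
    print(commands_for_selection(["virtual_tv"]))
    print(commands_for_selection(["virtual_tv", "media_player"]))  # volume commands only
```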
It may be desirable for applications which use speech recognition systems to register a minimal number of speech commands, in order to reduce false positives where a user accidentally triggers an undesired action, and to ensure that speech keywords are valid in a current context.
However, in other examples, for each command in a group of commands associated with selected objects, synonyms for speech keywords may be used to enable an application to map multiple phrases to a single command. In this way, a user may be more likely to successfully remember a speech command if several synonyms are available for the same command, since the user needs to remember only one of them.
For example, as remarked above, a plurality of key phrases may be associated with each command in a group of commands associated with selected objects. In response to receiving speech input from a user of a selected key phrase in the plurality of key phrases, the command corresponding to the selected key phrase may be performed. In this way, complexity and performance costs associated with an augmented reality computing device with different command groups for different types and states of objects may be reduced, leading to a more scalable system for both activating action menus for selected objects and triggering actions within a menu.
The first object type 602 may also have a second state 608 and a second group of commands 616 associated with the second state 608. The second group of commands 616 may include a plurality of commands, e.g., command 1 through command N, which are recognizable by a selected object with the first object type in the second state. Each command in the second group of commands 616 may have a plurality of key phrases associated with the command. For example, command 1 has key phrase 1 through key phrase N associated with it.
A similar system may be used for a second object type 604. For example, second object type 604 may have a first state 610 and a first group of commands 618 associated with the first state 610. The first group of commands 618 may include a plurality of commands, e.g., command 1 through command N, which are recognizable by a selected object with the second object type in the first state. Each command in the first group of commands 618 may have a plurality of key phrases associated with the command.
The second object type 604 may also have a second state 612 and a second group of commands 620 associated with the second state 612. The second group of commands 620 may include a plurality of commands, e.g., command 1 through command N, which are recognizable by a selected object with the second object type in the second state. Each command in the second group of commands 620 may have a plurality of key phrases associated with the command.
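The per-type, per-state organization described above might be represented as a nested registry, as sketched below. The concrete object types, states, commands, and key phrases are illustrative assumptions, not values taken from the disclosure.

```python
# COMMAND_REGISTRY[object_type][state][command] -> list of key phrases (synonyms)
COMMAND_REGISTRY = {
    "television": {
        "stopped": {
            "play":   ["play", "load video", "play movie", "start"],
            "rewind": ["rewind", "go back"],
        },
        "playing": {
            "pause":  ["pause", "hold on", "stop playback"],
            "rewind": ["rewind", "go back"],
        },
    },
    "telephone": {
        "idle":    {"call":    ["call", "dial", "phone"]},
        "in_call": {"hang up": ["hang up", "end call"]},
    },
}


def command_group(object_type: str, state: str) -> dict:
    """Return the {command: key phrases} group for an object type in a given state."""
    return COMMAND_REGISTRY.get(object_type, {}).get(state, {})


if __name__ == "__main__":
    print(sorted(command_group("television", "playing")))  # ['pause', 'rewind']
```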
In some examples, these key phrases may be handled by a speech recognition system at an application level for an application running on an augmented reality computing device or at an operating system level of an augmented reality computing device. For example, a user may provide speech input of one of the key phrases associated with a command in the first group of commands and the application, the operating system, or the speech recognition system may interpret the speech input of a key phrase into the associated command and send the associated command to the user selected objects.
As a more specific example, in an environment containing a floating holographic television object selected by a user, the speech recognition system may associate the key phrases “load video,” “play movie,” and “start” with a “play” command recognizable by the television object. For example, the television object may not recognize the key phrases “load video,” “play movie,” and “start” but may only recognize the command “play.” In this example, the speech recognition system may translate the key phrases “load video,” “play movie,” and “start” into the command “play” sent to the television object to perform the play command.
Further, key phrases associated with commands may be selectively activated or deactivated in response to a change in context, changes in user selections of objects, and changes in states of the selected objects. For example, individual objects may enable or disable groups of key phrases as they go through context transitions. As a more specific example, when looking at a video-playing television object, the object may send a request to the speech recognition system to activate a group of key phrases associated with a particular command based on knowledge of its own properties, such as type and state. Activating a group of key phrases enables the speech recognition system to register all key phrases (and synonyms) that belong to the group of key phrases so that the system may be able to recognize each key phrase and perform the corresponding command. Further, a group of key phrases may be deactivated in response to a selection of an object and/or other suitable circumstances so that an inactive action for the selected object does not have speech commands registered to it, thus helping to decrease the chance of speech recognition false positives. In some examples, global key phrase groups associated with global commands, e.g., when multiple objects are selected by a user, may be activated or deactivated at higher levels of the application, which have visibility across multiple objects and a better understanding of the application and current augmented environment context.
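The following sketch illustrates how a speech recognition layer might register and unregister key-phrase groups as an object changes state, and translate a recognized phrase into the single command the object understands. The class and method names are assumptions for illustration and do not describe any particular speech recognition API.

```python
class SpeechCommandRouter:
    """Maps registered key phrases to commands and forwards them to objects."""

    def __init__(self):
        self._active = {}  # key phrase -> (command, target object)

    def activate_group(self, target, phrase_group):
        """Register every key phrase (and synonym) in a group for `target`."""
        for command, phrases in phrase_group.items():
            for phrase in phrases:
                self._active[phrase.lower()] = (command, target)

    def deactivate_group(self, phrase_group):
        """Unregister a group so inactive actions cannot be falsely triggered."""
        for phrases in phrase_group.values():
            for phrase in phrases:
                self._active.pop(phrase.lower(), None)

    def on_speech(self, utterance):
        """Translate a recognized utterance into a command and deliver it."""
        entry = self._active.get(utterance.lower())
        if entry is None:
            return None  # not an active key phrase in the current context
        command, target = entry
        target.perform(command)  # the object only needs the command, not the synonyms
        return command


class Television:
    def perform(self, command):
        print(f"television performs: {command}")


if __name__ == "__main__":
    router = SpeechCommandRouter()
    tv = Television()
    stopped_group = {"play": ["play", "load video", "play movie", "start"]}
    router.activate_group(tv, stopped_group)
    router.on_speech("load video")          # television performs: play
    router.deactivate_group(stopped_group)  # e.g., after the TV enters the playing state
    print(router.on_speech("start"))        # None
```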
At 702, method 700 includes displaying one or more virtual images such that a field of view of a user through the see-through display system comprises a set of augmented reality features. The set of augmented reality features may be based upon a type, content, and/or property of the virtual images and/or real objects. For example, the augmented reality features may include real or virtual objects selectable by a user. Further, the features also may include sounds or other augmented reality environment aspects.
At 704, method 700 includes receiving a user input selecting an object. Examples of user inputs may include gaze inputs, voice inputs, gesture inputs, posture inputs, body motion inputs, eye tracking inputs, and/or any other suitable inputs. As one more specific non-limiting example, an object may be selected in response to determining that the user has gazed at the selected object for a duration greater than a first time threshold. This is shown at 706, where method 700 may include determining that a user has gazed at the selected object.
At 708, method 700 includes determining a first group of commands that are operable based on one or more of an identification of the selected object and a state of the object. In some examples, for each command in the first group of commands, a plurality of key phrases may be associated with the command, so that, in response to receiving speech input of a selected key phrase in the plurality of key phrases, a command corresponding to the selected key phrase is performed.
At 712, method 700 includes presenting the first group of commands to the user. For example, presenting the first group of commands to the user may include displaying representations of commands in the first group of commands, potentially at a location adjacent to the selected object, via the see-through display system. As another example, presenting the first group of commands to the user may include displaying, via the see-through display system, representations of commands in the first group of commands on a surface of the selected object or a second object, where the second object is different from the selected object.
As mentioned above, various triggering conditions may be used to determine whether to present the first group of commands to a user. For example, at 714, method 700 may include presenting the first group of commands to the user if an amount of user movement is less than a movement threshold. As a more specific example, the first group of commands may be presented to the user in response to an amount of user movement remaining less than a movement threshold for a threshold duration while the user focuses on the selected object. Likewise, at 716, method 700 may include presenting the first group of commands to the user if the user is gazing at a target region of the selected object, or other suitable target. As another example, the first group of commands may be presented to the user in response to determining that the user has gazed at the selected object for a time duration greater than a second time threshold, where the second time threshold is greater than a first time threshold used in the determination of whether the user is selecting the object. Further, at 718, method 700 may include presenting the first group of commands to the user in response to a speech input received from the user.
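A small sketch of the two-threshold behavior just described: a shorter dwell selects the object, while a longer dwell on the same object triggers presentation of its commands. The threshold values are assumptions for illustration.

```python
# Illustrative thresholds: a short dwell selects, a longer dwell shows the menu.
SELECT_THRESHOLD_S = 0.5
PRESENT_THRESHOLD_S = 1.5   # must be greater than the selection threshold


def gaze_stage(dwell_seconds: float) -> str:
    """Classify what a continuous gaze of `dwell_seconds` on one object triggers."""
    if dwell_seconds >= PRESENT_THRESHOLD_S:
        return "present_commands"   # show the first group of commands
    if dwell_seconds >= SELECT_THRESHOLD_S:
        return "select_object"      # object is selected, menu not yet shown
    return "none"


if __name__ == "__main__":
    print(gaze_stage(0.8))   # select_object
    print(gaze_stage(2.0))   # present_commands
```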
Further, as mentioned above, distance also may be used as a trigger to display commands. Thus, at 720, method 700 may include increasing a number of commands presented to the user in response to a decrease in a user distance from the selected object. For example, a number of commands in the first group of commands presented to the user may be increased in response to detecting a decrease in distance between the user and the selected object.
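The distance-based behavior at 720 might look like the following sketch, in which the closer the user stands to the selected object, the more of its currently operable commands are shown. The distance bands and command ordering are assumptions for illustration.

```python
def commands_to_present(all_commands, distance_m):
    """Return a prefix of the (priority-ordered) command list sized by user distance.

    Distance bands are illustrative: far away shows only the most important
    commands; moving closer reveals more of the group.
    """
    if distance_m > 4.0:
        count = 2
    elif distance_m > 2.0:
        count = 4
    else:
        count = len(all_commands)   # close enough to show the full group
    return all_commands[:count]


if __name__ == "__main__":
    tv_commands = ["pause", "rewind", "forward", "volume up", "volume down", "captions"]
    print(commands_to_present(tv_commands, distance_m=5.0))  # ['pause', 'rewind']
    print(commands_to_present(tv_commands, distance_m=1.0))  # all six commands
```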
At 722, method 700 includes receiving a command from the first group of commands. For example, at 724, method 700 may include receiving speech input of a key phrase associated with the command. In response, method 700 comprises, at 726, performing the command.
Further, at 728, method 700 may include changing a state of the selected object. For example, the state of the selected object may be changed from a first state to a second state in response to the command. In some examples, the command received at 722 may be deactivated in response to a change of state of the selected object. In response, at 730, method 700 includes determining a second group of commands. The second group of commands may be determined based on the second state, where the second group of commands is different than the first group of commands. Further, in some examples, for each command in the second group of commands, a plurality of key phrases may be associated with the command, so that in response to receiving speech input of a selected key phrase in the plurality of key phrases, a command corresponding to the selected key phrase may be performed.
At 734, method 700 includes presenting the second group of commands to the user. The second group of commands may be presented to the user, for example, in response to changing the state of the selected object from the first state to the second state, and may be presented in any suitable manner and at any suitable location.
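Pulling the steps of the method together, the sketch below shows an object whose command group changes when a performed command transitions it from a first state to a second state. The object model and names are assumptions used only to illustrate the flow.

```python
class StatefulObject:
    """A selectable object whose command group depends on its current state."""

    def __init__(self, name, commands_by_state, transitions, initial_state):
        self.name = name
        self.state = initial_state
        self._commands_by_state = commands_by_state
        self._transitions = transitions   # (state, command) -> next state

    def current_commands(self):
        return self._commands_by_state[self.state]

    def perform(self, command):
        if command not in self.current_commands():
            raise ValueError(f"{command!r} is not operable in state {self.state!r}")
        self.state = self._transitions.get((self.state, command), self.state)


def present(obj):
    # Stand-in for rendering representations of commands near the selected object.
    print(f"{obj.name} [{obj.state}]: {sorted(obj.current_commands())}")


if __name__ == "__main__":
    tv = StatefulObject(
        "virtual_tv",
        commands_by_state={"stopped": {"play"}, "playing": {"pause", "rewind"}},
        transitions={("stopped", "play"): "playing", ("playing", "pause"): "stopped"},
        initial_state="stopped",
    )
    present(tv)          # first group of commands
    tv.perform("play")   # command changes the state from stopped to playing
    present(tv)          # second, different group of commands
```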
Thus, according to the above embodiments, a user may be presented with information regarding currently available commands operable on a selected object based upon information such as the object type and/or state. This may facilitate the learning and operation of an augmented reality user interface.
In some embodiments, the methods and processes described above may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product.
Computing system 800 includes a logic subsystem 802 and a storage subsystem 804. Computing system 800 may optionally include a display subsystem 806, input subsystem 808, communication subsystem 810, and/or other components not shown in
Logic subsystem 802 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, or otherwise arrive at a desired result.
The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic subsystem may be single-core or multi-core, and the programs executed thereon may be configured for sequential, parallel or distributed processing. The logic subsystem may optionally include individual components that are distributed among two or more devices, which can be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage subsystem 804 includes one or more physical devices configured to hold data and/or instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage subsystem 804 may be transformed—e.g., to hold different data.
Storage subsystem 804 may include removable media and/or built-in devices. Storage subsystem 804 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.) and/or magnetic memory devices (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage subsystem 804 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that storage subsystem 804 includes one or more physical devices. However, in some embodiments, aspects of the instructions described herein may be propagated by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) via a communications medium, as opposed to being stored on a storage device. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
In some embodiments, aspects of logic subsystem 802 and of storage subsystem 804 may be integrated together into one or more hardware-logic components through which the functionality described herein may be enacted. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC) systems, and complex programmable logic devices (CPLDs), for example.
When included, display subsystem 806 may be used to present a visual representation of data held by storage subsystem 804. This visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the storage subsystem, and thus transform the state of the storage subsystem, the state of display subsystem 806 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 806 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 802 and/or storage subsystem 804 in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem 808 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
When included, communication subsystem 810 may be configured to communicatively couple computing system 800 with one or more other computing devices. Communication subsystem 810 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow computing system 800 to send and/or receive messages to and/or from other devices via a network such as the Internet.
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
This application is a continuation of U.S. patent application Ser. No. 13/770,725, filed Feb. 19, 2013, the entirety of which is hereby incorporated herein by reference.