Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
Computing devices such as personal computers, laptop computers, tablet computers, cellular phones, and countless types of Internet-capable devices are becoming more and more prevalent in numerous aspects of modern life. As computers become more advanced, augmented-reality devices, which blend computer-generated information with the user's view of the physical world, are expected to become more prevalent.
To provide an augmented-reality experience, location- and context-aware mobile computing devices may be used by users as they go about various aspects of their everyday lives. Such computing devices are configured to sense and analyze a user's environment, and to intelligently provide information appropriate to the physical world being experienced by the user.
An augmented-reality capable device's ability to recognize a user's environment and objects within the user's environment is wholly dependent on vast databases that support the augmented-reality capable device. Currently, in order for an augmented-reality capable device to recognize objects within an environment, the augmented-reality capable device must know about the objects within the environment, or what databases to search for information regarding the objects within the environment. While more and more mobile computing devices are becoming augmented-reality capable, the databases upon which the mobile computing devices rely remain limited and non-dynamic.
The methods and systems described herein help provide for the detection and recognition of devices, by a mobile computing device, within a user's pre-defined local environment. These detection and recognition techniques allow target devices within the user's pre-defined local environment to send information about themselves and their location in the pre-defined local environment. In an example embodiment, a target device in a local environment of a wearable mobile computing device taking the form of a head-mounted display (HMD) broadcasts a local-environment message to a local WiFi router, and upon entry into the pre-defined local environment, the HMD receives the local-environment message. As such, the example methods and systems disclosed herein may help provide the user of the HMD with the ability to more dynamically and efficiently determine and recognize an object in the user's pre-defined local environment.
In one aspect, an exemplary method involves: (a) receiving, at a mobile computing device, a local-environment message corresponding to a pre-defined local environment, wherein the local-environment message comprises one or more of: (i) physical-layout information for the pre-defined local environment or (ii) an indication of at least one target device that is located in the pre-defined local environment, (b) receiving image data that is indicative of a field-of-view that is associated with the mobile computing device, (c) based at least in part on the physical-layout information in the local-environment message, locating the at least one target device in the field-of-view, and (d) causing the mobile computing device to display a virtual control interface for the at least one target device in a location within the field-of-view that is associated with the location of the at least one target device in the field-of-view.
In another aspect, a second exemplary method involves: (a) receiving, at a mobile computing device, a local-environment message corresponding to a pre-defined local environment, wherein the pre-defined local environment has at least one target device, and the local-environment message comprises interaction information for the at least one target device in the pre-defined local environment; and (b) based on the local-environment message, causing the mobile computing device to update an interaction data set of the mobile computing device.
In an additional aspect, a non-transitory computer readable medium having instructions stored thereon is disclosed. According to an exemplary embodiment, the instructions include: (a) instructions for receiving a local-environment message corresponding to a pre-defined local environment, wherein the local-environment message comprises one or more of: (i) physical-layout information for the local environment or (ii) an indication of at least one target device that is located in the local environment; (b) instructions for receiving image data that is indicative of a field-of-view that is associated with the mobile computing device; (c) instructions for locating, based at least in part on the physical-layout information in the local-environment message, the at least one target device in the field-of-view; and (d) instructions for displaying a virtual control interface for the at least one target device in a location within the field-of-view that is associated with the location of the at least one target device in the field-of-view.
In a further aspect, a second non-transitory computer readable medium having instructions stored thereon is disclosed. According to an exemplary embodiment, the instructions include: (a) instructions for receiving a local-environment message corresponding to a pre-defined local environment, wherein the pre-defined local environment has at least one target device, and the local-environment message comprises interaction information for the at least one target device in the pre-defined local environment; and (b) instructions for updating, based on the local-environment message, an interaction data set of the mobile computing device.
In yet another aspect, a system is disclosed. An exemplary system includes: (a) a mobile computing device, and (b) instructions stored on the mobile computing device executable by the mobile computing device to perform the functions of: receiving a local-environment message corresponding to a pre-defined local environment, wherein the local-environment message comprises one or more of: (i) physical-layout information for the pre-defined local environment or (ii) an indication of at least one target device that is located in the pre-defined local environment, receiving image data that is indicative of a field-of-view that is associated with the mobile computing device, based at least in part on the physical-layout information in the local-environment message, locating the at least one target device in the field-of-view, and displaying a virtual control interface for the at least one target device in a location within the field-of-view that is associated with the location of the at least one target device in the field-of-view.
These as well as other aspects, advantages, and alternatives, will become apparent to those of ordinary skill in the art by reading the following detailed description, with reference where appropriate to the accompanying drawings.
The following detailed description describes various features and functions of the disclosed systems and methods with reference to the accompanying figures. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative system and method embodiments described herein are not meant to be limiting. It will be readily understood that certain aspects of the disclosed systems and methods can be arranged and combined in a wide variety of different configurations, all of which are contemplated herein.
Furthermore, the particular arrangements shown in the Figures should not be viewed as limiting. It should be understood that other embodiments may include more or fewer of each element shown in a given Figure. Further, some of the illustrated elements may be combined or omitted. Yet further, an example embodiment may include elements that are not illustrated in the Figures.
Example embodiments disclosed herein relate to a mobile computing device receiving a local-environment message corresponding to a pre-defined local environment, receiving image data that is indicative of a field-of-view that is associated with the mobile computing device, and causing the mobile computing device to display a virtual control interface for a target device in a location within a field-of-view associated with the mobile computing device. Some mobile computing devices may be worn by a user. Commonly referred to as “wearable” computers, such wearable mobile computing devices are configured to sense and analyze a user's environment, and to intelligently provide information appropriate to the physical world being experienced by the user. Within the context of this disclosure, the physical world being experienced by the user wearing a wearable computer is a pre-defined local environment. Such wearable computers may sense and receive image data about the user's pre-defined local environment by, for example, determining the user's location in the environment, using cameras and/or sensors to detect objects near to the user, using microphones and/or sensors to detect what the user is hearing, and using various other sensors to collect information about the pre-defined environment surrounding the user.
In an example embodiment, the wearable computer takes the form of a head-mountable display (HMD) that may capture data that is indicative of what the wearer of the HMD is looking at (or would have been looking at, in the event the HMD is not being worn). The data may take the form of or include point-of-view (POV) video from a camera mounted on an HMD. Further, an HMD may include a see-through display (either optical or video see-through), such that computer-generated graphics can be overlaid on the wearer's view of his/her real-world (i.e., physical) surroundings. The HMD may also receive a local-environment message corresponding to the pre-defined local environment of the user. The local-environment message may include physical-layout information of the pre-defined local environment and an indication of target devices (i.e., objects) in the pre-defined local environment. In this configuration, it may be beneficial to display a virtual control interface for a target device in the user's pre-defined local environment at a location in the see-through display. In one example, the virtual control interface aligns with a portion of the real-world object that is visible to the wearer. In other examples, the virtual control interface may align with any portion of the pre-defined local environment that provides a suitable background for the virtual control interface.
To place a suitable virtual control interface for a target object in an HMD, the HMD may evaluate the local-environment message and the visual characteristics of the POV video that is captured at the HMD. For instance, to evaluate a given portion of the POV video, the HMD (or a supporting server system) may consider one or more visual characteristics such as the permanence level of real-world objects and/or features relative to the wearer's field of view, the coloration in the given portion, the visual pattern in the given portion, and/or the size and shape of the given portion, among other factors. The HMD may use this information, along with the information that is provided in the local-environment message, to locate the target devices within the pre-defined local environment.
For example, consider a user wearing an HMD who enters an office (i.e., a pre-defined local environment). The office might include various objects including a desk, scanner, computer, copier, and lamp, for example. Within the context of the disclosure, these objects may be known as target devices. Upon entering the office, the user's HMD may wait to receive data from a broadcasting object or from any target devices in the environment. The broadcasting object may be a router, for example. In one instance, the router sends a local-environment message to the HMD. The HMD then has physical-layout information for the local environment and/or self-describing information for the scanner, for example. The HMD thus knows where to look for the scanner, and upon finding it, the HMD can display information (based on the self-describing data) about the scanner on the HMD in an augmented-reality manner. The information may include, for example, a virtual control interface that displays information about the target device. In other examples, the virtual control interface may allow the HMD to control the target device.
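By way of illustration only, the following listing sketches one possible encoding of such a local-environment message for the office example above. The field names (e.g., "physical_layout," "devices," "self_description"), file names, and values shown are hypothetical and are not required by the embodiments described herein; any encoding that conveys physical-layout information and indications of target devices could be used.

```python
# Hypothetical local-environment message for the office example.
# All field names, file names, and values are illustrative only.
local_environment_message = {
    "environment_id": "office",
    "physical_layout": {
        "floor_plan_2d": "office_floorplan.png",   # 2D view of the environment
        "model_3d": "office.obj",                  # 3D model of the environment
        "description": "single-room office",
    },
    "devices": [
        {
            "device_id": "scanner-01",
            "type": "scanner",
            "position_m": [2.4, 0.9, 1.1],         # location within the room (x, y, z)
            "self_description": {
                "model_3d": "scanner.obj",
                "views_2d": ["scanner_front.png", "scanner_side.png"],
                "status": "ready",
                "controls": ["scan", "cancel"],
            },
        },
    ],
}
```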
While the foregoing example illustrates the HMD caching the local-environment message (i.e., storing it on a memory device of the HMD), in another embodiment, a local WiFi router of the environment may also cache the local-environment message. Referring to the office example above, the local WiFi router stores the local-environment message received from the scanner (received, for example, when the scanner connected to the WiFi network). The HMD pulls this information as the user walks into the office, and uses it as explained above. Other examples are also possible. Note that in the above-referenced example, receiving a local-environment message helped the HMD to identify target objects within the pre-defined local environment in a dynamic and efficient manner.
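The following is a minimal sketch, for purposes of illustration, of the cache-and-pull behavior described above. The class and method names are hypothetical placeholders; an actual router and HMD would communicate over their own networking stacks (e.g., WiFi or Bluetooth) rather than through direct method calls.

```python
# Illustrative sketch of a WiFi router caching local-environment messages and
# an HMD pulling them upon entering the pre-defined local environment.
# The classes and method names are hypothetical placeholders.
class LocalWifiRouter:
    def __init__(self):
        self._cache = {}                      # cached messages, keyed by device id

    def on_device_connected(self, device_id, message):
        # e.g., the scanner provides its local-environment message when it joins the network
        self._cache[device_id] = message

    def cached_messages(self):
        return list(self._cache.values())


class HeadMountedDisplay:
    def __init__(self):
        self.local_environment = []

    def on_enter_environment(self, router):
        # Pull the cached messages as the wearer walks into the office
        self.local_environment = router.cached_messages()
```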
In other embodiments the mobile computing device may take the form of a smartphone or a tablet, for example. Similar to the foregoing wearable computer example, the smartphone or tablet may collect information about the environment surrounding a user, analyze that information, and determine what information, if any, should be presented to the user in an augmented-reality manner.
The mobile computing device 102 may take various forms, and as such, may incorporate various display types to provide an augmented-reality experience. In an exemplary embodiment, mobile computing device 102 is a wearable mobile computing device and includes a head-mounted display (HMD). For example, wearable mobile computing device 102 may include an HMD with a binocular display or a monocular display. Additionally, the display of the HMD may be, for example, an optical see-through display, an optical see-around display, or a video see-through display. More generally, the wearable mobile computing device 102 may include any type of HMD configured to provide an augmented-reality experience to its user.
In order to sense the environment and experiences of the user, wearable mobile computing device 102 may include or be provided with input from various types of sensing and tracking devices. Such devices may include video cameras, still cameras, Global Positioning System (GPS) receivers, infrared sensors, optical sensors, biosensors, radio-frequency identification (RFID) systems, wireless sensors, accelerometers, gyroscopes, and/or compasses, among others.
In other example embodiments, the mobile computing device comprises a smartphone or a tablet. Similar to the previous embodiment, the smartphone or tablet enables the user to observe his/her real-world surroundings and also view a displayed image, such as a computer-generated image. The user holds the smartphone or tablet, which shows the real world combined with overlaid computer-generated images. In some cases, the displayed image may overlay a portion of the display screen of the user's smartphone or tablet. Thus, while the user of the smartphone or tablet is going about his/her daily activities, such as working, walking, reading, or playing games, the user may be able to see a displayed image generated by the smartphone or tablet at the same time that the user is looking out at his/her real-world surroundings through the display of the smartphone or tablet.
In other illustrative embodiments, the mobile computing device may take the form of a portable media device, personal digital assistant, notebook computer, or any other mobile device capable of capturing images of the real-world and generating images or other media content that is to be displayed to the user.
Access point 104 may take various forms, depending upon which protocol mobile computing device 102 uses to connect to the Internet 106. For example, in one embodiment, if mobile computing device 102 connects using 802.11 or via an Ethernet connection, access point 104 may take the form of a wireless access point (WAP) or wireless router. As another example, if mobile computing device 102 connects using a cellular air-interface protocol, such as a CDMA or GSM protocol, then access point 104 may be a base station in a cellular network, which provides Internet connectivity via the cellular network. Further, since mobile computing device 102 may be configured to connect to Internet 106 using multiple wireless protocols, it is also possible that mobile computing device 102 may be configured to connect to the Internet 106 via multiple types of access points.
Mobile computing device 102 may be further configured to communicate with a target device that is located in the user's pre-defined local environment. In order to communicate with the wireless router or the mobile computing device, the target devices 110a-c may include a communication interface that allows each target device to upload information about itself to the Internet 106. In one example, the mobile computing device 102 may receive information about the target device 110a from a local wireless router that received information from the target device 110a via WiFi. The target devices 110a-c may use other means of communication, such as Bluetooth, for example. In other embodiments, the target devices 110a-c may also communicate directly with the mobile computing device 102.
The target devices 110a-c could be any electrical, optical, or mechanical device. For example, the target device 110a could be a home appliance, such as an espresso maker, a television, a garage door, an alarm system, or an indoor or outdoor lighting system, or an office appliance, such as a copy machine. The target devices 110a-c may have existing user interfaces that may include, for example, buttons, a touch screen, a keypad, or other controls through which the target devices may receive control instructions or other input from a user. The existing user interfaces of the target devices 110a-c may also include a display, indicator lights, a speaker, or other elements through which the target devices may convey operating instructions, status information, or other output to the user. Alternatively, a target device, such as a refrigerator or a desk lamp, may have no outwardly visible user interface.
As shown by block 302, method 300 involves a mobile computing device receiving a local-environment message corresponding to a pre-defined local environment. The local-environment message comprises one or more of: (a) physical-layout information for the local environment or (b) an indication of at least one target device that is located in the pre-defined local environment. The mobile computing device then receives image data that is indicative of a field-of-view that is associated with the mobile computing device. Next, based at least in part on the physical-layout information in the local-environment message, the mobile computing device locates the at least one target device in the field-of-view. The mobile computing device then displays a virtual control interface for the at least one target device in a location within the field-of-view that is associated with the location of the at least one target device in the field-of-view.
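For purposes of illustration only, the following outline restates the steps of method 300 in sketch form. Each helper invoked below (e.g., receive_local_environment_message, capture_field_of_view, locate_target_devices, display_virtual_control_interface) is a hypothetical placeholder for functionality described elsewhere in this disclosure, not an actual API.

```python
# Illustrative outline of method 300; all helpers are hypothetical placeholders.
def perform_method_300(mobile_device):
    # Block 302: receive a local-environment message for the pre-defined local environment
    message = mobile_device.receive_local_environment_message()

    # Receive image data indicative of the device's field-of-view
    image_data = mobile_device.capture_field_of_view()

    # Locate the target device(s) based at least in part on the physical-layout information
    located = locate_target_devices(
        image_data,
        physical_layout=message.get("physical_layout"),
        device_indications=message.get("devices", []),
    )

    # Display a virtual control interface at a location associated with each located device
    for device, screen_region in located:
        mobile_device.display_virtual_control_interface(device, screen_region)
```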
For example, a user wearing an HMD may enter an office looking to make copies. The office might include a lamp 204, a computer 206, a copier 208, and a local wireless router 210 such as those illustrated in
As the user wearing the HMD enters the office (shown as 200 in
After receiving the local-environment message, the HMD may receive image data that is indicative of a field-of-view of the HMD. For example, the HMD may receive image data of the office 200. The image data may include images and video of the target devices 204, 206, and 208, for example. The image data may also be restricted to the field-of-view 202 associated with the HMD, for example. The image data may further include other things in the office, such as the desk (not numbered), that are not target devices and do not communicate with the HMD.
Once the HMD has received image data relating to a field-of-view of the HMD, the user, using the HMD, may locate the target devices in the office and in the field-of-view of the HMD. For example, the target device may be located based, at least in part, on the physical-layout information of the local-environment message. To do so, the HMD may use the data defining the 3D model of the pre-defined local environment, the data defining the 2D views of the pre-defined local environment, and the description of the pre-defined local environment to locate an area of the target device, for example. After locating an area of the target device, the HMD may locate the target device within the field-of-view of the HMD. The HMD may also compare the field-of-view image data to the data (indication information of the local-environment message) defining the 3D model of the target device, the data defining the 2D views of the target device, and the description of the target device to facilitate the identification and location of the target device, for example. Some or all of the information in the local-environment message may be used.
To locate (and identify) the target device, in one embodiment, the HMD may compare the field-of-view image data obtained by the HMD to the data defining the 3D model of the target device to locate and select the target device that is most similar to the 3D model. Similarity may be determined based on, for example, a number or configuration of the visual features (e.g., colors, shapes, textures, depths, brightness levels, etc.) in the target device (or located area) and in the provided data (i.e., in the 3D model representing the target device). For example, a histogram of oriented gradients technique may be used (e.g., as described in "Histogram of Oriented Gradients," Wikipedia, (Feb. 15, 2012), http://en.wikipedia.org/wiki/Histogram_of_oriented_gradients) to identify the target device, in which the provided 3D model is described by a histogram (e.g., of intensity gradients and/or edge directions), the image data of the target device (or the area that includes the target device) is likewise described by a histogram, and a similarity is determined by comparing the two histograms. Other techniques are possible as well.
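A minimal sketch of one such histogram-based comparison, using the OpenCV library's default HOG descriptor, is provided below for illustration. The image file names are hypothetical, and rendered views of the provided 3D model and cropped regions of the field-of-view image data are assumed to be available as image files; other descriptors or similarity measures could be used instead.

```python
# Illustrative HOG-based similarity comparison between a rendered view of the
# provided 3D model and candidate regions cropped from the field-of-view image.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()                      # default 64x128 detection window

def hog_vector(image_path):
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    image = cv2.resize(image, (64, 128))       # match the descriptor's window size
    return hog.compute(image).flatten()

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Hypothetical file names: one rendered view of the copier's 3D model, and
# several candidate regions cropped from the HMD's field-of-view image data.
model_descriptor = hog_vector("copier_model_view.png")
candidates = ["fov_region_1.png", "fov_region_2.png", "fov_region_3.png"]
scores = {path: cosine_similarity(model_descriptor, hog_vector(path)) for path in candidates}
best_match = max(scores, key=scores.get)       # region most similar to the 3D model
```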
Once the copier 208 is located and identified, a virtual control interface for the copier 208 may be displayed in a field-of-view of the HMD. The virtual control interface may be displayed in the field-of-view of the HMD and be associated with the location of the copier 208, for example. In some embodiments, the virtual control interface is superimposed over the copier (i.e., the target device). The virtual control interface may include control inputs and outputs for the copier 208, as well as operating instructions for the copier 208, for example. The virtual control interface may further include status information for the copier, for example. The user may receive an indication that the copier 208 is "out of paper," or instructions on how the user should load paper and make a copy, for example. In other examples, once the virtual control interface is displayed, the user may physically interact with the virtual control interface to operate the target device. For example, the user may interact with the virtual control interface of the copier 208 to make copies. In this example, the virtual control interface may not be superimposed over the copier 208.
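By way of example only, the following sketch shows one hypothetical rule for choosing where to place the virtual control interface relative to the located copier 208. The bounding box is assumed to come from the locating step described above, coordinates are in display pixels, and the preference for placing the panel beside the device (falling back to superimposing it) is an illustrative design choice rather than a required behavior.

```python
# Illustrative placement of a virtual control interface relative to a located
# target device; the layout rule and panel size are hypothetical.
def place_virtual_control_interface(device_box, display_width, panel_size=(200, 150)):
    x, y, w, h = device_box
    panel_w, panel_h = panel_size

    # Prefer anchoring the panel beside the device so the device remains visible;
    # superimpose it over the device only if there is no room to either side.
    if x + w + panel_w <= display_width:
        panel_x = x + w                     # to the right of the device
    elif x - panel_w >= 0:
        panel_x = x - panel_w               # to the left of the device
    else:
        panel_x = x                         # superimposed over the device
    return panel_x, y, panel_w, panel_h


# Example: copier located at a 320x240 region starting at (600, 200) on a
# 1280-pixel-wide see-through display; the panel is placed to its right at (920, 200).
print(place_virtual_control_interface((600, 200, 320, 240), 1280))
```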
It is to be understood that the virtual control interfaces illustrated in
Systems and devices in which exemplary embodiments may be implemented will now be described in greater detail. In general, an exemplary system may be implemented in or may take the form of a wearable computer. However, an exemplary system may also be implemented in or take the form of other devices, such as a mobile smartphone, among others. Further, an exemplary system may take the form of non-transitory computer readable medium, which has program instructions stored thereon that are executable by a processor to provide the functionality described herein. An exemplary system may also take the form of a device such as a wearable computer or mobile phone, or a subsystem of such a device, which includes such a non-transitory computer readable medium having such program instructions stored thereon.
Each of the frame elements 504, 506, and 508 and the extending side-arms 514, 516 may be formed of a solid structure of plastic and/or metal, or may be formed of a hollow structure of similar material so as to allow wiring and component interconnects to be internally routed through the head-mounted device 502. Other materials may be possible as well.
Each of the lens elements 510, 512 may be formed of any material that can suitably display a projected image or graphic. Each of the lens elements 510, 512 may also be sufficiently transparent to allow a user to see through the lens element. Combining these two features of the lens elements may facilitate an augmented reality or heads-up display where the projected image or graphic is superimposed over a real-world view as perceived by the user through the lens elements.
The extending side-arms 514, 516 may each be projections that extend away from the lens-frames 504, 506, respectively, and may be positioned behind a user's ears to secure the head-mounted device 502 to the user. The extending side-arms 514, 516 may further secure the head-mounted device 502 to the user by extending around a rear portion of the user's head. Additionally or alternatively, for example, the HMD 502 may connect to or be affixed within a head-mounted helmet structure. Other possibilities exist as well.
The HMD 502 may also include an on-board computing system 518, a video camera 520, a sensor 522, and a finger-operable touch pad 524. The on-board computing system 518 is shown to be positioned on the extending side-arm 514 of the head-mounted device 502; however, the on-board computing system 518 may be provided on other parts of the head-mounted device 502 or may be positioned remote from the head-mounted device 502 (e.g., the on-board computing system 518 could be wire- or wirelessly-connected to the head-mounted device 502). The on-board computing system 518 may include a processor and memory, for example. The on-board computing system 518 may be configured to receive and analyze data from the video camera 520 and the finger-operable touch pad 524 (and possibly from other sensory devices, user interfaces, or both) and generate images for output by the lens elements 510 and 512.
The video camera 520 is shown positioned on the extending side-arm 514 of the head-mounted device 502; however, the video camera 520 may be provided on other parts of the head-mounted device 502. The video camera 520 may be configured to capture images at various resolutions or at different frame rates. Many video cameras with a small form-factor, such as those used in cell phones or webcams, for example, may be incorporated into an example of the HMD 502.
Further, although
The sensor 522 is shown on the extending side-arm 516 of the head-mounted device 502; however, the sensor 522 may be positioned on other parts of the head-mounted device 502. The sensor 522 may include one or more of a gyroscope or an accelerometer, for example. Other sensing devices may be included within, or in addition to, the sensor 522 or other sensing functions may be performed by the sensor 522.
The finger-operable touch pad 524 is shown on the extending side-arm 514 of the head-mounted device 502. However, the finger-operable touch pad 524 may be positioned on other parts of the head-mounted device 502. Also, more than one finger-operable touch pad may be present on the head-mounted device 502. The finger-operable touch pad 524 may be used by a user to input commands. The finger-operable touch pad 524 may sense at least one of a position and a movement of a finger via capacitive sensing, resistance sensing, or a surface acoustic wave process, among other possibilities. The finger-operable touch pad 524 may be capable of sensing finger movement in a direction parallel or planar to the pad surface, in a direction normal to the pad surface, or both, and may also be capable of sensing a level of pressure applied to the pad surface. The finger-operable touch pad 524 may be formed of one or more translucent or transparent insulating layers and one or more translucent or transparent conducting layers. Edges of the finger-operable touch pad 524 may be formed to have a raised, indented, or roughened surface, so as to provide tactile feedback to a user when the user's finger reaches the edge, or other area, of the finger-operable touch pad 524. If more than one finger-operable touch pad is present, each finger-operable touch pad may be operated independently, and may provide a different function.
The lens elements 510, 512 may act as a combiner in a light projection system and may include a coating that reflects the light projected onto them from the projectors 528, 532. In some embodiments, a reflective coating may not be used (e.g., when the projectors 528, 532 are scanning laser devices).
In alternative embodiments, other types of display elements may also be used. For example, the lens elements 510, 512 themselves may include: a transparent or semi-transparent matrix display, such as an electroluminescent display or a liquid crystal display; one or more waveguides for delivering an image to the user's eyes; or other optical elements capable of delivering an in-focus near-to-eye image to the user. A corresponding display driver may be disposed within the frame elements 504, 506 for driving such a matrix display. Alternatively or additionally, a laser or LED source and scanning system could be used to draw a raster display directly onto the retina of one or more of the user's eyes. Other possibilities exist as well.
As shown in
The HMD 572 may include a single lens element 580 that may be coupled to one of the side-arms 573 or the center frame support 574. The lens element 580 may include a display such as the display described with reference to
Thus, the device 610 may include a display system 612 comprising a processor 614 and a display 616. The display 616 may be, for example, an optical see-through display, an optical see-around display, or a video see-through display. The processor 614 may receive data from the remote device 630, and configure the data for display on the display 616. The processor 614 may be any type of processor, such as a micro-processor or a digital signal processor, for example.
The device 610 may further include on-board data storage, such as memory 618 coupled to the processor 614. The memory 618 may store software that can be accessed and executed by the processor 614, for example.
The remote device 630 may be any type of computing device or transmitter, including a laptop computer, a mobile telephone, or a tablet computing device, etc., that is configured to transmit data to the device 610. The remote device 630 and the device 610 may contain hardware to enable the communication link 620, such as processors, transmitters, receivers, antennas, etc.
In
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.