Continually gathering large amounts of data to understand a user's environment via a variety of sensors can enhance mixed reality experiences and/or improve the accuracy of directions or spatial information within an environment. Current wearable devices may have limited functionality. Some wearable devices may be limited to basic audio and video capture, without the ability to process the information on the device. Other wearable devices may require stereo input to produce spatial information about the user's environment, which may make the devices prohibitively expensive.
In at least one implementation, the disclosed technology provides a spatial output device that includes two electronics enclosures electrically connected by a flexible electronic connector. The two electronics enclosures are weighted to maintain a balanced position of the flexible electronic connector against a support. The spatial output device has at least one input sensor affixed to one of the two electronics enclosures and an onboard processor affixed to one of the two electronics enclosures. The input sensor is configured to receive monocular input. The onboard processor is configured to process the monocular input to generate a spatial output, where the spatial output provides at least two-dimensional information.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Other implementations are also described and recited herein.
A camera on the spatial output device 100 has a field of view indicated by broken lines 112 and 114. The camera on the spatial output device 100 continuously captures data about objects within its field of view. For example, in
The onboard processor continuously receives data from the camera and processes that data to generate spatial output. The onboard processor integrates the spatial output with an existing map or other spatial output data, so that the map develops over time. The map may include information about a particular space (e.g., a room, warehouse, or building), such as the location of walls, doors, and other physical features in the space, objects in the space, and the location of the spatial output device 100 within the space. Similarly, the spatial output used to develop the map may include data about the location of physical features in a space, objects in the space, or the location of the spatial output device 100 relative to those physical features or objects. In some implementations, the map is stored on the spatial output device 100 for easy reference by the spatial output device 100. In another implementation, the map is uploaded from the spatial output device 100 to a remote computing location through a wireless (e.g., Wi-Fi) or wired connection on the spatial output device 100. The remote computing location may be, for example, the cloud or an external server.
When the map is uploaded to a remote computing location, the map may be shared between the spatial output device 100 and other spatial output devices (not shown) to generate a shared map. The shared map may further include information about the location of each of the spatial output devices relative to each other. Knowing the relative locations of the spatial output devices can enable communication between the spatial output devices, such as by providing remote multi-dimensional audio.
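By way of example, and not limitation, the map integration and sharing described above may be organized as in the following Python sketch. The names (SpatialMap, Feature, integrate, merge) and the grid-cell map format are illustrative assumptions; the disclosure does not prescribe a particular map representation.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    kind: str   # e.g., "wall", "door", "object"
    x: float    # position in the shared map frame (meters)
    y: float

@dataclass
class SpatialMap:
    cell_size: float = 0.25
    features: dict = field(default_factory=dict)      # (col, row) -> Feature
    device_poses: dict = field(default_factory=dict)  # device_id -> (x, y)

    def _cell(self, x: float, y: float) -> tuple:
        return (int(x // self.cell_size), int(y // self.cell_size))

    def integrate(self, device_id: str, pose: tuple, observations: list) -> None:
        """Fold one batch of spatial output into the developing map."""
        self.device_poses[device_id] = pose
        for obs in observations:
            # A later observation of the same cell overwrites the earlier one.
            self.features[self._cell(obs.x, obs.y)] = obs

    def merge(self, other: "SpatialMap") -> None:
        """Combine a map uploaded by another device into a shared map."""
        self.features.update(other.features)
        self.device_poses.update(other.device_poses)
```

Under this sketch, a remote computing location holding a shared SpatialMap would call merge with each uploaded device map, and the resulting device_poses dictionary captures the relative locations of the devices.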
In some implementations, the user 104 may be able to access the map to receive directions to a particular object or location within the map. For example, the user 104 may leave the position shown in
The spatial output device 100 may guide the user 104 through a pair of spatial output mechanisms, where one spatial output mechanism is affixed to the left electronic enclosure 102, and another spatial output mechanism is affixed to the right electronic enclosure 103. The pair of spatial output mechanisms may be, for example, a pair of open-air speakers or a pair of haptic motors. The pair of spatial output mechanisms may convey directions to the user by, for example, vibrating or beeping to indicate what direction the user should turn. For example, if the spatial output mechanisms are a pair of haptic motors, the haptic motor affixed to the left electronic enclosure 102 may vibrate when the user 104 should turn left and the haptic motor affixed to the right electronic enclosure 103 may vibrate when the user 104 should turn right. Other combinations of vibrations or sounds may direct the user to a particular location. In some implementations, such as when headphones are used, the spatial output mechanisms may not be affixed to the left electronic enclosure 102 and the right electronic enclosure 103. For example, when headphones are used for spatial output, the headphones may be connected via an audio jack in the spatial output device 100 or through a wireless connection.
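By way of example, and not limitation, the left/right turn cues may be driven as in the following Python sketch, which assumes a hypothetical drive_motor(side, on) primitive; that primitive, and the ten-degree dead band, are illustrative assumptions rather than disclosed interfaces.

```python
def signal_turn(heading_error_deg: float, drive_motor, deadband_deg: float = 10.0) -> None:
    """Vibrate the enclosure on the side toward which the user should turn."""
    if heading_error_deg < -deadband_deg:
        drive_motor("left", True)     # turn left: left enclosure vibrates
        drive_motor("right", False)
    elif heading_error_deg > deadband_deg:
        drive_motor("left", False)
        drive_motor("right", True)    # turn right: right enclosure vibrates
    else:
        drive_motor("left", False)    # on course: no cue
        drive_motor("right", False)
```

The same dispatch could drive the paired speakers instead, substituting tone emission for vibration.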
The flexible connector 110 allows the spatial output device 100 to hang relative to the ground instead of being in one fixed orientation relative to the chest of the user 104. For example, if the user 104 were bent over closer to the ground, the left electronic enclosure 102 and the right electronic enclosure 103 would still be oriented roughly perpendicular to the ground. Accordingly, a camera affixed to either the left electronic enclosure 102 or the right electronic enclosure 103 has a consistent angle of view whether the user 104 is standing straight up or is bent over.
The left electronic enclosure 102 and the right electronic enclosure 103 are weighted to maintain a balanced position of the flexible electronic connector 110. The flexible electronic connector 110 is in a balanced position when it remains in place on the user 104 and is not sliding to the right or the left of the user 104 based on the weight of the left electronic enclosure 102 or the right electronic enclosure 103. To maintain the balanced position of the flexible electronic connector 110, the left electronic enclosure 102 and the right electronic enclosure 103 are substantially the same weight. The left electronic enclosure 102 may have components that are the same weight as components in the right electronic enclosure 103. In other implementations, weights or weighted materials may be used so that the left electronic enclosure 102 and the right electronic enclosure 103 are substantially the same weight.
In some implementations, the flexible electronic connector 110 may include an adjustable section. The adjustable section may allow the user 104 to adjust the length of the flexible electronic connector for the comfort of the user 104 or to better align the left electronic enclosure 102 and the right electronic enclosure 103 based on the height and build of the user 104. The flexible electronic connector 110 may also include additional sensors, such as heart rate or other biofeedback sensors, to obtain data about the user 104.
In some implementations, the spatial output device 100 may also be a spatial input device. For example, the spatial output device 100 may also receive spatial audio through a microphone located on the left electronic enclosure 102 or the right electronic enclosure 103.
In the spatial output device 200 of
The left enclosure 202 further includes a battery 216, a charger 218, and a camera 220. The charger 218 charges the battery 216 and may have a charging input or may charge the battery through proximity charging. The battery 216 may be any type of battery suitable to power the spatial output device 200. The battery 216 powers electronics in both the left enclosure 202 and the right enclosure 204 through electrical connections that are part of the flexible connector 206.
The camera 220 provides a wide field of view through the use of a wide-angle or fish-eye lens, although other lenses may be employed. The camera 220 is a monocular camera and is angled to provide a wide field of view. The angle of the camera 220 may change depending on the anatomy of the user of the spatial output device 200. For example, the camera 220 may be at one angle for a user with a fairly flat chest and at a different angle for a user with a fuller chest. In some implementations, the user may adjust the camera 220 manually to achieve a good angle for a wide field of view. In other implementations, the spatial output device 200 may automatically adjust the camera 220 when a new user uses the spatial output device 200. For example, in one implementation, the spatial output device 200 may sense the angle of a new user's chest and adjust the angle of the camera 220 accordingly. In another implementation, the spatial output device may be able to recognize different users through, for example, a fingerprint sensor or an identifying sensor, where each user pre-sets an associated angle of the camera 220.
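By way of example, and not limitation, the automatic adjustment may take the form of a simple proportional correction, as in the Python sketch below; the read_chest_pitch_deg and set_camera_pitch callables are hypothetical stand-ins for whatever tilt sensor and camera actuator a particular implementation provides.

```python
def level_camera(read_chest_pitch_deg, set_camera_pitch,
                 target_pitch_deg: float = 0.0,
                 tolerance_deg: float = 1.0, gain: float = 0.5) -> None:
    """Nudge the camera toward a target pitch to preserve a wide, level view."""
    error = target_pitch_deg - read_chest_pitch_deg()
    if abs(error) > tolerance_deg:
        set_camera_pitch(gain * error)  # apply a relative proportional correction
```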
The right enclosure 204 further includes a processor 222 with memory 224 and an IMU 226. The processor 222 provides onboard processing for the spatial output device 200. The processor 222 may include a connection to a communication network (e.g., a cellular network or Wi-Fi network). The memory 224 on the processor 222 may store information relating to the spatial output device 200, including, without limitation, a shared map of a physical space, user settings, and user data. The processor 222 may additionally perform calculations to provide spatialized output to the user of the spatial output device 200. The IMU 226 provides information about the movement of the spatial output device 200 in each dimension.
The processor 222 receives data from the camera 220 of the spatial output device. The processor 222 processes the data received from the camera 220 to generate spatial output. In some implementations, the information provided by the IMU 226 may assist the spatial output device 200 in processing input from the monocular camera 220 to obtain spatial output. For example, in one implementation, the spatial output may be calculated by the processor using simultaneous localization and mapping (SLAM), where the IMU provides the processor with data about the acceleration and the orientation of the camera 220 on the spatial output device 200.
The processor 222 may continuously receive data from the camera 220 and continuously process that data to generate spatial output. The spatial output may provide information about a particular space (e.g., a room, warehouse, or building), such as the location of walls, doors, and other physical features in the space, objects in the space, and the location of the spatial output device 200 within the space. The continually generated spatial output may be used by the processor 222 to build a map of a physical space. The map may include data about the location of physical features in a space, objects in the space, or the location of the spatial output device 200 relative to those physical features or objects. The map may be used by the processor 222 to, for example, guide a user to a particular location in the space using the haptic motor 210 and the haptic motor 214.
In some implementations, the map is stored in the memory 224 of the processor 222 for easy reference by the spatial output device 200. In another implementation, the map is uploaded from the spatial output device 200 to a remote computing location through a wireless (e.g., Wi-Fi) or wired connection on the spatial output device 200. The remote computing location may be, for example, the cloud or an external server. When the map is uploaded to a remote computing location, it may be combined with maps from other spatial output devices operating in the same space to create a more detailed shared map of the space. The shared map may be accessible by all of the spatial output devices operating in the space. The shared map may be used to enable communication between multiple spatial output devices, such as by providing remote multi-dimensional audio.
The spatial output device 200 may guide a user to a particular location on the map using pairs of spatial output mechanisms. The spatial output mechanisms may be, for example, the speaker 208 and the speaker 212 or the haptic motor 210 and the haptic motor 214. In an example implementation, a user is guided to a location on the map using the speaker 208 and the speaker 212. The speaker 208 may emit a tone when the directions indicate that the user should turn right. Similarly, the speaker 212 may emit a tone when the directions indicate that the user should turn left. The speaker 208 and the speaker 212 may emit other tones signaling other information to the user. For example, the speaker 208 and the speaker 212 may emit a combined tone when the user reaches the location.
In some implementations, the spatial output device 200 may include sensors to allow the spatial output device 200 to distinguish between users. For example, the spatial output device 200 may include a fingerprint sensor. The spatial output device 200 may maintain multiple user profiles associated with the fingerprints of multiple users. When a new user wishes to log in to the spatial output device 200, the new user may do so by providing a fingerprint. Other sensors may be used for the same purpose, such as, without limitation, a camera for facial recognition or a microphone that can distinguish between the voices of multiple users.
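By way of example, and not limitation, the per-user profiles may be keyed by an enrolled fingerprint identifier, as in the Python sketch below. The fingerprint matcher is hardware-specific and is assumed here to yield an opaque identifier string; storing a pre-set camera angle in the profile follows the per-user camera angles described earlier.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    camera_angle_deg: float  # pre-set camera angle associated with this user

profiles: dict[str, UserProfile] = {}

def enroll(fingerprint_id: str, name: str, camera_angle_deg: float) -> None:
    """Register a new user's profile against an enrolled fingerprint."""
    profiles[fingerprint_id] = UserProfile(name, camera_angle_deg)

def log_in(fingerprint_id: str) -> UserProfile | None:
    """Return the matching profile, or None if the fingerprint is unknown."""
    return profiles.get(fingerprint_id)
```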
The spatial output device 200 may include additional electronic components in either the left electronic enclosure 202 or the right electronic enclosure 204. For example, the spatial output device 200 may include, without limitation, biometric sensors, beacons for communication with external sensors placed in a physical space, and user input components, such as buttons, switches, or touch sensors.
An affixing operation 304 affixes at least one power source to at least one of the two hanging electronics enclosures. In one implementation, the power source is located in one of the two electronics enclosures and is connected to the other electronics enclosure via the flexible electronic connector.
A connecting operation 306 connects at least one input sensor to the power source. The input sensor is affixed to one of the two hanging electronics enclosures and receives a monocular input. In one implementation, the input sensor is a monocular camera.
A second connecting operation 308 connects an onboard processor to the at least one power source. The onboard processor processes the monocular input to generate a spatialized output. In some implementations, the monocular input may be processed along with information from other sensors on the spatial output device, such as IMUs, to generate the spatialized output. For example, in one implementation, the spatial output may be calculated by the processor using simultaneous localization and mapping (SLAM), where the IMU provides the processor with data about the acceleration of the camera on the spatial output device. The acceleration data provided by the IMU can be used to calculate the distance the camera travels between two images of the same reference point.
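By way of example, and not limitation, the distance calculation may be illustrated with the following simplified Python sketch, which double-integrates acceleration samples to estimate the camera baseline between two frames and then triangulates the depth of a reference point seen at two bearings. A real SLAM pipeline must also handle IMU bias, noise, and full three-dimensional orientation, which this one-dimensional sketch assumes away.

```python
import math

def camera_baseline(accels: list[float], dt: float, v0: float = 0.0) -> float:
    """Distance the camera travels between two frames, from IMU accelerations."""
    velocity, distance = v0, 0.0
    for a in accels:
        velocity += a * dt         # first integration: acceleration -> velocity
        distance += velocity * dt  # second integration: velocity -> distance
    return distance

def triangulate_depth(baseline_m: float, bearing1_rad: float, bearing2_rad: float) -> float:
    """Depth of a reference point seen at two bearings from a moving camera."""
    denom = math.tan(bearing1_rad) - math.tan(bearing2_rad)
    if abs(denom) < 1e-9:
        raise ValueError("bearings too similar to triangulate")
    return baseline_m / denom
```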
In one implementation, the monocular input is processed to generate a spatialized output by building graphs for the sensors on the spatial output device. A sensor graph is built for each sensor of the spatial output device that will be used to provide data. A node is added to the sensor graph each time the sensor reports that it has substantially new data. Edges created between the newly added node and the previous node represent the spatial transformation between the nodes, as well as the intrinsic error reported by the sensor. A meta-graph is also built, and a new node is added to the meta-graph whenever a new node is added to a sensor graph. A node added to the meta-graph is called a spatial print. When a spatial print is created, each sensor graph is queried, and edges are created from the spatial print to the most current node of each sensor graph with data available. Accordingly, the meta-graph contains a trail of nodes representing a history of measured locations. As new data is added to the meta-graph, the error value of each edge is analyzed, and the estimated position of each of the previous nodes is adjusted to minimize total error. Any type of sensor may act as an input to the system, including, without limitation, fiducial tag tracking with a camera, object or feature recognition with a camera, GPS, Wi-Fi fingerprinting, and sound source localization.
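By way of example, and not limitation, the graph bookkeeping described above may be structured as in the following Python sketch. The class names, the two-dimensional position estimates, and the placeholder transform and error values on the spatial-print edges are illustrative assumptions, and the final error-minimizing adjustment, which amounts to a pose-graph optimization, is noted only as a comment.

```python
import itertools
from dataclasses import dataclass, field

_node_ids = itertools.count()

@dataclass
class Node:
    node_id: int
    timestamp: float
    estimate: tuple   # estimated (x, y) position of the device

@dataclass
class Edge:
    src: int
    dst: int
    transform: tuple  # relative (dx, dy) between nodes
    error: float      # intrinsic error reported by the sensor

@dataclass
class SensorGraph:
    name: str
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def report(self, timestamp: float, estimate: tuple,
               transform: tuple, error: float) -> Node:
        """Add a node when this sensor reports substantially new data."""
        node = Node(next(_node_ids), timestamp, estimate)
        if self.nodes:  # edge from the previous node to the newly added node
            self.edges.append(Edge(self.nodes[-1].node_id, node.node_id,
                                   transform, error))
        self.nodes.append(node)
        return node

class MetaGraph:
    """Trail of spatial prints, each linked to the newest node per sensor."""

    def __init__(self, sensor_graphs: list):
        self.sensor_graphs = sensor_graphs
        self.spatial_prints: list = []
        self.edges: list = []

    def add_spatial_print(self, timestamp: float, estimate: tuple) -> Node:
        sp = Node(next(_node_ids), timestamp, estimate)
        for graph in self.sensor_graphs:
            if graph.nodes:  # link to the most current node with data available
                self.edges.append(Edge(sp.node_id, graph.nodes[-1].node_id,
                                       transform=(0.0, 0.0), error=1.0))
        self.spatial_prints.append(sp)
        # A full implementation would now adjust the estimated positions of
        # previous nodes to minimize total edge error (pose-graph optimization).
        return sp
```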
The onboard processor also outputs the spatial output. In some implementations, the spatial output may be output to memory on the processor of the spatial output device. In other implementations, the spatial output may be output to a remote computing location (e.g., the cloud or an external server) via a communicative connection between the spatial output device and the remote computing location (e.g., Wi-Fi, cellular network, or other wireless connection).
In some implementations, one or more tangible processor-readable storage media are embodied with instructions for executing on one or more processors and circuits of a computing device a process including processing the monocular input to generate a spatial output or outputting the spatial output. The one or more tangible processor-readable storage media may be part of a computing device.
The computing device may include a variety of tangible processor-readable storage media and intangible processor-readable communication signals. Tangible processor-readable storage can be embodied by any available media that can be accessed by the computing device and includes both volatile and nonvolatile storage media, removable and non-removable storage media. Tangible processor-readable storage media excludes intangible communications signals and includes volatile and nonvolatile, removable and non-removable storage media implemented in any method or technology for storage of information such as processor-readable instructions, data structures, program modules or other data. Tangible processor-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information and which can be accessed by the computing device. In contrast to tangible processor-readable storage media, intangible processor-readable communication signals may embody processor-readable instructions, data structures, program modules or other data resident in a modulated data signal, such as a carrier wave or other signal transport mechanism. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, intangible communication signals include signals traveling through wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media.
An example spatial output device is provided. The spatial output device includes a flexible electronic connector configured to slidably hang from a support and two electronics enclosures electrically connected by the flexible electronic connector. Each electronics enclosure is weighted relative to the other electronics enclosure to maintain a balanced position hanging from the flexible electronic connector. The spatial output device further includes at least one power source affixed to at least one of the two hanging electronics enclosures and at least one input sensor affixed to at least one of the two hanging electronics enclosures and powered by the at least one power source, the at least one input sensor being configured to receive a monocular input. The spatial output device further includes one or more onboard processors affixed to at least one of the two hanging electronics enclosures and powered by the at least one power source, the one or more onboard processors configured to process the monocular input received from the at least one input sensor to generate a spatial output providing at least two-dimensional information.
A spatial output device of any previous spatial output device is provided, where the one or more onboard processors is further configured to transmit the spatial output to a remote computing location.
A spatial output device of any previous spatial output device is provided, where the support from which the flexible connector hangs includes a neck of the user and the at least one input sensor includes at least one biometric input sensor, the at least one biometric input sensor being configured to determine the identity of the user of the spatial output device.
A spatial output device of any previous spatial output device further includes one or more processor-readable storage media devices, where the one or more onboard processors is further configured to integrate the spatial output into a digital map representation stored in the one or more processor-readable storage media devices.
A spatial output device of any previous spatial output device is provided, where the one or more onboard processors is further configured to output directional information directed to a location on the digital map representation through one or more spatial output components.
A spatial output device of any previous spatial output device is provided, where the one or more spatial output components includes one of a speaker or headphones.
A spatial output device of any previous spatial output device is provided, where the one or more spatial output components include a haptic motor.
A spatial output device of any previous spatial output device is provided, where the at least one input sensor includes an inertial measurement unit (IMU), the IMU configured to provide acceleration data and orientation data to the one or more onboard processors.
A spatial output device of any previous spatial output device is provided, where the one or more onboard processors is further configured to process the monocular input to generate the spatial output using the acceleration data and the orientation data provided by the IMU.
An example spatial sensing and processing method includes electrically connecting two electronics enclosures by a flexible electronic connector, the two electronics enclosures hanging from the flexible electronic connector, the flexible electronic connector being configured to slidably hang from a support, each of the two electronics enclosures being weighted relative to the other electronics enclosure to maintain a balanced position hanging from the flexible electronic connector and the support. The method further includes affixing at least one power source to at least one of the two hanging electronics enclosures and connecting at least one input sensor to the at least one power source, the at least one input sensor being affixed to at least one of the two hanging electronics enclosures to receive a monocular input. The method further includes connecting an onboard processor to the at least one power source, the onboard processor being affixed to at least one of the two hanging electronics enclosures, the onboard processor being configured to process the monocular input received from the at least one input sensor to generate a spatial output providing at least two-dimensional information.
An example method of any previous method is provided, where the spatial output is output to a remote computing location.
An example method of any previous method is provided, where the onboard processor is further configured to integrate the spatial output with a map.
An example method of any previous method is provided, where the onboard processor is further configured to provide directions to a location on the map through a pair of spatial output mechanisms located on each of the two electronics enclosures.
An example method of any previous method is provided, where the pair of spatial output mechanisms are one of speakers or headphones.
An example method of any previous method is provided, where the pair of spatial output mechanisms are haptic motors.
An example method of any previous method further includes connecting an inertial measurement unit (IMU) to the at least one power source, the IMU being configured to provide acceleration data and orientation data to the onboard processor.
An example method of any previous method is provided, where the onboard processor is further configured to process the monocular input to generate the spatial output using the acceleration data and the orientation data provided by the IMU.
An example system includes means for electrically connecting two electronics enclosures by a flexible electronic connector, the two electronics enclosures hanging from the flexible electronic connector, the flexible electronic connector being configured to slidably hang from a support, each of the two electronics enclosures being weighted relative to the other electronics enclosure to maintain a balanced position hanging from the flexible electronic connector and the support. The system further includes means for affixing at least one power source to at least one of the two hanging electronics enclosures and means for connecting at least one input sensor to the at least one power source, the at least one input sensor being affixed to at least one of the two hanging electronics enclosures to receive a monocular input. The system further includes means for connecting an onboard processor to the at least one power source, the onboard processor being affixed to at least one of the two hanging electronics enclosures, the onboard processor being configured to process the monocular input received from the at least one input sensor to generate a spatial output providing at least two-dimensional information.
An example system of any preceding system is provided, where the spatial output is output to a remote computing location.
An example system of any preceding system is provided, where the onboard processor is further configured to integrate the spatial output with a map.
An example system of any preceding system is provided, where the onboard processor is further configured to provide directions to a location on the map through a pair of spatial output mechanisms located on each of the two electronics enclosures.
An example system of any preceding system is provided, where the pair of spatial output mechanisms are one of speakers or headphones.
An example system of any preceding system is provided, where the pair of spatial output mechanisms are haptic motors.
An example system of any preceding system further includes means for connecting an inertial measurement unit (IMU) to the at least one power source, the IMU being configured to provide acceleration data and orientation data to the onboard processor.
An example system of any preceding system is provided, where the onboard processor is further configured to process the monocular input to generate the spatial output using the acceleration data and the orientation data provided by the IMU.
An example spatial output device includes a flexible electronic connector configured to slidably hang from a support and two electronics enclosures electrically connected by the flexible electronic connector, each electronics enclosure being weighted relative to the other electronics enclosure to maintain a balanced position hanging from the flexible electronic connector. The spatial output device further includes at least one power source affixed to at least one of the two hanging electronics enclosures and at least one input sensor affixed to at least one of the two hanging electronics enclosures and powered by the at least one power source, the at least one input sensor being configured to receive a monocular input.
A spatial output device of any previous spatial output device further includes one or more onboard processors affixed to at least one of the two hanging electronics enclosures and powered by the at least one power source, the one or more onboard processors configured to process the monocular input received from the at least one input sensor to generate a spatial output providing at least two-dimensional information.
A spatial output device of any previous spatial output device further includes one or more processor-readable storage media devices, wherein the one or more onboard processors is further configured to integrate the spatial output into a digital map representation stored in the one or more processor-readable storage media devices.
Some implementations may comprise an article of manufacture. An article of manufacture may comprise a tangible storage medium to store logic. Examples of a storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, operation segments, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. In one implementation, for example, an article of manufacture may store executable computer program instructions that, when executed by a computer, cause the computer to perform methods and/or operations in accordance with the described embodiments. The executable computer program instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The executable computer program instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a computer to perform a certain operation segment. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
The implementations described herein are implemented as logical steps in one or more computer systems. The logical operations may be implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit modules within one or more computer systems. The implementation is a matter of choice, dependent on the performance requirements of the computer system being utilized. Accordingly, the logical operations making up the implementations described herein are referred to variously as operations, steps, objects, or modules. Furthermore, it should be understood that logical operations may be performed in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
The present application is related to U.S. patent application Ser. No. ______ [Docket No. 404005-US-NP], entitled “Remote Multi-Dimensional Audio,” which is filed concurrently herewith and is specifically incorporated by reference for all that it discloses and teaches.