Many home theater systems provide users with the opportunity to calibrate the speakers of the home theater system to provide optimum sound quality at a particular location. For example, if a user has a favorite seat on the couch in the family room, the home theater system can be calibrated to provide optimum sound quality for anyone sitting in that particular seat on the couch. However, because the sound is only optimized for a single location, the sound is not optimized at other locations within the room. Furthermore, it is typically a tedious process to optimize the sound quality for a particular location, making it undesirable to frequently modify the sound optimization.
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features.
This disclosure describes systems and techniques for an environment detection node (EDN) that implements some or all of an automated dynamic audio optimization system. The EDN includes one or more sensors—such as image capturing sensors, heat sensors, motion sensors, auditory sensors, and so forth—that capture data from an environment, such as a room, hall, yard, or other indoor or outdoor area. The EDN monitors the environment and detects characteristics of the environment, including physical characteristics such as floor, ceiling, and wall surfaces, as well as the presence and locations of humans and/or furniture in the environment, based on the captured data, such as by recognizing an object and/or distinguishing a particular object from other objects in the environment. The characteristics of an object—such as size, structure, shape, movement patterns, color, noise, facial features, voice signatures, heat signatures, gait patterns, and so forth—are determined from the captured data. Based on the locations of the recognized objects and/or humans, the EDN determines an optimized target location. The optimized target location is determined such that when audio output is optimized for the target location, the audio output is optimized for the detected objects and/or humans within the environment. As humans move about within the environment, the optimized target location may be adjusted so that the audio output remains optimized for the humans within the environment.
In response to determining an optimized target location, the EDN causes audio output to be optimized for the target location. Audio may be output through the EDN or audio may be output through a separate device (e.g., a home theater system) that is communicatively connected to the EDN. In an example implementation, the optimized target location may be determined initially based on furniture locations (or locations of other inanimate objects) in the environment, and may then be dynamically modified as humans enter, move about, and/or leave the environment. Furthermore, the optimized target location may be dynamically modified based on user preferences associated with specific humans identified within the environment and/or based on audio content that is currently being output. In addition to dynamically optimizing the audio output based on a determined target location, the EDN may also determine one of multiple available audio output devices to output the audio and may adjust the audio output (e.g., equalizer values) based on audio characteristics of the environment, the audio content, and/or user profiles associated with humans identified within the environment.
For example, based on detected floor, ceiling, and wall surfaces and/or based on sound detected in the environment, the EDN may determine audio characteristics of the room. For instance, a room with tile floor and walls (e.g., a bathroom) may exhibit more echo than a room with plaster walls and a carpeted floor. Detected audio characteristics include but are not limited to levels of echo, reverb, brightness, background noise, and so on.
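By way of illustration, one conventional way to quantify such a characteristic is to estimate the room's reverberation time (RT60) from a captured impulse response using Schroeder backward integration. The following Python sketch is a simplified assumption of how the EDN might do so; the function name, fitting thresholds, and synthetic test signal are hypothetical and not part of the disclosure.

```python
import numpy as np

def estimate_rt60(impulse_response: np.ndarray, sample_rate: int) -> float:
    """Estimate reverberation time (RT60) via Schroeder backward integration."""
    # Energy decay curve: cumulative energy summed from the tail backwards.
    energy = impulse_response.astype(np.float64) ** 2
    edc = np.cumsum(energy[::-1])[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)

    # Fit the -5 dB to -25 dB portion of the decay, extrapolate to -60 dB.
    idx = np.where((edc_db <= -5.0) & (edc_db >= -25.0))[0]
    slope, _ = np.polyfit(idx / sample_rate, edc_db[idx], 1)  # dB per second
    return -60.0 / slope

# Synthetic check: one second of noise with a ~0.4 s exponential energy decay.
fs = 48_000
t = np.arange(fs) / fs
ir = np.random.randn(fs) * np.exp(-6.9 * t / 0.4)
print(f"estimated RT60: {estimate_rt60(ir, fs):.2f} s")  # ~0.40
```

A long RT60 suggests a reflective room (e.g., tile surfaces); a short one suggests a damped, furnished room.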
Illustrative Environment
In addition, the EDN 102 may implement all or part of a dynamic audio optimization system. To do so, the EDN 102 scans the environment 100 to determine characteristics of the environment, including the presence of any objects, such as a chair 106 and/or a human 108 within the environment 100. The EDN 102 may keep track of the objects within the environment 100 and monitor the environment for objects that are newly introduced or objects that are removed from the environment. Based on the objects that are identified at any given time, the EDN 102 determines an optimized target location and optimizes audio output from speakers 110 based on the determined target location. That is, the EDN 102 may alter settings associated with the audio output to optimize the sound at that location. This may include selecting one or more speakers to turn on or off, adjusting settings of the speakers, adjusting the physical position (e.g., via motors) of one or more of the speakers, and the like.
For example, the EDN 102 may first identify furniture locations within the environment 100 by identifying the chair 106. Because the chair 106 is the only furniture that provides seating, the location of the chair may be identified as the optimized target location within the environment 100. In an alternate environment that includes multiple seating locations (e.g., a couch and a chair), an average location based on each seating location may be selected as the optimized target location, or an optimized target location may be selected based on the locations of users within the environment. For instance, if a user is in a first chair but a second chair is unoccupied, then the EDN 102 may optimize the sound at the location of the first chair. If users are sitting in both the first and second chairs, however, then the EDN 102 may select a location in the middle of the chairs as the location at which to optimize the sound.
In another example, when the EDN 102 identifies the presence of the human 108 and determines that no one is sitting in the chair 106, the optimized target location may be dynamically adjusted to the location of the human 108 rather than the location of the furniture. In an example implementation, when multiple humans are identified within the environment 100, an average location based on the locations of each of the identified humans may be determined to be the optimized target location. Similarly, as one or more humans move about within the environment, the optimized target location may be dynamically modified.
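One possible realization of this target-selection logic, sketched in Python below, gives detected humans precedence over seating and averages multiple locations into a centroid. The class, function names, and two-dimensional coordinate model are illustrative assumptions only.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class DetectedObject:
    kind: str   # e.g., "chair", "couch", "human"
    x: float    # position within the room, in metres
    y: float

def optimized_target_location(objects: list[DetectedObject]) -> tuple[float, float]:
    """Prefer the centroid of detected humans; fall back to seating locations."""
    humans = [o for o in objects if o.kind == "human"]
    seats = [o for o in objects if o.kind in ("chair", "couch", "recliner")]
    anchors = humans or seats
    if not anchors:
        raise ValueError("no humans or furniture detected")
    return (mean(o.x for o in anchors), mean(o.y for o in anchors))

room = [DetectedObject("chair", 2.0, 3.0)]
print(optimized_target_location(room))           # (2.0, 3.0): furniture default
room.append(DetectedObject("human", 4.0, 1.0))
print(optimized_target_location(room))           # (4.0, 1.0): human takes precedence
```

Re-running the function as detections change yields the dynamic modification described above.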
After identifying the optimized target location, the EDN 102 adjusts audio output based, at least in part, on the determined target location. For example, the EDN 102 adjusts equalizer values (e.g., treble and bass), volume, sound delay, speaker positions, and so on for each of multiple speakers 110 so that the sound quality is optimum at the determined target location.
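The per-speaker adjustments can be illustrated with a simple free-field time-alignment model: nearer speakers are delayed so that every wavefront arrives at the target simultaneously, and attenuated so that all arrive at the same level. The sketch below assumes idealized point-source speakers, ignores reflections, and uses hypothetical names.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 °C

def align_speakers(speakers: dict[str, tuple[float, float]],
                   target: tuple[float, float]) -> dict[str, dict[str, float]]:
    """Compute per-speaker delay and gain for coherent arrival at the target."""
    distances = {name: math.dist(pos, target) for name, pos in speakers.items()}
    farthest = max(distances.values())
    settings = {}
    for name, d in distances.items():
        settings[name] = {
            # Delay nearer speakers so all wavefronts arrive together.
            "delay_ms": (farthest - d) / SPEED_OF_SOUND * 1000.0,
            # Attenuate nearer speakers per the inverse-distance law.
            "gain_db": 20.0 * math.log10(d / farthest),
        }
    return settings

layout = {"front_left": (0.0, 0.0), "front_right": (4.0, 0.0)}
print(align_speakers(layout, target=(1.0, 3.0)))
```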
If the optimized target location is based on the detection of a particular human, the sound quality may also be adjusted based on a user profile associated with the particular human. For example, in a family setting, a teenage boy may prefer an audio adjustment that includes more bass, while a mother may prefer an audio adjustment with less bass. In some instances, the EDN may include sensors (e.g., a camera, a microphone) to identify users based on facial recognition techniques, audio recognition techniques, and/or the like.
The EDN may also adjust the sound quality based, at least in part, on the audio content that is being output. For example, the EDN may use different adjustments for televised sporting events, classical music, action movies, children's television programs, or any other genre of audio output.
As illustrated, the EDN 102 comprises a computing device 112, one or more speakers 110, a projector 114, and one or more sensor(s) 116. Some or all of the computing device 112 may reside within a housing of the EDN 102 or may reside at another location that is operatively connected to the EDN 102. For example, the speakers 110 may be controlled by a home theater system separate from the EDN 102. The computing device 112 comprises one or more processor(s) 118, an input/output interface 120, and storage media 122. The processor(s) 118 may be configured to execute instructions that may be stored in the storage media 122 or in other storage media accessible to the processor(s) 118.
The input/output interface 120, meanwhile, may be configured to couple the computing device 112 to other components of the EDN 102, such as the projector 114, the sensor(s) 116, other EDNs 102 (such as in other environments or in the environment 100), other computing devices, network communication devices (such as modems, routers, and wireless transmitters), a home theater system, and so forth. The coupling between the computing device 112 and other devices may be via wire, fiber optic cable, wireless connection, or the like. The sensors may include, in various embodiments, one or more image sensors such as cameras (including motion cameras, still cameras, and RGB cameras) and time-of-flight (ToF) sensors; audio sensors such as microphones and ultrasound transducers; heat sensors; motion detectors (including infrared imaging devices); depth sensing cameras; weight sensors; touch sensors; tactile output devices; olfactory sensors; temperature sensors; humidity sensors; and pressure sensors. Other sensor types may be utilized without departing from the scope of the present disclosure.
The storage media 122, meanwhile, may include computer-readable storage media (“CRSM”). The CRSM may be any available physical media accessible by a computing device to implement the instructions stored thereon. CRSM may include, but is not limited to, random access memory (“RAM”), read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), flash memory or other memory technology, compact disk read-only memory (“CD-ROM”), digital versatile disks (“DVD”) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing device 112. The storage media 122 may reside within a housing of the EDN, on one or more storage devices accessible on a local network, on cloud storage accessible via a wide area network, or in any other accessible location.
The storage media 122 may store several modules, such as instructions, datastores, and so forth that are configured to execute on the processor(s) 118. For instance, the storage media 122 may store an operating system module 124, an interface module 126, a detection module 128, a characteristics datastore 130, an authentication module 132, a target location module 134, an audio adjustment module 136, and an audio profiles datastore 138.
The operating system module 124 may be configured to manage hardware and services within and coupled to the computing device 112 for the benefit of other modules. The interface module 126, meanwhile, may be configured to receive and interpret commands received from users within the environment 100. For instance, the interface module 126 may analyze and parse images captured by one or more cameras of the sensor(s) 116 to identify objects and users within the environment 100 and to identify gestures made by users within the environment 100, such as gesture commands to project display content. In other instances, the interface module 126 identifies commands audibly issued by users within the environment and captured by one or more microphones of the sensor(s) 116. In still other instances, the interface module 126 allows users to interface and interact with the EDN 102 in any way, such as via physical controls, and the like.
The detection module 128, meanwhile, receives data from the sensor(s) 116, which may be continuously or periodically monitoring the environment 100 by capturing data from the environment. For example, the detection module 128 may receive video or still images, audio data, infrared images, and so forth. The detection module 128 may receive data from active sensors, such as ultrasonic, microwave, radar, light detection and ranging (LIDAR) sensors, and the like. From the perspective of a human 108 within the environment, the sensing of the data from the environment may be passive or may involve some amount of interaction with the sensor(s) 116. For example, a person may interact with a fingerprint scanner, an iris scanner, or a keypad within the environment 100.
The detection module 128 may detect in real-time, or near real-time, the presence of objects, such as the human 108, within the environment 100 based on the received data. This may include detecting motion based on the data received by the detection module 128. For example, the detection module 128 may detect motion in the environment 100, an altered heat signature within the environment 100, vibrations (which may indicate a person walking within the environment 100), sounds (such as people talking), increased/decreased humidity or temperature (which may indicate more or fewer humans within an interior environment), or other data that indicates the presence or movement of an object within the environment 100.
The detection module 128 determines one or more characteristics of identified humans, such as the human 108, using the captured data. As with detection, sensing of the data from the environment used to determine characteristics of the human 108 may either be passive from the perspective of the human 108 or involve interaction by the human 108 with the environment 100. For example, the human 108 may pick up a book and turn to a particular page, the human 108 may tap a code onto a wall or door of the environment 100, or the human 108 may perform one or more gestures. Other interactions may be used without departing from the scope of embodiments.
The characteristics of the human 108 may be usable to determine, or attempt to determine, an identity of the human 108. For example, the characteristics may be facial characteristics captured using one or more images, as described in more detail below. The characteristics may include other biometrics such as gait, mannerisms, audio characteristics such as vocal characteristics, olfactory characteristics, walking vibration patterns, and the like. The detection module 128 attempts to determine the identity of a detected human, such as the human 108, based at least on one or more of the determined characteristics, such as by attempting to match one or more of the determined characteristics to characteristics of known humans in the characteristics datastore 130.
Where the determined characteristics match the known characteristics within a threshold likelihood, such as at least 80%, 90%, 99.9%, or 99.99% likelihood, the detection module 128 determines that a detected human is “known” and identified. For instance, if a system is more than 95% confident that a detected human is a particular human (e.g., the mom in the household), then the detection module 128 may determine that the detected human is known and identified. The detection module 128 may use a combination of characteristics, such as face recognition and vocal characteristics, to identify the human.
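A minimal sketch of this thresholded matching follows, assuming per-characteristic likelihood scores (e.g., from face and voice recognizers) have already been computed upstream; the averaging rule and example data are assumptions.

```python
def identify_human(match_scores: dict[str, dict[str, float]],
                   threshold: float = 0.95) -> str | None:
    """Return the known human whose combined likelihood clears the threshold.

    `match_scores` maps each known human to per-characteristic likelihoods;
    here they are simply averaged, though a real system might weight them.
    """
    best_name, best_score = None, 0.0
    for name, scores in match_scores.items():
        combined = sum(scores.values()) / len(scores)
        if combined > best_score:
            best_name, best_score = name, combined
    return best_name if best_score >= threshold else None

scores = {"mom": {"face": 0.98, "voice": 0.96}, "teen": {"face": 0.41}}
print(identify_human(scores))  # "mom" (combined 0.97 >= 0.95)
```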
The detection module 128 may interact with the authentication module 132 to further authenticate the human, such as by active interaction of the human with the environment 100. For example, the human 108 may perform one or more authentication actions, such as performing a physical gesture, speaking a password, code, or passphrase, providing other voice input, tapping a pattern onto a surface of the environment 100, interacting with a reference object (such as a book, glass, or other item in the environment 100), or engaging in some other physical action that can be used to authenticate the human 108. The authentication module 132 may utilize speech recognition to determine a password, code, or passphrase spoken by the human. The authentication module 132 may extract voice data from audio data (such as from a microphone) to determine a voice signature of the human, and to determine the identity of the human based at least on a comparison of the detected voice signature with known voice signatures of known humans. The authentication module 132 may perform one or more of these actions to authenticate the human, such as by both comparing a voice signature to known voice signatures and listening for a code or password/passphrase. In other examples, the human 108 performs a secret knock on a door; the human 108 picks up a book and opens the book to a specified page in order to be authenticated; or the human 108 picks up an object and places it into a new location within the environment, such as on a bookcase or into his or her pocket, in order to authenticate. Other examples are possible without departing from the scope of embodiments. The authentication module 132 may receive sensor data from the one or more sensor(s) 116 to enable the authentication.
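By way of illustration only, the voice-signature comparison described above might reduce to a cosine-similarity test over embedding vectors. The threshold and the assumption that a speaker-verification front end supplies the vectors are hypothetical.

```python
import numpy as np

def voice_matches(detected: np.ndarray, enrolled: np.ndarray,
                  min_similarity: float = 0.85) -> bool:
    """Compare a detected voice signature against an enrolled one."""
    cosine = float(np.dot(detected, enrolled) /
                   (np.linalg.norm(detected) * np.linalg.norm(enrolled)))
    return cosine >= min_similarity
```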
Authenticating the human may be in addition to, or instead of, determining an identity of the human by the detection module 128.
The target location module 134, meanwhile, is configured to determine an optimized target location based on data that is received through the sensor(s) 116 and analyzed by the detection module 128. For example, based on the data received through the sensor(s) 116, the detection module 128 may determine one or more seating locations based on a furniture configuration, may determine locations of one or more humans within the environment, and/or may identify one or more humans within the environment. Based on any combination of determined seating locations, locations of determined humans, and/or identities of particular determined humans, the target location module 134 determines an optimized target location.
The target location module 134 may initially determine the optimized target location based solely on detected furniture locations, and then, as humans are detected within the environment, the target location module 134 may dynamically adjust the optimized target location based on the locations of the detected and/or identified humans. In an example implementation, the optimized target location based solely on the detected furniture locations may be maintained and used as a default optimized target location, for example, each time the EDN 102 is powered on.
The target location module 134 may use any of multiple techniques to determine the optimized target location. For example, if only a single seating area or a single human is detected in the environment 100, then the target location module 134 may determine the optimized target location to be the location that corresponds to the single detected seating location or human. If multiple seating locations and/or humans are detected, the target location module 134 may determine the optimized target location to be the location that corresponds to a particular one of the detected seating areas or humans. For example, if the detection module 128 detects one adult-size recliner and several child-size chairs within the environment, the target location module 134 may determine the optimized target location to be the location that corresponds to the single adult-size recliner. Similarly, the target location module 134 may determine the optimized target location to be the location that corresponds to a particular detected human (e.g., the only adult in an environment with other detected children).
As another example, the target location module 134 may determine the optimized target location based on locations of multiple detected seating locations and/or humans. For example, the target location module 134 may determine the optimized target location to be an average location based on locations corresponding to the multiple detected seating areas and/or humans.
Once the target location module 134 has determined the optimized target location, the audio adjustment module 136 causes the audio output to be optimized for the target location. For example, the audio adjustment module 136 sends commands to speakers 110 (or to a home theater system that controls the speakers) to adjust, for instance, the volume, bass level, treble level, physical position, and so on, of each speaker so that the audio quality is optimized at the determined target location.
In addition to optimizing the audio output for the determined particular location, the audio adjustment module 136 may also be configured to adjust the audio output based on user preferences. For example, if the detection module 128 or the authentication module 132 identifies a particular human within the environment, an audio profile associated with the particular human may be accessed in audio profiles datastore 138. The audio profile may indicate user preferences, for example, for volume, treble, and bass values. If such a profile exists for an identified human, the audio adjustment module 136 may further adjust the audio output according to the profile data.
Furthermore, the audio adjustment module 136 may also be configured to adjust the audio output based on detected audio characteristics of the environment. For example, if the detection module 128 detects environmental characteristics that affect audio quality (e.g., hard surface walls, small enclosed space, etc.), the audio adjustment module 136 may further adjust the audio output to compensate for the detected audio characteristics of the environment.
Illustrative Processes
At 202, the EDN 102 receives data from sensors 116. For example, the detection module 128 may receive one or more images captured by an image capturing sensor.
At 204, the EDN 102 determines furniture locations within the environment. For example, the detection module 128 may analyze the captured images to identify one or more seating locations based on furniture (e.g., chairs, sofas, etc.) that is depicted in the captured images.
At 206, the EDN 102 determines an optimized target location based on the furniture locations. For example, the detection module 128 may transmit data that identifies one or more seating locations to the target location module 134. The target location module 134 then determines an optimized target location based on the identified seating locations. As an example, if the environment includes only a single seating location (e.g., chair 106 in environment 100), then the target location module 134 may determine that single seating location to be the optimized target location. However, if the environment includes multiple seating locations (e.g., a chair and a sofa), then the target location module 134 may employ one of multiple techniques for determining an optimized target location. In an example implementation, the target location module 134 may select a particular one (e.g., a most centrally located) of the multiple identified seating locations as the optimized target location. Alternatively, the target location module 134 may determine the optimized target location to be an average location based on the locations of the multiple identified seating locations.
In another alternate implementation, the target location module 134 may determine the optimized target location based on a particular seating location that is most frequently used. For example, in a family environment, if the father is most often present when audio is being output in the environment, and the father usually sits in a particular chair, then the target location module 134 may determine the location of that particular chair to be the optimized target location.
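Such a usage-frequency heuristic could be tracked with a simple counter, as in the hypothetical sketch below; the seat identifiers and sampling policy are assumptions.

```python
from collections import Counter

class SeatUsageTracker:
    """Track which seat is occupied while audio plays, to bias the default target."""

    def __init__(self) -> None:
        self._counts: Counter[str] = Counter()

    def record_occupancy(self, seat_id: str) -> None:
        # Called periodically whenever audio is playing and the seat is occupied.
        self._counts[seat_id] += 1

    def preferred_seat(self) -> str | None:
        most = self._counts.most_common(1)
        return most[0][0] if most else None
```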
At 208, the EDN adjusts audio output based on the determined target location. For example, if the speakers 110 are part of the EDN, the EDN adjusts the volume, bass, and treble of each speaker to optimize the sound quality at the determined target location. In an alternate implementation, if the speakers are separate from the EDN (e.g., part of a home theater system), the EDN communicates optimization commands to the home theater system, directing the home theater system to adjust any combination of the volume, bass, treble, delay, physical position, etc. of each speaker to optimize the sound quality at the determined target location.
At 302, the EDN 102 receives data from sensors 116. For example, the detection module 128 may receive data from one or more sensors, including image capturing sensors, heat sensors, motion sensors, auditory sensors, and so on.
At 304, the EDN 102 detects one or more humans within the environment. For example, based on the data received from the sensors 116, the detection module 128 determines that there is at least one human within the environment. Such determination may be based on any combination of, for example, image data, heat data, motion data, auditory data, and so on.
At 306, the EDN 102 determines an optimized target location based, at least in part, on locations of the detected humans within the environment. For example, if the detection module 128 identifies a single human within the environment, then the target location module 134 may determine that the optimized target location is a determined location of the detected single human. If the detection module 128 identifies multiple humans within the environment, then the target location module 134 may determine the optimized target location to be an average location based on the locations of the multiple detected humans.
At 308, the EDN adjusts audio output based on the determined target location. For example, if the speakers 110 are part of the EDN, the EDN adjusts the volume, bass, and treble of each speaker to optimize the quality of the audio heard at the determined target location. In an alternate implementation, if the speakers are separate from the EDN (e.g., part of a home theater system), the EDN communicates optimization commands to the home theater system, directing the home theater system to adjust the volume, bass, treble, delay, etc. of each speaker to optimize the quality of the sound heard at the determined target location.
At 402, the EDN 102 receives data from sensors 116. For example, the detection module 128 may receive data from one or more sensors, including image capturing sensors, heat sensors, motion sensors, auditory sensors, and so on.
At 404, the EDN 102 detects multiple humans within the environment. For example, based on the data received from the sensors 116, the detection module 128 determines that there are multiple humans within the environment. Such determination may be based on any combination of, for example, image data, heat data, motion data, auditory data, and so on.
At 406, the EDN 102 identifies an audio output. For example, the EDN determines what type of audio content is being output. If the EDN 102 is providing the audio output, the EDN 102 may identify the audio output based on a source of the audio output (e.g., a particular television program, a particular song, a particular video, etc.). If the EDN is not providing the audio output, the EDN 102 may identify the audio output based on data (e.g., audio data) received from the sensors 116. Alternatively, if the audio output is being provided through a home theater system, the EDN 102 may identify the audio output based on data requested and received from the home theater system.
At 408, the EDN 102 associates one or more of the detected humans with the audio output. For example, based on characteristics datastore 130, the detection module 128 may determine specific identities of one or more of the detected humans. Alternatively, the detection module 128 may determine characteristics of the detected humans, even though the detection module 128 may not positively identify the humans. The identities and/or the determined characteristics may indicate, for example, at least an approximate age and/or gender of each human.
Based on the determined identities and/or characteristics, the target location module 134 associates one or more of the detected humans with the audio output. For example, if the detected humans include one or more adult males and one or more children, and the audio output is identified to be a televised sporting event, then the target location module 134 may associate each of the adult male humans with the audio output while not associating each of the children with the audio output. Similarly, if the detected humans include one or more children and one or more adults, and the audio content is determined to be a children's television program, then the target location module 134 may associate each of the children with the audio output while not associating each of the adults with the audio output. These associations may be made with reference to an array of characteristics associated with the audio, such as a title of the audio, a genre of the audio, a target age range associated with the audio, and the like.
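The following sketch illustrates one way such an association might be computed, assuming the detection module supplies an estimated age group per human and the content carries a genre label; the genre-to-audience mapping is an invented example.

```python
# Hypothetical mapping from content genre to its target audience.
GENRE_AUDIENCE = {
    "sports": {"adult"},
    "children": {"child"},
    "classical": {"adult", "child"},
}

def associate_listeners(humans: list[dict], genre: str) -> list[dict]:
    """Keep only the humans whose age group matches the content's audience;
    keep everyone if the genre is unknown."""
    audience = GENRE_AUDIENCE.get(genre)
    if audience is None:
        return humans
    return [h for h in humans if h["age_group"] in audience]

people = [{"id": "dad", "age_group": "adult"}, {"id": "kid", "age_group": "child"}]
print(associate_listeners(people, "sports"))  # only the adult is associated
```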
At 410, the EDN 102 determines an optimized target location based, at least in part, on locations of the detected humans within the environment that are associated with the audio output. For example, if the target location module 134 associates a single human within the environment with the audio output, then the target location module 134 may determine that the optimized target location is a determined location of that single human. If the target location module 134 associates multiple humans within the environment with the audio output, then the target location module 134 may determine the optimized target location to be an average location based on the locations of those multiple humans.
At 412, the EDN adjusts audio output based on the determined target location. For example, if the speakers 110 are part of the EDN, the EDN adjusts the volume, bass, treble, delay, physical position, etc. of each speaker to optimize the quality of the sound heard at the determined target optimization location. In an alternate implementation, if the speakers are separate from the EDN (e.g., part of a home theater system), the EDN communicates optimization commands to the home theater system, directing the home theater system to adjust the volume, bass, treble, delay, physical position, etc. of each speaker to optimize the quality of the sound heard at the determined target location.
In addition to optimizing the audio quality at a particular location, the EDN 102 may also adjust the audio output based on preferences of specific humans and/or based on detected audio characteristics of the environment.
At 502, the EDN 102 receives data from sensors 116. For example, the detection module 128 may receive data from one or more sensors, including image capturing sensors, heat sensors, motion sensors, auditory sensors, and so on.
At 504, the EDN 102 detects one or more humans within the environment. For example, based on the data received from the sensors 116, the detection module 128 determines that there is at least one human within the environment. Such determination may be based on any combination of, for example, image data, heat data, motion data, auditory data, and so on.
At 506, the EDN 102 identifies a particular human within the environment. For example, the detection module 128 may compare characteristics of a detected human to known characteristics in characteristics datastore 130 to positively identify a particular human. Alternatively, the authentication module 132 may positively identify a particular human based on one or more authentication techniques.
At 508, the EDN 102 adjusts audio output based on an audio profile associated with the identified human. For example, an audio profile may be stored in the audio profiles datastore 138 in association with the identified human. The audio profile may indicate the particular human's preferences for audio quality including, for example, preferred volume, bass, and treble levels. Based on the identified audio profile, the audio adjustment module 136 adjusts the volume, bass, treble, etc. of the audio output, either directly or through communication with the audio source (e.g., a home theater system).
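As an illustration, applying a stored profile might amount to blending the current output settings toward the listener's preferred values; the blend parameter is an assumption that could soften the change when several listeners share the room.

```python
from dataclasses import dataclass

@dataclass
class AudioSettings:
    volume: float     # 0.0-1.0
    bass_db: float    # equalizer offsets in dB
    treble_db: float

def apply_profile(current: AudioSettings, preferred: AudioSettings,
                  blend: float = 1.0) -> AudioSettings:
    """Move the current settings toward an identified listener's preferences."""
    def mix(a: float, b: float) -> float:
        return a + (b - a) * blend
    return AudioSettings(
        volume=mix(current.volume, preferred.volume),
        bass_db=mix(current.bass_db, preferred.bass_db),
        treble_db=mix(current.treble_db, preferred.treble_db),
    )
```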
At 602, the EDN 102 receives data from sensors 116. For example, the detection module 128 may receive data from one or more sensors, including image capturing sensors, heat sensors, motion sensors, auditory sensors, and so on.
At 604, the EDN 102 detects one or more audio characteristics of the environment. For example, based on the data received from the sensors 116, the detection module 128 determines characteristics of the environment that may affect audio quality, such as the size of the environment, the surfaces of walls, ceilings, and floors, the furnishings (or lack thereof) within the environment, background noise, and so on. For instance, a small room with tile surfaces (e.g., a bathroom) or a large room void of furnishings may have an echoing and/or reverberant effect on audio. Similarly, a room with plush carpeting and heavy upholstered furniture may have a sound-absorbing effect on audio. Such determination may be based on any combination of, for example, image data, heat data, auditory data, and so on.
At 606, the EDN 102 adjusts audio output based on the detected audio characteristics of the environment. For example, the audio adjustment module 136 adjusts any combination of the volume, bass, treble, reverb, delay, etc. of the audio output, either directly or through communication with the audio source (e.g., a home theater system), to counteract the detected audio characteristics of the environment.
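A coarse compensation rule of this kind might map detected room characteristics, such as an estimated reverberation time and background noise level, to output corrections. The breakpoints below are illustrative assumptions, not calibrated values.

```python
def compensation_settings(rt60_s: float, noise_db: float) -> dict[str, float]:
    """Derive corrective offsets from detected environment characteristics."""
    settings = {"reverb_db": 0.0, "treble_db": 0.0, "volume_db": 0.0}
    if rt60_s > 0.8:                    # echoey room (tile, bare walls)
        settings["reverb_db"] = -6.0    # dry out any added reverb
        settings["treble_db"] = -3.0    # high frequencies drive flutter echo
    elif rt60_s < 0.2:                  # heavily damped room (carpet, upholstery)
        settings["treble_db"] = 2.0     # restore absorbed high end
    if noise_db > 45.0:                 # raise level over background noise, capped
        settings["volume_db"] = min(noise_db - 45.0, 6.0)
    return settings

print(compensation_settings(rt60_s=1.1, noise_db=50.0))
```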
Although the subject matter has been described in language specific to structural features, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features described. Rather, the specific features are disclosed as illustrative forms of implementing the claims.