The technology herein relates to human-machine interfaces, and more particularly to interactive graphical computer interfaces with multiple display surfaces. Still more particularly, the technology herein relates to immersive, intuitive, interactive multi-display human-machine interfaces offering plural spatially-correlated displays (e.g., handheld and other) to enhance the user experience through virtual spatiality.
We humans exist in three-dimensional space. We know where we are within the 3D world that surrounds us. Turn your head to the left. Now turn it to the right. Look up. Look down. You have just determined your place within the 3D environment of the room.
Ever since computer graphics became interactive, people have been trying to create human-machine interfaces that provide the same sort of immersive spatial experience we get from looking around the real world. As one example, systems are known that provide users with inertially-sensed head-mounted displays that allow them to look around and interact with a so-called CAVE virtual environment. See e.g., Buxton et al., HMD's, Caves & Chameleon: A Human-Centric Analysis of Interaction in Virtual Space, Computer Graphics: The SIGGRAPH Quarterly, 32(4), 64-68 (1998). However, some kinds of head-mounted displays can restrict the user's field of view or otherwise impair a person's ability to interact with the real world.
Flat screen televisions and other such displays common in many homes today can be used to display a virtual world. They provide a reasonably immersive environment with high resolution or high definition at low cost, and most consumers already own one. Some are even “3D” (e.g., through use of special glasses) to provide apparent image depth. The person watching or otherwise interacting with a television or other such display typically at least generally faces it and can see and interact with the virtual environment it presents. Other people in the room can also see and may be able to interact. These displays are wonderful for displaying immersive content but, unlike stereo or theater sound, they generally do not wrap around or envelop the user. Wrap-around projected or other displays are known (see e.g., Blanke et al., “Active Visualization in a Multidisplay Immersive Environment”, Eighth Eurographics Workshop on Virtual Environments (2002)), but may be too expensive for home use. Further innovations are possible.
Some non-limiting example implementations of technology herein provide a movable (e.g., handholdable) window or porthole display surface into a virtual space. Aspects of multi-dimensional spatiality of the moveable display surface relative to another e.g., stationary display are determined and used to generate images while at least some spatial aspects of the plural displays are correlated. As one non-limiting example, the moveable display can present a first person perspective “porthole” view into the virtual space, this porthole view changing based on aspects of the moveable display's spatiality in multi-dimensional space relative to a stationary display providing a contextual spatial reference for the user.
In one non-limiting aspect, a human-machine interface involves plural visual presentation surfaces at least some of which are movable. For example, a display can present an image of a virtual space, and another, moveable display can present an additional image of the same virtual space but from a potentially different viewpoint, viewing perspective, field of view, scale, image orientation, augmentation and/or other image characteristic or aspect.
The non-limiting movable display can selectively display an additional image of the virtual space and/or other user interface information (e.g., text, graphics or video relating or auxiliary to the images the other display presents) depending on the movable display's attitude relative to the other display.
In some example implementations, the movable display can act under some circumstances as a pointing device. When pointed/aimed at another display, the movable display's attitude can control or influence the position of a pointing symbol on the other display. When pointed/aimed away from the other display, the movable display can display a further image of the virtual space displayed on the other display, but from a different viewing direction that depends on the movable display's attitude.
Technology is used to determine aspects of the spatiality of at least one of the display devices in the physical world, and to use the determined spatiality aspects to present appropriately-viewpointed, -viewing-perspectived, -directioned and/or other characteristic images on the displays. As one non-limiting example, determined spatiality of the movable display relative to a stationary or other display can be used to provide relative spatiality of images the plural displays present.
One example non-limiting implementation provides an immersive spatial human-machine interface comprising at least one handheld display movable in free space; an arrangement configured to determine at least some aspects of the attitude of the movable handheld display; at least one additional display; and at least one graphical image generator operatively coupled to the handheld display and the additional display, the at least one graphical image generator generating images of a virtual space for display by each of the handheld display and the additional display, wherein the at least one graphical image generator generates images from different viewing directions or perspectives from a same or similar virtual location, viewing point, vantage point, neighborhood, vicinity, region or the like, for display by the handheld display and the additional display at least in part in response to the determined attitude aspects to provide spatial correlation between the two images and thereby enhance the immersive spatiality of the human-machine interface.
Such a non-limiting example interface may further provide that when two images are practically similar, the image presented by the handheld display is substituted by other image(s) and/or associated information. In this context, “practically similar” may include or mean similar or the same or nearly the same from the perception of the user who is viewing the two displays.
In some implementations, the additional display comprises a relatively large stationary display, and the at least one graphical image generator does not practically alter the rendering perspective (or viewpoint), except perhaps some marginal, peripheral or border-situation look-around perspective shifts, for images generated for the stationary display based on the determined attitude aspects.
These and other features and advantages will be better and more completely understood from the following detailed description of exemplary non-limiting illustrative embodiments in conjunction with the drawings, of which:
One or more graphics source(s) G control or enable display devices SD, MD to display computer-generated or other images. In the non-limiting example shown, display devices SD, MD are spatially correlated to display respective aspects of a common virtual world for example from different viewpoints or viewing perspectives.
If display device SD is stationary, its location and attitude in 3D space can be assumed and need not be measured. Sensors T measure the attitude or other aspect(s) of potentially-changing spatiality of movable display device MD, e.g., using MARG (“Magnetic Angular Rate Gravity”) technology.
The graphics source(s) G are configured to be responsive to information from the sensors T to control the images displayed on one or both of display devices SD, MD. For example, the graphics source(s) G may include, or be in communication with, processing circuitry that receives information from the sensors T and that controls the displayed images based on the received sensor information.
In example non-limiting implementations, the perspectives, viewpoints and/or viewing directions of images displayed by movable display device MD respond to the spatiality of the movable display device. In some instances, those image perspectives, viewing directions and/or viewpoints may be at least in part determined by apparent or actual relative spatiality of the two display devices SD, MD to continually spatially-correlate the two display presentations. The series of images presented on movable display MD thus is correlated in both space and time with the series of images presented on stationary display SD in an example non-limiting implementation.
In one particular non-limiting example, physical display device SD is physically stationary in the room where it is located. It can for example be a relatively large fixed display (e.g., a wall or table mounted television or other display) used to present an image of a 3D virtual world—in this case a landscape including a breathtaking mountain. The images that display device SD displays can be static or dynamic. For example, they can be moving and/or animated (e.g., live action) images that dynamically change in various ways including viewpoint, e.g., in response to user input or other factors.
In the non-limiting example shown, display device MD is movable in free space. As movable display device MD is moved, it can display a movable image of the same virtual world as is displayed on the stationary display SD, but the image the movable display device displays of the virtual world is transformed based on aspects of the spatiality of movable display device MD. For example, the attitude of the movable display MD can be used to display the virtual world on the movable display from a different viewpoint, viewing perspective, viewing direction, field of view, image orientation, augmentation, scale and/or other image characteristic(s).
A human can move movable display device MD anywhere in free space. Moving an object anywhere in free space is sometimes called moving in “six degrees of freedom” (6DOF) because there are generally six basic ways an object can move (see
1) Up/down,
2) Left/right,
3) Forward/backward,
4) Pitch rotation (nose up or down),
5) Yaw rotation (compass direction or heading during level flight), and
6) Roll rotation (one or the other wing up).
For example, the person can move (“translate”) display device MD along any of three orthogonal axes: up and down (vertical or “y”), left and right (horizontal or “x”), and forward and backward (depth or “z”). The person can rotate display device MD about any of three orthogonal rotational axes (pitch, yaw and roll). Just like any object, the person can simultaneously move display device MD in any combination of these “six degrees of freedom” to move display device MD in any direction and manner in three-dimensional space. For example, the person can spin display device MD about the roll (and/or any other) rotational axis at the same time she translates the device down or in any other direction(s)—like an airplane that rolls and descends at the same time, for example.
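For illustration only, the following non-limiting Python sketch shows one conventional way such a six-degree-of-freedom pose could be represented: a 3x3 rotation matrix built from pitch, yaw and roll, combined with the three translations into a single 4x4 homogeneous transform. The axis assignments and rotation order are assumptions chosen for this sketch, not requirements of any particular implementation.

```python
# A minimal sketch: the six degrees of freedom -- three translations plus
# pitch, yaw and roll -- combined into one 4x4 homogeneous transform describing
# where movable display MD is and how it is oriented in 3D space.
import math

def rotation_from_euler(pitch, yaw, roll):
    """3x3 rotation matrix from pitch (about x), yaw (about y), roll (about z), in radians."""
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    cr, sr = math.cos(roll), math.sin(roll)
    rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]      # pitch: nose up/down
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]      # yaw: heading
    rz = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]      # roll: one wing up

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(ry, matmul(rx, rz))                # one possible rotation order

def pose_6dof(x, y, z, pitch, yaw, roll):
    """4x4 pose: rotation (attitude) plus translation (position)."""
    r = rotation_from_euler(pitch, yaw, roll)
    return [r[0] + [x], r[1] + [y], r[2] + [z], [0, 0, 0, 1]]

# Example: the display translated 0.3 m to the right while rolled 10 degrees.
print(pose_6dof(0.3, 0.0, 0.0, 0.0, 0.0, math.radians(10)))
```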
In the example non-limiting implementation, system S has sensors T that detect aspects of the spatiality of movable display MD such as its attitude. These sensors T may include one or more of accelerometers, gyrosensors, magnetometers, ultrasonic transducers, cameras, and the like. These sensors T enable movable display device MD to display corresponding different parts of the virtual world (e.g., the mountain) from different viewpoints, viewing perspectives, viewing directions or other characteristics responsive to aspects of the display device's current spatiality.
In one non-limiting example, the person can move display device MD to new attitudes that permit the person to examine different parts of the mountain. For example, to look at the base of the mountain, the person can move or rotate display device MD downward. One or more of the sensors T detect this downward movement such as translation and/or rotation, and the graphics source(s) G are responsive to the downward movement detection to display the base of the mountain on the movable display device MD. At the same time, the graphics source(s) G may continue to display the same view of the mountain on the stationary display device, thereby effectively providing a view that the person can intuitively use as a reference or context to understand that the display device MD is showing the base of the mountain and as a reference or context for further movement or rotation of display device MD to look at other parts of the mountain. For example, to look at the mountain's peak, the person can move or rotate the display device upwards.
In some non-limiting examples, the person can move display device MD to examine parts of the 3D virtual world that can't currently be seen on stationary display SD. For example, the person can turn display device MD to the left or the right, up or down, or away from stationary display device SD (e.g., so a ray projecting from the movable display device MD in a direction that is normal to the planar dimension of the movable display device does not intersect the stationary display SD) to visualize portions of the virtual world (e.g., other mountains, a mountain lake, a valley, a town, other side(s) of the same mountain, etc.) that cannot currently be seen on stationary display SD.
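As a non-limiting illustration of how the movable display's attitude can select which portion of the virtual world it shows, the following Python sketch rotates a reference "forward" vector by the device's attitude to obtain the virtual camera viewing direction for movable display MD. The coordinate conventions (y up, calibrated forward direction along -z toward the stationary display) are assumptions made for the sketch.

```python
# A minimal sketch: the movable display's attitude (a 3x3 rotation matrix
# relative to a calibrated "facing the stationary display" pose) rotates the
# reference forward vector, so turning the device turns the virtual view.
import math

def apply_rotation(r, v):
    return [sum(r[i][j] * v[j] for j in range(3)) for i in range(3)]

def md_view_direction(attitude, reference_forward=(0.0, 0.0, -1.0)):
    """Viewing direction for the movable display's virtual camera."""
    return apply_rotation(attitude, list(reference_forward))

# Example: a device yawed 90 degrees to the left now looks along -x instead of -z.
yaw = math.radians(90)
attitude_left = [[math.cos(yaw), 0, math.sin(yaw)],
                 [0, 1, 0],
                 [-math.sin(yaw), 0, math.cos(yaw)]]
print(md_view_direction(attitude_left))   # approximately (-1, 0, 0)
```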
Thus, in one particular example, the stationary display SD acts as a spatial reference that orients the user within the 3D virtual space. It provides a perceptual context or anchor for the person using movable display device MD. Because the person can see both the stationary display SD and the movable display MD at the same time, the person's brain takes both images into account. If at least some aspects of the two images are spatially correlated, the person's brain can interpolate or interpret the two displayed images to be two windows or other views into the same virtual world. The movable display MD thus provides a 3D spatial effect that is not available merely from the image displayed by the stationary display SD.
The image displayed on stationary display SD can be static or dynamic. For example, it is possible to use a joystick to move a character in a virtual reality database or space: the analog stick can be used to change the direction of the first-person player and to control the character's walk through the game. An example could be a virtual reality walkthrough. In this case, the stationary (e.g., TV) display will display the character's view or location as a moving scene. Manipulating controls on the movable display MD (or changing the attitude of the movable display housing) could for example cause system S to generate a series of images on stationary display SD, the images changing to show progression of a virtual character through the virtual scene. The changing images displayed on stationary display SD may thus be animated in some example non-limiting implementations to expose different objects, enemies, friends, and other virtual features as the virtual character moves through the 3D virtual space. See
System S thus provides a high degree of interactivity. By interacting with an additional, spatially-correlated viewing surface that the person can move anywhere in 3D space, the person becomes immersed within the virtual environment. She feels as if she is inside the virtual environment. She can look around, explore and discover new aspects of the virtual environment by rotating and/or translating the movable display MD (or herself while she is holding the movable display device). It is even possible for the person to rubberneck, looking about or surveying the virtual environment inquisitively or with wonderment or curiosity. She can also interact with the virtual environment through additional user input arrangements on the movable display MD. For example, the movable display MD provides, in one non-limiting implementation, a touch screen TS that exposes a portion of the virtual environment to being directly manipulated (e.g., touched) by the person. See
For example, suppose the person is virtually moving in the virtual space. The person can virtually walk through the virtual world (e.g., by controlling joystick or slider type controls on the movable display MD) and see corresponding changing images displayed on the stationary display SD that reflect the user's progress and travels through the virtual space. The person can at any time stop and look around. At this point, the system S can define two viewing angles from approximately the same location. In one example non-limiting implementation, the two displayed views (one on SD, the other on MD) can be based on the same or similar location but defined by two different viewing angles or perspectives.
In one non-limiting example, the size of movable display device touch screen TS is small enough so that movable display device MD is easily movable and holdable without fatigue, but large enough so the spatially-correlated patch of the 3D virtual world the movable display device MD presents is significant in affecting the person's spatial perception. Because the movable display device MD is much closer to the person's eyes than the stationary display SD, the person perceives it to be larger in her visual field of view. In fact, the person can hold the movable display device MD such that it may temporarily block or obscure her view of stationary display SD. Nevertheless, the stationary display SD remains present in the person's environment to be glanced at from time to time. It continues to anchor and orient the person's visual perception with a context that makes the spatially-correlated image displayed by movable display MD appear to be part of the same virtual environment that the stationary display SD displays. Generally speaking, watching a larger screen on stationary display SD is more comfortable because it is big. If the same, practically the same or similar images are displayed on the stationary display SD and movable display MD, most players will focus on the big screen rather than the small screen. In many non-limiting cases, a pointing cursor may be displayed on the stationary display SD as a pointing target. By changing the attitude of the movable display MD, it is possible to control the position of a pointing target within the display or other presentation area of the stationary display SD.
In this context, “practically similar” or “practically the same” can mean the same or nearly the same from the user's perspective. In one example implementation, processing is not necessarily conditional on the images being exactly the same. When the two views are the same or similar from the user's perspective, or when the movable display is in range, the processing will take place. In one non-limiting example, when “in range” (e.g., when a ray projecting from the movable display device MD in a direction that is normal to the planar dimension of the movable display device intersects the stationary display SD—or an assumed location of stationary display SD), the movable display will display different information. Thus, when the displays on the display devices would be essentially the same from the user's standpoint, the user can instead see something different on the movable display MD.
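The following non-limiting Python sketch illustrates one simple way such an "in range" determination could be made, under the simplifying assumptions that the stationary display SD is modeled as a rectangle in the plane z = 0 and that the movable display's position and normal (pointing) ray are known in that same coordinate frame; the dimensions used are arbitrary.

```python
# A minimal sketch of the "in range" test: does the ray normal to movable
# display MD intersect the (assumed) rectangle of stationary display SD?
def in_range(md_position, md_pointing_dir, sd_width=1.0, sd_height=0.6):
    px, py, pz = md_position
    dx, dy, dz = md_pointing_dir
    if abs(dz) < 1e-9:                 # ray parallel to the SD plane: no hit
        return False
    t = -pz / dz                       # ray parameter where it meets z = 0
    if t <= 0:                         # SD is behind the movable display
        return False
    hit_x, hit_y = px + t * dx, py + t * dy
    return abs(hit_x) <= sd_width / 2 and abs(hit_y) <= sd_height / 2

# MD held 2 m in front of SD and aimed straight at it -> in range.
print(in_range((0.0, 0.0, 2.0), (0.0, 0.0, -1.0)))   # True
# Aimed 45 degrees to the side -> the normal ray misses the screen.
print(in_range((0.0, 0.0, 2.0), (1.0, 0.0, -1.0)))   # False
```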
As another non-limiting example (see
In many non-limiting cases, the relatively small movable display MD shows status such as, for example, selection of weapons or other items or status information. It is not necessary for the subject to continually watch the movable display, but at any desired time the subject can touch to select items, change items, etc. If something interesting happens on the big screen SD, e.g., sounds and voices or some enemy flying up to or away from the big screen, the person can change or move the movable display device to a location that is “out of range” (e.g., when a ray projecting from the movable display device MD in a direction that is normal to the planar dimension of the movable display device does not intersect the stationary display SD—or an assumed location of stationary display SD), such as upwards, downwards or even backwards. A continuous area surrounding the stationary display SD's view will be displayed on the movable display MD, while the stationary display's view is kept in the same or similar location(s). The bigger screen of stationary display SD becomes an absolute or relative reference for the virtual world, allowing the person to turn back to or glance at the original place.
Illustrative Examples of Spatial Correlation Between the Displays
In the example non-limiting implementation, the real-world attitude of display device MD determines the virtual world viewpoint, viewing perspective or direction, or other viewing transformation for generating the image that display device MD displays. For example,
The image the stationary display device SD displays can itself be interactive and animated (e.g., live action), adding to the sense of realism. The patch displayed on movable display MD can also be interactive and animated (e.g., live action). While the person may feel immersed to some degree in a large stationary display presentation (and immersion can be improved by providing 3D effects and interactivity), immersiveness of the stationary display device SD is substantially enhanced through the addition of a spatially-correlated image patch displayed on the movable display MD.
Movable display device MD is closer to the person, so it can also be quite immersive despite its relatively small yet adequate size. It may also display animated interactive images responsive to user input. In one example embodiment, the person can control the viewpoint, viewing perspective, viewing direction or other viewing transformation of movable display device MD into the virtual 3D world by moving the movable display device. Moving display device MD provides an interactive view into the virtual world that changes in view as the person moves her body, head, etc. This functionality allows the person to enjoy a high degree of immersive interactivity with the virtual 3D world—an effect that is substantially enhanced by the omnipresence of stationary display SD which provides a larger immersive image orienting the user within the same virtual world. In one example implementation, the person has her own private porthole into the animated virtual 3D world without the need to wear restrictive gear such as a head mounted display or special glasses.
Furthermore, the user in some applications can directly interact with the patch the movable display MD displays. As one example, system S can simulate the user touching parts of the virtual world by placing a finger or stylus on the movable display MD. See
For example, moving display device MD from the attitude shown in
Additionally, during such usage in some non-limiting embodiments, movable display MD may act as a pointing device to specify the position of a pointing object the stationary display SD displays. If a ray normal to the movable display MD intersects an assumed location of the stationary display SD, system S may use aspects of the attitude of movable display MD to control the position of a pointing object such as a cursor, sight or other indicator on the stationary display SD.
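Continuing the same simplified geometry used above, the following non-limiting Python sketch shows how the intersection of the normal ray with the stationary display could be converted into cursor pixel coordinates; the screen dimensions and resolution are illustrative assumptions.

```python
# A minimal sketch: convert the hit point of MD's normal ray on the (assumed)
# SD rectangle in the plane z = 0 into cursor pixel coordinates on SD.
def cursor_pixel(md_position, md_pointing_dir,
                 sd_width=1.0, sd_height=0.6, res_x=1920, res_y=1080):
    px, py, pz = md_position
    dx, dy, dz = md_pointing_dir
    if abs(dz) < 1e-9:
        return None
    t = -pz / dz
    if t <= 0:
        return None
    hit_x, hit_y = px + t * dx, py + t * dy
    # Normalize the hit point to [0, 1] across the screen, then scale to pixels
    # (origin at SD's top-left corner, y increasing downward).
    u = hit_x / sd_width + 0.5
    v = 0.5 - hit_y / sd_height
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None                      # out of range: no cursor on SD
    return int(u * (res_x - 1)), int(v * (res_y - 1))

# Aimed slightly up and to the right of center from 2 m away:
print(cursor_pixel((0.0, 0.0, 2.0), (0.05, 0.03, -1.0)))
```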
As an additional example,
There could be more than one stationary display SD and more than one movable display MD.
In other example implementations, there may be no stationary screen. Imagine N mobile screens that start with an initial calibration step to correlate their physical spatial orientation. After this calibration step, the N screens represent a physically spatially coherent view of the virtual database. This can even be extended to position correlation if there is a good way to sense positions unambiguously in all conditions. With some example implementations, orientation is determined by using the magnetic sensor to correct orientational drift. However, the same could be provided for position if better positional sensor(s) existed to supply accurate location information.
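As a purely illustrative, non-limiting sketch of such a calibration-based correlation (using compass headings only, for simplicity), each screen could record its heading at a shared calibration instant and thereafter express its orientation relative to that recorded heading:

```python
# A minimal sketch of correlating N mobile screens without a stationary
# display: each screen stores its heading at a shared calibration moment,
# and later reports its orientation relative to that stored heading so that
# all screens share one spatially coherent virtual frame.
class CalibratedScreen:
    def __init__(self, name):
        self.name = name
        self.calibration_heading = None

    def calibrate(self, current_heading_deg):
        """Record this screen's heading at the shared calibration moment."""
        self.calibration_heading = current_heading_deg

    def shared_frame_heading(self, current_heading_deg):
        """Heading in the common frame, relative to the calibration pose."""
        return (current_heading_deg - self.calibration_heading) % 360.0

screens = [CalibratedScreen("A"), CalibratedScreen("B")]
screens[0].calibrate(10.0)    # screen A happened to face heading 10 degrees
screens[1].calibrate(130.0)   # screen B happened to face heading 130 degrees
# Later, both report their current magnetometer-derived headings:
print(screens[0].shared_frame_heading(40.0))   # 30.0 -> A has turned 30 degrees
print(screens[1].shared_frame_heading(160.0))  # 30.0 -> B has also turned 30 degrees
```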
Example Applications that Context-Switch Movable Display Based on Attitude
In some non-limiting applications, it may be desirable to use movable display MD for different purposes or contexts depending on the attitude and/or pointing direction of the movable display e.g., relative to the stationary display SD.
(a) in range,
(b) out of range, and
(c) transition area.
In some example non-limiting implementations, system S can automatically switch usage or context of movable display MD depending on whether the movable display's attitude or pointing direction is in-range or out-of-range. In one example non-limiting implementation, the pointing direction of movable display MD may for example be defined by a ray normal to the plane of the movable display and extending outward from the back surface of the movable display away from the user. Such an example non-limiting pointing direction thus corresponds to the direction of the user's gaze if the user were able to look through movable display MD as if it were a pane of transparent glass.
In Range
In such an in-range case, for some non-limiting implementations, the position of a pointer such as a cursor or other object displayed on the stationary display SD (as shown in
In such an example, a virtual camera defining the view for display by stationary display SD does not necessarily move or change according to the attitude of movable display MD. Users will mainly see the stationary display SD in many or most applications, so the virtual space the stationary display SD displays does not necessarily need to be displayed on the movable display MD. In such a non-limiting instance, the movable display MD can be used to display other types of information such as for example score (for a game), statistics, menu selections, user input options, or any of a variety of other ancillary or other information useful to the user. In some cases, the movable display may overlay such information to augment a displayed image of the virtual world from a direction based on the attitude of the movable display MD. Thus, in some implementations, the movable display MD can display more limited (or different, supplemental, control or non-redundant) information while the user's attention is likely to be focused on the stationary display SD.
Out-of-Range
In some non-limiting implementations, the pointing position of a pointer, cursor or other object displayed on the movable display MD can also be controlled, for example, to stay in the center of the movable display device's display screen as the image beneath it changes in viewing direction to reflect the attitude of the movable display. See
Transition Area Case
Thus, if the user is aiming at the big screen of stationary display SD using the movable device MD, he is likely always watching the big screen and controlling the target. If the user targets near the edge area, a transition mode may apply. The big screen slightly or marginally shifts toward the pointing (e.g., gun) direction. But after some marginal movement, suddenly the smaller screen of movable display MD starts displaying portions of the virtual space outside the stationary display SD. This can be a dramatic effect. At the transition, a slight look-about of a small degree is possible, moving the image displayed by the stationary display SD. Such a marginal or peripheral look-about shift (e.g., based on the presence of a cursor or other pointer object controlled in accordance with aspects of the attitude of the movable display) can be provided in the direction of and in response to movement of movable display MD, e.g., to alert the user that a context switch of the movable display is about to occur, either from a look-around display mode to an auxiliary information display mode or vice versa.
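One non-limiting way to implement such in-range/transition/out-of-range context switching is sketched below in Python, using the angle between the movable display's pointing ray and the direction toward an assumed center of the stationary display; the threshold values are arbitrary illustrations, not taken from this disclosure.

```python
# A minimal sketch of context switching for the movable display MD based on
# how far its pointing ray deviates from the stationary display SD.
import math

IN_RANGE_DEG = 15.0       # well inside the stationary display: pointer mode
OUT_OF_RANGE_DEG = 25.0   # well outside: look-around "porthole" mode

def angle_to_sd_deg(pointing_dir, dir_to_sd_center):
    dot = sum(a * b for a, b in zip(pointing_dir, dir_to_sd_center))
    norm = math.sqrt(sum(a * a for a in pointing_dir)) * \
           math.sqrt(sum(b * b for b in dir_to_sd_center))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def movable_display_mode(pointing_dir, dir_to_sd_center):
    angle = angle_to_sd_deg(pointing_dir, dir_to_sd_center)
    if angle <= IN_RANGE_DEG:
        return "in-range: show auxiliary info, drive cursor on SD"
    if angle >= OUT_OF_RANGE_DEG:
        return "out-of-range: show look-around view of the virtual space"
    return "transition: marginal look-about shift on SD, warn of context switch"

print(movable_display_mode((0.0, 0.0, -1.0), (0.0, 0.0, -1.0)))   # in-range
print(movable_display_mode((0.5, 0.0, -1.0), (0.0, 0.0, -1.0)))   # ~26.6 deg -> out-of-range
```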
More Detailed Non-Limiting Examples:
As shown in
In some example implementations, display device MD could be worn by a person, for example attached to a part of the person's body such as the person's wrist, forearm, upper arm, leg, foot, waist, head, or any other portion of the person's body or clothing. In other example implementations, movable handholdable display device MD can be fixedly or otherwise mounted to a movable or fixed inanimate structure (see e.g., 7, 8C). For example, in some example arrangements, as shown in
As mentioned above, non-limiting display device MD has a touch screen TS that can be controlled by touching it with a stylus (see
Sensors T determine the attitude of display device MD. Sensors T may be contained within display device MD, placed outside of display device MD, or some sensors may be disposed within display device MD and other sensors may be disposed externally to display device MD. More detail concerning example non-limiting arrangements for sensors T may be found in U.S. patent application Ser. Nos. 13/019,924 and 13/019,928, filed on Feb. 2, 2011, the contents of which are incorporated herein by reference in their entirety. Special solutions or applications may use G (gravity) only, M (magnetic) only, or MG (magnetic+gyro) only out of “MARG” for 2D, linear or reduced movement, or for special games or other applications.
In one particular example, sensors T may comprise a “MARG” sensor array spread between display device MD and a movable accessory device to which the movable display MD can be fixedly attached (see
In other implementations, sensors T shown in
As mentioned above, in some example non-limiting implementations, sensors T representing the collection of sensors which make up MARG may be distributed in one or more housings or places. Such housings can be, for example, either stationary or moved spatially by the user. The subset of housing elements can be combined to behave as a single element as shown in
In one example, sensors T may include some or all of the sensors used in the Nintendo Wii Remote Controller and/or Nintendo Wii Remote Plus video game controller, i.e., a direct pointing device, a triaxial or other accelerometer, and/or a triaxial or other gyroscope. Such sensors T could also include a multi-axis single-chip magnetometer, an inside-looking-out or outside-looking-in external optical, ultrasonic and/or other tracker, or many other variations as would be understood by those skilled in the art. Although not necessary for many applications such as home virtual reality and video game play, it might be desirable in certain applications to accurately track the complete pose (position and orientation) of the movable display MD with a desired degree of accuracy to provide nearly complete fidelity in spatial correlation between the image displayed by the movable display MD and the image the stationary display SD displays. The items listed below and incorporated by reference enable one skilled in the art to implement any desired arrangement of sensors T based on particular system requirements and desired applications:
The remote console or computer G can be one or more graphics generators located in one place or distributed in a variety of places communicating via one or more networks. Such graphics generator(s) can use conventional 3D graphics transformations, virtual camera and other techniques to provide appropriately spatially-coherent or other images for display by the displays MD, SD. For example, the graphics generator G can be any of:
a graphics generator that is part of or is a separate component co-located with stationary display SD and communicates remotely (e.g., wirelessly) with the movable display MD; or
a graphics generator that is part of or is a separate component co-located with movable display MD and communicates remotely (e.g., wirelessly) with the stationary display SD or associated equipment; or
a distributed graphics generating arrangement some of which is contained within the movable display MD housing and some of which is co-located with the stationary display SD, the distributed portions communicating together via a connection such as a wireless or wired network; or
a graphics generator located remotely (e.g., in the cloud) from both the stationary and movable displays SD, MD and communicating with each of them via one or more network connections; or
any combination or variation of the above.
In the case of a distributed graphics generator architecture or arrangement, appropriate data exchange and transmission protocols are used to provide low latency and maintain interactivity, as will be understood by those skilled in the art.
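As a non-limiting illustration of the kind of compact, low-latency exchange such a distributed arrangement might use, the following Python sketch packs a timestamp and an attitude quaternion into a small UDP datagram; the 20-byte message layout is an assumption made for the sketch, not a protocol defined herein.

```python
# A minimal sketch of a small attitude-update message sent over UDP between
# distributed portions of the graphics-generating arrangement. The message
# layout (timestamp plus unit quaternion) is an illustrative assumption.
import socket
import struct
import time

ATTITUDE_FMT = "<fffff"   # timestamp, qw, qx, qy, qz (little-endian floats)

def send_attitude(sock, address, quaternion):
    packet = struct.pack(ATTITUDE_FMT, time.time() % 1e6, *quaternion)
    sock.sendto(packet, address)          # UDP: small packets, no retransmission delay

def receive_attitude(packet):
    timestamp, qw, qx, qy, qz = struct.unpack(ATTITUDE_FMT, packet)
    return timestamp, (qw, qx, qy, qz)

# Loopback demonstration:
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_attitude(tx, rx.getsockname(), (1.0, 0.0, 0.0, 0.0))
print(receive_attitude(rx.recv(64)))
```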
In one particular example, the MARG attitude sensors (i.e., a triaxial accelerometer, a triaxial gyroscope and a triaxial magnetometer) provide sensor outputs to a processor 102 that executes instructions stored in non-transitory firmware storage 104. An example non-limiting method uses the 3-axis gyroscope (angular rate) with error correction using a 3-axis accelerometer-based gravity vector reference (for roll and pitch) and a 3-axis magnetometer-based north vector reference (for yaw). This orientation or attitude measurement technique is known as “Magnetic Angular Rate Gravity” (MARG). In the example shown, the MARG sensors T are not capable of detecting absolute position; they use the magnetometer to detect what can be thought of as a magnetic compass heading relative to the earth's magnetic field.
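The following non-limiting Python sketch illustrates the general idea of such a MARG-style estimate using a simple complementary filter: the gyroscope rates are integrated each step, roll and pitch are corrected toward the accelerometer's gravity reference, and yaw is corrected toward the magnetometer's north reference. The blending gain, axis conventions and omission of tilt compensation are simplifying assumptions for illustration only.

```python
# A minimal sketch of gyro integration with gravity (roll/pitch) and
# magnetometer (yaw) corrections, in the spirit of a MARG attitude estimate.
import math

def marg_update(roll, pitch, yaw, gyro, accel, mag, dt, gain=0.98):
    roll_rate, pitch_rate, yaw_rate = gyro   # body rates in rad/s (axis mapping assumed done)
    ax, ay, az = accel                       # accelerometer output, includes gravity
    mx, my, mz = mag                         # magnetic field components

    # 1. Integrate the angular rates (accurate short-term, drifts over time).
    roll += roll_rate * dt
    pitch += pitch_rate * dt
    yaw += yaw_rate * dt

    # 2. Gravity reference from the accelerometer (valid when not accelerating).
    roll_ref = math.atan2(ay, az)
    pitch_ref = math.atan2(-ax, math.hypot(ay, az))

    # 3. North reference from the magnetometer (tilt compensation omitted here).
    yaw_ref = math.atan2(-my, mx)

    # 4. Blend: trust the gyro short-term, the references long-term.
    roll = gain * roll + (1 - gain) * roll_ref
    pitch = gain * pitch + (1 - gain) * pitch_ref
    yaw = gain * yaw + (1 - gain) * yaw_ref
    return roll, pitch, yaw

# One 200 Hz update with the device at rest, level, facing magnetic north:
print(marg_update(0.0, 0.0, 0.0,
                  gyro=(0.0, 0.0, 0.0),
                  accel=(0.0, 0.0, 9.81),
                  mag=(0.3, 0.0, 0.4),
                  dt=1 / 200))
```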
It is not necessary in many applications for system S to track the absolute or even relative position of movable display MD in 3D space because the user's depth perception does not in at least some such applications require such fidelity. For example, the human eye includes a lens that changes shape to focus on near and far objects, and the user uses monocular and binocular cues to perceive distance. See e.g., Schwartz et al., “Visual Perception, Fourth Edition: A Clinical Orientation” Chapter 10 (McGraw Hill 2004). In many applications, movable display MD will fill a large part of the user's near vision, and stationary display SD will be in only a portion of the user's far vision. Thus, in many applications, the user's eye may focus on one of the two displays at a time. The user may first look at one display and then at the other even though both displays are in the user's field of vision. This may mean that in many useful applications, the fidelity of the positional spatial correlation and coherence does not need to be precise in order to create the desired immersive user interface effect, and maintaining precise positional or distance spatial correlation between the two displays may be unnecessary. In other applications, other arrangements may be desired.
Additionally, using practical consumer grade components, there may be instances where system S is unable to keep up with rapid movement of movable display MD. Luckily, during such rapid movement the person holding the movable display MD will also generally not be able to see the image it's displaying. Thus, it may be generally sufficient in many applications for system S to maintain spatial correlation for movable display MD for relatively static or slow-moving situations when the user can see the image the movable display MD displays.
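A non-limiting sketch of this pragmatic gating follows: correlated-view updates are simply suppressed while the measured angular rate exceeds a threshold (the threshold value is an arbitrary illustration).

```python
# A minimal sketch: maintain spatial correlation on the movable display only
# while the device is static or slow-moving, since the user cannot read the
# screen while whipping it around anyway.
import math

FAST_MOTION_RAD_PER_S = math.radians(180)   # illustrative threshold

def should_update_correlated_view(gyro_rates):
    angular_speed = math.sqrt(sum(r * r for r in gyro_rates))
    return angular_speed < FAST_MOTION_RAD_PER_S

print(should_update_correlated_view((0.1, 0.05, 0.0)))               # True: slow movement
print(should_update_correlated_view((math.radians(400), 0.0, 0.0)))  # False: rapid movement
```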
An initial calibration step can be used to allow system S to establish and maintain attitudinal spatial coherence between the two displays SD, MD. In such a case, the person may be asked to orient the movable display MD in a certain way relative to the stationary display SD to calibrate the system so that the magnetic compass bearing (e.g., NNW or 337.5°) of stationary display SD within the room or other environment is known. Calibration of movable display MD is possible for the general free-space (3D) orientation or transformation (movement) case (note that some special usage, such as 2D or linear orientation or movement as a subset of free-space usage, would also be possible). There, the example non-limiting implementation relies on the relative movement of the movable display MD with respect to the initial calibration.
To calibrate during setup, the person may hold the screen of movable display MD parallel to the stationary display SD while generally aiming a perpendicular vector at the center of the stationary display. A button or other user control may be pushed. The system S will measure and store the magnetic field in the room to determine the orientation that the movable display MD is in when the user says it is pointing at the stationary display SD. This calibration assumes (1) the magnetic field of the room doesn't change, (2) the stationary display SD isn't moved, and (3) the movable display MD is set up with this stationary display SD in this room. If the magnetic field is changed or the stationary display SD is moved or the system S is set up in another location, recalibration may be desirable. For example, recalibration may be performed when the room furniture or a metal panel in the room moves, or the battery is replaced (because of metal differences in, e.g., alkaline batteries), or the environment changes the magnetic field in the room. In some residential areas located near street cars or light rails, dynamically changing magnetic fields may exist. It is possible to filter unexpected dynamic disturbances of the earth's magnetic field based on the initial magnitude of the magnetic readouts, or by rejecting values that are inconsistent with the gyroscope readings.
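The following non-limiting Python sketch illustrates the general shape of such a calibration: the fused attitude at the moment the user confirms the movable display is aimed at the stationary display is stored as a reference, and subsequent attitudes are expressed relative to it. The 3x3 matrix representation and composition order are assumptions made for the sketch.

```python
# A minimal sketch: store the attitude at the calibration button press as the
# reference, then express later attitudes relative to that reference.
def transpose(r):
    return [[r[j][i] for j in range(3)] for i in range(3)]

def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

class Calibration:
    def __init__(self):
        self.reference = None

    def on_button_press(self, current_attitude):
        """User confirms MD is aimed at the center of SD."""
        self.reference = current_attitude

    def attitude_relative_to_sd(self, current_attitude):
        # Rotation matrices are orthonormal, so the inverse is the transpose.
        return matmul3(current_attitude, transpose(self.reference))

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
cal = Calibration()
cal.on_button_press(identity)                 # calibrate while aimed at SD
print(cal.attitude_relative_to_sd(identity))  # identity: still aimed at SD
```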
The
As shown in
While the technology herein has been described in connection with exemplary illustrative non-limiting embodiments, the invention is not to be limited by the disclosure. For example, given the conceptual nature of the present disclosure and its detailed exploration of a variety of different possible implementations, many statements herein do not correspond to any particular actual product that may eventually be made available to consumers. Additionally, while the preferred embodiments use a stationary display SD and a movable display MD, both displays could be movable or both could be stationary, or there could be more displays some of which are movable and some of which are stationary. One or both of the displays SD and MD may be a 3D display (e.g., an autostereoscopic display) that provides stereoscopic perception of depth to a viewer. One or both of the displays SD and MD may be so-called high-definition displays (e.g., 1,280×720 pixels (720p) or 1,920×1,080 pixels (1080i/1080p)). Still additionally, the images displayed on the displays SD and MD may be from any source such as an animation engine, a video game machine, a simulator, or a video which is appropriately transformed or processed to provide displays from different perspectives. The invention is intended to be defined by the claims and to cover all corresponding and equivalent arrangements whether or not specifically disclosed herein.
Number | Date | Country | Kind |
---|---|---|---|
2010-022022 | Feb 2010 | JP | national |
2010-022023 | Feb 2010 | JP | national |
2010-177893 | Aug 2010 | JP | national |
2010-185315 | Aug 2010 | JP | national |
2010-192220 | Aug 2010 | JP | national |
2010-192221 | Aug 2010 | JP | national |
2010-245298 | Nov 2010 | JP | national |
2010-245299 | Nov 2010 | JP | national |
This application is a continuation of U.S. patent application Ser. No. 13/153,106 filed Jun. 3, 2011, pending. This application is also a continuation-in-part of U.S. patent application Ser. No. 13/019,924 filed Feb. 2, 2011, pending; which claims benefit under 35 U.S.C. Section 119 of the following Japanese Patent Applications: 2010-022022 and 2010-022023 filed Feb. 3, 2010; 2010-177893 filed Aug. 6, 2010; 2010-185315 filed Aug. 20, 2010; 2010-192220 and 2010-192221 filed Aug. 30, 2010; and 2010-245298 and 2010-245299 filed Nov. 1, 2010. This application is also a continuation of U.S. patent application Ser. No. 13/672,862 filed Nov. 9, 2012, pending; which is a continuation of U.S. patent application Ser. No. 13/244,685 filed Sep. 26, 2011, now U.S. Pat. No. 8,339,364; which is a division of U.S. patent application Ser. No. 13/153,106 filed Jun. 3, 2011. All of the above are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4210329 | Steiger et al. | Jul 1980 | A |
5009501 | Fenner et al. | Apr 1991 | A |
5440326 | Quinn | Aug 1995 | A |
5452104 | Lee | Sep 1995 | A |
5602566 | Motosyuku et al. | Feb 1997 | A |
5608449 | Swafford, Jr. et al. | Mar 1997 | A |
5619397 | Honda et al. | Apr 1997 | A |
5820462 | Yokoi et al. | Oct 1998 | A |
5825350 | Case, Jr. et al. | Oct 1998 | A |
5825352 | Bisset et al. | Oct 1998 | A |
D411530 | Carter et al. | Jun 1999 | S |
5943043 | Furuhata et al. | Aug 1999 | A |
6020891 | Rekimoto | Feb 2000 | A |
6069790 | Howell et al. | May 2000 | A |
6084584 | Nahi et al. | Jul 2000 | A |
6084594 | Goto | Jul 2000 | A |
6104380 | Stork et al. | Aug 2000 | A |
6126547 | Ishimoto | Oct 2000 | A |
6164808 | Shibata et al. | Dec 2000 | A |
6183365 | Tonomura et al. | Feb 2001 | B1 |
6201554 | Lands | Mar 2001 | B1 |
6208329 | Ballare | Mar 2001 | B1 |
6238291 | Fujimoto et al. | May 2001 | B1 |
6252153 | Toyama | Jun 2001 | B1 |
6254481 | Jaffe | Jul 2001 | B1 |
6323846 | Westerman et al. | Nov 2001 | B1 |
6340957 | Adler et al. | Jan 2002 | B1 |
6347290 | Bartlett | Feb 2002 | B1 |
6379249 | Satsukawa et al. | Apr 2002 | B1 |
6396480 | Schindler et al. | May 2002 | B1 |
6400376 | Singh et al. | Jun 2002 | B1 |
6411275 | Hedberg | Jun 2002 | B1 |
6425822 | Hayashida et al. | Jul 2002 | B1 |
6466198 | Feinstein | Oct 2002 | B1 |
6466831 | Shibata et al. | Oct 2002 | B1 |
6498860 | Sasaki et al. | Dec 2002 | B1 |
6499027 | Weinberger | Dec 2002 | B1 |
6500070 | Tomizawa et al. | Dec 2002 | B1 |
6509896 | Saikawa et al. | Jan 2003 | B1 |
6538636 | Harrison | Mar 2003 | B1 |
6540610 | Chatani | Apr 2003 | B2 |
6540614 | Nishino et al. | Apr 2003 | B1 |
6557001 | Dvir et al. | Apr 2003 | B1 |
6567068 | Rekimoto | May 2003 | B2 |
6567101 | Thomas | May 2003 | B1 |
6567984 | Allport | May 2003 | B1 |
6570557 | Westerman et al. | May 2003 | B1 |
6582299 | Matsuyama et al. | Jun 2003 | B1 |
6657627 | Wada et al. | Dec 2003 | B1 |
6690358 | Kaplan | Feb 2004 | B2 |
6716103 | Eck et al. | Apr 2004 | B1 |
6798429 | Bradski | Sep 2004 | B2 |
6834249 | Orchard | Dec 2004 | B2 |
6847351 | Noguera | Jan 2005 | B2 |
6856259 | Sharp | Feb 2005 | B1 |
6888536 | Westerman et al. | May 2005 | B2 |
6897833 | Robinson et al. | May 2005 | B1 |
6908386 | Suzuki et al. | Jun 2005 | B2 |
6921336 | Best | Jul 2005 | B1 |
6930661 | Uchida et al. | Aug 2005 | B2 |
6933923 | Feinstein | Aug 2005 | B2 |
6939231 | Mantyjarvi et al. | Sep 2005 | B2 |
6954491 | Kim et al. | Oct 2005 | B1 |
6966837 | Best | Nov 2005 | B1 |
6988097 | Shirota | Jan 2006 | B2 |
6990639 | Wilson | Jan 2006 | B2 |
6993451 | Chang et al. | Jan 2006 | B2 |
7007242 | Suomela et al. | Feb 2006 | B2 |
D519118 | Woodward | Apr 2006 | S |
7023427 | Kraus et al. | Apr 2006 | B2 |
7030856 | Dawson et al. | Apr 2006 | B2 |
7030861 | Westerman et al. | Apr 2006 | B1 |
7038662 | Noguera | May 2006 | B2 |
7053887 | Kraus et al. | May 2006 | B2 |
7068294 | Kidney et al. | Jun 2006 | B2 |
7088342 | Rekimoto et al. | Aug 2006 | B2 |
7109978 | Gillespie et al. | Sep 2006 | B2 |
7115031 | Miyamoto et al. | Oct 2006 | B2 |
7128648 | Watanabe | Oct 2006 | B2 |
7140962 | Okuda et al. | Nov 2006 | B2 |
7142191 | Idesawa et al. | Nov 2006 | B2 |
7158123 | Myers et al. | Jan 2007 | B2 |
7173604 | Marvit et al. | Feb 2007 | B2 |
7176886 | Marvit et al. | Feb 2007 | B2 |
7176887 | Marvit et al. | Feb 2007 | B2 |
7176888 | Marvit et al. | Feb 2007 | B2 |
7180500 | Marvit et al. | Feb 2007 | B2 |
7180501 | Marvit et al. | Feb 2007 | B2 |
7180502 | Marvit et al. | Feb 2007 | B2 |
7184020 | Matsui | Feb 2007 | B2 |
7225101 | Usuda et al. | May 2007 | B2 |
7233316 | Smith et al. | Jun 2007 | B2 |
7254775 | Geaghan et al. | Aug 2007 | B2 |
7256767 | Wong et al. | Aug 2007 | B2 |
7271795 | Bradski | Sep 2007 | B2 |
7275994 | Eck et al. | Oct 2007 | B2 |
7280096 | Marvit et al. | Oct 2007 | B2 |
7285051 | Eguchi et al. | Oct 2007 | B2 |
7289102 | Hinckley et al. | Oct 2007 | B2 |
7295191 | Kraus et al. | Nov 2007 | B2 |
7301526 | Marvit et al. | Nov 2007 | B2 |
7301527 | Marvit | Nov 2007 | B2 |
7301529 | Marvit et al. | Nov 2007 | B2 |
7321342 | Nagae | Jan 2008 | B2 |
7333087 | Soh et al. | Feb 2008 | B2 |
RE40153 | Westerman et al. | Mar 2008 | E |
7339580 | Westerman et al. | Mar 2008 | B2 |
7352358 | Zalewski et al. | Apr 2008 | B2 |
7352359 | Zalewski et al. | Apr 2008 | B2 |
7365735 | Reinhardt et al. | Apr 2008 | B2 |
7365736 | Marvit et al. | Apr 2008 | B2 |
7365737 | Marvit et al. | Apr 2008 | B2 |
D568882 | Ashida et al. | May 2008 | S |
7376388 | Ortiz et al. | May 2008 | B2 |
7389591 | Jaiswal et al. | Jun 2008 | B2 |
7391409 | Zalewski et al. | Jun 2008 | B2 |
7403220 | MacIntosh et al. | Jul 2008 | B2 |
7431216 | Weinans | Oct 2008 | B2 |
7446731 | Yoon | Nov 2008 | B2 |
7461356 | Mitsutake | Dec 2008 | B2 |
7479948 | Kim et al. | Jan 2009 | B2 |
7479949 | Jobs et al. | Jan 2009 | B2 |
7510477 | Argentar | Mar 2009 | B2 |
7518503 | Peele | Apr 2009 | B2 |
7519468 | Orr et al. | Apr 2009 | B2 |
7522151 | Arakawa et al. | Apr 2009 | B2 |
7540011 | Wixson et al. | May 2009 | B2 |
7552403 | Wilson | Jun 2009 | B2 |
7570275 | Idesawa et al. | Aug 2009 | B2 |
D599352 | Takamoto et al. | Sep 2009 | S |
7607111 | Vaananen et al. | Oct 2009 | B2 |
7614008 | Ording | Nov 2009 | B2 |
7619618 | Westerman et al. | Nov 2009 | B2 |
7626598 | Manchester | Dec 2009 | B2 |
7647614 | Krikorian et al. | Jan 2010 | B2 |
7656394 | Westerman et al. | Feb 2010 | B2 |
7663607 | Hotelling et al. | Feb 2010 | B2 |
7667707 | Margulis | Feb 2010 | B1 |
7692628 | Smith et al. | Apr 2010 | B2 |
7696980 | Piot et al. | Apr 2010 | B1 |
7699704 | Suzuki et al. | Apr 2010 | B2 |
7705830 | Westerman et al. | Apr 2010 | B2 |
7710396 | Smith et al. | May 2010 | B2 |
7719523 | Hillis | May 2010 | B2 |
7721231 | Wilson | May 2010 | B2 |
7730402 | Song | Jun 2010 | B2 |
7736230 | Argentar | Jun 2010 | B2 |
7762891 | Miyamoto et al. | Jul 2010 | B2 |
D620939 | Suetake et al. | Aug 2010 | S |
7782297 | Zalewski et al. | Aug 2010 | B2 |
7791808 | French et al. | Sep 2010 | B2 |
7827698 | Jaiswal et al. | Nov 2010 | B2 |
D636773 | Lin | Apr 2011 | S |
7934995 | Suzuki | May 2011 | B2 |
D641022 | Dodo et al. | Jul 2011 | S |
8038533 | Tsuchiyama et al. | Oct 2011 | B2 |
8105169 | Ogasawara et al. | Jan 2012 | B2 |
D666250 | Fulghum et al. | Aug 2012 | S |
8246460 | Kitahara | Aug 2012 | B2 |
8253649 | Imai | Aug 2012 | B2 |
8256730 | Tseng | Sep 2012 | B2 |
8306768 | Yamada et al. | Nov 2012 | B2 |
8317615 | Takeda et al. | Nov 2012 | B2 |
8337308 | Ito et al. | Dec 2012 | B2 |
8339364 | Takeda et al. | Dec 2012 | B2 |
8529352 | Mae et al. | Sep 2013 | B2 |
8531571 | Cote | Sep 2013 | B1 |
8567599 | Beatty et al. | Oct 2013 | B2 |
8613672 | Mae et al. | Dec 2013 | B2 |
8684842 | Takeda et al. | Apr 2014 | B2 |
8690675 | Ito et al. | Apr 2014 | B2 |
8702514 | Ashida et al. | Apr 2014 | B2 |
8804326 | Ashida et al. | Aug 2014 | B2 |
8814680 | Ashida et al. | Aug 2014 | B2 |
8814686 | Takeda et al. | Aug 2014 | B2 |
8827818 | Ashida et al. | Sep 2014 | B2 |
8845426 | Ohta et al. | Sep 2014 | B2 |
8896534 | Takeda et al. | Nov 2014 | B2 |
8913009 | Takeda et al. | Dec 2014 | B2 |
8956209 | Nishida et al. | Feb 2015 | B2 |
8961305 | Takeda et al. | Feb 2015 | B2 |
9132347 | Nishida et al. | Sep 2015 | B2 |
9199168 | Ito et al. | Dec 2015 | B2 |
20010019363 | Katta et al. | Sep 2001 | A1 |
20020103026 | Himoto et al. | Aug 2002 | A1 |
20020103610 | Bachmann et al. | Aug 2002 | A1 |
20020107071 | Himoto et al. | Aug 2002 | A1 |
20020122068 | Tsuruoka | Sep 2002 | A1 |
20020165028 | Miyamoto et al. | Nov 2002 | A1 |
20030027517 | Callway et al. | Feb 2003 | A1 |
20030207704 | Takahashi et al. | Nov 2003 | A1 |
20030216179 | Suzuki et al. | Nov 2003 | A1 |
20040023719 | Hussaini et al. | Feb 2004 | A1 |
20040092309 | Suzuki | May 2004 | A1 |
20040229687 | Miyamoto et al. | Nov 2004 | A1 |
20040266529 | Chatani | Dec 2004 | A1 |
20050176502 | Nishimura et al. | Aug 2005 | A1 |
20050181756 | Lin | Aug 2005 | A1 |
20050253806 | Liberty et al. | Nov 2005 | A1 |
20060012564 | Shiozawa et al. | Jan 2006 | A1 |
20060015808 | Shiozawa et al. | Jan 2006 | A1 |
20060015826 | Shiozawa et al. | Jan 2006 | A1 |
20060038914 | Hanada et al. | Feb 2006 | A1 |
20060077165 | Jang | Apr 2006 | A1 |
20060094502 | Katayama et al. | May 2006 | A1 |
20060174026 | Robinson et al. | Aug 2006 | A1 |
20060250764 | Howarth et al. | Nov 2006 | A1 |
20060252537 | Wu | Nov 2006 | A1 |
20060252541 | Zalewski et al. | Nov 2006 | A1 |
20060267928 | Kawanobe et al. | Nov 2006 | A1 |
20070021210 | Tachibana | Jan 2007 | A1 |
20070021216 | Guruparan | Jan 2007 | A1 |
20070049374 | Ikeda et al. | Mar 2007 | A1 |
20070060383 | Dohta | Mar 2007 | A1 |
20070202956 | Ogasawara et al. | Aug 2007 | A1 |
20070252901 | Yokonuma et al. | Nov 2007 | A1 |
20070265085 | Miyamoto et al. | Nov 2007 | A1 |
20080015017 | Ashida et al. | Jan 2008 | A1 |
20080024435 | Dohta | Jan 2008 | A1 |
20080030458 | Helbing et al. | Feb 2008 | A1 |
20080039202 | Sawano et al. | Feb 2008 | A1 |
20080064500 | Satsukawa et al. | Mar 2008 | A1 |
20080100995 | Ryder et al. | May 2008 | A1 |
20080150911 | Harrison | Jun 2008 | A1 |
20080220867 | Zalewski et al. | Sep 2008 | A1 |
20080300055 | Lutnick et al. | Dec 2008 | A1 |
20090082107 | Tahara et al. | Mar 2009 | A1 |
20090143140 | Kitahara | Jun 2009 | A1 |
20090170579 | Ishii et al. | Jul 2009 | A1 |
20090183193 | Miller, IV | Jul 2009 | A1 |
20090195349 | Frader-Thompson et al. | Aug 2009 | A1 |
20090219677 | Mori et al. | Sep 2009 | A1 |
20090225159 | Schneider et al. | Sep 2009 | A1 |
20090241038 | Izuno et al. | Sep 2009 | A1 |
20090254953 | Lin | Oct 2009 | A1 |
20090256809 | Minor | Oct 2009 | A1 |
20090280910 | Gagner et al. | Nov 2009 | A1 |
20090322671 | Scott et al. | Dec 2009 | A1 |
20090322679 | Sato et al. | Dec 2009 | A1 |
20100007926 | Imaizumi et al. | Jan 2010 | A1 |
20100009746 | Raymond et al. | Jan 2010 | A1 |
20100009760 | Shimamura et al. | Jan 2010 | A1 |
20100045666 | Kornmann et al. | Feb 2010 | A1 |
20100053164 | Imai | Mar 2010 | A1 |
20100083341 | Gonzalez | Apr 2010 | A1 |
20100105480 | Mikhailov et al. | Apr 2010 | A1 |
20100149095 | Hwang | Jun 2010 | A1 |
20100156824 | Paleczny et al. | Jun 2010 | A1 |
20100311501 | Hsu | Dec 2010 | A1 |
20110021274 | Sato et al. | Jan 2011 | A1 |
20110190049 | Mae et al. | Aug 2011 | A1 |
20110190050 | Mae et al. | Aug 2011 | A1 |
20110190052 | Takeda et al. | Aug 2011 | A1 |
20110190061 | Takeda et al. | Aug 2011 | A1 |
20110195785 | Ashida et al. | Aug 2011 | A1 |
20110228457 | Moon et al. | Sep 2011 | A1 |
20110285704 | Takeda et al. | Nov 2011 | A1 |
20110287842 | Yamada et al. | Nov 2011 | A1 |
20110295553 | Sato | Dec 2011 | A1 |
20120001048 | Takahashi et al. | Jan 2012 | A1 |
20120015732 | Takeda et al. | Jan 2012 | A1 |
20120026166 | Takeda et al. | Feb 2012 | A1 |
20120040759 | Ito et al. | Feb 2012 | A1 |
20120044177 | Ohta et al. | Feb 2012 | A1 |
20120046106 | Ito et al. | Feb 2012 | A1 |
20120052952 | Nishida et al. | Mar 2012 | A1 |
20120052959 | Nishida et al. | Mar 2012 | A1 |
20120062445 | Haddick et al. | Mar 2012 | A1 |
20120068927 | Poston et al. | Mar 2012 | A1 |
20120086631 | Osman et al. | Apr 2012 | A1 |
20120087069 | Fu et al. | Apr 2012 | A1 |
20120088580 | Takeda et al. | Apr 2012 | A1 |
20120106041 | Ashida et al. | May 2012 | A1 |
20120106042 | Ashida et al. | May 2012 | A1 |
20120108329 | Ashida et al. | May 2012 | A1 |
20120108340 | Ashida et al. | May 2012 | A1 |
20120119992 | Nishida et al. | May 2012 | A1 |
20120258796 | Ohta et al. | Oct 2012 | A1 |
20120270651 | Takeda et al. | Oct 2012 | A1 |
20130063350 | Takeda et al. | Mar 2013 | A1 |
20130109477 | Ito et al. | May 2013 | A1 |
20140295966 | Ashida et al. | Oct 2014 | A1 |
Number | Date | Country |
---|---|---|
2707609 | Jun 2009 | CA |
1593709 | Mar 2005 | CN |
1868244 | Nov 2006 | CN |
201572520 | Sep 2010 | CN |
202270340 | Jun 2012 | CN |
202355827 | Aug 2012 | CN |
202355828 | Aug 2012 | CN |
202355829 | Aug 2012 | CN |
0 710 017 | May 1996 | EP |
0 835 676 | Apr 1998 | EP |
1 469 382 | Oct 2004 | EP |
1723992 | Nov 2006 | EP |
2 158 947 | Mar 2010 | EP |
2 932 998 | Jan 2010 | FR |
06-285259 | Oct 1994 | JP |
07-068052 | Mar 1995 | JP |
07-088251 | Apr 1995 | JP |
08-045392 | Feb 1996 | JP |
08-095669 | Apr 1996 | JP |
09-099173 | Apr 1997 | JP |
09-294260 | Nov 1997 | JP |
10-341388 | Dec 1998 | JP |
2000-222185 | Aug 2000 | JP |
2001-034247 | Feb 2001 | JP |
2001-084131 | Mar 2001 | JP |
2001-340641 | Dec 2001 | JP |
2002-177644 | Jun 2002 | JP |
2002-248267 | Sep 2002 | JP |
2003-189484 | Jul 2003 | JP |
2003-325972 | Nov 2003 | JP |
2004-032548 | Jan 2004 | JP |
2004-329744 | Nov 2004 | JP |
3108313 | Feb 2005 | JP |
3703473 | Feb 2005 | JP |
2005-137405 | Jun 2005 | JP |
2005-269399 | Sep 2005 | JP |
3770499 | Apr 2006 | JP |
3797608 | Jul 2006 | JP |
2006-301654 | Nov 2006 | JP |
2006-350986 | Dec 2006 | JP |
2007-021065 | Feb 2007 | JP |
2007-061271 | Mar 2007 | JP |
2007-075353 | Mar 2007 | JP |
2007-075751 | Mar 2007 | JP |
2007-260409 | Oct 2007 | JP |
2007-274836 | Oct 2007 | JP |
2007-289413 | Nov 2007 | JP |
2007-310840 | Nov 2007 | JP |
2007-313354 | Dec 2007 | JP |
2008-067844 | Mar 2008 | JP |
2008-067875 | Mar 2008 | JP |
2008-264402 | Nov 2008 | JP |
2009-142511 | Jul 2009 | JP |
2009-178363 | Aug 2009 | JP |
2009-200799 | Sep 2009 | JP |
3153862 | Sep 2009 | JP |
2009-219707 | Oct 2009 | JP |
2009-230753 | Oct 2009 | JP |
2009-247763 | Oct 2009 | JP |
2009-536058 | Oct 2009 | JP |
2010-017395 | Jan 2010 | JP |
2010-017412 | Jan 2010 | JP |
2010-066899 | Mar 2010 | JP |
2010-142561 | Jul 2010 | JP |
4601925 | Dec 2010 | JP |
M278452 | Oct 2005 | TW |
419388 | Jan 2011 | TW |
2003-007117 | Jan 2003 | WO |
03083822 | Oct 2003 | WO |
2007-128949 | Nov 2007 | WO |
2007-143632 | Dec 2007 | WO |
2008-136064 | Nov 2008 | WO |
2009-038596 | Mar 2009 | WO |
2010088477 | Aug 2010 | WO |
Entry |
---|
Communication Pursuant to Article 94(3) EPC dated Apr. 29, 2016, issued in related European Patent Application No. 12 000 320.7. |
Sony HMZ-T1 with TrackIR 5 playing PC games! WoW and Skyrim Uploaded by iphwne Nov. 16, 2011 http://www.youtube.com/watch?v=5OLCFMBWT6I. |
Sony's New 3D OLED Headset/VR Goggles Uploaded by TheWaffleUniverse Jan. 8, 2011 http://www.youtube.com/watch?v=UoE5ij63EDI. |
TrackIR 5—review Uploaded by arnycracker8 Jan. 27, 2011 http://www.youtube.com/watch?v=EXMXvAuBzo4. |
Partial English-language translation of TWM278452. |
Mae et al., U.S. Appl. No. 13/017,381, filed Jan. 31, 2011—now U.S. Pat. No. 8,613,672. |
Mae et al., U.S. Appl. No. 13/017,527, filed Jan. 31, 2011—now U.S. Pat. No. 8,529,352. |
Takeda et al., U.S. Appl. No. 13/019,924, filed Feb. 2, 2011—non-final office action dated Oct. 8, 2013. |
Takeda et al., U.S. Appl. No. 13/019,928, filed Feb. 2, 2011—now U.S. Pat. No. 8,317,615. |
Takeda et al., U.S. Appl. No. 13/145,690, filed Dec. 19, 2011—allowed. |
Takeda et al., U.S. Appl. No. 13/153,106, filed Jun. 3, 2011—non-final office action dated Oct. 10, 2013. |
Ito et al., U.S. Appl. No. 13/198,251, filed Aug. 4, 2011—awaiting USPTO action. |
Ashida et al., U.S. Appl. No. 13/206,059, filed Aug. 9, 2011—awaiting USPTO action. |
Ashida et al., U.S. Appl. No. 13/206,767, filed Aug. 10, 2011—Quayle action mailed Nov. 14, 2013. |
Ashida et al., U.S. Appl. No. 13/206,914, filed Aug. 10, 2011—allowed. |
Ashida et al., U.S. Appl. No. 13/207,867, filed Aug. 11, 2011—non-final office action dated Oct. 7, 2013. |
Ito et al., U.S. Appl. No. 13/208,719, filed Aug. 12, 2011—now U.S. Pat. No. 8,337,308. |
Ohta et al., U.S. Appl. No. 13/209,756, filed Aug. 15, 2011—awaiting USPTO action. |
Nishida et al., U.S. Appl. No. 13/211,679, filed Aug. 17, 2011—final office action dated Oct. 15, 2013. |
Nishida et al., U.S. Appl. No. 13/212,648, filed Aug. 18, 2011—final office action dated Dec. 2, 2013. |
Takeda et al., U.S. Appl. No. 13/244,685, filed Sep. 26, 2011—now U.S. Pat. No. 8,339,364. |
Takeda et al., U.S. Appl. No. 13/244,710, filed Sep. 26, 2011—non-final office action dated Sep. 24, 2013. |
Ohta et al., U.S. Appl. No. 13/354,000, filed Jan. 19, 2012—non-final office action dated Nov. 8, 2013. |
Takeda et al., U.S. Appl. No. 13/541,282, filed Jul. 3, 2012—non-final office action dated Sep. 13, 2013. |
Takeda et al., U.S. Appl. No. 13/672,862, filed Nov. 9, 2012—awaiting USPTO action. |
Ito et al., U.S. Appl. No. 13/687,057, filed Nov. 28, 2012—allowed. |
Apple Support, “iPhone—Technical Specifications”, http://support.apple.com/kb/SP2, 2010. 3 pages. |
Apple Support, “iPhone—Technical Specifications”, http://support.apple.com/kb/SP2, Feb. 19, 2010, 2 pages. |
Apple Support, “iPhone—Technical Specifications”, Apple, Aug. 22, 2008, XP002673788, retrieved from the Internet: URL: http://support.apple.com/kb/SP495 [retrieved on Apr. 13, 2012]. |
IGN Staff, “PS3 Games on PSP?”, URL: http://www.ign.com/articles/2006/10/25/ps3-games-on-psp, Publication date printed on article: Oct. 2006. |
Jhrogersii, “Review: Gyro Tennis for iPhone”, iSource, Sep. 17, 2010, http://isource.com/2010/09/17/review-gyro-tennis-for-phone/, 10 pages. |
Marcusita, “What Benefits Can I Get Out of My PSP on My PS3”, URL: http://web.archive.org/web/20080824222755/http://forums.afterdawn.com/thread_view.cfm/600615, Publication date printed on article: Dec. 15, 2007. |
PersonalApplets: “Gyro Tennis App for iPhone 4 and iPod Touch 4th gen”, YouTube, Aug. 9, 2010, http://www.youtube.com/watch?v=c7PRFbqWKIs, 2 pages. |
English-language machine translation for JP 4601925. |
English-language machine translation for JP 09-294260. |
English-language machine translation for JP 2002-248267. |
English-language machine translation for JP 2004-032548. |
English-language machine translation for JP 2007-075751. |
English-language machine translation for JP 2008-264402. |
English-language machine translation for JP 2009-178363. |
English-language machine translation for JP 2009-247763. |
European Search Report issued in EP Appl. 1177775.1 dated Feb. 1, 2012. |
European Search Report issued in EP Appl. 11739553.3 dated May 10, 2012. |
Extended European Search Report issued in EP Appl. 11176479.1 dated Nov. 24, 2011. |
Extended European Search Report issued in EP Appl. 11176478.3 dated Nov. 24, 2011. |
Extended European Search Report issued in EP Appl. 11176477.5 dated Nov. 24, 2011. |
Extended European Search Report issued in EP Appl. 11176475.9 dated Nov. 24, 2011. |
Extended European Search Report issued in EP Appl. 12000320.7 dated Nov. 13, 2012. |
International Search Report for PCT/JP2011/000565 dated Mar. 8, 2011. |
International Search Report for PCT/JP2011/000566 dated Mar. 8, 2011. |
Office Action dated Mar. 16, 2012, in U.S. Appl. No. 13/019,924. |
Office Action dated Apr. 26, 2012, in U.S. Appl. No. 13/019,928. |
Office Action dated Sep. 10, 2012 in Australian Application No. 2011213765. |
Office Action dated Sep. 18, 2012 in Australian Application No. 2011213764. |
Office Action dated Oct. 16, 2012, in Australian Application No. 2011204815. |
Rob Aspin et al., “Augmenting the CAVE: An initial study into close focused, inward looking, exploration in IPT systems,” 11th IEEE International Symposium on Distributed Simulation and Real-Time Applications, pp. 217-224 (Oct. 1, 2007). |
G.W. Fitzmaurice et al., “Virtual Reality for Palmtop Computers,” ACM Transactions on Information Systems, ACM, New York, NY, vol. 11, No. 3, pp. 197-218 (Jul. 1, 1993). |
Johan Sanneblad et al., “Ubiquitous graphics,” Proceedings of the Working Conference on Advanced Visual Interfaces, AVI '06, pp. 373-377 (2006). |
D. Weidlich et al., “Virtual Reality Approaches for Immersive Design,” CIRP Annals, Elsevier BV, NL, CH, FR, vol. 56, No. 1, pp. 139-142 (Jan. 1, 2007). |
English-language machine translation of JP 2008-067875. |
Xbox 360 Controller, Wikipedia, page as revised on Feb. 2, 2010 (6 pages). |
Kenji Saeki, “Both the appearance and the function are improved! The report on the evolved Nintendo DSi.”, [online], Dec. 1, 2008, Impress Watch Co., Ltd., Game Watch, [searched on Jul. 23, 2014]. Internet <URL: http://game.watch.impress.co.jp/docs/20081101/dsi1.htm> and partial English-language translation. |
English-language machine translation of http://game.watch.impress.co.jp/docs/20081101/dsi1.htm [retrieved on Oct. 5, 2014]. |
Ascension Technology Corporation, Flock of Birds, Real-Time Motion Tracking, retrieved from Internet on Dec. 30, 2003, www.ascension-tech.com, 3 pages. |
English-language machine translation for JP 2001-034247. |
English-language machine translation for JP 2010-066899. |
English-language machine translation for JP 07-068052. |
English-language machine translation for JP 07-088251. |
English-language machine translation for JP 08-045392. |
English-language machine translation for JP 08-095669. |
English-language machine translation for JP 2002-177644. |
English-language machine translation for JP 2007-260409. |
English-language machine translation for JP 2007-313354. |
English-language machine translation for JP 2009-200799. |
English-language machine translation for JP 09-099173. |
English-language machine translation for JP 2007-310840. |
English-language machine translation for JP 2001-340641. |
English-language machine translation for JP 2009-230753. |
English-language machine translation for JP 3108313U. |
English-language machine translation for JP 2007-274836. |
English-language machine translation for JP 2003-189484. |
English-language machine translation of JP 2009-219707. |
English-language machine translation for CN 201572520U. |
English-language machine translation of JP 2006-301654. |
English-language machine translation of JP 2005-137405. |
Takeda et al., U.S. Appl. No. 13/019,924, filed Feb. 2, 2011—now U.S. Pat. No. 8,961,305. |
Takeda et al., U.S. Appl. No. 13/145,690, filed Dec. 19, 2011—now U.S. Pat. No. 8,684,842. |
Takeda et al., U.S. Appl. No. 13/153,106, filed Jun. 3, 2011—now U.S. Pat. No. 8,913,009. |
Ito et al., U.S. Appl. No. 13/198,251, filed Aug. 4, 2011—now U.S. Pat. No. 9,199,168. |
Ashida et al., U.S. Appl. No. 13/206,059, filed Aug. 9, 2011—now U.S. Pat. No. 8,827,818. |
Ashida et al., U.S. Appl. No. 13/206,767, filed Aug. 10, 2011—now U.S. Pat. No. 8,814,680. |
Ashida et al., U.S. Appl. No. 13/206,914, filed Aug. 10, 2011—now U.S. Pat. No. 8,702,514. |
Ashida et al., U.S. Appl. No. 13/207,867, filed Aug. 11, 2011—now U.S. Pat. No. 8,804,326. |
Ohta et al., U.S. Appl. No. 13/209,756, filed Aug. 15, 2011—response to office action filed Oct. 23, 2015. |
Nishida et al., U.S. Appl. No. 13/211,679, filed Aug. 17, 2011—now U.S. Pat. No. 8,956,209. |
Nishida et al., U.S. Appl. No. 13/212,648, filed Aug. 18, 2011—now U.S. Pat. No. 9,132,347. |
Takeda et al., U.S. Appl. No. 13/244,710, filed Sep. 26, 2011—response to office action filed Sep. 4, 2015. |
Ohta et al., U.S. Appl. No. 13/354,000, filed Jan. 19, 2012—now U.S. Pat. No. 8,845,426. |
Takeda et al., U.S. Appl. No. 13/541,282, filed Jul. 3, 2012—now U.S. Pat. No. 8,814,686. |
Takeda et al., U.S. Appl. No. 13/672,862, filed Nov. 9, 2012—now U.S. Pat. No. 8,896,534. |
Ito et al., U.S. Appl. No. 13/687,057, filed Nov. 28, 2012—now U.S. Pat. No. 8,690,675. |
Ashida et al., U.S. Appl. No. 14/302,248, filed Jun. 11, 2014—QPIDS filed Jan. 18, 2016. |
Ashida et al., U.S. Appl. No. 14/983,173, filed Dec. 29, 2015—awaiting USPTO action. |
Summons to attend oral proceedings in counterpart EP application No. 12000320.7 (Dec. 22, 2016). |
Number | Date | Country |
---|---|---|
20150062122 A1 | Mar 2015 | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 13153106 | Jun 2011 | US |
Child | 13244685 | | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 13153106 | Jun 2011 | US |
Child | 14537654 | | US |
Parent | 13672862 | Nov 2012 | US |
Child | 13019924 | | US |
Parent | 13244685 | Sep 2011 | US |
Child | 13672862 | | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 13019924 | Feb 2011 | US |
Child | 13153106 | | US |