Position-dependent gaming, 3-D controller, and handheld as a remote

Information

  • Patent Grant
  • Patent Number
    10,981,055
  • Date Filed
    Friday, April 5, 2019
  • Date Issued
    Tuesday, April 20, 2021
Abstract
Methods and systems for using a position of a mobile device with an integrated display as an input to a video game or other presentation are presented. Embodiments include rendering an avatar on a mobile device such that it appears to overlay a competing user in the real world. Using the mobile device's position, view direction, and the other user's mobile device position, an avatar (or vehicle, etc.) is depicted at an apparently inertially stabilized location of the other user's mobile device or body. Some embodiments may estimate the other user's head and body positions and angles and reflect them in the avatar's gestures.
Description
BACKGROUND

Embodiments of the present invention relate to video display and video game entertainment devices in general and, in particular, to the rendering of avatars, vehicles, game pieces, etc. on a user's mobile device based on its look angle and/or position with respect to another user's mobile device and/or a fixed video display.


Video games are typically played by users sitting in front of a video screen. Multi-player video games are often played by users sitting in front of a common, shared video screen. The shared video screen is sometimes a large television that is connected with a video game console, such as a Sony PlayStation® 3. Wired or wireless game controllers serve as input devices to send commands from the users to the console. In some instances, data is sent from the console to the controllers to, for example, switch on and off lights on a controller, give tactile signals (e.g. force feedback) to the user, calibrate the controllers, etc.


Networked multi-player games are often played by users sitting in front of their own, personal video screens. These video games are often played from a personal computer (PC) or a video game console using a keyboard or game controllers described above. Some networked multi-player games are played from a portable handheld, smart phone, or other mobile device with its own embedded display, such as a Sony PlayStation Portable® (PSP). The display shares the same plastic housing with buttons, joysticks, rollerballs, trigger switches, and/or other input components. Some displays that are touch or stylus-sensitive also serve as input devices in addition to or in conjunction with physical buttons, etc.


To play a handheld game on a mobile device with an integrated display, a user sometimes stares down into his screen without moving. Some players attempt to stay as motionless as possible, avoiding jarring by others around them, in order to concentrate and maintain hand-eye coordination to correctly select inputs in response to the game. This head-down, motionless pose can make for a solitary experience, even when a user is playing against another human opponent. Even if the opposing, or cooperating, players are seated next to each other, physical interaction between the players can be minimal because they look with their heads down at their screens instead of toward each other. This heads-down posture can also result in motion sickness when the user is in a moving vehicle such as an automobile.


There may, therefore, be a need in the art to allow players of single-player games to better interact with their physical surroundings and players of multi-player games to better interact with one another while playing against each other.


BRIEF SUMMARY

Methods, systems, and devices are presented for augmenting video using a relative distance and direction of a mobile display from a fixed display. Movement of the mobile display can be used as an input to a video game and/or to help render graphics associated with the video game. For example, a user driving a video game jeep through a jungle may have a view out the front windshield of the jeep from a fixed display and be able to slew his mobile device up and around to look at things above and behind him in the virtual jungle.


Methods, systems, and devices are described for displaying augmented video on a display integrated in a mobile device held or worn by a first user based on the relative position of another user's mobile device and view direction of the first user's mobile device. In some embodiments, a user can hold up his device in the direction of another user and see an avatar of the other user on the display apparently at the same position in space as the other user. This can give the appearance that the user's display is simply a transparent window with the exception that his opponent's physical body is overlaid with the graphical body of an avatar.


In some embodiments, the mobile devices can be calibrated so that the positions of the other user's head, body, etc. are estimated from the orientation and motion of his device, allowing the avatar's head, body, etc. to appear at the same positions as the other user's. In some embodiments, face and motion tracking of a user's head, body, etc. can be used to measure the location of the user. In other embodiments, the mobile devices are glasses, allowing the opposing player's real-world head position to be tracked more accurately.


Some embodiments include a system for augmenting video, comprising a video source configured to provide video content to a video display and a mobile device with an integrated video camera and display. The mobile device is configured to track a relative distance and direction of the video display using the video camera, determine a position coordinate of the mobile device using the tracked relative distance and direction, and render, on the integrated display, an object in a position based on the determined position coordinate of the mobile device.


Some embodiments include a method for augmenting video, comprising receiving a first position coordinate corresponding to a first user, the first position coordinate relative to a first video display, receiving a first view direction corresponding to the first user, the first view direction relative to the first video display, and receiving a second position coordinate corresponding to a second user, the second position coordinate relative to a second video display. The method further includes determining a direction and range from the first position coordinate to the second position coordinate and rendering, on an integrated display of a first mobile device, an object based on the determined direction and range from the first position coordinate to the second position coordinate and based on the received first view direction.


Other embodiments relate to machine-readable tangible storage media and computer systems which store or execute instructions for the methods described above.


Some embodiments include a system for augmenting video, comprising a first mobile device having a display, a second mobile device, means for determining a relative direction and range from the first mobile device to the second mobile device, and means for determining a view direction of the first mobile device. The first mobile device is configured to render on its display an avatar or vehicle from a perspective based on the relative direction and range to the second mobile device and view direction of the first mobile device.





BRIEF DESCRIPTION OF THE DRAWINGS

A further understanding of the nature and advantages of the present invention may be realized by reference to the following drawings. In the appended figures, similar components or features may have the same reference label.



FIG. 1 illustrates a first user holding a mobile device at a first position in space relative to a display in accordance with an embodiment.



FIG. 2 illustrates a second user holding a mobile device at a second position in space relative to a display in accordance with an embodiment.



FIG. 3 illustrates a relative direction and range from the user's device of FIG. 1 to the user's device of FIG. 2 in accordance with an embodiment.



FIG. 4 illustrates an avatar displayed as if it were co-located with a second user's mobile device from a vantage point of a first user in accordance with an embodiment.



FIG. 5 illustrates a vehicle avatar displayed as if it were co-located with a second user's mobile device from a vantage point of a first user in accordance with an embodiment.



FIG. 6 illustrates an avatar displayed as if it were co-located with a second user's body from a vantage point of a first user in accordance with an embodiment.



FIG. 7 illustrates a virtual relative direction and range from a first user to a second user in accordance with an embodiment.



FIG. 8 illustrates a screen view of an avatar in the virtual direction and range from the first user to the second user of FIG. 7 in accordance with an embodiment.



FIG. 9 illustrates an off-board camera system for tracking the position of a mobile device in accordance with an embodiment.



FIG. 10 illustrates an on-board camera system for tracking the position of a mobile device in accordance with an embodiment.



FIG. 11 is a flowchart illustrating a process in accordance with an embodiment.



FIG. 12 is an example computer system suitable for use with embodiments of the invention.





The figures will now be used to illustrate different embodiments in accordance with the invention. The figures are specific examples of embodiments and should not be interpreted as limiting; rather, they depict exemplary forms and procedures.


DETAILED DESCRIPTION

Generally, methods and systems are described for multi-player video games and other interactive video presentations for which augmented video is presented on a user's mobile device display based on the relative position of another user. A user can hold up his device and see an avatar, vehicle, game marker, target crossbars, or other object in the place where the other user is sitting. In some embodiments, the other user's avatar on the display can move, look, etc. in the same manner as the other user's physical movements. For example, if the other user turns toward the first user, the display will show the avatar turning toward him.


In some embodiments, the users can be located in different rooms across town, but their avatars are rendered on their respective mobile device's screens as if their avatars were seated next to each other in the same room. A common reference point for each of the players can be the center of his or her fixed display.


This description provides examples only, and is not intended to limit the scope, applicability, or configuration of the invention. Rather, the ensuing description of the embodiments will provide those skilled in the art with an enabling description for implementing embodiments of the invention. Various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention.


Thus, various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, it should be appreciated that, in alternative embodiments, the methods may be performed in an order different from that described, and that various steps may be added, omitted, or combined. Also, features described with respect to certain embodiments may be combined in various other embodiments. Different aspects and elements of the embodiments may be combined in a similar manner.


It should also be appreciated that the following systems, methods, and software may individually or collectively be components of a larger system, wherein other procedures may take precedence over or otherwise modify their application. Also, a number of steps may be required before, after, or concurrently with the following embodiments.



FIG. 1 illustrates a first user holding a mobile device at a first position in space relative to a fixed display. A reference frame is defined with origin 104 at the center of fixed display 102 and a polar/cylindrical angle of zero projecting perpendicularly from the plane of the screen. Player 106 (P1) holds mobile device 108 in front of him. The point at the top middle rear of mobile device 108 is reference point 110. The position of reference point 110 is measured, and the coordinates representing the position of reference point 110 in space, (r1, θ1, y1), are stored. For clarity, the figure does not show the third, vertical dimension, y1. Angle γ1 is P1's view direction or look angle with respect to the fixed frame of reference of the large, fixed display.


A “coordinate” is any of a set of numbers, characters, or symbols used in specifying the location of a point on a one-dimensional line, on a two-dimensional surface, or in three-dimensional space. Coordinates may be orthogonal, such as Cartesian, polar and/or cylindrical, spherical, or non-orthogonal such as those describing a location on the surface of a sphere.
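
By way of non-limiting illustration, the cylindrical coordinates (r, θ, y) used in the figures can be converted to and from Cartesian coordinates. The following minimal Python sketch assumes θ is measured from the display's outward normal and y is the vertical axis; the function names and axis convention are illustrative only.

    import math

    def cylindrical_to_cartesian(r, theta, y):
        # theta measured from the display's outward normal (assumed convention)
        x = r * math.sin(theta)   # lateral offset from the display center
        z = r * math.cos(theta)   # distance out from the plane of the screen
        return x, z, y

    def cartesian_to_cylindrical(x, z, y):
        return math.hypot(x, z), math.atan2(x, z), y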


A “view direction” or “look angle” is a direction in space toward which a user's face is pointed or a corresponding user's mobile device is pointed. A view direction can include azimuth and elevation angles relative to the user. A view direction can include a bearing direction in relation to a fixed point.


A mobile device can include a handheld device, such as a Sony PlayStation Portable®, a user-worn device, such as glasses with an integrated display, or other electronic devices.


Using the coordinates representing the position and view direction, the mobile display can be used as a secondary display to ‘look around’ the virtual environment. For example, a player driving a virtual tank can slew his mobile device to the left to see enemy troops on the left side, outside the view of the fixed display. As another example, the player can use his mobile device display to zoom in on the horizon shown on the fixed display. The mobile device can act as virtual binoculars to better resolve figures in the distance that might be a threat.
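
A minimal sketch of this ‘look around’ behavior follows, assuming the device pose has already been measured as described with respect to FIG. 1; the zoom model, base field of view, and field names are illustrative assumptions.

    def secondary_view_camera(device_position, look_angle, zoom=1.0, base_fov_deg=60.0):
        # Place a virtual camera at the device's measured position; slewing the
        # device (changing look_angle, i.e. gamma) pans the secondary view, and
        # zoom > 1 narrows the field of view for the 'virtual binoculars' effect.
        return {
            "position": device_position,            # (r, theta, y) relative to the fixed display
            "yaw": look_angle,                      # radians, relative to the display normal
            "fov_deg": base_fov_deg / max(zoom, 1.0),
        }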



FIG. 2 illustrates a second user holding a mobile device at a second position in space relative to a fixed display. The fixed display may or may not be the same fixed display as in FIG. 1. A reference frame is defined with origin 204 at the center of fixed display 202 and a polar/cylindrical angle of zero projecting perpendicularly from the plane of the screen. Player 206 (P2) holds mobile device 208 in front of her. The point at the top middle rear of mobile device 208 is reference point 210. The position of reference point 210 is measured, and coordinates representing the position of reference point 210 in space, (r2, θ2, y2), are stored. Angle γ2 is P2's view direction with respect to the fixed frame of reference.



FIG. 3 illustrates a relative direction and range from the user's device of FIG. 1 (mobile device 108 of player 106) to the user's device of FIG. 2 (mobile device 208 of player 206). Vector subtracting (r1, θ1, y1) from (r2, θ2, y2) results in (r3, θ3, y3), the direction and range from reference point 110 to reference point 210 with respect to the fixed reference frame from origin 104 (or 204). The distance from reference point 110 to reference point 210 can be calculated as the positive square root of (r3² + y3²). In this embodiment, y3 is zero, and thus the relative distance is simply r3. Subtracting γ1 from θ3 results in first view direction β1, and subtracting γ2 from θ3 results in second view direction β2.
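
By way of non-limiting example, the vector subtraction and derived angles described above could be computed as in the following Python sketch, which converts through Cartesian coordinates internally; the function names are illustrative.

    import math

    def relative_offset(p1, p2):
        # p1, p2: (r, theta, y) of reference points 110 and 210 in the fixed frame.
        # Returns (r3, theta3, y3): in-plane range, in-plane bearing, height difference.
        r1, th1, y1 = p1
        r2, th2, y2 = p2
        dx = r2 * math.sin(th2) - r1 * math.sin(th1)
        dz = r2 * math.cos(th2) - r1 * math.cos(th1)
        return math.hypot(dx, dz), math.atan2(dx, dz), y2 - y1

    def slant_range(r3, y3):
        # straight-line distance between the two reference points
        return math.sqrt(r3 ** 2 + y3 ** 2)

    # relative view directions: beta1 = theta3 - gamma1, beta2 = theta3 - gamma2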


Although polar/cylindrical coordinates are used here in the examples, other coordinate systems can be used, such as Cartesian and spherical coordinate systems.



FIG. 4 illustrates an avatar displayed as if it were co-located with a second user's mobile device from a vantage point of the first user. Using the relative direction (e.g. β1) and range (e.g. r3) from the first user's handheld device to the second user's handheld device, avatar 414 is rendered on integrated display 112 of mobile device 108. From the vantage point of user 106 (not shown in this figure), avatar 414 in the virtual world appears to overlay the mobile device (occluded in this figure) of user 206 in the real world. Other objects, such as background clouds 416, are rendered with avatar 414. The view from user 206's vantage point is the view on fixed display 102. An overhead view of the positions of the users, mobile devices, and fixed display corresponds to that shown in FIG. 3.


If mobile device 108 is slewed to the right, then avatar 414 disappears off the left side of the display. If mobile device 108 is slewed to the left, then avatar 414 disappears off the right side of the display. In some embodiments, it can appear as if the embedded display is transparent and the view of the room in the background is the same, except for the other player being overlaid with graphics depicting an avatar.
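
A minimal sketch of one way the relative bearing could be mapped to a horizontal screen position is given below, using a simple pinhole-style projection; the field of view, sign convention, and screen width are assumptions for illustration.

    import math

    def avatar_screen_x(beta1, half_fov=math.radians(30), screen_width_px=960):
        # beta1: bearing of the other device relative to this device's look direction.
        # Returns the pixel column at which to draw the avatar, or None when the
        # avatar has slewed off the left or right edge of the display.
        if abs(beta1) > half_fov:
            return None
        x_norm = math.tan(beta1) / math.tan(half_fov)   # -1 (left edge) .. +1 (right edge)
        return (x_norm + 1.0) / 2.0 * screen_width_px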



FIG. 5 illustrates vehicle 514 displayed as if it were co-located with a second user's mobile device from a vantage point of the first user. From the vantage point of user 106 (not shown in this figure), vehicle 514 in the virtual world on display 112 of mobile device 108 appears to overlay the mobile device (occluded in this figure) of user 506 in the real world.



FIG. 6 illustrates avatar 614 displayed as if it were co-located with second user's body 206 from a vantage point of the first user 106 (not shown in this figure). Background object 616, a pyramid in the desert sand, is displayed on integrated display 112 of mobile device 108. Using predefined models of how a majority of users hold their mobile devices, an estimate of where the holder's head, body, etc. is located in relation to a reference point on the mobile device can be used to render an avatar so that it appears to better portray the actual position of the user. For example, the bridge of a user's nose may be estimated to be 14 inches away along a line extending perpendicularly from the center of the integrated display or a user's controller that does not have an integrated display. This offset can be used to shift the avatar's head to this position. The rest of the avatar's body can be filled downward to the ground. In some embodiments, facial and motion tracking can be used to track the user's body directly.
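
A non-limiting sketch of such a predefined model follows, using the 14-inch offset from the example above; the vector representation and names are illustrative.

    NOSE_OFFSET_M = 0.356   # roughly 14 inches, per the example above

    def estimate_head_position(device_position, display_normal):
        # device_position: (x, y, z) of the reference point on the mobile device.
        # display_normal: unit vector perpendicular to the integrated display,
        # pointing back toward the holder (an assumed convention).
        return tuple(p + NOSE_OFFSET_M * n
                     for p, n in zip(device_position, display_normal))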



FIG. 7 illustrates a virtual relative direction and range from a first user to a second user. Second user 206 can be made to appear as a mirror or 180° rotated image through the center of fixed display 102. Using vector addition and subtraction, a virtual direction β4 and virtual range r4 can be calculated such that opposing player 206 appears to be across from first user 106. If the opposing player steps forward to move toward the left of his screen (see figure), the display on player 106's mobile device 108 will show player 206's avatar move right.
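
A minimal sketch of the mirrored placement, assuming the cylindrical coordinates of FIGS. 1-3: rotating 180° about the display center amounts to adding π to the polar angle.

    import math

    def mirrored_position(p2):
        # p2: measured (r, theta, y) of the opposing player's device.
        # Returns the virtual coordinate used when rendering that player's avatar,
        # rotated 180 degrees through the center of the fixed display.
        r2, th2, y2 = p2
        return r2, th2 + math.pi, y2

The virtual direction β4 and virtual range r4 can then be obtained by applying the same vector subtraction as in FIG. 3 to the first player's measured position and this mirrored coordinate.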


This mirrored movement can be useful to simulate games in which players play across from one another, such as tennis, handball, chess, etc. This can be used by players in the same room with the same, central fixed display or by players in different rooms with their own displays.



FIG. 8 illustrates a screen view of an avatar in the virtual direction and range from the first user to the second user as shown in FIG. 7. As player 206 physically moves in front of his fixed display, player 106 (not shown in this figure) sees avatar 814 representing player 206 move across the integrated display of mobile device 108. If mobile device 108 is moved, the tennis court, avatar 814, and other elements of the view move oppositely so that it appears that the virtual world is inertially stabilized with respect to the real, physical world.



FIG. 9 illustrates an off-board camera system for tracking the position of a mobile device. Video camera 920 sits in a convenient, fixed position atop fixed display 102 and tracks mobile device 908 using infrared, radio frequency, visible light, or other suitable methods. For example, video camera 920 may track a piece of reflective tape on mobile device 908.


Camera 920 can also be enabled to track faces as is known in the art. Facial tracking technology can work to directly determine the position and view direction of a player's head, eyes, nose, etc. A camera on mobile device 908 can also be used to track the player's head.


Video game console 922 connects to camera 920 and fixed display 102. Video game console 922 also connects wirelessly with mobile device 908 through wireless port 924 and wireless link 926. Wireless link 926 can be radio frequency, infrared, etc. The camera may output the position of tracked objects to console 922, or the camera may output raw video that console 922 processes to determine the position, velocity, etc. of tracked objects.


Console 922 can send the coordinates of the tracked objects to mobile device 908 along with the determined view direction of mobile device 908. Mobile device 908 can then use the coordinates and view direction to render an avatar in the correct position on its screen.
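
One non-limiting way such an update could be structured for transmission over the wireless link is sketched below; the message format and field names are illustrative assumptions, as no particular format is required.

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class TrackingUpdate:
        own_position: tuple        # (r, theta, y) of the receiving device, display frame
        own_view_direction: float  # gamma, radians
        other_position: tuple      # (r, theta, y) of the opposing device

    def encode_update(update):
        # serialize for transmission over the wireless link
        return json.dumps(asdict(update)).encode("utf-8")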


In some embodiments, wireless link 926 can be used to send remote control-like commands to the video display. For example, a cellular phone can be used to turn up or down the volume on a television.



FIG. 10 illustrates an on-board camera system for tracking the position of a mobile device. Mobile device 1008 includes video camera 1020. Video camera 1020 tracks the rectangular screen of display 102, markers on the screen, or markers off the screen, such as infrared sources as known in the art. Markers on the screen can be in the corners of the screen and be rendered at a predetermined frequency so that mobile device 1008 can positively track the screen. The markers can include bar codes, two-dimensional codes, or be modulated in time to send information from fixed display 102 to mobile device 1008. Console 922 can send the coordinates of the opposing user's mobile device to mobile device 1008 through wireless port 924 and wireless link 926, and mobile device 1008 can use those coordinates, along with its internally determined coordinates and view direction, to render an avatar in the correct position on its screen. A camera on mobile device 1008 facing the player can also be used to track the player's head.
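
By way of illustration, once the four screen corners (or corner markers) have been detected in the camera image, the device's pose relative to the fixed display could be recovered with a standard perspective-n-point solve. The sketch below uses OpenCV's solvePnP and assumes a calibrated camera and known physical screen dimensions; marker detection itself is not shown.

    import numpy as np
    import cv2

    def device_pose_from_screen_corners(corner_pixels, screen_w_m, screen_h_m,
                                        camera_matrix, dist_coeffs):
        # corner_pixels: 4x2 array of detected corners, ordered top-left,
        # top-right, bottom-right, bottom-left.  screen_w_m / screen_h_m are
        # the physical screen dimensions in meters.
        half_w, half_h = screen_w_m / 2.0, screen_h_m / 2.0
        object_points = np.array([[-half_w,  half_h, 0.0],    # screen corners in the
                                  [ half_w,  half_h, 0.0],    # display's own frame,
                                  [ half_w, -half_h, 0.0],    # origin at the screen center
                                  [-half_w, -half_h, 0.0]], dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(object_points,
                                      np.asarray(corner_pixels, dtype=np.float64),
                                      camera_matrix, dist_coeffs)
        return ok, rvec, tvec   # tvec encodes the display's range and direction in the camera frame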


The position of mobile device 1008 can be used as an input to a video game. For example, a user can pace around her living room floor, marking locations where she will place her battleships for a virtual board game of Battleship®. In another example, a virtual game of ‘Marco Polo’ can be played in which players attempt to guess the location of other players without the use of their eyes. A player could move around his TV room in order to simulate his virtual position on a field or in a pool.


In other embodiments, the mobile device can automatically determine its position and view direction using a Global Positioning System (GPS) receiver, accelerometer-based inertial system, mechanical or solid-state gyroscope, electronic magnetic compass, radio frequency triangulation, and/or other methods known in the art.
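
As a rough, non-limiting sketch, a gyroscope and magnetic compass could be blended with a simple complementary filter to maintain the view-direction estimate; angle wrap-around handling is omitted and the blend factor is an illustrative assumption.

    def fuse_heading(prev_heading, gyro_rate, dt, compass_heading, alpha=0.98):
        # Integrate the gyro for short-term smoothness, pull toward the compass
        # for long-term stability.  Angles in radians; alpha near 1 trusts the gyro.
        predicted = prev_heading + gyro_rate * dt
        return alpha * predicted + (1.0 - alpha) * compass_heading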



FIG. 11 shows an example flowchart illustrating process 1100 in accordance with one embodiment. This process can be automated in a computer or other machine and can be coded in software or firmware, or hard-coded as machine-readable instructions, and executed by one or more processors. In operation 1102, a first position coordinate corresponding to a first user is received, the first position coordinate being relative to a first video display. In operation 1104, a first view direction corresponding to the first user is received, the view direction being relative to the first video display. In operation 1106, a second position coordinate corresponding to a second user is received, the second position coordinate being relative to a second video display. In operation 1108, a direction and range from the first position coordinate to the second position coordinate are determined. In operation 1110, an object is rendered, on an integrated display of a first mobile device, based on the determined direction and range from the first position coordinate to the second position coordinate and based on the received first view direction. These operations may be performed in the sequence given above or in different orders as applicable.
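
A non-limiting Python sketch of process 1100 follows; it assumes both position coordinates have been mapped into a common reference frame (for example, by treating each fixed display's center as a shared origin, as described above), and the rendering callback and argument names are illustrative.

    import math

    def process_1100(first_pos, first_view_dir, second_pos, draw_object):
        # Operations 1102-1106: the three inputs are received as arguments.
        # first_pos, second_pos: (r, theta, y) relative to the respective displays,
        # assumed to share a common frame; first_view_dir: gamma1 in radians;
        # draw_object: rendering callback on the first device's integrated display.
        r1, th1, y1 = first_pos
        r2, th2, y2 = second_pos
        # Operation 1108: direction and range from the first to the second coordinate.
        dx = r2 * math.sin(th2) - r1 * math.sin(th1)
        dz = r2 * math.cos(th2) - r1 * math.cos(th1)
        theta3 = math.atan2(dx, dz)
        rng = math.sqrt(dx * dx + dz * dz + (y2 - y1) ** 2)
        # Operation 1110: render based on that direction/range and the first view direction.
        draw_object(bearing=theta3 - first_view_dir, range_=rng)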



FIG. 12 illustrates an example of a hardware system suitable for implementing a device in accordance with various embodiments. This block diagram illustrates a computer system 1200, such as a personal computer, video game console and associated display (e.g., video game console 922 and fixed display 102 of FIG. 9), mobile device (e.g., mobile device 108 of FIG. 1), personal digital assistant, or other digital device, suitable for practicing embodiments of the invention. Computer system 1200 includes a central processing unit (CPU) 1205 for running software applications and optionally an operating system. CPU 1205 may be made up of one or more homogeneous or heterogeneous processing cores. Memory 1210 stores applications and data for use by the CPU 1205. Storage 1215 provides non-volatile storage and other computer readable media for applications and data and may include fixed disk drives, removable disk drives, flash memory devices, and CD-ROM, DVD-ROM, Blu-ray, HD-DVD, or other optical storage devices, as well as signal transmission and storage media. User input devices 1220 communicate user inputs from one or more users to the computer system 1200, examples of which may include keyboards, mice, joysticks, touch pads, touch screens, still or video cameras, and/or microphones. Network interface 1225 allows computer system 1200 to communicate with other computer systems via an electronic communications network, and may include wired or wireless communication over local area networks and wide area networks such as the Internet. An audio processor 1230 is adapted to generate analog or digital audio output from instructions and/or data provided by the CPU 1205, memory 1210, and/or storage 1215. The components of computer system 1200, including CPU 1205, memory 1210, data storage 1215, user input devices 1220, network interface 1225, and audio processor 1230, are connected via one or more data buses 1235.


A graphics subsystem 1240 is further connected with data bus 1235 and the components of the computer system 1200. The graphics subsystem 1240 includes a graphics processing unit (GPU) 1245 and graphics memory 1250. Graphics memory 1250 includes a display memory (e.g., a frame buffer) used for storing pixel data for each pixel of an output image. Graphics memory 1250 can be integrated in the same device as GPU 1245, connected as a separate device with GPU 1245, and/or implemented within memory 1210. Pixel data can be provided to graphics memory 1250 directly from the CPU 1205. Alternatively, CPU 1205 provides the GPU 1245 with data and/or instructions defining the desired output images, from which the GPU 1245 generates the pixel data of one or more output images. The data and/or instructions defining the desired output images can be stored in memory 1210 and/or graphics memory 1250. In an embodiment, the GPU 1245 includes 3D rendering capabilities for generating pixel data for output images from instructions and data defining the geometry, lighting, shading, texturing, motion, and/or camera parameters for a scene. The GPU 1245 can further include one or more programmable execution units capable of executing shader programs.


The graphics subsystem 1240 periodically outputs pixel data for an image from graphics memory 1250 to be displayed on display device 1255. Display device 1255 can be any device capable of displaying visual information in response to a signal from the computer system 1200, including CRT, LCD, plasma, and OLED displays. Computer system 1200 can provide the display device 1255 with an analog or digital signal.


In accordance with various embodiments, CPU 1205 is one or more general-purpose microprocessors having one or more processing cores. Further embodiments can be implemented using one or more CPUs with microprocessor architectures specifically adapted for highly parallel and computationally intensive applications, such as media and interactive entertainment applications.


The components of the system 108 of FIG. 1 and system 208 of FIG. 2 may be connected via a network, which may be any combination of the following: the Internet, an IP network, an intranet, a wide-area network (“WAN”), a local-area network (“LAN”), a virtual private network (“VPN”), the Public Switched Telephone Network (“PSTN”), or any other type of network supporting data communication between devices described herein, in different embodiments. A network may include both wired and wireless connections, including optical links. Many other examples are possible and apparent to those skilled in the art in light of this disclosure. In the discussion herein, a network may or may not be noted specifically.


It should be noted that the methods, systems, and devices discussed above are intended merely to be examples. It must be stressed that various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, it should be appreciated that, in alternative embodiments, the methods may be performed in an order different from that described, and that various steps may be added, omitted, or combined. Also, features described with respect to certain embodiments may be combined in various other embodiments. Different aspects and elements of the embodiments may be combined in a similar manner. Also, it should be emphasized that technology evolves and, thus, many of the elements are examples and should not be interpreted to limit the scope of the invention.


Specific details are given in the description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the embodiments.


Also, it is noted that the embodiments may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure.


Moreover, as disclosed herein, the term “memory” or “memory unit” may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic RAM, core memory, magnetic disk storage mediums, optical storage mediums, flash memory devices, or other computer-readable mediums for storing information. The term “computer-readable medium” includes, but is not limited to, portable or fixed storage devices, optical storage devices, wireless channels, a SIM card, other smart cards, and various other mediums capable of storing, containing, or carrying instructions or data.


Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a computer-readable medium such as a storage medium. Processors may perform the necessary tasks.


Having described several embodiments, it will be recognized by those of skill in the art that various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the invention. For example, the above elements may merely be a component of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description should not be taken as limiting the scope of the invention.

Claims
  • 1. A system for augmented video, the system comprising: one or more processors; and one or more memories storing computer-readable instructions that, upon execution by the one or more processors, cause the system to: determine a distance and a direction between a first mobile device and a first display, wherein the first mobile device comprises a second display that is different from the first display; determine a first position coordinate of the first mobile device using the distance and the direction; determine a position for a displayable object associated with a second mobile device, wherein the position is determined based on the first position coordinate of the first mobile device and a second position coordinate of the second mobile device; and cause the first mobile device to display the displayable object in the position on the second display.
  • 2. The system of claim 1, wherein the first mobile device is configured to control the first display.
  • 3. The system of claim 1, wherein the first mobile device and the second mobile device are video game controllers, and wherein the first display presents first content of a virtual game environment, wherein the second display presents second content of the virtual game environment, and wherein the displayable object comprises an avatar.
  • 4. The system of claim 1, wherein the distance and the direction are determined based on a first reference point on the first mobile device and a first origin on the first display, wherein the first position coordinate of the first mobile device comprises first coordinates of the first reference point defined relative to the first origin.
  • 5. The system of claim 4, wherein the second position coordinate of the second mobile device comprises second coordinates of a second reference point on the second mobile device defined relative to a second origin of a third display.
  • 6. The system of claim 5, wherein determining the position for the displayable object comprises determining a relative distance and a relative direction between the first mobile device and the second mobile device based on the first coordinates, the second coordinates, the first origin, and the second origin.
  • 7. The system of claim 1, wherein the displayable object is displayed as a 180° rotated image through a center of the second display.
  • 8. The system of claim 1, wherein the first mobile device comprises glasses having an integrated display in at least one lens of the glasses.
  • 9. The system of claim 1, wherein the distance is determined based at least in part on a marker on the first display, and wherein the direction is determined based at least in part on a video camera of the first mobile device.
  • 10. The system of claim 1, wherein the distance and the direction are determined based at least in part on an off-board camera system of the system, wherein the off-board camera system is fixed to the first display.
  • 11. A method for augmenting video, the method implemented by a system and comprising: determining a distance and a direction between a first mobile device and a first display, wherein the first mobile device comprises a second display that is different from the first display; determining a first position coordinate of the first mobile device using the distance and the direction; determining a position for a displayable object associated with a second mobile device, wherein the position is determined based on the first position coordinate of the first mobile device and a second position coordinate of the second mobile device; and causing the first mobile device to display the displayable object in the position on the second display.
  • 12. The method of claim 11, wherein the first mobile device and the second mobile device are video game controllers, and wherein the first display presents first content of a virtual game environment, wherein the second display presents second content of the virtual game environment, and wherein the displayable object comprises an avatar.
  • 13. The method of claim 11, wherein the distance and the direction are determined based on a first reference point on the first mobile device and a first origin on the first display, wherein the first position coordinate of the first mobile device comprises first coordinates of the first reference point defined relative to the first origin.
  • 14. The method of claim 13, wherein the second position coordinate of the second mobile device comprises second coordinates of a second reference point on the second mobile device defined relative to a second origin of a third display.
  • 15. The method of claim 14, wherein determining the position for the displayable object comprises determining a relative distance and a relative direction between the first mobile device and the second mobile device based on the first coordinates, the second coordinates, the first origin, and the second origin.
  • 16. One or more non-transitory computer-readable storage media storing instructions that, upon execution by one or more processors of a system, cause the system to: determine a distance and a direction between a first mobile device and a first display, wherein the first mobile device comprises a second display that is different from the first display; determine a first position coordinate of the first mobile device using the distance and the direction; determine a position for a displayable object associated with a second mobile device, wherein the position is determined based on the first position coordinate of the first mobile device and a second position coordinate of the second mobile device; and cause the first mobile device to display the displayable object in the position on the second display.
  • 17. The one or more non-transitory computer-readable storage media of claim 16, wherein the first mobile device and the second mobile device are video game controllers, and wherein the first display presents first content of a virtual game environment, wherein the second display presents second content of the virtual game environment, and wherein the displayable object comprises an avatar.
  • 18. The one or more non-transitory computer-readable storage media of claim 16, wherein the distance and the direction are determined based on a first reference point on the first mobile device and a first origin on the first display, wherein the first position coordinate of the first mobile device comprises first coordinates of the first reference point defined relative to the first origin.
  • 19. The one or more non-transitory computer-readable storage media of claim 18, wherein the second position coordinate of the second mobile device comprises second coordinates of a second reference point on the second mobile device defined relative to a second origin of a third display.
  • 20. The one or more non-transitory computer-readable storage media of claim 19, wherein determining the position for the displayable object comprises determining a relative distance and a relative direction between the first mobile device and the second mobile device based on the first coordinates, the second coordinates, the first origin, and the second origin.
CROSS-REFERENCES TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 14/880,889, filed Oct. 12, 2015, entitled “POSITION-DEPENDENT GAMING, 3-D CONTROLLER, AND HANDHELD AS A REMOTE,” which is a continuation of U.S. application Ser. No. 12/835,671, filed Jul. 13, 2010, entitled “POSITION-DEPENDENT GAMING, 3-D CONTROLLER, AND HANDHELD AS A REMOTE,” each of which is hereby incorporated by reference in its entirety for all purposes. This application is related to U.S. application Ser. No. 14/860,239, filed Sep. 21, 2015, entitled “OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE,” which is a continuation of U.S. application Ser. No. 13/554,958, filed Jul. 20, 2012, now U.S. Pat. No. 9,143,699, entitled “OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE,” which is a continuation-in-part of U.S. patent application Ser. No. 12/835,645, filed Jul. 13, 2010, now U.S. Pat. No. 8,730,354, entitled “OVERLAY VIDEO CONTENT ON A MOBILE DEVICE,” and which claims the benefit of U.S. Provisional Application No. 61/527,048, filed Sep. 12, 2011, entitled “OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE,” each of which is hereby incorporated by reference in its entirety for all purposes.

US Referenced Citations (163)
Number Name Date Kind
4787051 Olson Nov 1988 A
4843568 Krueger et al. Jun 1989 A
4884876 Lipton et al. Dec 1989 A
4907860 Noble Mar 1990 A
5128671 Thomas, Jr. Jul 1992 A
5528265 Harrison Jun 1996 A
5805205 Songer Sep 1998 A
5821989 Lazzaro et al. Oct 1998 A
5831664 Wharton et al. Nov 1998 A
5929849 Kikinis Jul 1999 A
6157368 Fager Dec 2000 A
6175379 Uomori et al. Jan 2001 B1
6247022 Yankowski Jun 2001 B1
6375572 Masuyama et al. Apr 2002 B1
6522312 Ohshima et al. Feb 2003 B2
6615268 Philyaw et al. Sep 2003 B1
6727867 Divelbiss et al. Apr 2004 B2
6993573 Hunter Jan 2006 B2
7036083 Zenith Apr 2006 B1
7200857 Rodriguez et al. Apr 2007 B1
7209942 Hori et al. Apr 2007 B1
7398000 Green Jul 2008 B2
7427996 Yonezawa et al. Sep 2008 B2
7536706 Sezan et al. May 2009 B1
7549052 Haitsma et al. Jun 2009 B2
7581034 Polivy et al. Aug 2009 B2
7599580 King et al. Oct 2009 B2
7898504 Fischer Mar 2011 B2
8037496 Begeja et al. Oct 2011 B1
8188969 Morin et al. May 2012 B2
8251290 Bushman et al. Aug 2012 B1
8253649 Imai et al. Aug 2012 B2
8463000 Kaminski, Jr. Jun 2013 B1
8560583 Mallinson Oct 2013 B2
8644842 Arrasvuori et al. Feb 2014 B2
8730156 Weising et al. May 2014 B2
8730354 Stafford et al. May 2014 B2
8838671 Wies et al. Sep 2014 B2
8874575 Mallinson Oct 2014 B2
8907889 Sweetser et al. Dec 2014 B2
9113217 Mallinson Aug 2015 B2
9143699 Osman Sep 2015 B2
9256601 Mallinson Feb 2016 B2
9264785 Mallinson Feb 2016 B2
9473820 Mallinson Oct 2016 B2
9513700 Weising et al. Dec 2016 B2
9703369 Mullen Jul 2017 B1
9762817 Osman Sep 2017 B2
9762819 Jo Sep 2017 B2
9814977 Stafford et al. Nov 2017 B2
9832441 Osman Nov 2017 B2
9901828 Miller et al. Feb 2018 B2
10171754 Osman Jan 2019 B2
10279255 Stafford et al. May 2019 B2
20020028000 Conwell et al. Mar 2002 A1
20020059604 Papagan et al. May 2002 A1
20020078456 Hudson et al. Jun 2002 A1
20020085097 Colmenarez et al. Jul 2002 A1
20020122145 Tung Sep 2002 A1
20020140855 Hayes et al. Oct 2002 A1
20020162118 Levy et al. Oct 2002 A1
20020165028 Miyamoto et al. Nov 2002 A1
20020186676 Milley et al. Dec 2002 A1
20030028873 Lemmons Feb 2003 A1
20030093790 Logan et al. May 2003 A1
20030152366 Kanazawa et al. Aug 2003 A1
20030156144 Morita Aug 2003 A1
20030171096 Ilan et al. Sep 2003 A1
20030185541 Green Oct 2003 A1
20030212762 Barnes et al. Nov 2003 A1
20040001161 Herley Jan 2004 A1
20040210824 Shoff et al. Oct 2004 A1
20040212589 Hall et al. Oct 2004 A1
20050005308 Logan et al. Jan 2005 A1
20050024586 Teiwes et al. Feb 2005 A1
20050057807 Takagi et al. Mar 2005 A1
20050094267 Huber May 2005 A1
20050108026 Brierre et al. May 2005 A1
20050116881 Divelbiss et al. Jun 2005 A1
20050123267 Tsumagari et al. Jun 2005 A1
20050193425 Sull et al. Sep 2005 A1
20050220439 Carton et al. Oct 2005 A1
20050227674 Kopra et al. Oct 2005 A1
20050259323 Fukushima et al. Nov 2005 A1
20050262548 Shimojo et al. Nov 2005 A1
20060015908 Vermola et al. Jan 2006 A1
20060038833 Mallinson et al. Feb 2006 A1
20060038880 Starkweather et al. Feb 2006 A1
20060053472 Goto et al. Mar 2006 A1
20060064734 Ma Mar 2006 A1
20060184960 Horton et al. Aug 2006 A1
20060285772 Hull et al. Dec 2006 A1
20070106551 McGucken May 2007 A1
20070113263 Chatani May 2007 A1
20070124756 Covell et al. May 2007 A1
20070130580 Covell et al. Jun 2007 A1
20070136773 O'Neil et al. Jun 2007 A1
20070143777 Wang Jun 2007 A1
20070143778 Covell et al. Jun 2007 A1
20070162863 Buhrke et al. Jul 2007 A1
20070169115 Ko et al. Jul 2007 A1
20070248158 Vieron et al. Oct 2007 A1
20070250716 Brunk et al. Oct 2007 A1
20080062259 Lipton et al. Mar 2008 A1
20080066111 Ellis et al. Mar 2008 A1
20080084513 Brott et al. Apr 2008 A1
20080215679 Gillo et al. Sep 2008 A1
20080226119 Candelore et al. Sep 2008 A1
20080246694 Fischer Oct 2008 A1
20080267584 Green Oct 2008 A1
20080275763 Tran et al. Nov 2008 A1
20080276278 Krieger et al. Nov 2008 A1
20090019474 Robotham Jan 2009 A1
20090037975 Ishikawa et al. Feb 2009 A1
20090055383 Zalewski Feb 2009 A1
20090063277 Bernosky et al. Mar 2009 A1
20090123025 Deng et al. May 2009 A1
20090154806 Chang et al. Jun 2009 A1
20090228921 Miki et al. Sep 2009 A1
20090285444 Erol et al. Nov 2009 A1
20090327894 Rakib et al. Dec 2009 A1
20100007050 Kagawa et al. Jan 2010 A1
20100007582 Zalewski Jan 2010 A1
20100053164 Imai et al. Mar 2010 A1
20100070501 Walsh et al. Mar 2010 A1
20100086283 Ramachandran et al. Apr 2010 A1
20100091198 Matsuo Apr 2010 A1
20100100581 Landow et al. Apr 2010 A1
20100119208 Davis et al. May 2010 A1
20100122283 Button May 2010 A1
20100149072 Waeller et al. Jun 2010 A1
20100166309 Hull et al. Jul 2010 A1
20100222102 Rodriguez Sep 2010 A1
20100257252 Dougherty et al. Oct 2010 A1
20100275235 Joo et al. Oct 2010 A1
20100309225 Gray et al. Dec 2010 A1
20100318484 Huberman et al. Dec 2010 A1
20100322469 Sharma Dec 2010 A1
20110053642 Lee Mar 2011 A1
20110071838 Li-Chun Wang et al. Mar 2011 A1
20110078729 LaJoie et al. Mar 2011 A1
20110103763 Tse et al. May 2011 A1
20110246495 Mallinson Oct 2011 A1
20120013770 Stafford et al. Jan 2012 A1
20120014558 Stafford et al. Jan 2012 A1
20120059845 Covell et al. Mar 2012 A1
20120086630 Zhu et al. Apr 2012 A1
20120099760 Bernosky et al. Apr 2012 A1
20120143679 Bernosky et al. Jun 2012 A1
20120210349 Campana et al. Aug 2012 A1
20120249531 Jonsson Oct 2012 A1
20130050069 Ota Feb 2013 A1
20130141419 Mount et al. Jun 2013 A1
20130194437 Osman Aug 2013 A1
20130198642 Carney et al. Aug 2013 A1
20150026716 Mallinson Jan 2015 A1
20150156542 Covell et al. Jun 2015 A1
20150358679 Mallinson Dec 2015 A1
20150379043 Hull et al. Dec 2015 A1
20160014350 Osman Jan 2016 A1
20160112762 Mallinson Apr 2016 A1
20170013313 Mallinson Jan 2017 A1
20170013314 Mallinson Jan 2017 A1
Foreign Referenced Citations (46)
Number Date Country
101002475 Jul 2007 CN
101222620 Jul 2008 CN
101374090 Feb 2009 CN
101651834 Feb 2010 CN
103096986 Mar 2015 CN
103561293 Jan 2018 CN
104618779 Feb 2019 CN
19533767 Mar 1997 DE
1053642 Nov 2000 EP
1646167 Apr 2006 EP
2180652 Apr 2010 EP
9135400 May 1997 JP
2000242661 Sep 2000 JP
2000287184 Oct 2000 JP
2001036875 Feb 2001 JP
2001292427 Oct 2001 JP
2002118817 Apr 2002 JP
2002198840 Jul 2002 JP
2002366777 Dec 2002 JP
2005006610 Jan 2005 JP
2005295136 Oct 2005 JP
2005532578 Oct 2005 JP
2006005897 Jan 2006 JP
2007088801 Apr 2007 JP
2008510254 Apr 2008 JP
2008258984 Oct 2008 JP
2008283344 Nov 2008 JP
2009033769 Feb 2009 JP
2009271675 Nov 2009 JP
5576561 Aug 2014 JP
5651231 Nov 2014 JP
5711355 Mar 2015 JP
5908535 Apr 2016 JP
1020080101075 Nov 2008 KR
1020090043526 May 2009 KR
2004004351 Jan 2004 WO
2004034281 Apr 2004 WO
2005006610 Jan 2005 WO
2007064641 Jun 2007 WO
2008024723 Feb 2008 WO
2008025407 Mar 2008 WO
2008051538 May 2008 WO
2008056180 May 2008 WO
2009032707 Mar 2009 WO
2009036435 Mar 2009 WO
2010020739 Feb 2010 WO
Non-Patent Literature Citations (27)
Entry
U.S. Appl. No. 12/835,657, Notice of Allowance dated Jul. 18, 2017, 12 pages.
U.S. Appl. No. 12/835,657, Notice of Allowance dated Sep. 21, 2017, 9 pages.
U.S. Appl. No. 13/554,963, Final Office Action dated May 31, 2016, 26 pages.
U.S. Appl. No. 13/554,963, Notice of Allowance dated Jul. 28, 2017, 10 pages.
U.S. Appl. No. 14/860,239, Notice of Allowance dated Apr. 25, 2017, 9 pages.
U.S. Appl. No. 14/880,889, Non-Final Office Action dated May 24, 2018, 23 pages.
U.S. Appl. No. 14/880,889, Notice of Allowance dated Dec. 21, 2018, 9 pages.
Bolt, “Put-That-There”: Voice and Gesture at the Graphics Interface, ACM SIGGRAPH Conference Proceedings, vol. 14, Issue 3, Jul. 1980, pp. 262-270.
Chinese Application No. 201310454576.0, Office Action dated May 2, 2017, 19 pages (9 pages for the original document and 10 pages for the English translation).
Chinese Application No. 201510087918.9, Office Action dated Mar. 30, 2017, 9 pages (5 pages for the original document and 4 pages for the English translation).
Dewitt et al., Pantomation: A System for Position Tracking, Proceeding of the 2nd Symposium on Small Computers in the Arts, Oct. 15-17, 1982, pp. 61-69.
European Application No. 11807278.4, Examination Report dated Apr. 4, 2019, 8 pages.
Japanese Application No. 2016-058279, Office Action dated Feb. 14, 2017, 6 pages (3 pages for the original document and 3 pages for the English translation).
Mohan et al., Bokode: Imperceptible Visual tags for Camera Based Interaction from a Distance, ACM Transactions on Graphics, vol. 28, No. 3, Article 98, Aug. 2009, 8 pages.
International Application No. PCT/US2011/042456, International Search Report and Written Opinion dated Nov. 4, 2011, 12 pages.
Tanaka et al., Interactive Video Navigation System by Using the Media Fusion Technique of Video/TV and World Wide Web, Information Processing Society of Japan, Feb. 4, 1997, 2 pages.
Tanaka et al., JP 2008-210683 Article, Information Processing Society of Japan, Japanese language, PW080056, vol. 97, No. 1, Feb. 4, 1997, pp. 1-5.
Toner, Abstract of dissertation, Provided by Mr. Toner and Purported to be Maintained at Liverpool University, 1 page.
EP11763481.6 , “Extended European Search Report”, dated Jun. 1, 2016, 12 pages.
EP11763481.6 , “Partial Supplementary European Search Report”, dated Feb. 12, 2016, 8 pages.
EP11763482.4 , “Extended European Search Report”, dated Dec. 12, 2014, 6 pages.
EP11807278.4 , “Extended European Search Report”, dated Feb. 26, 2015, 13 pages.
PCT/US2011/030834 , “International Preliminary Report on Patentability”, dated Oct. 11, 2012, 6 pages.
PCT/US2011/030834 , “International Search Report and Written Opinion”, dated Dec. 26, 2011, 14 pages.
PCT/US2011/030836 , “International Preliminary Report on Patentability”, dated Oct. 2, 2012, 5 pages.
PCT/US2011/030836 , “International Search Report and Written Opinion”, dated Dec. 26, 2011, 14 pages.
PCT/US2011/042456 , “International Preliminary Report on Patentability”, dated Jan. 15, 2013, 7 pages.
Related Publications (1)
Number Date Country
20190232162 A1 Aug 2019 US
Continuations (2)
Number Date Country
Parent 14880889 Oct 2015 US
Child 16377122 US
Parent 12835671 Jul 2010 US
Child 14880889 US