Embodiments of the present invention relate to video display and video game entertainment devices in general and, in particular, to the rendering of avatars, vehicles, game pieces, etc. on a user's mobile device based on its look angle and/or position with respect to another user's mobile device and/or a fixed video display.
Video games are typically played by users sitting in front of a video screen. Multi-player video games are often played by users sitting in front of a common, shared video screen. The shared video screen is sometimes a large television that is connected with a video game console, such as a Sony PlayStation® 3. Wired or wireless game controllers serve as input devices to send commands from the users to the console. In some instances, data is sent from the console to the controllers to, for example, switch on and off lights on a controller, give tactile signals (e.g. force feedback) to the user, calibrate the controllers, etc.
Networked multi-player games are often played by users sitting in front of their own, personal video screens. These video games are often played from a personal computer (PC) or a video game console using a keyboard or game controllers described above. Some networked multi-player games are played from a portable handheld device, smartphone, or other mobile device with its own embedded display, such as a Sony PlayStation Portable® (PSP). The display shares the same plastic housing with buttons, joysticks, rollerballs, trigger switches, and/or other input components. Some displays that are touch or stylus-sensitive also serve as input devices in addition to or in conjunction with physical buttons, etc.
To play a handheld game on a mobile device with an integrated display, a user sometimes stares down into his screen without moving. Some players attempt to stay as motionless as possible, avoiding jarring by others around them, in order to concentrate and maintain the hand-eye coordination needed to correctly select inputs in response to the game. This head-down, motionless posture can make for a solitary experience, even when a user is playing against another human opponent. Even if the opposing, or cooperating, players are seated next to each other, physical interaction between the players can be minimal because they look down at their screens instead of toward each other. This heads-down posture can also cause motion sickness when the user is riding in a moving vehicle such as an automobile.
There may, therefore, be a need in the art to allow players of single-player games to better interact with their physical surroundings, and to allow players of multi-player games to better interact with one another while playing with or against each other.
Methods, systems, and devices are presented for augmenting video using a relative distance and direction of a mobile display from a fixed display. Movement of the mobile display can be used as an input to a video game and/or to help render graphics associated with the video game. For example, a user driving a video game jeep through a jungle may have a view out the front windshield of the jeep from a fixed display and be able to slew his mobile device up and around to look at things above and behind him in the virtual jungle.
Methods, systems, and devices are described for displaying augmented video on a display integrated in a mobile device held or worn by a first user, based on the relative position of another user's mobile device and the view direction of the first user's mobile device. In some embodiments, a user can hold up his device in the direction of another user and see an avatar of the other user on the display, apparently at the same position in space as the other user. This can give the appearance that the user's display is simply a transparent window, except that his opponent's physical body is overlaid with the graphical body of an avatar.
In some embodiments, the mobile devices can be calibrated so that the position of the other user's head, body, etc. is estimated from the orientation and motion of his device, so that the avatar's head, body, etc. appear at the same position as the other user. In some embodiments, face and motion tracking of a user's head, body, etc. can be used to measure the location of the user. In other embodiments, the mobile devices are glasses, so that the real-world head position of the opposing player can be tracked more accurately.
Some embodiments include a system for augmenting video, comprising a video source configured to provide video content to a video display and a mobile device with an integrated video camera and display. The mobile device is configured to track a relative distance and direction of the video display using the video camera, determine a position coordinate of the mobile device using the tracked relative distance and direction, and render, on the integrated display, an object in a position based on the determined position coordinate of the mobile device.
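As a minimal illustrative sketch (not the claimed implementation), tracking the relative distance and direction of the fixed display with the camera could use a simple pinhole-camera model, assuming the physical width of the display and the camera's focal length in pixels are known or obtained during calibration; the function name and parameters below are hypothetical:

```python
import math

def estimate_display_offset(display_px_width, display_px_center_x, display_px_center_y,
                            image_width, image_height,
                            focal_length_px, display_real_width_m):
    """Approximate range and bearing to a fixed display detected in a camera frame.

    Assumes a pinhole camera: apparent size shrinks linearly with distance.
    All "_px" arguments are measured in the captured image.
    """
    # Range: real width scaled by the ratio of focal length to apparent width.
    range_m = focal_length_px * display_real_width_m / display_px_width

    # Direction: pixel offset from the image center converted to angles.
    azimuth_rad = math.atan2(display_px_center_x - image_width / 2.0, focal_length_px)
    elevation_rad = math.atan2(image_height / 2.0 - display_px_center_y, focal_length_px)
    return range_m, azimuth_rad, elevation_rad

# Example: a 1.0 m wide display appearing 400 px wide, centered 80 px right of image center.
print(estimate_display_offset(400, 720, 360, 1280, 720, 800, 1.0))
```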
Some embodiments include a method for augmenting video, comprising receiving a first position coordinate corresponding to a first user, the first position coordinate relative to a first video display, receiving a first view direction corresponding to the first user, the first view direction relative to the first video display, and receiving a second position coordinate corresponding to a second user, the second position coordinate relative to a second video display. The method further includes determining a direction and range from the first position coordinate to the second position coordinate and rendering, on an integrated display of a first mobile device, an object based on the determined direction and range from the first position coordinate to the second position coordinate and based on the received first view direction.
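A minimal sketch of the direction-and-range computation described above, assuming both position coordinates have been expressed in a shared Cartesian frame (a hypothetical helper for concreteness, not the claimed method):

```python
import math

def direction_and_range(p1, p2):
    """Return (azimuth, elevation, range) from point p1 to point p2.

    p1 and p2 are (x, y, z) tuples in meters in a common frame, e.g. with the
    first video display's center as the origin.
    """
    dx, dy, dz = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
    horizontal = math.hypot(dx, dy)
    azimuth = math.atan2(dy, dx)             # bearing in the horizontal plane
    elevation = math.atan2(dz, horizontal)   # angle above the horizontal plane
    rng = math.sqrt(dx * dx + dy * dy + dz * dz)
    return azimuth, elevation, rng

# First user 2 m in front of the display, second user 1 m to the right of it.
print(direction_and_range((0.0, -2.0, 0.0), (1.0, 0.0, 0.0)))
```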
Other embodiments relate to machine-readable tangible storage media and computer systems which store or execute instructions for the methods described above.
Some embodiments include a system for augmenting video, comprising a first mobile device having a display, a second mobile device, means for determining a relative direction and range from the first mobile device to the second mobile device, and means for determining a view direction of the first mobile device. The first mobile device is configured to render on its display an avatar or vehicle from a perspective based on the relative direction and range to the second mobile device and view direction of the first mobile device.
A further understanding of the nature and advantages of the present invention may be realized by reference to the following drawings. In the appended figures, similar components or features may have the same reference label.
The figures will now be used to illustrate different embodiments in accordance with the invention. The figures depict specific examples of embodiments and should not be interpreted as limiting; rather, they illustrate exemplary forms and procedures.
Generally, methods and systems are described for multi-player video games and other interactive video presentations for which augmented video is presented on a user's mobile device display based on the relative position of another user. A user can hold up his device and see an avatar, vehicle, game marker, target crossbars, or other object in the place of where the other user is sitting. In some embodiments, the other user's avatar on the display can move, look, etc. in the same manner as the other user's physical movements. For example, if the other user turns toward the first user, the display will show the avatar turning toward him.
In some embodiments, the users can be located in different rooms across town, but their avatars are rendered on their respective mobile device's screens as if their avatars were seated next to each other in the same room. A common reference point for each of the players can be the center of his or her fixed display.
This description provides examples only, and is not intended to limit the scope, applicability, or configuration of the invention. Rather, the ensuing description of the embodiments will provide those skilled in the art with an enabling description for implementing embodiments of the invention. Various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention.
Thus, various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, it should be appreciated that, in alternative embodiments, the methods may be performed in an order different from that described, and that various steps may be added, omitted, or combined. Also, features described with respect to certain embodiments may be combined in various other embodiments. Different aspects and elements of the embodiments may be combined in a similar manner.
It should also be appreciated that the following systems, methods, and software may individually or collectively be components of a larger system, wherein other procedures may take precedence over or otherwise modify their application. Also, a number of steps may be required before, after, or concurrently with the following embodiments.
A “coordinate” is any of a set of numbers, characters, or symbols used in specifying the location of a point on a one-dimensional line, on a two-dimensional surface, or in three-dimensional space. Coordinates may be orthogonal, such as Cartesian, polar, cylindrical, or spherical coordinates, or non-orthogonal, such as those describing a location on the surface of a sphere.
A “view direction” or “look angle” is a direction in space toward which a user's face is pointed or a corresponding user's mobile device is pointed. A view direction can include azimuth and elevation angles relative to the user. A view direction can include a bearing direction in relation to a fixed point.
A mobile device can include a handheld device, such as a PlayStation Portable®, a user-worn device, such as glasses with an integrated display, or other electronic devices.
Using the coordinates representing the position and view direction, the mobile display can be used as a secondary display to ‘look around’ the virtual environment. For example, a player driving a virtual tank can slew his mobile device to the left to see enemy troops on the left side, outside the view of the fixed display. As another example, the player can use his mobile device display to zoom in on the horizon shown on the fixed display. The mobile device can act as virtual binoculars to better resolve figures in the distance that might be a threat.
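For illustration only, the slewing and zooming behavior might be modeled by adding the mobile device's look angle to the fixed display's base camera orientation and narrowing the field of view for the binocular effect; the function name and default values below are assumptions, not taken from the specification:

```python
def secondary_camera(base_yaw_deg, base_pitch_deg, device_azimuth_deg,
                     device_elevation_deg, zoom_factor=1.0, base_fov_deg=70.0):
    """Compute the virtual camera used for the mobile device's secondary view.

    The mobile view starts from the fixed display's camera orientation and is
    offset by how far the device has been slewed; zooming narrows the field of view.
    """
    yaw = base_yaw_deg + device_azimuth_deg        # slew left/right
    pitch = base_pitch_deg + device_elevation_deg  # slew up/down
    fov = base_fov_deg / max(zoom_factor, 1.0)     # "virtual binoculars"
    return yaw, pitch, fov

# Device slewed 45 degrees left of the tank's forward view, zoomed 4x on the horizon.
print(secondary_camera(0.0, 0.0, -45.0, 0.0, zoom_factor=4.0))
```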
Although polar/cylindrical coordinates are used here in the examples, other coordinate systems can be used, such as Cartesian and spherical coordinate systems.
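A short sketch of converting between the coordinate systems mentioned above; these are straightforward textbook conversions, included only for concreteness:

```python
import math

def cylindrical_to_cartesian(r, theta, z):
    # theta measured in radians from the x-axis
    return r * math.cos(theta), r * math.sin(theta), z

def cartesian_to_spherical(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)      # radial distance
    theta = math.atan2(y, x)                     # azimuth
    phi = math.acos(z / rho) if rho else 0.0     # polar angle from the z-axis
    return rho, theta, phi

# A point 2 m from the display center at 30 degrees, at a height of 1.2 m.
x, y, z = cylindrical_to_cartesian(2.0, math.radians(30), 1.2)
print((x, y, z), cartesian_to_spherical(x, y, z))
```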
If mobile device 108 is slewed to the right, then avatar 414 disappears off the left side of the display. If mobile device 108 is slewed to the left, then avatar 414 disappears off the right side of the display. In some embodiments, it can appear as if the embedded display is transparent and the view of the room in the background is the same, except for the other player being overlaid with graphics depicting an avatar.
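One way to picture this ‘transparent window’ behavior is a sketch that projects the bearing toward the other player into the slewed device's field of view; the avatar's horizontal screen position then shifts opposite to the slew. All names and the linear projection below are hypothetical simplifications:

```python
def avatar_screen_x(bearing_to_avatar_deg, device_azimuth_deg,
                    fov_deg=60.0, screen_width_px=960):
    """Horizontal pixel position of the avatar, or None if outside the view.

    bearing_to_avatar_deg: direction from the device to the other player.
    device_azimuth_deg: direction the device is currently pointed (slewed).
    """
    offset = bearing_to_avatar_deg - device_azimuth_deg
    if abs(offset) > fov_deg / 2.0:
        return None  # avatar has slewed off the edge of the display
    # Map [-fov/2, +fov/2] to [0, screen_width_px]; positive offset is to the right.
    return (offset / fov_deg + 0.5) * screen_width_px

print(avatar_screen_x(10.0, 0.0))    # avatar slightly right of center
print(avatar_screen_x(10.0, 30.0))   # device slewed right: avatar moves left
print(avatar_screen_x(10.0, 50.0))   # slewed further right: avatar off-screen
```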
This mirrored movement can be useful to simulate games in which players play across from one another, such as tennis, handball, chess, etc. This can be used by players in the same room with the same, central fixed display or by players in different rooms with their own displays.
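For the across-the-net games mentioned above, a hedged sketch of how a remote player's display-relative coordinate might be mirrored so the avatar appears on the opposite side of the local player's display; the axis convention is an illustrative assumption, not the specification's method:

```python
def mirror_remote_player(remote_coord):
    """Mirror a remote player's display-centered coordinate across the display plane.

    Coordinates are (x, y, z) with the display center as origin, x to the
    display's right, y toward the player, z up.  Negating x and y places the
    remote player's avatar "across the net" from the local player.
    """
    x, y, z = remote_coord
    return -x, -y, z

# A remote player 2 m in front of her display and 0.5 m to its right appears
# 2 m beyond the local display and 0.5 m to its left.
print(mirror_remote_player((0.5, 2.0, 0.0)))
```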
Camera 920 can also be enabled to track faces as is known in the art. Facial tracking technology can directly determine the position and view direction of a player's head, eyes, nose, etc. A camera on mobile device 908 can also be used to track the player's head.
Video game console 922 connects to camera 920 and fixed display 102. Video game console 922 connects wirelessly with mobile device 908 through wireless port 924 over wireless link 926. Wireless link 926 can be radio frequency, infrared, etc. The camera may output the positions of tracked objects to console 922, or the camera may output raw video to console 922, which then processes the raw video to determine the position, velocity, etc. of tracked objects.
Console 922 can send the coordinates of the tracked objects to mobile device 908 along with the determined view direction of mobile device 908. Mobile device 908 can then use the coordinates and view direction to render an avatar in the correct position on its screen.
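Purely as an illustration of the data that might flow over wireless link 926, a hypothetical message format is sketched below; the field names and JSON encoding are assumptions, not part of the specification:

```python
import json

def build_tracking_packet(tracked_objects, device_view_direction):
    """Serialize tracked-object coordinates and the device's view direction.

    tracked_objects: mapping of object name to (x, y, z) in display-centered meters.
    device_view_direction: (azimuth_deg, elevation_deg) of the mobile device.
    """
    packet = {
        "objects": {name: list(coord) for name, coord in tracked_objects.items()},
        "view_direction": {"azimuth_deg": device_view_direction[0],
                           "elevation_deg": device_view_direction[1]},
    }
    return json.dumps(packet).encode("utf-8")

payload = build_tracking_packet({"player_2_head": (0.4, 1.8, 1.1)}, (-15.0, 5.0))
print(payload)
```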
In some embodiments, wireless link 926 can be used to send remote control-like commands to the video display. For example, a cellular phone can be used to turn up or down the volume on a television.
The position of mobile device 1008 can be used as an input to a video game. For example, a user can pace around her living room floor, marking the locations where she will place her battleships for a virtual board game of Battleship®. In another example, a virtual game of ‘Marco Polo’ can be played in which players attempt to guess the locations of other players without the use of their eyes. A player could move around his TV room in order to simulate his virtual position on a field or in a pool.
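A small sketch of how a physical position paced out on the living-room floor might be quantized to a square on a virtual board; the grid size and room extents below are assumptions for illustration:

```python
def room_position_to_cell(x_m, y_m, room_width_m=4.0, room_depth_m=4.0, grid=10):
    """Map a position on the room floor to a (column, row) cell on a virtual board.

    (0, 0) is one corner of the play area; positions are clamped to the board.
    """
    col = min(grid - 1, max(0, int(x_m / room_width_m * grid)))
    row = min(grid - 1, max(0, int(y_m / room_depth_m * grid)))
    return col, row

# Player standing 1.3 m from the corner wall and 2.9 m into the room.
print(room_position_to_cell(1.3, 2.9))  # cell (3, 7) on a 10x10 board
```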
In other embodiments, the mobile device can automatically determine its position and view direction using a Global Positioning System (GPS) receiver, accelerometer-based inertial system, mechanical or solid-state gyroscope, electronic magnetic compass, radio frequency triangulation, and/or other methods known in the art.
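As an illustrative sketch only, the view direction could be maintained by blending an integrated gyroscope rate with a magnetic-compass reading using a simple complementary filter; the blend constant and update rate are assumptions, and this is only one of the sensor combinations mentioned above:

```python
def update_heading(prev_heading_deg, gyro_rate_deg_s, dt_s, compass_heading_deg, alpha=0.98):
    """Complementary filter: smooth gyro integration corrected by the compass.

    alpha close to 1.0 trusts the gyroscope for short-term changes while the
    compass slowly removes accumulated drift.
    """
    gyro_estimate = prev_heading_deg + gyro_rate_deg_s * dt_s
    heading = alpha * gyro_estimate + (1.0 - alpha) * compass_heading_deg
    return heading % 360.0

heading = 90.0
for _ in range(100):  # device turning at 10 deg/s for one second; compass reads 95
    heading = update_heading(heading, 10.0, 0.01, 95.0)
print(round(heading, 1))
```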
A graphics subsystem 1240 is further connected with data bus 1235 and the components of the computer system 1200. The graphics subsystem 1240 includes a graphics processing unit (GPU) 1245 and graphics memory 1250. Graphics memory 1250 includes a display memory (e.g., a frame buffer) used for storing pixel data for each pixel of an output image. Graphics memory 1250 can be integrated in the same device as GPU 1245, connected as a separate device with GPU 1245, and/or implemented within memory 1210. Pixel data can be provided to graphics memory 1250 directly from the CPU 1205. Alternatively, CPU 1205 provides the GPU 1245 with data and/or instructions defining the desired output images, from which the GPU 1245 generates the pixel data of one or more output images. The data and/or instructions defining the desired output images can be stored in memory 1210 and/or graphics memory 1250. In an embodiment, the GPU 1245 includes 3D rendering capabilities for generating pixel data for output images from instructions and data defining the geometry, lighting, shading, texturing, motion, and/or camera parameters for a scene. The GPU 1245 can further include one or more programmable execution units capable of executing shader programs.
The graphics subsystem 1240 periodically outputs pixel data for an image from graphics memory 1250 to be displayed on display device 1255. Display device 1255 can be any device capable of displaying visual information in response to a signal from the computer system 1200, including CRT, LCD, plasma, and OLED displays. Computer system 1200 can provide the display device 1255 with an analog or digital signal.
In accordance with various embodiments, CPU 1205 is one or more general-purpose microprocessors having one or more processing cores. Further embodiments can be implemented using one or more CPUs with microprocessor architectures specifically adapted for highly parallel and computationally intensive applications, such as media and interactive entertainment applications.
It should be noted that the methods, systems, and devices discussed above are intended merely to be examples. It must be stressed that various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, it should be appreciated that, in alternative embodiments, the methods may be performed in an order different from that described, and that various steps may be added, omitted, or combined. Also, features described with respect to certain embodiments may be combined in various other embodiments. Different aspects and elements of the embodiments may be combined in a similar manner. Also, it should be emphasized that technology evolves and, thus, many of the elements are examples and should not be interpreted to limit the scope of the invention.
Specific details are given in the description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that the embodiments may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure.
Moreover, as disclosed herein, the term “memory” or “memory unit” may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic RAM, core memory, magnetic disk storage mediums, optical storage mediums, flash memory devices, or other computer-readable mediums for storing information. The term “computer-readable medium” includes, but is not limited to, portable or fixed storage devices, optical storage devices, wireless channels, a sim card, other smart cards, and various other mediums capable of storing, containing, or carrying instructions or data.
Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a computer-readable medium such as a storage medium. Processors may perform the necessary tasks.
Having described several embodiments, it will be recognized by those of skill in the art that various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the invention. For example, the above elements may merely be a component of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description should not be taken as limiting the scope of the invention.
This application is a continuation of U.S. application Ser. No. 12/835,671, filed Jul. 13, 2010, entitled “POSITION-DEPENDENT GAMING, 3-D CONTROLLER, AND HANDHELD AS A REMOTE,” which is hereby incorporated by reference in its entirety for all purposes. This application is related to U.S. application Ser. No. 14/860,239, filed Sep. 21, 2015, entitled “OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE,” which is a continuation of U.S. application Ser. No. 13/554,958, filed Jul. 20, 2012, now U.S. Pat. No. 9,143,699, entitled “OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE,” which is a continuation-in-part of U.S. patent application Ser. No. 12/835,645, filed Jul. 13, 2010, now U.S. Pat. No. 8,730,354, entitled “OVERLAY VIDEO CONTENT ON A MOBILE DEVICE,” and which claims the benefit of U.S. Provisional Application No. 61/527,048, filed Sep. 12, 2011, entitled “OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE,” each of which is hereby incorporated by reference in its entirety for all purposes.