This invention relates generally to computer graphics and more particularly to the modification of computer graphics.
An avatar is a graphical element, selected by a user of a system, that other users can see and that can represent the user. Avatars are frequently used in virtual universes, which are computer-based simulated environments. An avatar often takes the form of a cartoon-like human but may be any other graphical representation. The avatar may be a static image or a computer-generated animation. Many virtual universes are represented using three dimensional graphics and landscapes, and may be populated by many thousands of users, known as residents. Each resident can have one or more avatars. Residents of virtual universes can traverse and inhabit the virtual universes, and interact with one another through the use of avatars. Other terms for virtual universes include virtual worlds, metaverses, 3D internet, virtual realities, massively multiplayer online games and virtual realms.
Users or residents of virtual universes often interact with each other through their avatars using known chat-room technology. For example, in order to mimic the behavior of real life human interactions, when an avatar speaks, a text window can appear on a user interface (UI) of the other users whose avatars are within hearing range of the speaking avatar. The hearing range may vary based upon whether an avatar speaks in a normal voice, a raised voice or a whisper. Audio is also sometimes used to convey speech.
Often, virtual universes resemble the real world, such as in terms of physical laws, landscapes, houses and other buildings. Example virtual universes can include: Second Life, Entropia Universe, The Sims Online, There, Red Light Center and several massively multiplayer online games such as EverQuest, Ultima Online, Lineage or World of Warcraft.
Avatars have been used in conjunction with the transmission of animated video messaging. One example of such animated video messaging is associated with Logitech Orbit webcams and Logitech Video Effects technology. An exemplary use of the Logitech technology is disclosed at <http://www.youtube.com/watch?v=r7Gn2TyEyHw>. This animated video messaging allows the user to convey animated representations of head and facial movements through an avatar.
Avatars as known in the prior art suffer from limitations in their ability to convey facial expression or other body language. Avatars display gross motions and gestures, and provide rough feature characterization. These graphical feature simplifications and limitations are due partly to the need to limit the communication bandwidth between the server and the clients, and partly to the need to maintain a manageable rendering cost of the avatar at the client computing devices. As a result, users interacting through the use of prior art avatars can lose cues typically present in interpersonal communications. Such cues are useful for conveying and understanding emotions.
A method of rendering an electronic graphical representation of a user of a computerized system includes providing a plurality of states for the electronic graphical representation including first and second differing states, monitoring a measurable quantity to provide a monitored quantity, and changing a state of the graphical representation from the first state to the second state based upon the monitored quantity. The graphical representation is an avatar and the method includes defining a receptor point associated with the avatar and associating an object with the receptor point. The receptor point is located on the avatar. The plurality of states includes a non-hybrid state. The plurality of states includes a hybrid state. The hybrid state includes a static image hybrid state and a video hybrid state. The video hybrid state includes a live video hybrid state and a pre-recorded video hybrid state.
The measurable quantity includes a quantity representative of a movement of a user including movement of a human facial feature. The measurable quantity includes a quantity representative of a movement of a human hand. The changing of the state of the avatar includes displaying the change to a limited subset of other users of the computerized system. The changing of the state of the avatar is based upon network communication capabilities. The changing of the state of the avatar is based upon measured network communication performance. The changing of the state of the avatar is based upon user computer processing capabilities. The changing of the state of the avatar is based upon measured system computer processing performance.
The changing of the state of the avatar is limited to a subset of permitted states in accordance with a user preference. The changing of the resolution of the avatar is limited to a subset of permitted states in accordance with the user preference. The limiting of the changing of the state of the avatar to a subset of permitted states is performed in accordance with a service contract. The limiting of the changes of resolution of the avatar to a subset of permitted states is performed in accordance with a service contract.
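By way of illustration only, the following Python sketch shows one possible way in which a requested state change might be restricted to a subset of permitted states derived from a user preference and a service contract. The names and the permission policy (intersection of preference and contract) are assumptions for illustration and are not drawn from the disclosure.

```python
# Hypothetical sketch: restricting avatar state changes to a permitted subset.
# The state labels and the intersection-based policy are illustrative assumptions.

ALL_STATES = {"non_hybrid", "static_image_hybrid",
              "live_video_hybrid", "prerecorded_video_hybrid"}

def permitted_states(user_preference: set, service_contract: set) -> set:
    """A state is permitted only if both the user preference and the
    service contract allow it (one possible policy among many)."""
    return ALL_STATES & user_preference & service_contract

def change_state(current: str, requested: str,
                 user_preference: set, service_contract: set) -> str:
    """Return the new state, falling back to the current state when the
    requested state is outside the permitted subset."""
    allowed = permitted_states(user_preference, service_contract)
    return requested if requested in allowed else current

# Example: the service contract excludes live video, so the request is refused.
new_state = change_state(
    current="non_hybrid",
    requested="live_video_hybrid",
    user_preference=ALL_STATES,
    service_contract={"non_hybrid", "static_image_hybrid"},
)
print(new_state)  # -> "non_hybrid"
```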
Furthermore, the subject invention is directed to a system for enabling multiple-state hybrid avatars that may be adaptively rendered either as traditional avatars or as avatars having portions merged with a video feed. The video feed may, for example, be a video of the face of a real person superimposed on the face of an avatar. A switch between a traditional mode and a video mode is performed automatically according to several criteria. The video of an actual face provides facial and other cues that improve communication compared with avatars known in the prior art.
Through various embodiments of the subject invention, virtual universe users are presented with the typical visual cues (e.g., posture, subtle facial expressions and hand gestures) that are useful in face-to-face communication. Virtual universe users can benefit from communicating these life-like cues and expressions.
Referring now to
Referring now to
As also shown in
In one embodiment of the invention, the receptor points 26a-n of a multiple-state avatar 22 may be associated with a static image, a live video, a pre-recorded video or any other kind of object. A rendering of a multiple-state avatar 22 having no objects associated with any of its receptor points 26a-n can be referred to as a rendering of a non-hybrid state avatar. A rendering of a multiple-state avatar 22 having an object associated with one or more receptor points 26a-n can be understood to be a rendering of a hybrid state avatar.
Furthermore, many types of hybrid state avatars are possible. For example, the state of a hybrid state avatar having a static image associated with one or more of its receptor points 26a-n can be understood to be a static image hybrid state. As a further example, the state of a hybrid state avatar having a live video associated with one or more receptor points 26a-n can be understood to be a live video hybrid state. The state of a hybrid state avatar having a pre-recorded video associated with one or more receptor points 26a-n can be understood to be a pre-recorded video hybrid state, and so on for any other kind of object associated with a receptor point 26a-n of a multiple-state avatar 22. Both the live video hybrid state and the pre-recorded video hybrid state can sometimes be referred to as video hybrid states for convenience.
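By way of illustration, a minimal Python sketch of one possible data structure for receptor points and for deriving the avatar's state from the kinds of objects attached to them follows. The class and attribute names (`ReceptorPoint`, `MultiStateAvatar`, and so on) are hypothetical assumptions and are not taken from the disclosure.

```python
# Hypothetical sketch of receptor points and derived avatar states.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AttachedObject:
    kind: str        # "static_image", "live_video", "prerecorded_video", ...
    source: str      # e.g. file path, camera id, or stream URL (illustrative)

@dataclass
class ReceptorPoint:
    name: str                               # e.g. "face", "left_hand"
    attached: Optional[AttachedObject] = None

@dataclass
class MultiStateAvatar:
    receptor_points: dict = field(default_factory=dict)  # name -> ReceptorPoint

    def attach(self, point_name: str, obj: AttachedObject) -> None:
        self.receptor_points[point_name].attached = obj

    def detach(self, point_name: str) -> None:
        self.receptor_points[point_name].attached = None

    def state(self) -> str:
        """Derive the avatar state from the kinds of attached objects."""
        kinds = {p.attached.kind for p in self.receptor_points.values() if p.attached}
        if not kinds:
            return "non_hybrid"
        if "live_video" in kinds:
            return "live_video_hybrid"
        if "prerecorded_video" in kinds:
            return "prerecorded_video_hybrid"
        return "static_image_hybrid"

avatar = MultiStateAvatar({"face": ReceptorPoint("face"),
                           "left_hand": ReceptorPoint("left_hand")})
print(avatar.state())                                   # -> "non_hybrid"
avatar.attach("face", AttachedObject("live_video", "camera:0"))
print(avatar.state())                                   # -> "live_video_hybrid"
```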
A multiple-state avatar 22 can have different kinds of objects, for example a static image and a live or pre-recorded video, associated with different receptor points 26a-n. Even though an object can be associated with a multiple-state avatar 22 in the foregoing manner, in an alternate embodiment it can be rendered for display in the vicinity of the multiple-state avatar 22, rather than on the multiple-state avatar 22, or in any other location. Furthermore, objects can move from one receptor point 26a-n to another, or from the multiple-state avatar 22 to the vicinity of the multiple-state avatar 22, and back depending on any predetermined triggers.
The live video or pre-recorded video associated with a multiple-state avatar 22 can be accompanied by audio or visual effects when it is triggered and displayed. The replacing video may be a video of an actual person and can correspond to the part of the multiple-state avatar 22 being replaced. For example, in a multiple-state avatar 22 having a receptor point 26a-n on its face 24, a substituted or superimposed video element of the multiple-state avatar 22 may be a live video or a recorded video of the face of the user controlling the multiple-state avatar 22, or any other object.
In another example, arm 25 or hand 29 movements of the multiple-state avatar 22 may be replaced by video images of the user's arms or hands. Alternatively, to reduce network bandwidth use, several static snapshots or other representations of a real person's face or other feature, or other objects, may be saved in a database or on a file system and displayed on the multiple-state avatar 22 at an appropriate receptor point 26a-n. For example, a static image of a smiling face and a frowning face of a user or other human or another object, may be stored and alternately superimposed upon a receptor point 26a-n on the face 24 of the multiple-state avatar 22 when triggered.
Multiple-state avatars 22 within the disclosed system can change state in response to any kind of trigger, such as specified, measurable actions, events or parameters in the virtual universe or elsewhere. The monitored events or parameters can be associated with a user of a multiple-state avatar 22, some other avatars or other entities in a virtual universe or elsewhere.
State changes can include changes from a traditional graphical avatar such as the prior art avatar 20 to a hybrid multiple-state avatar 22 (such as a live video multiple-state avatar 22), and changes back to a traditional graphical state. State changes in response to detected triggers can also include a change from a state where an image is superimposed on one receptor point 26a-n, to a state where the same or a different image is displayed on a different receptor point 26a-n. The system can monitor any measurable quantities to detect a trigger for changing the state of an avatar receptor point 26a-n.
The measurable quantities for triggering state changes in the multiple-state avatar 22 can include, but are not limited to, physical movements by a human user controlling the multiple-state avatar 22, or movements by any other person. Additionally, the measurable quantities used for triggering the state changes can include, but are not limited to, movement of any physical object, the actuation of a monitored device, the occurrence of any event, for example the occurrence of a sound or the appearance of light, communication network functioning, user computer functioning, or the passage of time. Prior to the measured quantities satisfying the specified criteria, the multiple-state avatar 22 may be displayed using any standard virtual universe rendering methods. When a measured quantity or quantities meet the specified criteria or requirements, the system can initiate a state change. Additionally, the system can initiate a reverse state change or other state change when a measured quantity or quantities no longer satisfy the specified criteria. In a preferred embodiment the quantities and the thresholds of the quantities may be configurable. By way of example, the system may be configured to activate state changes based upon a measured virtual universe distance between two or more virtual universe multiple-state avatars 22. The system may monitor that distance and cause a state change in one or more of the multiple-state avatars 22 when the distance falls below a configured threshold or meets some other configurable criteria. The threshold distance can then be modified as desired. Alternate embodiments of the system may allow a virtual universe, a user or a third party to specify and change the configurable values.
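A small Python sketch of one way the configurable distance trigger described above might be evaluated follows. The function names and the threshold value are illustrative assumptions only.

```python
import math

# Illustrative, configurable threshold for the distance trigger
# (virtual-universe units; the value is an assumption).
DISTANCE_THRESHOLD = 10.0

def distance(pos_a, pos_b):
    """Euclidean distance between two avatar positions given as (x, y, z)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pos_a, pos_b)))

def distance_trigger_met(pos_a, pos_b, threshold=DISTANCE_THRESHOLD) -> bool:
    """True when the monitored quantity (inter-avatar distance) falls below
    the configured threshold, i.e. a state change should be initiated."""
    return distance(pos_a, pos_b) < threshold

# Example: two avatars close enough to trigger a state change.
print(distance_trigger_met((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # -> True (distance 5 < 10)
```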
The use of multiple-state avatars such as the multiple-state avatar 22 allows users of a virtual universe to experience and interact with other users of the virtual universe in a more life-like manner. Further increasing the life-like quality of the experience, the disclosed system may allow a state change of a multiple-state avatar 22 to be viewed by one or more subsets of virtual universe users, and not allow it to be viewed by other subsets of virtual universe users. Virtual universe users outside of a viewing subset or subsets can thus be unable to observe some or all of the state changes of the multiple-state avatar 22.
A preferred embodiment of the invention can determine the subsets that can view the state change based upon any configurable criteria, such as the distance between two multiple-state avatars 22. The configurable criteria can be changed at any time according to any event or parameter. In this way, the multiple-state avatars 22 can recreate the real-life situation in which those in close proximity can see or hear a person or observe specific movements or expressions of the person, while those further away may not see or hear the same person, movements or expressions. Similarly, in a preferred embodiment, users whose perspective would reveal only the back of an active multiple-state avatar 22 may not see facial expressions or other changes on the front or side of the multiple-state avatar 22. In the preferred embodiment, the system is configurable to define the criteria related to this functionality.
Thus, further replicating real life experience, in an embodiment of the system in which multiple-state avatars 22 have many states, the system may calculate multiple user subsets and show different avatar states to those different user subsets. For example, the system may be configured to display an avatar's full video state, including facial expressions and body movements, to users within a specified distance. Simultaneously, users within (or outside, if desired) a greater specified distance may see only gross body movements, such as arm motions, but not facial expressions.
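One possible way to compute such viewer subsets is sketched below in Python. The tier names, the distance values, and the handling of back-facing viewers are illustrative assumptions rather than the disclosed implementation.

```python
# Hypothetical sketch: deciding which rendition of an avatar each viewer receives.

FULL_VIDEO_RANGE = 5.0     # within this distance: full video state (illustrative value)
GROSS_MOTION_RANGE = 15.0  # within this distance: gross body movements only (illustrative)

def visible_state_for_viewer(viewer_distance: float, viewer_sees_front: bool) -> str:
    """Return the avatar rendition a particular viewer should see."""
    if not viewer_sees_front:
        # A viewer behind the avatar never sees facial expressions.
        return "traditional"
    if viewer_distance <= FULL_VIDEO_RANGE:
        return "full_video"
    if viewer_distance <= GROSS_MOTION_RANGE:
        return "gross_motion_only"
    return "traditional"

# Example viewers: (distance to the avatar, whether they face its front).
viewers = {"A": (3.0, True), "B": (12.0, True), "C": (12.0, False), "D": (40.0, True)}
for name, (dist, front) in viewers.items():
    print(name, visible_state_for_viewer(dist, front))
# A full_video, B gross_motion_only, C traditional, D traditional
```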
In one preferred embodiment, the system can monitor a user's face using known facial scanning software and cameras associated with computing devices. If the monitored person is smiling in real life or manifesting some other predetermined cue, a video of the user or a stored photograph of a smiling face can be superimposed on a facial receptor point 26a-n of the user's multiple-state avatar 22. A similar methodology may be employed to monitor other features of the user or another person or object, such as hands or arms, and movement of those features. If the amount of movement of a user, another person, or an object exceeds a specified threshold, the system can trigger a state change corresponding to the type and level of movement. The determination of the expressions or movements of a person or an object being monitored can be based upon any type of videographing techniques known to those skilled in the art.
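The following Python sketch illustrates, under the assumption of a hypothetical `detect_cue` function standing in for the facial scanning software, how a detected smile might lead to a stored image being superimposed on the facial receptor point. None of the names or paths are drawn from the disclosure; the detection itself is simulated.

```python
# Hypothetical sketch: mapping a detected facial cue to an object superimposed
# on the facial receptor point. `detect_cue` stands in for facial scanning
# software and is simulated here with a canned result.

STORED_IMAGES = {                      # illustrative stored snapshots
    "smiling": "images/user_smiling.png",
    "frowning": "images/user_frowning.png",
}

def detect_cue(camera_frame) -> str:
    """Placeholder for facial scanning software; returns a cue label."""
    return "smiling"                   # simulated detection result

def update_face_receptor(face_receptor: dict, camera_frame) -> None:
    """Superimpose the stored image matching the detected cue, if any."""
    cue = detect_cue(camera_frame)
    if cue in STORED_IMAGES:
        face_receptor["attached"] = ("static_image", STORED_IMAGES[cue])
    else:
        face_receptor["attached"] = None   # revert to non-hybrid rendering

face_receptor = {"name": "face", "attached": None}
update_face_receptor(face_receptor, camera_frame=None)
print(face_receptor["attached"])       # -> ('static_image', 'images/user_smiling.png')
```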
In a preferred embodiment, the user may also cause state changes of a multiple-state avatar 22 using input devices such as a standard computer keyboard or mouse. For example, in one embodiment of the system the user may press a defined combination of keys, such as Alt-S, to cause the multiple-state avatar 22 to change state, for example to display a live video, a stored video, or a stored image associated with smiling or some other expression. The system may be configured to permit other keyboard shortcuts to be used to control state changes of the multiple-state avatar 22. In alternate embodiments, any other input devices and methods may be used to cause state changes. Examples of such input devices, whose actuation can be monitored in order to provide a trigger for changing a state of a multiple-state avatar 22, may include, but are not limited to, joysticks, potentiometers, audio detectors and audio commands, voice recognition, temperature detectors, timers and any type of motion detecting methods.
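A minimal sketch of how such input shortcuts might be mapped to requested state changes follows. The key combinations other than Alt-S, and the state labels, are assumptions added for illustration.

```python
# Hypothetical sketch: mapping input-device events (here, key combinations)
# to requested avatar state changes. The bindings are configurable and illustrative.
from typing import Optional

KEY_BINDINGS = {
    ("alt", "s"): "static_image_hybrid:smiling",   # e.g. Alt-S shows the smiling image
    ("alt", "v"): "live_video_hybrid",             # illustrative additional binding
    ("alt", "n"): "non_hybrid",
}

def state_for_key_combo(pressed_keys) -> Optional[str]:
    """Return the avatar state requested by the pressed key combination,
    or None if the combination is not bound."""
    return KEY_BINDINGS.get(tuple(sorted(pressed_keys)))

print(state_for_key_combo({"alt", "s"}))   # -> "static_image_hybrid:smiling"
print(state_for_key_combo({"ctrl", "x"}))  # -> None
```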
A preferred embodiment of the system and method of the present invention can further include a method to minimize the system's impact upon network performance and to maintain operational speed via networked communications. Virtual universe users typically interact in real-time across network connections. If a user connects to a virtual universe via a slow or congested connection, or if the user connection undergoes a slow-down, real-time transmission of multimedia content to the user may be compromised. In order to minimize any lag in content delivery, a preferred embodiment can detect the speed of the connection for a user to whom content is to be sent. Based upon the connection speed, the system may determine that transmission to a user with a slow connection to the system should be modified. For example, the system may determine that the user should receive content that requires less bandwidth, and therefore less time to deliver. The system may thus substitute some type of content requiring less bandwidth. For example, a user who, according to system rules, should normally observe live video corresponding to another avatar's hybrid state, may instead receive, at least temporarily, static images representing the other user's multiple-state avatar 22. Other examples of lower bandwidth content can include, but are not limited to, low frame-rate video and low-resolution video rather than high-resolution video.
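The bandwidth-based substitution described above might be implemented along the lines of the following Python sketch. The speed thresholds and tier labels are illustrative assumptions, not values taken from the disclosure.

```python
# Hypothetical sketch: choosing a content tier for a viewer based on the
# measured speed of that viewer's connection. Thresholds are illustrative.

def content_tier_for_connection(measured_kbps: float) -> str:
    """Pick the least bandwidth-hungry content that still fits the connection."""
    if measured_kbps >= 2000:
        return "live_video_high_resolution"
    if measured_kbps >= 800:
        return "live_video_low_resolution"   # or low frame-rate video
    if measured_kbps >= 200:
        return "static_images"
    return "traditional_avatar_only"

for speed in (5000, 1000, 300, 50):
    print(speed, "kbps ->", content_tier_for_connection(speed))
```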
In one embodiment of the invention, the system may allow users to select whether the functionality of a multiple-state avatar 22 should be disabled in order to minimize the time required to receive transmission of virtual universe content. Similarly, one embodiment can permit a virtual universe administrator to configure a system to forego transmission of hybrid content under certain circumstances. Such a configuration may be desirable to overcome unexpected network slow-downs or other network events. Individual users may also desire the reduced content when their personal computers lack the processing resources necessary to keep up with real-time communication requirements, such as the requirements for supporting the multiple-state avatar 22.
In a preferred embodiment a user can cause the state changes of a multiple-state avatar 22 to continue in effect for a limited period of time within a session, until the end of a session, or past the end of a session and into a future session. For example, a business transaction or a business meeting can be held in a virtual universe. A user may want to trigger a state in which high definition details of a multiple-state avatar 22 are available for a portion of a meeting or until the end of a meeting. This can improve mutual trust if low-resolution avatars are considered a way of concealing emotions or expressions, thereby triggering mistrust. Additionally, when a user gives a presentation in a virtual universe the user may want to trigger a high resolution mode for the duration of the presentation regardless of any other triggers. In another example of a business meeting, the method and system of the invention can normally maintain the highest level of resolution for all multiple-state avatars 22, and not try to determine when to increase the avatar resolution. Rather, it can determine which avatar's resolution can be decreased in order to minimize the impact on the discussion.
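One possible policy for deciding which avatar's resolution to decrease, so as to minimize the impact on the discussion, is sketched below in Python. The scoring heuristic (downgrade the avatar that has been silent longest and is farthest from the speaker) is purely an illustrative assumption.

```python
# Hypothetical sketch: when bandwidth must be reclaimed during a virtual
# meeting, pick the avatar whose downgrade least affects the discussion.
# The heuristic used here is an illustrative assumption only.

def pick_avatar_to_downgrade(avatars):
    """`avatars` maps avatar id -> (seconds_since_last_spoke, distance_to_speaker).
    Prefer downgrading avatars that have been silent longest and are farthest away."""
    def impact_score(item):
        _, (silent_for, distance) = item
        return (silent_for, distance)   # larger values => lower impact if downgraded
    avatar_id, _ = max(avatars.items(), key=impact_score)
    return avatar_id

meeting = {
    "presenter": (0.0, 0.0),
    "attendee_1": (45.0, 3.0),
    "attendee_2": (300.0, 8.0),
}
print(pick_avatar_to_downgrade(meeting))   # -> "attendee_2"
```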
In an alternate embodiment of the disclosed invention, the system may maximize available bandwidth by distributing avatar information to users via peer-to-peer (P2P) systems. In this embodiment, users can download avatar content from one or more other system users. This direct download from other users may, but is not required to, occur in conjunction with downloads from the system.
Referring now to
The system can continuously or periodically evaluate whether the monitored quantity or quantities meet specified criteria as shown in decision 36. If the measured quantities do not meet the specified criteria as determined in decision 36, the system can continue to monitor the specified quantities as shown in block 34. If the quantities monitored in block 34 meet the specified criteria as determined in decision 36 the system can change the state of the multiple-state avatar 22 to an alternate state as shown in block 38. The state change can be a change of any kind with respect to any receptor point 26a-n of the multiple-state avatar 22. The state to which the multiple-state avatar 22 is changed may be a static image hybrid state, a live video hybrid state, a pre-recorded video hybrid state or any other type of avatar state as previously described.
Having caused the avatar state to change in block 38, the system can continue to monitor the specified quantities with relation to the specified criteria as shown in block 40 of the flow chart 30. The system can periodically or continuously evaluate whether the measured quantities satisfy the specified criteria as shown in decision 42. If the measured quantities continue to meet the specified criteria, the system can continue to measure the specified quantities in block 40. If the measured quantities no longer meet the specified criteria, the system can cause the multiple-state avatar 22 to return to a previous state as shown in block 44. Furthermore, the multiple-state avatar 22 can be caused to change to any other state at that point. The system can return to block 34 and continue monitoring the specified measurable quantities.
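The monitoring and state-change logic described for flow chart 30 can be summarized in the following Python sketch. The loop structure mirrors blocks 34 through 44 of the flow chart, while the function names, polling interval, and canned measurements are illustrative assumptions.

```python
# Hypothetical sketch of the control loop described by flow chart 30.
# `measure_quantity` and `criteria_met` stand in for whatever monitored
# quantity and configurable criteria the system uses.
import time

def run_state_monitor(avatar, measure_quantity, criteria_met,
                      alternate_state, poll_seconds=0.5, max_iterations=100):
    previous_state = avatar["state"]
    for _ in range(max_iterations):
        # Block 34: monitor the specified measurable quantity.
        value = measure_quantity()
        # Decision 36: does the monitored quantity meet the criteria?
        if criteria_met(value):
            # Block 38: change the avatar to the alternate state.
            avatar["state"] = alternate_state
            # Blocks 40/42: keep monitoring while the criteria remain satisfied.
            while criteria_met(measure_quantity()):
                time.sleep(poll_seconds)
            # Block 44: criteria no longer met, return to the previous state.
            avatar["state"] = previous_state
        time.sleep(poll_seconds)

# Example usage with a canned sequence of distance measurements (illustrative).
readings = iter([20.0, 8.0, 7.0, 25.0])
avatar = {"state": "non_hybrid"}
run_state_monitor(avatar,
                  measure_quantity=lambda: next(readings, 100.0),
                  criteria_met=lambda d: d < 10.0,
                  alternate_state="live_video_hybrid",
                  poll_seconds=0.0, max_iterations=2)
print(avatar["state"])   # back to "non_hybrid" after the distance increases again
```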
Thus, while the flow chart 30 illustrates a process suitable for controlling a multiple-state avatar 22 having two states (e.g., hybrid and non-hybrid states, or two different hybrid states), a person of ordinary skill in the art will understand that similar measurement, comparison and state change steps may be implemented for multiple-state avatars 22 having any number of states. This allows the system to change the multiple-state avatar 22, for example, sequentially from a static image state, to one type of hybrid state, to another type of hybrid state.
In practice, by way of example, the disclosed system may be employed in an online virtual retail store. A user multiple-state avatar 22 may approach a service avatar (which can also be a multiple-state avatar 22) in the online store and engage in interaction related to a sales transaction. All or selected portions of the service multiple-state avatar 22 may include receptor points 26a-n. As such, a more realistic conversation can take place between the user and the service personnel given the facial movements, gestures, lip movements, and other aspects of their respective multiple-state avatars 22. The advantages of the system over traditional avatar technology may allow an online merchant to derive greater financial return as a result of the more life-like interaction.
In a further example, the subject system may be employed to facilitate online conferencing. In this situation, the multiple-state avatar 22 of the person speaking may be switched to a hybrid state. This allows the speaking user to communicate more effectively and realistically with the other users. Similarly, the multiple-state avatars 22 of the users not speaking may also be switched to the hybrid state, thereby allowing the speaking user to view the real time reactions of participating users through their respective multiple-state avatars 22. The system may also trigger a state change by analyzing where other multiple-state avatars 22 are looking. A multiple-state avatar 22 receiving the focus of another user avatar can undergo a state change. As a result, the user experience and the realism of the conference can be enhanced for all users of the system.
In another example, the system may be used in an online virtual party. Any number of users can attend the party, each being represented by an avatar such as the multiple-state avatar 22. The system's methods for determining user subsets to observe avatar states may calculate any number of multiple user subsets. Each user subset can view different avatar behavior relative to other multiple-state avatars 22. In such a virtual party, a user A may see the full live video state of the multiple-state avatar 22 of a user B with whom he is speaking directly. User A may see different states of other users at the party. Possible examples include user A observing the gross body movements of a multiple-state avatar C across the room, but not the facial expressions of avatar C. Further still, user A may observe only a traditional mode of a fourth avatar D, whose back is to user A.
In any of the preceding examples, the system's methods for reducing the impact on network performance may be employed if necessary to maintain the appearance of real-time interaction between users when network performance is reduced or when users connect to the system via a slow internet connection. The states of multiple-state avatars 22 can be restricted to a subset of the possible states either by service contracts, by user preferences, or in any other agreed upon way. The restrictions can be imposed on a specific session or more generally.
While the invention has been described in detail and with reference to specific examples thereof, it will be apparent to one skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope thereof.
Under 35 U.S.C. 120, this application is a Continuation Application and claims priority to U.S. application Ser. No. 12/174,985, filed Jul. 17, 2008, entitled “SYSTEM AND METHOD FOR ENABLING MULTIPLE-STATE AVATARS,” which is incorporated herein by reference.