The field of the present disclosure relates generally to computer systems. In particular, the present disclosure is directed to a method and system for providing virtual experiences.
Virtual goods are non-physical objects that are purchased for use in online communities or online games. They have no intrinsic value and, by definition, are intangible. Virtual goods include such things as digital gifts and digital clothing for avatars. Virtual goods may be classified as services instead of goods and are sold by companies that operate social networks, community sites, or online games. Sales of virtual goods are sometimes referred to as microtransactions.

Virtual reality (VR) is a term that applies to computer-simulated environments that can simulate places in the real world, as well as in imaginary worlds. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications.
These and other objects, features and characteristics of the present disclosure will become more apparent to those skilled in the art from a study of the following detailed description in conjunction with the appended claims and drawings, all of which form a part of this specification. In the drawings:
Various examples of the present disclosure will now be described. The following description provides specific details for a thorough understanding and enabling description of these examples. One skilled in the relevant art will understand, however, that the present disclosure may be practiced without many of these details. Likewise, one skilled in the relevant art will also understand that the present disclosure can include many other obvious features not described in detail herein. Additionally, some well-known structures or functions may not be shown or described in detail below, so as to avoid unnecessarily obscuring the relevant description.
The terminology used below is to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of the present disclosure. Indeed, certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section.
According to one embodiment of the present system, virtual goods may be evolved into virtual experiences. Virtual experiences may expand beyond limitations imposed by virtual goods by adding additional dimensions to the virtual goods. By way of example, Participant A, using a mobile device, transmits flowers as a virtual experience to Participant B, who accesses them on a second device. The transmission of the virtual flowers may be enhanced by adding emotion, for example by way of sound. The virtual flowers may also become a virtual experience when Participant B can do something with the flowers; for example, Participant B can affect the flowers through any sort of motion or gesture. Participant A can also transmit the virtual goods to Participant B by making a “throwing” gesture using a mobile device, so as to “toss” the virtual goods to Participant B.
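By way of a non-limiting illustration, the following Python sketch shows one way such a “throwing” gesture could be detected on the sender's device and turned into a transferable event; the event fields, the speed threshold, and the helper names (ThrowEvent, detect_throw, send_to_recipient) are assumptions made only for this example and are not drawn from the disclosure.

    # Illustrative sketch only: turn a swipe on the sender's screen into a
    # "throw" event. All names and the speed threshold are hypothetical.
    from dataclasses import dataclass
    import json
    import math

    @dataclass
    class ThrowEvent:
        item: str              # e.g., "flowers"
        sender: str
        recipient: str
        direction_deg: float   # direction of the gesture on the sender's screen
        speed: float           # gesture speed, used to scale the animation

    def detect_throw(start_xy, end_xy, duration_s, min_speed=500.0):
        # Return (direction_deg, speed) if the swipe is fast enough to count as
        # a throw, else None. Coordinates are in screen pixels.
        dx, dy = end_xy[0] - start_xy[0], end_xy[1] - start_xy[1]
        speed = math.hypot(dx, dy) / max(duration_s, 1e-6)
        if speed < min_speed:
            return None
        return math.degrees(math.atan2(dy, dx)), speed

    def send_to_recipient(event):
        # A real system would send this over the platform's messaging channel;
        # here the event is simply serialized.
        return json.dumps(event.__dict__)

    gesture = detect_throw((100, 800), (600, 200), 0.25)
    if gesture is not None:
        direction, speed = gesture
        print(send_to_recipient(
            ThrowEvent("flowers", "participant_a", "participant_b", direction, speed)))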
Some key differences between prior art virtual goods and the virtual experiences of the present application may include, for example, the physicality, togetherness, real-time nature, emotion, response time, etc. of the portrayed experience. For example, when a participant wishes to throw a rotten tomato at a video/image that is playing over social media (on a large display screen in a room that has several participants with personal mobile devices connected to the virtual experience platform) as part of a virtual experience, he may, in this illustrative example, portray the physical action of throwing a tomato (after choosing a tomato that is present as a virtual object) by using physical gestures on his screen. This physical action may cause a tomato to move from the participant's mobile device in an interconnected live-action format: the virtual tomato first starts from the participant's device, pans across the screen of the participant's mobile device in the direction of the physical gesture, and, after leaving the boundary of the screen of the participant's mobile device, is then shown hurtling across the central larger screen (with appropriate delays to enhance the realism of the virtual experience), and is finally splotched on the screen with appropriate virtual displays. In this example, the direction and trajectory of the transferred virtual object may depend on the physical gesture.
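The cross-screen handoff described above could be approximated as in the following illustrative sketch, in which the thrown object animates on the sender's device until it exits the device's screen bounds and, after a short delay suggesting travel time, continues on the shared display; the screen dimensions, frame rate, and delay are hypothetical values chosen only for demonstration.

    # Hypothetical sketch of the cross-screen handoff: the tomato animates on
    # the sender's device until it leaves the screen bounds, then reappears on
    # the shared screen after a short delay that suggests travel time.
    def simulate_throw(start, velocity, device_size, frames=120, dt=1 / 30):
        x, y = start
        vx, vy = velocity
        surface = "sender_device"
        handoff_frame = None
        for frame in range(frames):
            x += vx * dt
            y += vy * dt
            off_device = not (0 <= x <= device_size[0] and 0 <= y <= device_size[1])
            if surface == "sender_device" and off_device:
                surface = "in_transit"
                handoff_frame = frame + 10   # ~0.3 s artificial delay for realism
            if surface == "in_transit" and frame >= handoff_frame:
                # A real renderer would also remap coordinates into the shared
                # screen's space at this point.
                surface = "shared_screen"
            yield frame, surface, round(x), round(y)

    path = list(simulate_throw(start=(50, 400), velocity=(900, -300), device_size=(480, 800)))
    print(path[0])    # first frame, still on the sender's device
    print(path[-1])   # last frame, shown on the shared screen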
In addition to the visual experience, accompanying sound effects may further add to the overall virtual experience. For example, when the “tomato throw” starts from the participant's mobile device, a swoosh sound first emanates from the participant's mobile device and then follows the visual cues (e.g., the sound is transferred to the larger device when the visual display of the tomato first appears on the larger device) to provide a more realistic “tomato throw” experience.
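One simple way to keep the sound effects aligned with the visual cues is a shared cue schedule, as in the following hedged sketch; the timings, device identifiers, and sound names are illustrative assumptions, and the cross-device clock synchronization itself is not addressed here.

    # Illustrative cue schedule: the sound stays with the visual, moving the
    # swoosh to the shared screen when the tomato first appears there. Times
    # are in seconds and are assumptions, as are the device and sound names.
    cues = [
        {"t": 0.00, "device": "sender_phone", "action": "play", "sound": "swoosh"},
        {"t": 0.40, "device": "sender_phone", "action": "stop", "sound": "swoosh"},
        {"t": 0.40, "device": "shared_screen", "action": "play", "sound": "swoosh"},
        {"t": 1.10, "device": "shared_screen", "action": "play", "sound": "splat"},
    ]

    def cues_by_device(cues):
        # Group cues per device so each device can schedule them locally against
        # a shared clock; the clock synchronization itself is out of scope here.
        grouped = {}
        for cue in sorted(cues, key=lambda c: c["t"]):
            grouped.setdefault(cue["device"], []).append(cue)
        return grouped

    print(cues_by_device(cues))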
In some embodiments, a virtual experience may include a virtual goods component, an animation component, and an accompanying sound component. The animation component and/or the virtual goods component may be indicative of an idea a transmitting participant intended to convey to a recipient participant.
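As a rough illustration of this three-part composition, a virtual experience could be represented as a small record holding the virtual goods, animation, and sound components together with the intended idea; the field names below are assumptions, not a claimed data format.

    # Hedged sketch of the three-part composition; field names are assumptions,
    # not a claimed format.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualExperience:
        virtual_good: str             # e.g., "tomato" or "flowers"
        animation: str                # e.g., "throw_arc"
        sound: str                    # e.g., "swoosh_then_splat"
        intent: str = "unspecified"   # the idea the sender wants to convey
        metadata: dict = field(default_factory=dict)

    congrats = VirtualExperience("fireworks", "burst_over_screen", "crackle",
                                 intent="congratulations")
    print(congrats)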
While this is an elementary and exemplary illustration of virtual experiences, such principles can be ported to numerous applications that involve, for example, emotions surrounding everyday activities, such as watching sports activities together, congratulating other participants on personal events or accomplishments in a shared online game, etc. Such transfer of emotions and other such factors in the virtual experience context may pan over multiple computing devices, sensors, displays, displays within displays or split displays, etc. The overall rendering and execution of the virtual experiences may be specific to each local machine or may be controlled overall over a cloud environment (e.g., Amazon® cloud services), where a server computing unit on the cloud maintains connectivity (e.g., using APIs) with the devices associated with the virtual experience platform. The overall principles discussed herein are directed to synchronous and live experiences offered over a virtual experience platform; asynchronous experiences are also contemplated, as will be discussed further below. Synchronization of virtual experiences may pan across the displays of several devices, or several networks connected to a common hub that operates the virtual experience.

Monetization of the virtual experience platform is envisioned in several forms. For example, participants may purchase virtual objects that they wish to utilize in a virtual experience (e.g., purchase a tomato to use in the virtual throw experience), or may even purchase virtual events, such as the capability of making three tomato throws at the screen. In some aspects, the monetization model may also include the use of branded products (e.g., passing around a 1-800-Flowers® bouquet of flowers to convey an emotional experience, where the relevant owner of the brand may also compensate the platform for marketing initiatives). Such virtual experiences may pan from simple to complex scenarios. Examples of complex scenarios may include a virtual birthday party or a virtual football game event where several participants are connected over the Internet to watch a common game or a video of the birthday party. The participants can see each other over video displays and selectively or globally communicate with each other. Participants may then convey emotions by, for example, throwing tomatoes at the screen or by causing fireworks to come up over a momentous occasion, which is then propagated as an experience over the screens.
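A server computing unit that maintains connectivity with participant devices could, for instance, fan an experience event out to every connected device through a common hub, as in the following minimal in-memory sketch; a deployed system would use push channels or network APIs, and none of the class or method names below come from the disclosure.

    # Minimal in-memory hub, offered only as a sketch: a cloud-hosted server
    # could fan an experience event out to every connected device. Real systems
    # would use network push channels; these names are not from the disclosure.
    class ExperienceHub:
        def __init__(self):
            self.devices = {}   # device_id -> callback that delivers events

        def connect(self, device_id, deliver):
            self.devices[device_id] = deliver

        def broadcast(self, event, exclude=None):
            for device_id, deliver in self.devices.items():
                if device_id != exclude:
                    deliver(event)

    hub = ExperienceHub()
    hub.connect("phone_a", lambda e: print("phone_a got", e))
    hub.connect("living_room_tv", lambda e: print("living_room_tv got", e))
    hub.broadcast({"type": "tomato_throw", "from": "phone_a"}, exclude="phone_a")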
An exemplary overall block diagram of the virtual experience platform is provided in
While there are numerous virtual experiences that can effectively utilize the principles discussed herein, the following sections detail the experiences associated with targeted virtual experiences. A first example, described in
The above description discussed various examples of virtual experiences and a platform that provides synchronous or asynchronous mechanisms for providing such virtual experiences. The description now focuses on the virtual engine that enables such a virtual experience platform. In the prior art, products such as Adobe Flash® and HTML5 3D game animation engines (e.g., Unity®, Crytek®, etc.) were available as potential engines to provide animation. The key ideas behind a virtual animation engine include the provision of high-quality animation on a mobile device/screen with limited processing capabilities. In addition to these capabilities, the virtual engine will also have to work with other everyday experiences, unlike prior art game engines that assume they will render the whole environment. The devices used for virtual experiences may have limited processing capabilities, especially smart phones that have to use their resources for regular communication capabilities, etc. Accordingly, in embodiments, the virtual engine may utilize a cloud computing environment for the various rendering activities.
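As a simple illustration of offloading under limited device capability, a policy such as the following could decide whether an animation is rendered locally or over the cloud; the capability score, load estimate, and threshold are assumptions made only for this example.

    # Illustrative policy only: decide whether an animation runs on the device
    # or is offloaded to a cloud renderer. The capability score, load estimate,
    # and cost units are assumptions.
    def choose_renderer(device, animation_cost):
        budget = device["cpu_score"] * (1.0 - device["background_load"])
        return "local" if budget >= animation_cost else "cloud"

    busy_phone = {"cpu_score": 30.0, "background_load": 0.6}
    print(choose_renderer(busy_phone, animation_cost=20.0))   # -> 'cloud'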
In some embodiments, a modeled environment that uses the execution capability of clients by splitting the execution task over the multiple clients (based on their cached availability, for example) may also be utilized for rendering. A purely local execution and rendering environment may be used where performance and instant or seamless delivery are expected. If such local execution is unavailable or is not an option, the local capabilities may be combined with cloud computing capabilities. If limited capabilities are present, then execution or rendering may be split in a selected manner. For example, in embodiments, if a virtual object related to a virtual experience or the virtual experience itself is purchased (as opposed to using something already in a cache), rendering/execution related to the purchase may be performed locally or within a local network, and the remaining rendering may be performed over the cloud.
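The split described above might be expressed, purely as a sketch, by a small task-assignment routine that keeps cached tasks on the client, keeps purchase-related work within the local network, and sends everything else to the cloud; the task names and categories are illustrative only.

    # Hedged sketch of the split: cached tasks stay on the client, purchase-
    # related work stays in the local network, everything else goes to the
    # cloud. Task names are illustrative only.
    def assign_tasks(tasks, client_cache, purchase_related):
        plan = {}
        for task in tasks:
            if task in client_cache:
                plan[task] = "client"
            elif task in purchase_related:
                plan[task] = "local_network"
            else:
                plan[task] = "cloud"
        return plan

    print(assign_tasks(
        tasks=["tomato_model", "purchase_receipt_ui", "splat_particles"],
        client_cache={"tomato_model"},
        purchase_related={"purchase_receipt_ui"}))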
In some embodiments, rendering of animations with respect to a virtual experience may be performed over a cloud. For example, in an illustrative environment where one participant throws a tomato on a screen, another participant may be able to receive the thrown tomato on his screen, but may not be able to throw it back or throw another tomato until buying such a tomato. Here, the purchase processing may be performed locally, but the rendering of the animation of the tomato swooshing across the screen and splotching on a desired target is all performed over the cloud. Each of the connected devices includes codecs (e.g., SENTIO codecs as defined in U.S. patent application Ser. No. 13/165,710, entitled “Just-In-Time Transcoding of Application Content,” which is incorporated herein by reference in its entirety) for direct connection with servers over the cloud and for transparency with the cloud computing environment.
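The entitlement rule in this example, receiving a thrown tomato without being able to throw one back until a purchase is made, could be modeled as in the following hypothetical sketch; the inventory structure and method names are assumptions and do not reflect the platform's actual purchase processing.

    # Hypothetical sketch of the entitlement rule in this example: a participant
    # can receive a thrown tomato but cannot throw one until purchasing it. The
    # inventory model is an assumption, not the platform's purchase processing.
    class Participant:
        def __init__(self, name):
            self.name = name
            self.inventory = {}   # item -> count

        def purchase(self, item, count=1):
            # Purchase processing happens locally in the example above.
            self.inventory[item] = self.inventory.get(item, 0) + count

        def can_throw(self, item):
            return self.inventory.get(item, 0) > 0

        def throw(self, item):
            if not self.can_throw(item):
                raise PermissionError(f"{self.name} must purchase a {item} first")
            self.inventory[item] -= 1
            return {"type": f"{item}_throw", "from": self.name}   # rendered over the cloud

    participant_b = Participant("participant_b")
    print(participant_b.can_throw("tomato"))   # False: receiving does not grant a throw
    participant_b.purchase("tomato")
    print(participant_b.throw("tomato"))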
The one or more processor(s) 3220 may include central processing units (CPUs) to control the operations of, for example, the host computer. In some embodiments, the processor(s) 3220 may accomplish the operations by executing software or firmware stored in the one or more memory 3230. The one or more processor(s) 3220 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.
The one or more memory 3230 may represent any form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. In use, the one or more memory 3230 may contain, among other things, a plurality of machine instructions which, when executed by the one or more processor(s) 3220, cause the one or more processor(s) 3220 to perform the operations to implement embodiments of the present disclosure.
The virtual experience server 3200 may also include a network adapter 3210, which is connected to the one or more processor(s) 3220 through the interconnect 3250. The network adapter 3210 may provide the virtual experience server 3200 with the ability to communicate with devices of online participants, remote devices (e.g., storage clients), and/or other storage servers. The network adapter 3210 may be, for example, an Ethernet adapter or a Fibre Channel adapter.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense (that is to say, in the sense of “including, but not limited to”), as opposed to an exclusive or exhaustive sense. As used herein, the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements. Such a coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number, respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
The above Detailed Description of examples of the present disclosure is not intended to be exhaustive or to limit the present disclosure to the precise form disclosed above. While specific examples of the present disclosure are described above for illustrative purposes, various equivalent modifications are possible within the scope of the present disclosure, as those skilled in the relevant art will recognize. While processes or blocks are presented in a given order in this application, alternative implementations may perform routines having steps performed in a different order, or employ systems having blocks in a different order. Some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or sub-combinations. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed or implemented in parallel, or may be performed at different times. Further, any specific numbers noted herein are only examples. It is understood that alternative implementations may employ differing values or ranges.
The various illustrations and teachings provided herein can also be applied to systems other than the system described above. The elements and acts of the various examples described above can be combined to provide further implementations of the present disclosure.
Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the present disclosure can be modified, if necessary, to employ the systems, functions, and concepts included in such references to provide further implementations of the present disclosure.
These and other changes can be made to the present disclosure in light of the above Detailed Description. While the above description describes certain examples of the present disclosure, and describes the best mode contemplated, no matter how detailed the above appears in text, the present disclosure can be practiced in many ways. Details of the system may vary considerably in their specific implementation, while still being encompassed by the present disclosure. As noted above, particular terminology used when describing certain features or aspects of the present disclosure should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the present disclosure with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the present disclosure to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the present disclosure encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the present disclosure under the claims.
While certain aspects of the present disclosure are presented below in certain claim forms, the applicant contemplates the various aspects of the present disclosure in any number of claim forms. For example, while only one aspect of the present disclosure is recited as a means-plus-function claim under 35 U.S.C. §112, sixth paragraph, other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. (Any claims intended to be treated under 35 U.S.C. §112, ¶6 will begin with the words “means for.”) Accordingly, the applicant reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the present disclosure.
In addition to the above-mentioned examples, various other modifications and alterations of the present disclosure may be made without departing from the present disclosure. Accordingly, the above disclosure is not to be considered as limiting, and the appended claims are to be interpreted as encompassing the true spirit and the entire scope of the present disclosure.
This application claims the benefit of U.S. Provisional Application No. 61/506,168, entitled “Methods and Systems for Virtual Experiences,” filed Jul. 11, 2011, which is hereby incorporated herein by reference in its entirety.