The present invention contemplates a variety of improved methods and systems for providing a general purpose tool to enable a wide variety of applications referred to as experiential computing.
Some of the attributes of “experiential computing” are: 1) pervasive—it assumes multi-screen, multi-device, multi-sensor computing environments, both personal and public; this is in contrast to the “personal computing” paradigm, in which computing is defined as one person interacting with one device (such as a laptop or phone) at any given time; 2) the applications focus on invoking feelings and emotions, as opposed to consuming and finding information or data processing; 3) multiple dimensions of input and sensor data, such as physicality; and 4) people connected together—live and synchronously: multi-person, social, real-time interaction allowing multiple people to interact with each other live using voice, video, gestures and other types of input.
The experience platform may be provided by a service provider to enable an experience provider to compose and direct a participant experience. The service provider monetizes the experience by charging the experience provider and/or the participants for services. The participant experience can involve one or more experience participants. The experience provider can create an experience with a variety of dimensions and features. As will be appreciated, the following description provides one paradigm for understanding the multi-dimensional experience available to the participants. There are many suitable ways of describing, characterizing and implementing the experience platform contemplated herein.
These and other objects, features and characteristics of the present invention will become more apparent to those skilled in the art from a study of the following detailed description in conjunction with the appended claims and drawings, all of which form a part of this specification. In the drawings:
In general, services are defined at an API layer of the experience platform. The services are categorized into “dimensions.” The dimension(s) can be recombined into “layers,” and the layers combine to form features of the experience; an illustrative sketch of this hierarchy follows the list of example dimensions below.
By way of example, the following are some of the dimensions that can be supported on the experience platform.
Video—is the near or substantially real-time streaming of the video portion of a video or film with near real-time display and interaction.
Audio—is the near or substantially real-time streaming of the audio portion of a video, film, karaoke track, or song, with near real-time sound and interaction.
Live—is the live display and/or access to a live video, film, or audio stream in near real-time that can be controlled by another experience dimension. A live display is not limited to a single data stream.
Encore—is the replaying of a live video, film or audio content. This replaying can be the raw version as it was originally experienced, or some type of augmented version that has been edited, remixed, etc.
Graphics—is a display that contains graphic elements such as text, illustration, photos, freehand geometry and the attributes (size, color, location) associated with these elements. Graphics can be created and controlled using the experience input/output command dimension(s) (see below).
Input/Output Command(s)—are the ability to control the video, audio, picture, display, sound or interactions with human or device-based controls. Some examples of input/output commands include physical gestures or movements, voice/sound recognition, and keyboard or smart-phone device input(s).
Interaction—is how devices and participants interchange and respond with each other and with the content (user experience, video, graphics, audio, images, etc.) displayed in an experience. Interaction can include the defined behavior of an artifact or system and the responses provided to the user and/or player.
Game Mechanics—are rule-based system(s) that facilitate and encourage players to explore the properties of an experience space and other participants through the use of feedback mechanisms. Some services on the experience platform that could support the game mechanics dimension include leader boards, polling, like/dislike, featured players, star-ratings, bidding, rewarding, role-playing, problem-solving, etc.
Ensemble—is the interaction of several separate but often related parts of video, song, picture, story line, players, etc. that when woven together create a more engaging and immersive experience than if experienced in isolation.
Auto Tune—is the near real-time correction of pitch in vocal and/or instrumental performances. Auto Tune is used to disguise off-key inaccuracies and mistakes, and allows singer/players to hear back perfectly tuned vocal tracks without the need to sing in tune.
Auto Filter—is the near real-time augmentation of vocal and/or instrumental performances. Types of augmentation could include speeding up or slowing down the playback, increasing/decreasing the volume or pitch, or applying a celebrity-style filter to an audio track (like a Lady Gaga or Heavy-Metal filter).
Remix—is the near real-time creation of an alternative version of a song, track, video, image, etc. made from an original version or multiple original versions of songs, tracks, videos, images, etc.
Viewing 360°/Panning—is the near real-time viewing of the 360° horizontal movement of a streaming video feed on a fixed axis. It also includes the ability for the player(s) to control and/or display alternative video or camera feeds from any point designated on this fixed axis.
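By way of illustration only, the following TypeScript sketch shows one possible, simplified data model for the dimension/layer/feature hierarchy described above; the type and function names (Dimension, Layer, ExperienceFeature, composeFeature) are assumptions made for this example and are not part of the platform's actual API.

```typescript
// Illustrative sketch only: names are hypothetical, not the platform's actual API.

// A dimension is a category of service exposed at the API layer.
type Dimension =
  | "video" | "audio" | "live" | "encore" | "graphics"
  | "io-command" | "interaction" | "game-mechanics" | "ensemble"
  | "auto-tune" | "auto-filter" | "remix" | "viewing-360";

// A layer recombines one or more dimensions into a deliverable unit
// that can be rendered on, or moved between, participant devices.
interface Layer {
  id: string;
  dimensions: Dimension[];
  targetDevices: string[]; // device ids the layer is currently rendered on
}

// A feature of the experience is formed from one or more layers.
interface ExperienceFeature {
  name: string;
  layers: Layer[];
}

// Compose a feature from layers; purely a data-structure exercise here.
function composeFeature(name: string, layers: Layer[]): ExperienceFeature {
  return { name, layers };
}

// Example: a "live karaoke" feature built from a video layer and an
// audio layer that also carries the auto-tune dimension.
const karaoke = composeFeature("live-karaoke", [
  { id: "stage-video", dimensions: ["video", "live"], targetDevices: ["tv-1"] },
  { id: "vocals", dimensions: ["audio", "auto-tune"], targetDevices: ["phone-1"] },
]);
console.log(karaoke.layers.map(l => l.id)); // ["stage-video", "vocals"]
```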
Layer Composition: The first feature, layer composition, is described herein with reference to
Layer Mobility: The next feature is the mobility of a layer, i.e., its ability to move across devices and, for example, to be present simultaneously across different consumer devices. In embodiments, such mobility allows for layer virtualization across the devices. Layer mobility is further discussed with reference to
Consider, for instance, that in the above example all three devices are located in the same local network. Once a device connects to the personal service, the personal service can send each device this information, so that within the local network the devices can communicate with each other: they can simply establish direct links and initiate the layer movement and other operations while merely notifying the personal service. Another example is where the devices are located in the same geographical location but are connected to different WiFi or wired networks, so there is no direct link. Here, the devices would instead communicate through the personal service.

Regardless of the mechanism, the end result is that device 3 initiates a signal to both the destination device (in this case device 2) and the source device, informing the devices that they should prepare for layer movement; devices that communicate through the service direct the corresponding request to the service responsible for the layer. As an additional example, suppose a layer is created in the cloud. In this case, device 3 tells device 2 to accept the specific request that indicates the particular layer, and device 1 prepares for layer movement. The devices can talk to each other to accept the movement and to identify how the layer movement is progressing. The devices may synchronize the animation between device 1 and device 2, that is, how the layer disappears on the first device and appears on the second device. Finally, the layer on device 1 moves to device 2. The important part is that both devices are connected to the service responsible for the work under this layer, so this communication and transmission can happen in either direction. As far as the layer is concerned, the same mechanism can be applied from the layer application: it simply communicates to the corresponding service in the cloud to send an identical or device-specific stream containing the layer to another device. The layer movement feature is important because, if an application is composed of multiple layers, the layers can be split so that one layer can go to a tablet device and another layer can go to a TV device, and the layers can adapt based on the devices; this allows for experiences that can be split across devices. A minimal signaling sketch of this hand-off is shown below.
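By way of illustration only, the following TypeScript sketch outlines one possible shape for the layer-movement signaling described above; the message kinds, the Transport interface, and the control flow are assumptions made for this example rather than the platform's actual protocol.

```typescript
// Hypothetical signaling for moving a layer from a source device to a
// destination device, coordinated by an initiating device. Message names,
// the Transport interface, and the control flow are illustrative assumptions.

type DeviceId = string;

interface Transport {
  // Delivers a message either over a direct local link or via the
  // personal service in the cloud, depending on reachability.
  send(to: DeviceId, message: LayerMessage): Promise<void>;
}

type LayerMessage =
  | { kind: "prepare-move"; layerId: string; from: DeviceId; to: DeviceId }
  | { kind: "move-progress"; layerId: string; percent: number }
  | { kind: "move-complete"; layerId: string };

// The initiating device tells both the source and the destination to prepare,
// then progress messages let both sides synchronize the disappear/appear animation.
async function initiateLayerMove(
  transport: Transport,
  layerId: string,
  source: DeviceId,
  destination: DeviceId,
): Promise<void> {
  const prepare: LayerMessage = { kind: "prepare-move", layerId, from: source, to: destination };
  await Promise.all([transport.send(source, prepare), transport.send(destination, prepare)]);
}

async function reportProgress(
  transport: Transport,
  layerId: string,
  destination: DeviceId,
  percent: number,
): Promise<void> {
  if (percent >= 100) {
    await transport.send(destination, { kind: "move-complete", layerId });
  } else {
    await transport.send(destination, { kind: "move-progress", layerId, percent });
  }
}
```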
Layer Computation: The next feature is the computation model of the layer, i.e., where the computation for the layer happens and what the data exchange mechanism for the layer is. Reference is made to
Data layers may be sent as data streams and event streams that can be, for example, transmitted through the cloud. In embodiments, each layer transmits all of its data to a remote computing node and communicates only with that node. In some examples each layer, and further in some examples each instance of such a layer, works on exactly the same model, so as to offer a central point of communication that is publicly accessible with good bandwidth and other network capabilities. This exemplary embodiment offers a single point of routing. In other embodiments, a “peering network” may be used, in which the devices talk with each other directly for such communication. A third possible approach is a combination of the two previous approaches, e.g., in a scenario where there are three persons in different locations and each of the persons has three devices. Each person's devices are located, for example, in the same Wi-Fi network, so those devices can communicate locally, but when it comes to communication across the local networks the cloud model may be adopted.
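By way of illustration only, the following TypeScript sketch shows one way the hybrid approach might choose between a direct local link and a cloud relay for a pair of devices; the DeviceInfo and Route shapes and the selection rule are assumptions made for this example.

```typescript
// Hypothetical route selection between the pure-cloud, pure-peering, and
// hybrid computation/communication models described above.

type DeviceId = string;

interface DeviceInfo {
  id: DeviceId;
  localNetworkId: string | null; // e.g. a Wi-Fi network identifier, if known
}

type Route =
  | { kind: "direct" }                // same local network: peer-to-peer link
  | { kind: "cloud"; relay: string }; // different networks: relay via the cloud

function chooseRoute(a: DeviceInfo, b: DeviceInfo, cloudRelay: string): Route {
  const sameLocalNetwork =
    a.localNetworkId !== null && a.localNetworkId === b.localNetworkId;
  return sameLocalNetwork ? { kind: "direct" } : { kind: "cloud", relay: cloudRelay };
}

// Example: two of a participant's devices share a Wi-Fi network, a third
// participant's device does not, so traffic to it goes through the cloud.
const tablet = { id: "tablet-1", localNetworkId: "home-wifi" };
const tv = { id: "tv-1", localNetworkId: "home-wifi" };
const remotePhone = { id: "phone-2", localNetworkId: "cafe-wifi" };

console.log(chooseRoute(tablet, tv, "relay.example.net"));          // { kind: "direct" }
console.log(chooseRoute(tablet, remotePhone, "relay.example.net")); // { kind: "cloud", ... }
```

In practice the choice could also weigh bandwidth and latency, but network locality alone captures the hybrid model described above.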
Layer Outputs:
In this illustrated scenario, the idea conveyed is that even if the game, application, or layer was not designed for a particular type of input, a meta-schema can be implemented that enables it to work with that type of input. Thus, without any modification to the layer or application agent, a configuration file that establishes a map between the input parameters of the device and application-specific actions is enough to meet the required objectives. In another example, a PC-based game knows how to react to mouse movement; we then add accelerometer input from a mobile phone and simply map the rotation of the view of the main character in the game to the accelerometer motion of the phone. So when a user rotates the phone, the main view rotates. This illustrates how the schema can extend the input capabilities of the original application layers. A sketch of such a mapping configuration follows.
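By way of illustration only, the following TypeScript sketch shows what such an input-mapping configuration and adapter might look like; the InputMapping shape, the parameter names, and the applyMapping helper are assumptions made for this example.

```typescript
// Hypothetical input-mapping "meta-schema": a configuration that maps a new
// device input onto an application-specific action, with no change to the
// application layer itself. All names here are illustrative.

interface InputMapping {
  source: { device: string; input: string; parameter: string };
  target: { action: string; parameter: string; scale?: number };
}

// Map phone accelerometer rotation to the game's "rotate main view" action,
// which was originally driven by mouse movement.
const mappings: InputMapping[] = [
  {
    source: { device: "phone", input: "accelerometer", parameter: "yaw" },
    target: { action: "rotate-view", parameter: "horizontalDegrees", scale: 1.0 },
  },
];

// A thin adapter applies the mapping: it receives raw sensor values and emits
// the application-specific events the game already understands.
function applyMapping(
  mapping: InputMapping,
  rawValue: number,
  emit: (action: string, params: Record<string, number>) => void,
): void {
  const scaled = rawValue * (mapping.target.scale ?? 1.0);
  emit(mapping.target.action, { [mapping.target.parameter]: scaled });
}

// Example: a 15-degree phone rotation becomes a rotate-view event.
applyMapping(mappings[0], 15, (action, params) => console.log(action, params));
// -> "rotate-view" { horizontalDegrees: 15 }
```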
Turning back to
Each device 12 has an experience agent 32. The experience agent 32 includes a sentio codec and an API. The sentio codec and the API enable the experience agent 32 to communicate with and request services of the components of the data center 40. The experience agent 32 also facilitates direct interaction between local devices. Because of the multi-dimensional aspect of the experience, the sentio codec and API are required to fully enable the desired experience. However, the functionality of the experience agent 32 is typically tailored to the needs and capabilities of the specific device 12 on which the experience agent 32 is instantiated. In some embodiments, services implementing experience dimensions are implemented in a distributed manner across the devices 12 and the data center 40. In other embodiments, the devices 12 have a very thin experience agent 32, with little functionality beyond a minimum API and sentio codec, and the bulk of the services, and thus the composition and direction of the experience, are implemented within the data center 40.
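By way of illustration only, the following TypeScript sketch suggests one possible surface for an experience agent, combining a sentio codec with a service-request API; all identifiers here are invented for the example and do not reflect the actual agent interface.

```typescript
// Hypothetical shape of an experience agent: an API for requesting services
// from the data center plus a sentio codec for multi-dimensional streams.
// All identifiers here are invented for illustration.

interface EncodedStream {
  dimension: string;   // e.g. "video", "audio", "gesture", "emotion"
  payload: Uint8Array; // encoded bytes
}

interface SentioCodec {
  encode(dimension: string, data: unknown): EncodedStream;
  decode(stream: EncodedStream): unknown;
}

interface ExperienceAgent {
  codec: SentioCodec;
  // Request a service (e.g. gesture recognition) from the service platform.
  requestService(serviceName: string, input: EncodedStream): Promise<EncodedStream>;
  // Talk directly to another local device's agent when a direct link exists.
  sendToLocalDevice(deviceId: string, stream: EncodedStream): Promise<void>;
}

// A "thin" agent for a constrained device might implement only the codec and
// requestService, delegating composition entirely to the data center.
```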
Data center 40 includes an experience server 42, a plurality of content servers 44, and a service platform 46. As will be appreciated, data center 40 can be hosted in a distributed manner in the “cloud,” and typically the elements of the data center 40 are coupled via a low latency network. The experience server 42, servers 44, and service platform 46 can be implemented on a single computer system, or more likely distributed across a variety of computer systems, and at various locations.
The experience server 42 includes at least one experience agent 32, an experience composition engine 48, and an operating system 50. In one embodiment, the experience composition engine 48 is defined and controlled by the experience provider to compose and direct the experience for one or more participants utilizing devices 12. Direction and composition are accomplished, in part, by merging various content layers and other elements into dimensions generated from a variety of sources such as the experience server 42, the devices 12, the content servers 44, and/or the service platform 46.
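By way of illustration only, the following TypeScript sketch shows one simplified view of the composition step, in which layers originating from different sources are filtered down to the subset bound for a particular device; the ComposedLayer shape and composeForDevice function are assumptions made for this example.

```typescript
// Hypothetical composition step: the experience composition engine gathers
// layers originating from different sources and assembles the subset bound
// for a particular participant device. Names and shapes are illustrative.

interface ComposedLayer {
  id: string;
  source: "experience-server" | "device" | "content-server" | "service-platform";
  targetDeviceIds: string[];
}

function composeForDevice(deviceId: string, layers: ComposedLayer[]): ComposedLayer[] {
  // Keep only the layers directed at this device, preserving their order,
  // which here stands in for z-ordering during rendering.
  return layers.filter(l => l.targetDeviceIds.includes(deviceId));
}

// Example: the TV gets the live video and graphics layers; the tablet gets
// the graphics layer plus a device-originated gesture overlay.
const layers: ComposedLayer[] = [
  { id: "live-video", source: "content-server", targetDeviceIds: ["tv-1"] },
  { id: "graphics", source: "experience-server", targetDeviceIds: ["tv-1", "tablet-1"] },
  { id: "gesture-overlay", source: "device", targetDeviceIds: ["tablet-1"] },
];
console.log(composeForDevice("tv-1", layers).map(l => l.id)); // ["live-video", "graphics"]
```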
The content servers 44 may include a video server 52, an ad server 54, and a generic content server 56. Any content suitable for encoding by an experience agent can be included as an experience layer. These include well-known forms such as video, audio, graphics, and text. As described in more detail above and below, other forms of content such as gestures, emotions, temperature, proximity, etc., are contemplated for encoding and inclusion in the experience via a sentio codec, and are suitable for creating dimensions and features of the experience.
The service platform 46 includes at least one experience agent 32, a plurality of service engines 60, third-party service engines 62, and a monetization engine 64. In some embodiments, each service engine 60 or 62 has a unique, corresponding experience agent. In other embodiments, a single experience agent 32 can support multiple service engines 60 or 62. The service engines and the monetization engine 64 can be instantiated on one server, or can be distributed across multiple servers. The service engines 60 correspond to engines generated by the service provider and can provide services such as audio remixing, gesture recognition, and other services referred to in the context of dimensions above. Third-party service engines 62 are services included in the service platform 46 by other parties. The third-party service engines may be instantiated directly within the service platform 46, or they may correspond to proxies within the service platform 46 that in turn make calls to servers under the control of the third parties.
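By way of illustration only, the following TypeScript sketch contrasts a locally instantiated service engine with a proxy engine that forwards requests to a third party's servers; the ServiceEngine interface, the example classes, and the endpoint handling are assumptions made for this example.

```typescript
// Hypothetical distinction between a locally instantiated service engine and a
// proxy that forwards requests to a third party's servers. Names are invented.

interface ServiceRequest { service: string; payload: Uint8Array; }
interface ServiceResponse { payload: Uint8Array; }

interface ServiceEngine {
  name: string;
  handle(request: ServiceRequest): Promise<ServiceResponse>;
}

// A service engine hosted directly in the service platform.
class GestureRecognitionEngine implements ServiceEngine {
  name = "gesture-recognition";
  async handle(request: ServiceRequest): Promise<ServiceResponse> {
    // ... run recognition locally and return the result (echoed here) ...
    return { payload: request.payload };
  }
}

// A third-party engine exposed through a proxy: the platform sees the same
// ServiceEngine interface, but the work happens on the third party's servers.
class ThirdPartyProxyEngine implements ServiceEngine {
  constructor(public name: string, private endpoint: string) {}
  async handle(request: ServiceRequest): Promise<ServiceResponse> {
    const res = await fetch(this.endpoint, { method: "POST", body: request.payload });
    return { payload: new Uint8Array(await res.arrayBuffer()) };
  }
}
```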
Monetization of the service platform 46 can be accomplished in a variety of manners. For example, the monetization engine 64 may determine how and when to charge the experience provider for use of the services, as well as track payments owed to third parties for use of services from the third-party service engines 62.
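By way of illustration only, the following TypeScript sketch shows one simplified way a monetization engine might meter usage for charging the experience provider and tracking amounts owed to third parties; the record shapes and flat per-unit rates are assumptions made for this example.

```typescript
// Hypothetical usage metering for the monetization engine: records which
// experience provider used which service, and which third party (if any)
// should be paid for it. Rates and record shapes are invented for the example.

interface UsageRecord {
  experienceProviderId: string;
  serviceName: string;
  units: number;              // e.g. minutes of processing or calls made
  thirdPartyOwnerId?: string; // set when the service is a third-party engine
}

class MonetizationEngine {
  private records: UsageRecord[] = [];

  record(usage: UsageRecord): void {
    this.records.push(usage);
  }

  // Amount to charge a given experience provider, at a flat per-unit rate.
  chargeFor(providerId: string, ratePerUnit: number): number {
    return this.records
      .filter(r => r.experienceProviderId === providerId)
      .reduce((sum, r) => sum + r.units * ratePerUnit, 0);
  }

  // Amount owed to a given third party for use of its service engines.
  owedTo(thirdPartyId: string, ratePerUnit: number): number {
    return this.records
      .filter(r => r.thirdPartyOwnerId === thirdPartyId)
      .reduce((sum, r) => sum + r.units * ratePerUnit, 0);
  }
}
```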
The sentio codec 104 is a combination of hardware and/or software which enables encoding of many types of data streams for operations such as transmission and storage, and decoding for operations such as playback and editing. These data streams can include standard data such as video and audio. Additionally, the data can include graphics, sensor data, gesture data, and emotion data. (“Sentio” is Latin roughly corresponding to perception, or to perceive with one's senses; hence the nomenclature “sentio codec.”)
The sentio codec 200 can be designed to take all aspects of the experience platform into consideration when executing the transfer protocol. These parameters and aspects include available network bandwidth, transmission device characteristics, and receiving device characteristics. Additionally, the sentio codec 200 can be implemented to be responsive to commands from an experience composition engine or other outside entity to determine how to prioritize data for transmission. In many applications, because of human perceptual response, audio is the most important component of an experience data stream. However, a specific application may desire to emphasize video or gesture commands.
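By way of illustration only, the following TypeScript sketch shows one way transmission prioritization under a bandwidth budget might work, with audio ranked first by default and the ordering overridable by a composition engine; the stream shapes and priority values are assumptions made for this example.

```typescript
// Hypothetical transmission prioritization: given an available bandwidth
// budget and per-dimension priorities (which a composition engine could
// override), decide which streams to send. Numbers and names are illustrative.

interface OutgoingStream {
  dimension: "audio" | "video" | "gesture" | "graphics";
  bitrateKbps: number;
}

// Default ordering: audio first, then video, then the rest.
const defaultPriority: Record<OutgoingStream["dimension"], number> = {
  audio: 0,
  video: 1,
  gesture: 2,
  graphics: 3,
};

function selectStreams(
  streams: OutgoingStream[],
  budgetKbps: number,
  priority: Record<OutgoingStream["dimension"], number> = defaultPriority,
): OutgoingStream[] {
  const ordered = [...streams].sort(
    (a, b) => priority[a.dimension] - priority[b.dimension],
  );
  const selected: OutgoingStream[] = [];
  let used = 0;
  for (const s of ordered) {
    if (used + s.bitrateKbps <= budgetKbps) {
      selected.push(s);
      used += s.bitrateKbps;
    }
  }
  return selected;
}

// An application that emphasizes gestures could pass a priority map that
// ranks "gesture" above "video".
```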
The sentio codec provides the capability of encoding data streams corresponding to many different senses or dimensions of an experience. For example, a device 12 may include a video camera capturing video images and audio from a participant. The user image and audio data may be encoded and transmitted directly or, perhaps after some intermediate processing, via the experience composition engine 48, to the service platform 46 where one or a combination of the service engines can analyze the data stream to make a determination about an emotion of the participant. This emotion can then be encoded by the sentio codec and transmitted to the experience composition engine 48, which in turn can incorporate this into a dimension of the experience. Similarly a participant gesture can be captured as a data stream, e.g., by a motion sensor or a camera on device 12, and then transmitted to the service platform 46, where the gesture can be interpreted, and transmitted to the experience composition engine 48 or directly back to one or more devices 12 for incorporation into a dimension of the experience.
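By way of illustration only, the following TypeScript sketch traces that round trip: device data is encoded with a sentio codec, a service engine infers an emotion, and the result is handed to the composition engine as a new dimension; all interfaces and names here are invented for the example.

```typescript
// Hypothetical round trip: a device encodes captured audio/video with the
// sentio codec, a service engine infers an emotion, and the composition
// engine folds the emotion back into the experience. All names are invented.

interface Encoded { dimension: string; payload: Uint8Array; }

interface Codec {
  encode(dimension: string, data: Uint8Array): Encoded;
}

interface EmotionService {
  // Returns an inferred emotion label such as "excited" or "bored".
  analyze(audioVideo: Encoded[]): Promise<string>;
}

interface CompositionEngine {
  incorporate(dimension: string, value: Encoded): void;
}

async function emotionRoundTrip(
  codec: Codec,
  capture: { audio: Uint8Array; video: Uint8Array },
  service: EmotionService,
  engine: CompositionEngine,
): Promise<void> {
  const streams = [
    codec.encode("audio", capture.audio),
    codec.encode("video", capture.video),
  ];
  const emotion = await service.analyze(streams);   // e.g. "excited"
  const encodedEmotion = codec.encode(
    "emotion",
    new TextEncoder().encode(emotion),
  );
  engine.incorporate("emotion", encodedEmotion);    // becomes a dimension
}
```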
In addition to the above mentioned examples, various other modifications and alterations of the invention may be made without departing from the invention. Accordingly, the above disclosure is not to be considered as limiting and the appended claims are to be interpreted as encompassing the true spirit and the entire scope of the invention.
The present application is a Continuation of U.S. patent application Ser. No. 13/136,869, entitled SYSTEM ARCHITECTURE AND METHODS FOR EXPERIENTIAL COMPUTING, filed on Aug. 12, 2011, and of U.S. patent application Ser. No. 13/367,146, entitled SYSTEM ARCHITECTURE AND METHODS FOR COMPOSING AND DIRECTING PARTICIPANT EXPERIENCES, filed on Feb. 6, 2012, which claims priority to U.S. Provisional Application No. 61/373,193, entitled “SYSTEM ARCHITECTURE AND METHODS FOR COMPOSING AND DIRECTING PARTICIPANT EXPERIENCES,” filed on Aug. 12, 2010, all of which are incorporated herein in their entirety by this reference.
Number | Name | Date | Kind |
---|---|---|---|
5491743 | Shiio et al. | Feb 1996 | A |
6144991 | England | Nov 2000 | A |
7171485 | Roach et al. | Jan 2007 | B2 |
7516255 | Hobbs | Apr 2009 | B1 |
7529259 | Van Acker et al. | May 2009 | B2 |
7760640 | Brown et al. | Jul 2010 | B2 |
8132111 | Baron et al. | Mar 2012 | B2 |
8171154 | Vonog et al. | May 2012 | B2 |
8234398 | Vonog et al. | Jul 2012 | B2 |
8255552 | Witt et al. | Aug 2012 | B2 |
8429704 | Vonog et al. | Apr 2013 | B2 |
8463677 | Vonog et al. | Jun 2013 | B2 |
20020051493 | Shin et al. | May 2002 | A1 |
20020073155 | Anupam et al. | Jun 2002 | A1 |
20030074474 | Roach et al. | Apr 2003 | A1 |
20030074554 | Roach et al. | Apr 2003 | A1 |
20030188320 | Shing | Oct 2003 | A1 |
20030217170 | Nelson et al. | Nov 2003 | A1 |
20040128350 | Topfl et al. | Jul 2004 | A1 |
20050100100 | Unger | May 2005 | A1 |
20070094691 | Gazdzinski | Apr 2007 | A1 |
20070162434 | Alessi et al. | Jul 2007 | A1 |
20070217436 | Markley et al. | Sep 2007 | A1 |
20070271580 | Tischer et al. | Nov 2007 | A1 |
20080004888 | Davis et al. | Jan 2008 | A1 |
20080039205 | Ackley et al. | Feb 2008 | A1 |
20080056718 | Parker | Mar 2008 | A1 |
20080068449 | Wu et al. | Mar 2008 | A1 |
20080112045 | Sasaki | May 2008 | A1 |
20080134235 | Kalaboukis | Jun 2008 | A1 |
20080134239 | Knowles et al. | Jun 2008 | A1 |
20080139301 | Holthe | Jun 2008 | A1 |
20080158373 | Chu | Jul 2008 | A1 |
20080181260 | Vonog et al. | Jul 2008 | A1 |
20080195956 | Baron et al. | Aug 2008 | A1 |
20080205426 | Grover et al. | Aug 2008 | A1 |
20080267447 | Kelusky et al. | Oct 2008 | A1 |
20090013263 | Fortnow et al. | Jan 2009 | A1 |
20090046139 | Cutler et al. | Feb 2009 | A1 |
20090100452 | Hudgeons et al. | Apr 2009 | A1 |
20090115843 | Sohmers | May 2009 | A1 |
20090118017 | Perlman et al. | May 2009 | A1 |
20090158370 | Li et al. | Jun 2009 | A1 |
20090183205 | McCartie et al. | Jul 2009 | A1 |
20090186700 | Konkle | Jul 2009 | A1 |
20090239587 | Negron et al. | Sep 2009 | A1 |
20090320077 | Gazdzinski | Dec 2009 | A1 |
20100004977 | Marci et al. | Jan 2010 | A1 |
20100064324 | Jenkin et al. | Mar 2010 | A1 |
20100103943 | Walter | Apr 2010 | A1 |
20100107364 | Shen | May 2010 | A1 |
20100113140 | Kelly et al. | May 2010 | A1 |
20100149096 | Migos et al. | Jun 2010 | A1 |
20100156812 | Stallings et al. | Jun 2010 | A1 |
20100185514 | Glazer et al. | Jul 2010 | A1 |
20100191859 | Raveendran | Jul 2010 | A1 |
20100257251 | Mooring et al. | Oct 2010 | A1 |
20100268843 | Van Wie et al. | Oct 2010 | A1 |
20100278508 | Aggarwal | Nov 2010 | A1 |
20110046980 | Metzler et al. | Feb 2011 | A1 |
20110103374 | Lajoie et al. | May 2011 | A1 |
20110116540 | O'Connor et al. | May 2011 | A1 |
20110145817 | Grzybowski | Jun 2011 | A1 |
20110151976 | Holloway et al. | Jun 2011 | A1 |
20110225515 | Goldman et al. | Sep 2011 | A1 |
20110244954 | Goldman et al. | Oct 2011 | A1 |
20110246908 | Akram et al. | Oct 2011 | A1 |
20110271208 | Jones et al. | Nov 2011 | A1 |
20110292181 | Acharya et al. | Dec 2011 | A1 |
20110298827 | Perez | Dec 2011 | A1 |
20110302532 | Missig | Dec 2011 | A1 |
20120038550 | Lemmey et al. | Feb 2012 | A1 |
20120039382 | Vonog et al. | Feb 2012 | A1 |
20120041859 | Vonog et al. | Feb 2012 | A1 |
20120060101 | Vonog et al. | Mar 2012 | A1 |
20120078788 | Gandhi | Mar 2012 | A1 |
20120082226 | Weber | Apr 2012 | A1 |
20120084738 | Sirpal | Apr 2012 | A1 |
20120131458 | Hayes | May 2012 | A1 |
20120191586 | Vonog et al. | Jul 2012 | A1 |
20120206262 | Grasso | Aug 2012 | A1 |
20120206558 | Setton | Aug 2012 | A1 |
20120206560 | Setton | Aug 2012 | A1 |
20130019184 | Vonog et al. | Jan 2013 | A1 |
Number | Date | Country |
---|---|---|
2010-016662 | Jan 2010 | JP |
10-2002-0038738 | May 2002 | KR |
10-2006-0062381 | Jun 2006 | KR |
10-2006-0083034 | Jul 2006 | KR |
10-2008-0008340 | Jan 2008 | KR |
10-2009-0113158 | Oct 2009 | KR |
10-2010-0098668 | Sep 2010 | KR |
WO 02-41121 | May 2002 | WO |
WO 2007084994 | Jul 2007 | WO |
WO 2008-072923 | Jun 2008 | WO |
Entry |
---|
Mobile content and service delivery platforms: a technology classification model; Ghezzi, Antonio; Marcelo Nogueira Cortimiglia; Balocco, Rafaello; Emerald Group Publishing, Limited; 2012. |
Personalized Multimedia News Agents in a Broadband Environment; Wen Bin Kwan; Ottawa-Carleton Institute of Electrical Engineering; Sep. 1996. |
Non-Final Office Action mailed Aug. 15, 2014, in Co-Pending U.S. Appl. No. 13/221,801 of Vonog, S., et al., filed Aug. 30, 2011. |
Ettin, Scott A.; “Band of the Hand: Handheld Media Platforms from Palm OS to Pocket PC Pose New Challenges to Video Producers, But They Also Present New Opportunities”; Emedia, The Digital Studio Magazine; 16, 1, 44(2) (Jan. 2003). |
Moote, et al.; “Moving Toward a Dual-Infrastructure Facility”; Broadcast Engineering; vol. 46, No. 9, pp. 72-78 (Sep. 2004). |
“Broadcom Announces Industry's Most Advanced High Definition AVC Encoder/Transcoder that Transforms the PC into a Multimedia Server”; PR Newswire (Jan. 7, 2008). |
International Search Report and Written Opinion of PCT Application No. PCT/US2011/001424, mailed Apr. 6, 2012. |
International Search Report and Written Opinion of PCT Application No. PCT/US2011/001425, mailed Apr. 6, 2012. |
International Search Report and Written Opinion of PCT Application No. PCT/US2011/057385, mailed May 16, 2012. |
International Search Report and Written Opinion of PCT Application No. PCT/US2011/047815, mailed Apr. 9, 2012. |
International Search Report and Written Opinion of PCT Application No. PCT/US2011/047814, mailed Apr. 9, 2012. |
Co-Pending U.S. Appl. No. 13/136,869 of Vonog, S., et al., filed Aug. 12, 2011. |
Co-Pending U.S. Appl. No. 13/367,146 of Vonog, S., et al., filed Feb. 6, 2012. |
Co-Pending U.S. Appl. No. 13/136,870 of Vonog, S., et al., filed Aug. 12, 2011. |
Co-Pending U.S. Appl. No. 13/363,187 of Vonog, S., et al., filed Jan. 31, 2012. |
Co-Pending U.S. Appl. No. 13/221,801 of Vonog, S., et al., filed Aug. 30, 2011. |
Co-Pending U.S. Appl. No. 13/279,096 of Vonog, S., et al., filed Oct. 21, 2011. |
Co-Pending U.S. Appl. No. 13/762,149 of Vonog, S., filed Feb. 7, 2013. |
Co-Pending U.S. Appl. No. 13/210,370 of Lemmey, T., et al., filed Aug. 15, 2011. |
Co-Pending U.S. Appl. No. 13/461,680 of Vonog, S., et al., filed May 1, 2012. |
Co-Pending U.S. Appl. No. 13/546,906 of Vonog, S., et al., filed Jul. 11, 2012. |
Co-Pending U.S. Appl. No. 13/694,582 of Lemmey, T., et al., filed Dec. 12, 2012. |
Co-Pending U.S. Appl. No. 13/694,581 of Vonog, S., et al., filed Dec. 12, 2012. |
Non-Final Office Action mailed Sep. 11, 2012, in Co-Pending U.S. Appl. No. 13/136,869 of Vonog, S., et al., filed Sep. 7, 2012. |
Notice of Allowance mailed Apr. 3, 2013, in Co-Pending U.S. Appl. No. 13/163,869 of Vonog, S., et al., filed Sep. 7, 2012. |
Non-Final Office Action mailed Mar. 19, 2013, in Co-Pending U.S. Appl. No. 13/367,146 of Vonog, S., et al., filed Feb. 6, 2012. |
Notice of Allowance mailed Sep. 4, 2013, in Co-Pending U.S. Appl. No. 13/367,146 of Vonog, S., et al., filed Feb. 6, 2012. |
Non-Final Office Action mailed Apr. 1, 2013, in Co-Pending U.S. Appl. No. 13/221,801 of Vonog, S., et al., filed Aug. 30, 2011. |
Final Office Action mailed Aug. 15, 2013, in Co-Pending U.S. Appl. No. 13/221,801 of Vonog, S., et al., filed Aug. 30, 2011. |
Notice of Allowance mailed Feb. 4, 2013, in Co-Pending U.S. Appl. No. 13/279,096 of Vonog, S., et al., filed Oct. 21, 2011. |
Non-Final Office Action mailed Jul. 9, 2014, in Co-Pending U.S. Appl. No. 13/461,680, filed May 1, 2012. |
Non-Final Office Action mailed Jun. 19, 2014, in Co-Pending U.S. Appl. No. 13/210,370, filed Aug. 15, 2011. |
Number | Date | Country |
---|---|---|
20130339222 A1 | Dec 2013 | US |

Number | Date | Country |
---|---|---|
61363193 | Aug 2010 | US |

 | Number | Date | Country |
---|---|---|---|
Parent | 13136869 | Aug 2011 | US |
Child | 13874319 | | US |
Parent | 13367146 | Feb 2012 | US |
Child | 13136869 | | US |
Parent | 13136869 | Aug 2011 | US |
Child | 13367146 | | US |