The present teaching relates to experience or “sentio” codecs that enable adaptive encoding and transmission of heterogeneous data streams involving a variety of content and data types, including video, audio, physical gestures, geo-location, voice input, synchronization events, computer-generated graphics, etc. The “sentio” codec expands the existing concept of a codec to maximize the final Quality of Service/Experience in a real-time, heterogeneous-network, multi-device, social environment.
The present invention contemplates a variety of experience or “sentio” codecs, methods and systems for enabling an experience platform, and a Quality of Experience (QoE) engine that allows the sentio codec to select a suitable encoding engine or device. The “sentio” codec expands the existing concept of a codec to work in a real-time, heterogeneous-network, multi-device, social environment so as to maximize the final Quality of Service/Experience.
As will be described in more detail below, the sentio codec is capable of encoding and transmitting data streams that correspond to participant experiences with a variety of different dimensions and features. As will be appreciated, the following description provides one paradigm for understanding the multi-dimensional experience available to the participants, and as implemented utilizing a sentio codec. There are many suitable ways of describing, characterizing and implementing the sentio codec and experience platform contemplated herein.
These and other objects, features and characteristics of the present invention will become more apparent to those skilled in the art from a study of the following detailed description in conjunction with the appended claims and drawings, all of which form a part of this specification.
The present invention contemplates a variety of experience or “sentio” codecs, methods and systems for enabling an experience platform, and a Quality of Experience (QoE) engine that allows the sentio codec to select a suitable encoding engine or device. As will be described in more detail below, the sentio codec is capable of encoding and transmitting data streams that correspond to participant experiences with a variety of different dimensions and features. (The term “sentio” is Latin, roughly corresponding to perception or to perceive with one's senses, hence the nomenclature “sentio codec.”)
The primary goal of a video codec is to achieve a maximum compression rate for digital video while maintaining high picture quality; audio codecs are similar. But video and audio codecs alone are insufficient to generate and capture a full experience, such as a real-time experience enabled by hybrid encoding or by the encoding of other experience aspects such as gestures, emotions, etc.
FIGS. 3 and 4 illustrate how hybrid encoding approaches can be used to accomplish low-latency transmission. The first layer provides an Autodesk 3ds Max image including a rotating teapot; this layer contains moving images, static or nearly static images, and graphic and/or text portions. Rather than encoding all of the information with a video encoder alone, a hybrid approach that encodes some regions with a video encoder, other regions with a picture encoder, and other portions as commands yields better transmission results, and can be optimized based on factors such as the state of the network and the capabilities of the end devices. These different encoding regions are illustrated by the different coloring of the red-green-yellow grid of layer 4. One example of this low-latency protocol is described in more detail in Vonog et al.'s U.S. patent application Ser. No. 12/569,876, filed Sep. 29, 2009, and incorporated herein by reference for all purposes, including the low-latency protocol and related features such as the network engine and network stack arrangement.
A video codec alone is inadequate to accomplish a hybrid encoding scheme covering video, pictures, and commands. While it is theoretically possible to encode the entire first layer using only a video codec, latency and other issues can prevent a real-time and/or high-quality experience. A low-latency protocol can solve this problem by encoding the data efficiently.
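For illustration only, the following minimal sketch shows how a hybrid encoder might route different screen regions to different encoders. It is not the referenced low-latency protocol; the region classifications, encoder labels, and network test are assumptions introduced for this sketch.

```python
from dataclasses import dataclass
from enum import Enum

class RegionKind(Enum):
    MOVING_VIDEO = "moving_video"    # e.g., the rotating teapot animation
    STATIC_IMAGE = "static_image"    # nearly static picture content
    GRAPHICS_TEXT = "graphics_text"  # text and vector graphics

@dataclass
class Region:
    x: int
    y: int
    width: int
    height: int
    kind: RegionKind

def encode_region(region: Region, network_ok: bool) -> dict:
    """Pick an encoder per region (hypothetical mapping, for illustration)."""
    if region.kind is RegionKind.MOVING_VIDEO:
        # A real system could lower the bitrate when the network degrades.
        encoder = "video" if network_ok else "video_low_bitrate"
    elif region.kind is RegionKind.STATIC_IMAGE:
        encoder = "picture"   # send as a still image, refresh only on change
    else:
        encoder = "command"   # send draw commands instead of pixels
    return {"region": region, "encoder": encoder}

if __name__ == "__main__":
    layer = [
        Region(0, 0, 640, 360, RegionKind.MOVING_VIDEO),
        Region(640, 0, 640, 360, RegionKind.STATIC_IMAGE),
        Region(0, 360, 1280, 72, RegionKind.GRAPHICS_TEXT),
    ]
    for packet in (encode_region(r, network_ok=True) for r in layer):
        print(packet["encoder"], packet["region"].kind.value)
```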
In another example, a multiplicity of video codecs can be used to improve encoding and transmission. For example, H.264 can be used when a hardware decoder is available, saving battery life and improving performance, while a better-suited video codec (e.g., a low-latency codec) can be used when the device does not support H.264.
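A minimal sketch of such codec selection follows, assuming a simple capability report per device; the field names and codec identifiers are hypothetical placeholders, not part of the specification.

```python
from dataclasses import dataclass

@dataclass
class DeviceCapabilities:
    has_h264_hw_decoder: bool
    battery_powered: bool

def select_video_codec(caps: DeviceCapabilities, latency_budget_ms: int) -> str:
    """Choose a video codec per device (illustrative policy only)."""
    if caps.has_h264_hw_decoder:
        # Hardware decode saves battery and improves playback performance.
        return "h264_hw"
    if latency_budget_ms < 100:
        # Fall back to a software codec tuned for low latency.
        return "low_latency_sw"
    return "generic_sw"

print(select_video_codec(DeviceCapabilities(True, True), 50))   # -> h264_hw
print(select_video_codec(DeviceCapabilities(False, True), 50))  # -> low_latency_sw
```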
As yet another example, consider the case of multiple mediums where an ability to take into account the nature of human perception would be beneficial. For example, assume we have video and audio information. If network quality degrades, it could be better to prioritize audio and allow the video to degrade. To do so would require using psychoacoustics to improve the QoE.
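As an illustration of prioritizing audio over video, the sketch below allocates a shrinking bandwidth budget to audio first and gives video whatever remains; the bitrates and thresholds are assumptions, not values from the specification.

```python
def allocate_bitrate(available_kbps: int,
                     audio_target_kbps: int = 64,
                     video_min_kbps: int = 150) -> dict:
    """Protect audio first; let video absorb the degradation (illustrative)."""
    audio = min(audio_target_kbps, available_kbps)
    remaining = available_kbps - audio
    if remaining < video_min_kbps:
        # Below a usable floor, pause video rather than starve the audio stream.
        return {"audio_kbps": audio, "video_kbps": 0, "video_mode": "paused"}
    return {"audio_kbps": audio, "video_kbps": remaining, "video_mode": "live"}

for bw in (1000, 300, 120, 40):
    print(bw, allocate_bitrate(bw))
```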
Accordingly, the present teaching contemplates an experience or sentio codec capable of encoding and transmitting data streams that correspond to experiences with a variety of different dimensions and features. These dimensions include the familiar audio and video, but may further include any conceivable element of a participant experience, such as gestures, gestures combined with voice commands, “game mechanics” (which can be used to boost QoE when current conditions, such as the network, would otherwise degrade it, for example by applying a sound distortion effect specific to a given experience when data loss occurs), and emotions (perhaps detected via voice or facial expressions, various sensor data, microphone input, etc.).
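One way to picture such a multi-dimensional stream is as a sequence of frames, each carrying several dimensions at once. The JSON framing and field names below are purely illustrative assumptions; the actual sentio codec wire format is not described at this level of detail.

```python
import json
import time

def make_experience_frame(dimensions: dict, sequence: int) -> bytes:
    """Bundle several experience dimensions into one transmissible frame
    (hypothetical framing, for illustration only)."""
    frame = {
        "seq": sequence,
        "ts": time.time(),
        "dimensions": dimensions,  # e.g. audio, video, gesture, emotion ...
    }
    return json.dumps(frame).encode("utf-8")

frame = make_experience_frame(
    {
        "gesture": {"type": "swipe", "direction": "left"},
        "voice_command": "next song",
        "emotion": {"label": "excited", "confidence": 0.8},
    },
    sequence=42,
)
print(len(frame), "bytes")
```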
It is also contemplated that virtual experiences can be encoded via the sentio codec. According to one embodiment, virtual goods are evolved into virtual experiences. Virtual experiences move beyond the limitations of virtual goods by adding additional dimensions to them. By way of example, User A transmits flowers as a virtual good to User B. The transmission of the virtual flowers is enhanced by adding emotion, for example by way of sound. The virtual flowers become a virtual experience when User B can do something with them, for example affecting the flowers through some sort of motion or gesture. User A can also transmit the virtual goods to User B by making a “throwing” gesture with a mobile device, so as to “toss” the virtual goods to User B.
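As a hypothetical sketch of how such a virtual-experience event might be represented, the event type and payload fields below are assumptions introduced for illustration, not a format taken from the specification.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class VirtualExperienceEvent:
    item: str                    # the underlying virtual good, e.g. "flowers"
    sender: str
    recipient: str
    gesture: str                 # e.g. "throw" captured from a mobile device
    dimensions: Dict[str, Any] = field(default_factory=dict)

def send_flowers_with_emotion() -> VirtualExperienceEvent:
    # User A "tosses" the flowers to User B; a sound clip adds emotion, and
    # the recipient is allowed to interact with the flowers afterwards.
    return VirtualExperienceEvent(
        item="flowers",
        sender="user_a",
        recipient="user_b",
        gesture="throw",
        dimensions={"sound": "romantic_chime.ogg", "interactive": True},
    )

print(send_flowers_with_emotion())
```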
The sentio codec improves the QoE for a consumer or experience participant on the device of their choice. This is accomplished through a variety of mechanisms, selected and implemented, possibly dynamically, based on the specific application and available resources. In certain embodiments, the sentio codec encodes multi-dimensional data streams in real time, adapting to network capability. A QoE engine operating within the sentio codec makes decisions on how to use the different available codecs. The network stack can be implemented as a hybrid, as described above and in further detail with reference to Vonog et al.'s U.S. patent application Ser. No. 12/569,876.
The sentio codec can include 1) a variety of codecs for each segment of experience described above, 2) a hybrid network stack with network intelligence, 3) data about available devices, and 4) a QoE engine that makes decisions on how to encode. It will be appreciated that QoE is achieved through various strategies that work differently for each given experience (say a zombie karaoke game vs. live stadium rock concert experience), and adapt in real-time to the network and other available resources, know the devices involved and take advantages of various psychological tricks to conceal imperfections which inevitably arise, particularly when the provided experience is scaled for many participants and devices.
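The following minimal sketch gathers these four inputs into a single encode decision. The structure, field names, and heuristics (including the experience-specific concealment effect) are hypothetical and only illustrate the kind of decision the QoE engine might make.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NetworkState:
    bandwidth_kbps: int
    loss_rate: float

@dataclass
class Device:
    name: str
    has_h264_hw_decoder: bool

@dataclass
class EncodePlan:
    codec_by_dimension: dict
    concealment: List[str]

def plan_encoding(experience: str, net: NetworkState, devices: List[Device]) -> EncodePlan:
    """Combine experience type, network state and device data (illustrative)."""
    video_codec = ("h264_hw" if all(d.has_h264_hw_decoder for d in devices)
                   else "low_latency_sw")
    concealment = []
    if net.loss_rate > 0.05:
        # Experience-specific trick: mask packet loss with a themed effect.
        concealment.append("zombie_distortion" if experience == "zombie_karaoke"
                           else "crowd_noise_fill")
    return EncodePlan({"video": video_codec, "audio": "generic_audio"}, concealment)

plan = plan_encoding("zombie_karaoke",
                     NetworkState(bandwidth_kbps=400, loss_rate=0.08),
                     [Device("tablet", True), Device("tv", True)])
print(plan)
```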
The sentio codec 200 can be designed to take all aspects of the experience platform into consideration when executing the transfer protocol. The parameters and aspects include available network bandwidth, transmission device characteristics and receiving device characteristics. Additionally, the sentio codec 200 can be implemented to be responsive to commands from an experience composition engine or other outside entity to determine how to prioritize data for transmission. In many applications, because of human response, audio is the most important component of an experience data stream. However, a specific application may desire to emphasize video or gesture commands.
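A sketch of how such an external prioritization command might be applied when queuing data for transmission follows; the command format, default ordering, and stream labels are assumptions made for this example.

```python
def order_for_transmission(streams: dict, priority_command=None) -> list:
    """Order stream chunks so higher-priority dimensions are sent first.

    `streams` maps a dimension name (e.g. "audio") to its pending payload;
    `priority_command` is an optional ordering issued by an outside entity
    such as the experience composition engine (illustrative only).
    """
    default_priority = ["audio", "gesture", "video"]  # audio first by default
    order = priority_command or default_priority
    ranked = sorted(streams, key=lambda name: order.index(name)
                    if name in order else len(order))
    return [(name, streams[name]) for name in ranked]

pending = {"video": b"...", "audio": b"...", "gesture": b"..."}
print([name for name, _ in order_for_transmission(pending)])
print([name for name, _ in order_for_transmission(pending, ["video", "audio", "gesture"])])
```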
The sentio codec provides the capability of encoding data streams corresponding to many different senses or dimensions of an experience. For example, a device 12 may include a video camera capturing video images and audio from a participant. The user image and audio data may be encoded and transmitted directly or, perhaps after some intermediate processing, via the experience composition engine 48, to the service platform 46 where one or a combination of the service engines can analyze the data stream to make a determination about an emotion of the participant. This emotion can then be encoded by the sentio codec and transmitted to the experience composition engine 48, which in turn can incorporate this into a dimension of the experience. Similarly a participant gesture can be captured as a data stream, e.g. by a motion sensor or a camera on device 12, and then transmitted to the service platform 46, where the gesture can be interpreted, and transmitted to the experience composition engine 48 or directly back to one or more devices 12 for incorporation into a dimension of the experience.
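Below is a hedged sketch of that round trip for a gesture dimension. The engine names mirror the reference numerals in the text, but the function names, data shapes, and the trivial interpretation rule are hypothetical.

```python
def capture_gesture_on_device(raw_motion: list) -> dict:
    """Device 12: turn raw motion-sensor samples into a gesture data stream."""
    return {"dimension": "gesture", "samples": raw_motion}

def interpret_on_service_platform(stream: dict) -> dict:
    """Service platform 46: interpret the raw stream (toy rule for illustration)."""
    magnitude = max(abs(s) for s in stream["samples"])
    return {"dimension": "gesture", "meaning": "throw" if magnitude > 5 else "idle"}

def compose_into_experience(interpreted: dict) -> dict:
    """Experience composition engine 48: fold the result into the experience."""
    return {"layer": "interaction", "event": interpreted["meaning"]}

stream = capture_gesture_on_device([0.2, 1.5, 7.8, 3.1])
print(compose_into_experience(interpret_on_service_platform(stream)))
```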
The sentio codec delivers the best QoE to a consumer on the device of their choice over the current network. This is accomplished through a variety of mechanisms, selected and implemented based on the specific application and available resources. In certain embodiments, the sentio codec encodes multi-dimensional data streams in real time, adapting to network capability. A QoE engine operating within the sentio codec makes decisions on how to use the different available codecs. The network stack can be implemented as a hybrid, as described above and in further detail with reference to Vonog et al.'s U.S. patent application Ser. No. 12/569,876.
Additionally, the following description relates to a simple operating system, which generally follows the fundamental concepts discussed above with further distinctions. In a cloud computing environment, a server communicates with a first device, wherein the first device can detect surrounding devices, and an application program is executable by the server, wherein the application program is controlled by the first device and the output of the application program is directed by the server to one of the devices detected by the first device.
According to one embodiment, a minimum set of requirements exists in order for the first device to detect and interact with other devices in the cloud computing environment. A traditional operating system is inappropriate for such enablement because the device does not need full operating system capabilities. Instead, a plurality of codecs is sufficient to enable device interaction.
According to one embodiment, the simple operating system performs minimal input processing to decipher what services are being requested, only enough to determine where to route the request. The device agent provides information regarding the location of the best computing available for a particular request.
According to one embodiment, the simple operating system performs no input processing and automatically routes input for processing to another device or to the cloud.
According to one embodiment, the simple operating system routes requests for services to another device, to a server in the cloud, or to computing capability available locally on the device hosting the simple operating system.
According to one embodiment, the plurality of codecs maintains a network connection and can activate output capabilities.
According to one embodiment, the simple operating system does not include any local services. All requests are sent to the cloud for services.
According to one embodiment, a device hosting the simple operating system can also host a traditional operating system.
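A minimal sketch of the request routing described in the embodiments above follows; the request fields, the device-agent lookup, and the routing rules are hypothetical stand-ins for whatever the simple operating system and device agent actually use.

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    service: str          # e.g. "video_decode", "speech_to_text"
    payload: bytes

class DeviceAgent:
    """Reports where the best computing for a given request is available
    (a stub standing in for the device agent described above)."""
    def best_location(self, request: ServiceRequest) -> str:
        if request.service == "video_decode":
            return "local"            # e.g. a hardware decoder on this device
        if request.service == "render_output":
            return "nearby_device"    # e.g. a detected TV
        return "cloud"

def route(request: ServiceRequest, agent: DeviceAgent) -> str:
    """Simple OS routing: no heavy input processing, just forward the request."""
    destination = agent.best_location(request)
    # In a real system the request would now be serialized and transmitted.
    return f"{request.service} -> {destination}"

agent = DeviceAgent()
for svc in ("video_decode", "render_output", "speech_to_text"):
    print(route(ServiceRequest(svc, b""), agent))
```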
Services are defined at the API Layer of the platform. Services are categorized into Dimensions. Dimensions can be recombined into Layers. Layers combine to form features in the user experience.
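This hierarchy can be sketched as a simple data model; the class names and the example service, dimension, and layer names are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Service:            # defined at the platform's API layer
    name: str

@dataclass
class Dimension:          # a category of services (e.g. video, gesture, emotion)
    name: str
    services: List[Service] = field(default_factory=list)

@dataclass
class Layer:              # dimensions recombined into a renderable layer
    name: str
    dimensions: List[Dimension] = field(default_factory=list)

video = Dimension("video", [Service("encode"), Service("decode")])
gesture = Dimension("gesture", [Service("capture"), Service("interpret")])
interaction_layer = Layer("interaction", [video, gesture])

# A feature in the user experience is built from one or more such layers.
print([d.name for d in interaction_layer.dimensions])
```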
In addition to the above mentioned examples, various other modifications and alterations of the invention may be made without departing from the invention. Accordingly, the above disclosure is not to be considered as limiting and the appended claims are to be interpreted as encompassing the true spirit and the entire scope of the invention.
The present application claims priority to the following U.S. Provisional Applications: U.S. Provisional Patent Application No. 61/373,236, entitled “EXPERIENCE OR “SENTIO” CODECS, AND METHODS AND SYSTEMS FOR IMPROVING QoE AND ENCODING BASED ON QoE FOR EXPERIENCES,” filed on Aug. 12, 2010, and U.S. Provisional Patent Application No. 61/373,229, entitled “METHOD AND SYSTEM FOR A SIMPLE OPERATING SYSTEM AS AN EXPERIENCE CODEC,” filed on Aug. 12, 2010, both of which are incorporated in their entireties herein by this reference.