The present disclosure relates generally to data networks and communication systems.
Home theater technology is improving rapidly, eroding movie theater ticket sales as the experience at home begins to rival that of the big screen. Home theater setups often include a high-definition (HD) television screen and a surround sound system with three front speakers, two rear speakers, and one sub-woofer that provide accurate localization of sound within a room. As home theater technology continues to improve, more people will choose to view movies in the relative comfort of their home living rooms rather than in a commercial movie theater.
The present invention will be understood more fully from the detailed description that follows and from the accompanying drawings, which however, should not be taken to limit the invention to the specific embodiments shown, but are for explanation and understanding only.
In the following description specific details are set forth, such as device types, system configurations, protocols, applications, methods, etc., in order to provide a thorough understanding of the present invention. However, persons having ordinary skill in the relevant arts will appreciate that these specific details may not be needed to practice the present invention.
According to one embodiment of the present invention, conferencing technology is utilized to create a virtual audience sound experience for home theater viewers. (In the context of the present application, the phrases “movie viewing”, “viewing a movie”, etc., include the experience of watching the visual aspects of a movie and listening to an audio mix which includes the movie soundtrack and the sounds of a virtual audience.) For example, a group of friends may want to view a particular film on a virtual theater basis, with each person watching the film on his or her respective home theater system. In such a case, the friends would like to talk among themselves freely while the film is playing, with certain sounds (e.g., cellphone ring tones, coughing, background noises, etc.) being selectively filtered out. To create such a virtual theater, each person's home theater system is treated as a virtual theater node by a conferencing system that synchronously mixes the soundtrack together with the voices and other sounds received from a microphone located in each virtual theatergoer's home.
In one embodiment, the start times of a community of home theater viewers are synchronized so that the virtual audience reaction is synchronized with the soundtrack of the movie. This first involves each person registering his or her home theater system as a virtual theater node with a conferencing system, which then creates a virtual seating location for each person in a virtual theater. In a particular implementation, a user (i.e., a home theater viewer) may specify what type of audience experience he would like. The request may be based on demographics, social themes, number of viewers per virtual theater, or other factors affecting the viewing experience. In some cases, a person may request to create a virtual theater with a group of friends so they may jointly experience the film together in a virtual theater setting. In other cases, the person may simply request to experience a movie on an ad hoc basis with strangers, or based on a certain selected set of factors such as those listed above.
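By way of illustration only, the following Python sketch shows one way such a registration and audience-preference request could be represented. The class and field names (`AudiencePreferences`, `TheaterNode`, `VirtualTheaterRegistry`) are assumptions introduced here for clarity and are not part of the described system.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AudiencePreferences:
    """Illustrative criteria a viewer might submit when registering a node."""
    friends_group: Optional[str] = None   # join a named group of friends, if any
    demographics: Optional[str] = None    # e.g., preferred age range of the audience
    social_theme: Optional[str] = None    # e.g., "comedy night", "horror fans"
    max_viewers: Optional[int] = None     # upper bound on virtual theater size

@dataclass
class TheaterNode:
    node_id: str
    address: str                          # network address of the home theater node
    preferences: AudiencePreferences = field(default_factory=AudiencePreferences)

class VirtualTheaterRegistry:
    """Hypothetical registry that assigns each registered node a virtual seat."""
    def __init__(self):
        self._seats = {}                  # node_id -> seat number

    def register(self, node: TheaterNode) -> int:
        seat = len(self._seats)           # next free seat in the virtual theater
        self._seats[node.node_id] = seat
        return seat
```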
It should be understood that the conferencing system that provides the virtual audience reaction may be located anywhere on a packet-based communication network that connects each of the virtual theatergoers together to create a shared experience. In some embodiments the conferencing system may comprise a server that performs the various processing and mixing functions described herein. In other embodiments, the conferencing system (including the sound processing/mixing functions) may be distributed among multiple servers, or, alternatively, implemented in a distributed fashion in the virtual theater devices located in each theatergoer's home theater system.
Virtual theater device 14 performs the audio mixing of the soundtrack with the audience reactions to create a virtual theater experience for the person(s) viewing the movie in a seating area 20. Device 14 is shown having audio and video outputs coupled to home theater receiver 15. Outputs to receiver 15 may be in any form acceptable to the receiver, including, but not limited to, analog audio, composite video, component video, or the High-Definition Multimedia Interface (HDMI). Receiver 15 produces a high-definition output for television display 18, and also generates the amplified audio signals for speakers 17 and 21-25 in order to create a theater surround sound experience for the persons in seating area 20. In one embodiment, receiver 15 may comprise a standard set-top box or receiver modified in accordance with the functionality described herein.
A microphone 25 is positioned to receive sound from seating area 20 and provides a single-channel audio input of the viewer's reaction to virtual theater device 14. In another embodiment, multiple microphones may receive multi-channel sound from seating area 20. Device 14 separates the viewer's reaction sounds from the movie's soundtrack audio produced by speakers 21-25 for transmission to the other virtual theater nodes included in the virtual theater experience. Device 14 may also perform various filtering functions on the incoming audio streams received from device 12, e.g., to filter out background noises or other undesirable sounds from other virtual theater nodes, before summing and mixing the audience reaction sounds with the movie soundtrack. In a specific implementation, device 14 may also add ambiance, so that the sounds appear to be echoing within a large, cavernous theater, and/or spatially manipulate the output with respect to the theatergoer's location in the virtual theater.
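One conventional way device 14 might separate the viewer's reaction from the soundtrack picked up by the microphone is an adaptive echo canceller. The normalized LMS (NLMS) sketch below is offered only as an illustrative stand-in; the function name, tap count, and step size are assumptions, and the text above does not specify this particular algorithm.

```python
import numpy as np

def nlms_echo_cancel(mic, soundtrack, taps=256, mu=0.5, eps=1e-6):
    """Illustrative NLMS echo canceller: subtracts an adaptively filtered copy
    of the soundtrack (the reference signal driving the speakers) from the
    microphone signal so that mostly the viewer's reaction remains."""
    w = np.zeros(taps)                       # adaptive filter coefficients
    reaction = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = soundtrack[n - taps:n][::-1]     # most recent soundtrack samples
        echo_est = np.dot(w, x)              # estimate of soundtrack leaking into mic
        e = mic[n] - echo_est                # residual = viewer reaction + noise
        w += (mu / (eps + np.dot(x, x))) * e * x   # NLMS coefficient update
        reaction[n] = e
    return reaction
```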
In yet another embodiment, a virtual theater provider is also the content provider for the movie such that the virtual audience track is mixed directly into the audio soundtrack delivered to the virtual theater node.
It is appreciated that virtual theater device 14 may apply any one of a wide variety of audio mixing algorithms to produce the output audio stream delivered to home theater receiver 15. In one embodiment, the audio mixer in device 14 may select a small number (e.g., 3) of the loudest input channels, outputting special mixes for return to the theater nodes that respectively produced those audio input streams. A single generic mix may be returned to the remaining nodes in the virtual theater.
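A minimal sketch of this loudest-N strategy, assuming equal-length reaction buffers and simple energy ranking, might look like the following. The function and variable names are illustrative, and excluding each loud node's own audio from its special mix is an assumption consistent with the echo-avoidance goal described below.

```python
import numpy as np

def mix_loudest_n(streams, soundtrack, n=3):
    """Pick the n most energetic reaction streams, build a per-node return mix
    for each of those nodes that excludes its own audio, and return one generic
    mix to everyone else. `streams` maps node_id -> equal-length numpy arrays."""
    energies = {nid: float(np.sum(s ** 2)) for nid, s in streams.items()}
    loudest = sorted(energies, key=energies.get, reverse=True)[:n]

    generic = soundtrack + sum(streams[nid] for nid in loudest)
    mixes = {}
    for nid in streams:
        if nid in loudest:
            # special mix: soundtrack plus the other loud reactions, minus its own
            others = sum(streams[x] for x in loudest if x != nid)
            mixes[nid] = soundtrack + others
        else:
            mixes[nid] = generic
    return mixes
```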
In another embodiment, all input channels may be mixed to produce the broadest possible audience reaction. In this embodiment, a special mix is produced for each virtual theater node so that echo and feedback of an individual virtual theater node's reaction does not appear in that node's audio output. Note that in certain embodiments, the audio mixer may filter out annoyances by dynamically suppressing audio streams that meet predefined annoyance criteria (e.g., white noise, pink noise, snoring, etc.). It is appreciated that for non-verbal audience reaction the audio mix can be of relatively low fidelity; however, for applications where verbal reaction is not to be filtered out as an annoyance, the verbal audience reaction should be of relatively high fidelity.
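The all-channels variant, with a per-node (N-1) mix and annoyance suppression, could be sketched as follows. Here `is_annoyance` is merely a placeholder for whatever detection criteria the mixer actually applies.

```python
import numpy as np

def mix_all_minus_self(streams, soundtrack, is_annoyance=lambda s: False):
    """Sum every non-annoying reaction stream, then give each node
    soundtrack + (total - its own contribution) so its own reaction never
    echoes back in its output."""
    usable = {nid: s for nid, s in streams.items() if not is_annoyance(s)}
    total = sum(usable.values()) if usable else 0.0
    mixes = {}
    for nid in streams:
        own = usable.get(nid, 0.0)
        mixes[nid] = soundtrack + (total - own)   # N-1 mix for this node
    return mixes
```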
Practitioners in the art will further appreciate that devices 12, 14 and receiver 15—or any combination thereof—may be integrated into a single box or unit. That is, the functions performed by each of devices 12, 14 and receiver 15 need not be separated into separate physical boxes or entities. Furthermore, some or all of the functions of device 14 may be implemented by a service provider as part of a service package offered to a customer.
Virtual theater device 14 may include a user interface, e.g., various input selection buttons, for enabling/disabling different optional features. For instance, each node may provide a user with the ability to optionally filter out certain noises on a node-by-node basis. In one embodiment, the audio mixer in device 14 simply halts mixing audio from an offending virtual theater node as soon as the offending noise is detected. A variety of known audio waveform detection and analysis algorithms may be employed for this purpose. Other features may include a pause/resume button that allows a virtual theatergoer to step out of the theater for a while, essentially muting microphone 25. A “group pause” or intermission feature may also be included in the list of options available via the user interface. More sophisticated user interfaces may accommodate such features as a chat room for commenting on the movie, and a tablet input device that allows users to overlay drawings or handwriting on the video output.
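As one hypothetical example of such a detection algorithm, a spectral-flatness measure can flag noise-like frames so the mixer stops including the offending node. The threshold and frame handling below are assumptions for illustration, not the specific algorithm contemplated above.

```python
import numpy as np

def looks_like_noise(frame, flatness_threshold=0.5):
    """Crude stand-in for a noise detector: high spectral flatness suggests
    white/pink-noise-like content rather than speech or laughter."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
    flatness = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)
    return flatness > flatness_threshold

def gate_offending_nodes(node_frames, muted):
    """Stops mixing a node as soon as an offending noise is detected; `muted`
    is the set of node_ids currently excluded from the mix."""
    for nid, frame in node_frames.items():
        if looks_like_noise(frame):
            muted.add(nid)
    return {nid: f for nid, f in node_frames.items() if nid not in muted}
```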
Additionally, a camera may be included in the system.
Audio mixer 35 operates to mix the movie soundtrack received from movie storage unit 33 with the audience reaction streams received on connections 43 over packet network 31 from each of the virtual theater nodes 41. For each virtual theater node 41, mixer 35 outputs an audio stream that represents the mixed movie soundtrack plus an audience reaction packet stream, the audience reaction packet stream being customized for each virtual theater node 41. On the right-hand side of network 31, the individual audience reaction streams produced by nodes 41 are represented by arrows 46. Arrow 43 illustrates the collection of audience reaction streams being received by audio mixer 35. The mixed audio streams output by mixer 35 for transmission to the virtual theater nodes 41 are represented by arrow 39. Arrows 45 represent the customized audio streams being delivered from network 31 to each of the virtual theater nodes. In addition to receiving a customized audio stream, each virtual theater node 41 receives a movie video packet stream that is synchronized to the corresponding mixed audio packet stream.
In an alternative system configuration, one of the virtual theater nodes may deliver the movie and soundtrack to the service provider, with the service provider then providing back to each virtual theater node a mixed audio output that includes the audience reaction.
In still another alternative embodiment, the movie is downloaded to each of the virtual theater nodes in advance of the scheduled start time of the movie. During playback of the movie, the audience reaction is mixed into the soundtrack either by the service provider, or, alternatively, by a mixer located in each of the virtual theater devices or set-top boxes of the corresponding theater nodes. In this embodiment, playback in the various virtual theater nodes is synchronized by the virtual theater devices or set-top boxes.
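A minimal sketch of the synchronized-playback idea, assuming the nodes agree on a common start time and correct for local clock skew, is shown below; `playback_position`, `clock_offset`, and the `player` interface are hypothetical names, not elements of the described devices.

```python
import time

def playback_position(agreed_start_epoch, clock_offset=0.0):
    """Each set-top box computes where in the movie it should be, using its own
    clock corrected by an NTP-style offset estimate, relative to a start time
    (epoch seconds) agreed upon by all nodes in the virtual theater."""
    now = time.time() + clock_offset
    return max(0.0, now - agreed_start_epoch)   # seconds into the movie

# Hypothetical usage: seek the local player to the shared position before playing.
# position = playback_position(agreed_start_epoch=1_700_000_000.0, clock_offset=-0.02)
# player.seek(position)   # `player` is an assumed local playback interface
```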
In yet another embodiment, high quality (e.g., HD) video may be produced by synchronized digital video disk (DVD) players—one in each of the local virtual theater nodes—with the audio, which includes a mix of the soundtrack and audience reaction, being provided by the service provider.
A virtual theater may be populated in a variety of different manners. For instance, a near-video-on-demand (NVOD) service may be provided in which the service provider periodically (e.g., every 5-15 minutes) creates a new virtual theater for a particular movie, with all of the viewers who requested to view that movie during the lead-time interval prior to the start being included in the new virtual theater population. Another approach to populating a virtual theater is by invitation, wherein a list of invited theatergoers or participants is provided to the service provider. The service provider then starts the movie when a quorum of the invitees is present. Still another possibility involves the service provider associating theatergoers based on certain specified criteria or characteristics. By way of example, viewers may specify characteristics of the audience they would like to watch a particular movie with, such as age or other demographic information, geographic location, rowdiness of the viewers, the size of the group populating the virtual theater, etc. The movie may begin after the service provider has a quorum of nodes that satisfy the criteria for a specific virtual theater.
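The NVOD batching and invitation-quorum approaches could be sketched as follows; the interval-boundary logic and the quorum fraction are assumptions chosen for illustration.

```python
from datetime import datetime, timedelta

def assign_to_nvod_theater(request_time: datetime, interval_minutes: int = 10) -> datetime:
    """NVOD-style batching sketch: every request made during the lead-time
    interval is grouped into the theater starting at the next interval boundary."""
    minutes = (request_time.minute // interval_minutes + 1) * interval_minutes
    start_of_hour = request_time.replace(minute=0, second=0, microsecond=0)
    return start_of_hour + timedelta(minutes=minutes)

def quorum_reached(present_nodes, invited_nodes, fraction=0.5):
    """Invitation sketch: start the movie once an assumed quorum fraction of the
    invited theatergoers has joined."""
    return len(set(present_nodes) & set(invited_nodes)) >= fraction * len(invited_nodes)
```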
The mixed output streams are then transmitted back to the respective virtual theater nodes (block 54). The audience reaction packet stream received from the virtual theater conferencing system is then mixed directly into the audio delivery channel of the movie (block 55).
It should be understood that elements of the present invention may also be provided as a computer program product which may include a machine-readable medium having stored thereon instructions which may be used to program a computer (e.g., a processor or other electronic device) to perform a sequence of operations. Alternatively, the operations may be performed by a combination of hardware and software. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media, or other types of media/machine-readable medium suitable for storing electronic instructions. For example, elements of the present invention may be downloaded as a computer program product, wherein the program may be transferred from a remote computer or telephonic device to a requesting process by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
Additionally, although the present invention has been described in conjunction with specific embodiments, numerous modifications and alterations are well within the scope of the present invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.