System and method for temporally adaptive media playback

Information

  • Patent Grant
  • Patent Number
    11,272,264
  • Date Filed
    Thursday, September 10, 2020
  • Date Issued
    Tuesday, March 8, 2022
Abstract
Disclosed herein are systems, methods, and computer-readable media for temporally adaptive media playback. The method for adaptive media playback includes estimating or determining an amount of time between a first event and a second event, selecting media content to fill the estimated amount of time between the first event and the second event, and playing the selected media content, possibly at a moderately different speed to fit the time interval. One embodiment includes events that are destination-based or temporal-based. Another embodiment includes adding, removing, speeding up, or slowing down selected media content in order to fit the estimated amount of time between the first event and the second event, or to modify the selected media content to adjust to an updated estimated amount of time. Another embodiment bases selected media content on a user or group profile.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention

The present invention relates to media playback and more specifically to adapting media playback to fill a calculated period of time.


2. Introduction

Media playback has become an important part of everyday life for many people. As media becomes more and more ubiquitous through technology, media playback may be abruptly cut short. For example, a person listening to sports scores on the radio while commuting to work may hear all the baseball scores, but miss half of the basketball highlights because the trip to work was 10 minutes too short. This is only one example of a myriad of situations where portions of media content are cut short and the listener is left with partial or incomplete information.


Another example of this problem is holding queues for reaching customer service or technical support. Often callers are forced to wait in a queue for the next available person when they call for computer help or with customer service questions, and background music is played while they wait. Media content, such as songs or news reports, is often cut short when the holding time is over, leaving the user hanging, and possibly slightly irritated, when their call is finally answered.


One approach in the art is to create and play back multiple, very short segments about various topics. One of the drawbacks of this approach is that listeners or viewers may dislike frequent topic changes or may desire more depth about a particular topic before moving on to another. Numerous short segments that are distilled to sound bites are not conducive to recalling critical information.


Another approach in the art is to record broadcast media for time-shifted consumption, as with a digital video recorder (DVR). This approach allows the user to record or to pause media playback so no portions are missed, but it requires that the user return to the same location, or at least a nearby location, to finish listening to or viewing the content. Also, recording devices are typically limited to homes or other fixed locations. DVRs or their equivalent are not available on cell phones, radios, car stereos, or other mobile devices. Nor does a DVR apply to telephone holding times.


These and other shortcomings exist in current approaches to media playback, and what is needed in the art is a mechanism to address these issues.


SUMMARY

Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth herein.


Disclosed are systems, methods, and computer-readable media for temporally adaptive media playback. The system of the present disclosure estimates an amount of time between a first event and a second event, selects media content and/or playback parameters (such as playback speed) to fill the estimated amount of time between the first event and the second event, and plays the selected media content.


The systems, methods, and computer-readable media may be compatible for use with AM/FM radio, digital satellite radio, television broadcasts, or other content playback schemes. One embodiment includes events that are destination-based or temporal-based. Another embodiment includes adding, removing, speeding up, or slowing down selected media content in order to fit the estimated amount of time between the first event and the second event. Another embodiment bases selected media content on a user or group profile.





BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 illustrates an example system embodiment;



FIG. 2 illustrates a method embodiment for temporally adaptive media playback;



FIG. 3 illustrates an exemplary user profile;



FIG. 4 illustrates an exemplary group profile; and



FIG. 5 illustrates an adaptation engine.





DETAILED DESCRIPTION

Various embodiments of the invention are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the invention.


With reference to FIG. 1, an exemplary system includes a general-purpose computing device 100, including a processing unit (CPU) 120 and a system bus 110 that couples various system components including the system memory such as read only memory (ROM) 140 and random access memory (RAM) 150 to the processing unit 120. Other system memory 130 may be available for use as well. It can be appreciated that the invention may operate on a computing device with more than one CPU 120 or on a group or cluster of computing devices networked together to provide greater processing capability. The system bus 110 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in ROM 140 or the like, may provide the basic routine that helps to transfer information between elements within the computing device 100, such as during start-up. The computing device 100 further includes storage devices such as a hard disk drive 160, a magnetic disk drive, an optical disk drive, tape drive or the like. The storage device 160 is connected to the system bus 110 by a drive interface. The drives and the associated computer readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computing device 100. In one aspect, a hardware module that performs a particular function includes the software component stored in a tangible computer-readable medium in connection with the necessary hardware components, such as the CPU, bus, display, and so forth, to carry out the function. The basic components are known to those of skill in the art and appropriate variations are contemplated depending on the type of device, such as whether the device is a small, handheld computing device, a desktop computer, or a computer server.


Although the exemplary environment described herein employs the hard disk, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAMs), read only memory (ROM), a cable or wireless signal containing a bit stream and the like, may also be used in the exemplary operating environment.


To enable user interaction with the computing device 100, an input device 190 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. The input may be used by the presenter to indicate the beginning of a speech search query. The device output 170 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 100. The communications interface 180 generally governs and manages the user input and system output. There is no restriction on the invention operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.


For clarity of explanation, the illustrative system embodiment is presented as comprising individual functional blocks (including functional blocks labeled as a “processor”). The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software. For example, the functions of one or more processors presented in FIG. 1 may be provided by a single shared processor or multiple processors. (Use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software.) Illustrative embodiments may comprise microprocessor and/or digital signal processor (DSP) hardware, read-only memory (ROM) for storing software performing the operations discussed below, and random access memory (RAM) for storing results. Very large scale integration (VLSI) hardware embodiments, as well as custom VLSI circuitry in combination with a general purpose DSP circuit, may also be provided.


The logical operations of the various embodiments are implemented as: (1) a sequence of computer implemented steps, operations, or procedures running on a programmable circuit within a general use computer, (2) a sequence of computer implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or (3) interconnected machine modules or program engines within the programmable circuits.



FIG. 2 illustrates a method embodiment for temporally adaptive media playback. The method is implemented via a system or device as would be known in the art. The structure of the system or device will depend on the particular application and implementation of the method. First, the method estimates an amount of time between a first event and a second event (202). The first event is any event that occurs before the second event. The first event and second event may be either destination-based or temporal-based. An example of a destination-based event is a user driving to his grandmother's house. Here, the first event may be the person leaving in the car and the second event is arriving at the house. A temporal-based event can be a user who watches one television show religiously and wants other media presentations to end 1 minute before that one television show starts every week. In this example, the first event is the current time while watching the other presentation and the second event occurs one minute before the television program. In the context of a commuter listening to the car radio, a first event may be when the commuter turns on the radio. In the same context, the second event is arrival at the parking lot at work.


The estimation of time between the two events may be based on a variety of inputs, such as traffic reports, comparative analysis of repeated trip data (such as a daily commute), historical usage trends (such as if the user listens to the radio for 25 minutes every day from 8:20 a.m. to 8:45 a.m.), GPS or trip-planning software, manually entered estimates (via a keypad, voice input, or any other suitable input means), or other sources of information relevant to estimating the amount of time between the two events. The second event can be dynamic, fluid, or subject to change at any moment. The second event can be altered slightly or may be changed dramatically. In an example of a user listening to the car radio while on his lunch break, the user may have originally intended to eat at a first restaurant, but a few blocks before the first restaurant decided to eat at a second restaurant instead. The estimated time changes from 3 minutes to 8 minutes. The time estimation is updated to reflect the change in the second event from the first restaurant to the second restaurant, and any selected media content may be rearranged, added, deleted, squeezed, or stretched to fill the new time estimate. New estimated times may be calculated on a regular, scheduled basis, such as every 60 seconds, or only in the event of a major deviation from the current estimated time, such as encountering a traffic jam while driving to work.
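The estimate-and-update logic described above can be sketched in a few lines of Python. This is an illustrative example only: the function names, the blending of sources by averaging, the 60-second period, and the deviation threshold are all assumptions, not details from the patent.

```python
# Hypothetical sketch of the time-estimation step (202): blend whatever
# sources are available into one estimate, and re-estimate either on a
# schedule or when a new reading deviates sharply from the current estimate.

def estimate_seconds(sources):
    """Average the time estimates (in seconds) from the sources that exist."""
    values = [v for v in sources.values() if v is not None]
    return sum(values) / len(values)

def should_reestimate(elapsed_since_last, current_estimate, new_reading,
                      period=60, deviation_ratio=0.25):
    """Re-estimate every `period` seconds, or sooner if a new reading
    (e.g. a traffic report) deviates from the current estimate by more
    than `deviation_ratio` (a major deviation)."""
    if elapsed_since_last >= period:
        return True
    return abs(new_reading - current_estimate) / current_estimate > deviation_ratio

est = estimate_seconds({"gps": 1500, "history": 1560, "traffic": None})
print(est)                               # 1530.0 seconds (25.5 minutes)
print(should_reestimate(10, est, 4800))  # True: traffic jam detected
print(should_reestimate(10, est, 1600))  # False: minor fluctuation
```

The same predicate covers both update policies the text mentions: the scheduled recalculation and the deviation-triggered one.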


Second, the method selects media content to fill the estimated amount of time between the first event and the second event (204). One possible dynamic aspect of selecting media content is that content is chosen or altered in real time to adjust to the remaining estimated time. Media content playback order may change as events change. If more time is available, additional media content may be added, the currently playing media content may be played at a slower speed to stretch the media to fill the entire estimated time, or media content providing additional details or depth may be located and played. If less time is available, then less essential portions of media content may be abbreviated or the media content may be played at a quicker speed to finish just before the second event.
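The selection step can be illustrated with a simple greedy sketch; the segment names, lengths, and priorities below are invented for the example and the greedy strategy is one assumption among many possible selection schemes.

```python
# Illustrative sketch of the selection step (204): greedily pick the
# highest-priority segments that still fit the estimated time.

def select_segments(segments, available):
    """segments: list of (name, length, priority) tuples.
    Returns (chosen names in priority order, unfilled remainder)."""
    chosen, remaining = [], available
    for name, length, _prio in sorted(segments, key=lambda s: -s[2]):
        if length <= remaining:
            chosen.append(name)
            remaining -= length
    return chosen, remaining

catalog = [("sports", 4, 3), ("weather", 5, 2), ("politics", 8, 1)]
print(select_segments(catalog, 10))   # (['sports', 'weather'], 1)
```

The unfilled remainder returned here is what the speed-adjustment step would then absorb by stretching playback.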


Third, the method plays the selected media content (206). Media content may be made up of one or more of the following: entertainment content, news content, and advertising content. Other content is contemplated as well. The played media content may be modified dynamically if the estimated time changes. Media segments may be skipped, sped up, or slowed down, or additional media segments may be appended to adjust for a new estimated time.
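Speeding content up or slowing it down to finish just before the second event amounts to computing a playback rate. The minimal sketch below clamps the rate to a comfortable range; the bounds are assumptions chosen for intelligibility, not values from the patent.

```python
# Fit content of a known length into the available interval by adjusting
# the playback rate. Rate > 1 compresses (plays faster); rate < 1 stretches.

def playback_rate(content_seconds, available_seconds, lo=0.85, hi=1.25):
    """Clamp the rate so playback stays intelligible; outside the clamp,
    segments should be added or dropped instead of stretched further."""
    rate = content_seconds / available_seconds
    return max(lo, min(hi, rate))

print(playback_rate(600, 540))   # speed up ~11% to finish in time
print(playback_rate(600, 660))   # slow down ~9% to stretch
print(playback_rate(600, 300))   # clamped at 1.25: drop a segment instead
```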


The selected media content may be based on a user profile, a group profile, or a combination of both. FIG. 3 illustrates one possible implementation of a user profile 300. A user profile 302 contains a database of likes 304 representing what the user prefers to view or hear. A user profile also contains a database of dislikes 306 representing content that should rarely or never be played. The databases of likes and dislikes contain specific media content to be preferred or avoided, or categories of media content to be preferred or avoided, such as PGA Tour news updates or media about Iraq.


A user profile may also contain a calendar 308 showing when, where, and what the user is planning to do so that media content is not scheduled for consumption when it would conflict with another activity. Another use for a calendar is to target specific media content to an event or meeting. For example, if a user who lives in Minnesota has a meeting in the calendar in Texas, media content about the weather forecast in Texas may be selected for playback.


A user profile includes such items as a spending history 302 which may give some insight into user purchasing preferences and trends, similar to how Amazon.com suggests related books based on what a user has already viewed or purchased. A spending history database links to and draws information from multiple payment sources such as credit cards, PayPal, debit cards, checking account information, etc. One user profile can be linked to the user's other profiles 310A, 310B, 310C to share information. For example, the user's profile at the office and at home might be synchronized with the user's profile in his car, so the same preferences and calendar are available to each profile. Another example of linking profiles is linking the profiles of immediate family members, close friends, church congregation members, co-workers, or other individuals with potentially similar interests, likes, dislikes, spending habits, or media consumption habits.
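One illustrative way to model the likes, dislikes, and linked profiles of FIG. 3 in code is sketched below. All field names are hypothetical, and the one-level merge of linked profiles is an assumption about how synchronization might behave.

```python
# Hypothetical model of a user profile: likes (304) and dislikes (306) as
# category sets, with linked profiles (310A-C) contributing their dislikes.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    likes: set = field(default_factory=set)
    dislikes: set = field(default_factory=set)
    linked: list = field(default_factory=list)   # other UserProfile objects

    def all_dislikes(self):
        """Own dislikes merged with those of directly linked profiles."""
        merged = set(self.dislikes)
        for p in self.linked:
            merged |= p.dislikes
        return merged

    def allowed(self, category):
        return category not in self.all_dislikes()

home = UserProfile(likes={"golf"}, dislikes={"politics"})
car = UserProfile(likes={"news"}, linked=[home])
print(car.allowed("news"))      # True
print(car.allowed("politics"))  # False: inherited from the linked profile
```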



FIG. 4 illustrates a sample group profile 402 in the context of a group event such as a baseball game. The group profile includes the time, date, and location 404 of the group event. The group profile includes other scheduling information 406 about the event, such as an intermission in a play. One group profile contains sub-group profiles 408. In one example of a baseball game, all attendees share the same group profile 402, but spectators sitting on each side of the stadium have their own sub-group profile 408 that is tailored to what they will experience or what is available where they are sitting. Ad hoc group profiles may also be created and shared among individuals who are in close proximity to each other and/or who share some attributes across their profiles. For example, a group of foreign sightseers walking along a crowded city street share an ad hoc group profile of likely common characteristics, while the non-sightseeing people around them do not.


While FIG. 4 is discussed as illustrating a group profile where the users included in the group profile are in close proximity, other group profiles are created where the users are not nearby, but spread out in many different parts of the nation or even the world. For example, participants in a shared conference call may share the same group profile for the conference call. In one aspect, employees who work remotely such as semi-truck drivers, UPS delivery personnel, or travelling salespeople all share the same group profile with other employees.


We turn now to another embodiment that relates to how the media content is collected using an adaptation engine. FIG. 5 illustrates an adaptation engine 500. While the adaptation engine is discussed and illustrated as a method flow chart, the method may be practiced in a system, computing device, network, client device, or any combination thereof, so long as the components are programmed to perform the certain steps of the method. Thus, FIG. 5 may represent a method embodiment, a system, or an adaptation engine operating as part of a device or system. The adaptation engine drives the selection and modification of media content to fill the estimated remaining amount of time. First, the adaptation engine selects a plurality of media segments that fit in the estimated amount of time 502. The plurality of media segments may be a single clip that fills the remaining time by itself, or several clips which, if played back to back, would fill the remaining time. An example clip is a previously prepared video about a destination, of a certain length. For any particular destination, multiple clips of differing lengths may be generated. For example, a 1-minute clip provides a summary, a 5-minute clip discusses the destination in more detail, and so on. Second, the adaptation engine orders the plurality of media segments 504. The plurality of media segments are organized and prioritized, potentially based on user profiles or group profiles. In the case of audio-only media like news reports, the organization is linear because only one thing may be listened to at a time. However, background music or other audio media are combinable with other media for simultaneous playback within the estimated time. In one aspect with video media, the organization of media segments includes playing several clips at once. The effect is similar to showing a stock ticker on the bottom of a news report or picture-in-picture.
Third, the adaptation engine alters attributes of each of the plurality of media segments to fit the estimated time 506, if necessary. In one aspect, alteration includes adjusting the replay speed to stretch or squeeze the media content to fit the estimated remaining time. If the estimated time suddenly drops, one alteration is jumping straight to the conclusion after the current sentence ends. If the estimated time increases, one alteration is queuing up an additional media segment to follow the currently playing media segment. Fourth, the adaptation engine creates a playlist of selected media content containing the plurality of media segments 508. As stated before, the playlist may be dynamically altered or adjusted to reflect the current estimated time remaining.
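The four adaptation-engine steps (502 through 508) can be sketched as one small pipeline. The segment data, the priority ordering, and the maximum-stretch bound below are assumptions made for illustration, not details from the patent.

```python
# Sketch of the adaptation engine: select (502), order (504), alter (506),
# and emit a playlist (508). Each playlist entry is [name, length, rate];
# rate < 1 slows playback to stretch a segment toward the remaining gap.

def build_playlist(segments, estimated, max_stretch=1.15):
    """segments: list of (name, length, priority) tuples."""
    ordered = sorted(segments, key=lambda s: -s[2])          # 504: prioritize
    playlist, used = [], 0
    for name, length, _ in ordered:                          # 502: select fits
        if used + length <= estimated:
            playlist.append([name, length, 1.0])
            used += length
    gap = estimated - used
    if playlist and gap > 0:                                 # 506: stretch last
        name, length, _ = playlist[-1]
        rate = max(length / (length + gap), 1 / max_stretch)
        playlist[-1] = [name, length, round(rate, 3)]
    return playlist                                          # 508: playlist

clips = [("summary", 1, 3), ("details", 5, 2), ("extra", 8, 1)]
print(build_playlist(clips, 7))   # [['summary', 1, 1.0], ['details', 5, 0.87]]
```

Here the 8-minute clip is skipped as too long, and the final clip is slowed (clamped by `max_stretch`) to absorb most of the leftover minute.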


An example of how these principles may apply to a real-life situation is technical support or other telephone queues. The same principles may apply to video media content as well as audio media content. When a user calls a telephone queue, the user is placed on hold and is often entertained while waiting with music or other media content. Call queuing software can determine an estimated holding time, for example 10 minutes. Based on the phone number dialed from, the type of telephone queue the user has called, or other information, relevant media segments are selected which will total approximately 10 minutes. For example, a 3-minute segment about local news and a 6.5-minute segment about local weather are selected and played with a transition or introduction segment sandwiched between the two. Separate databases may exist for media content, introductions to media content, transitions between types of media content, or conclusions of media content. Media content is indexed to indicate where natural stopping or starting points are found. One larger media segment may be broken up into sub-segments which can be assembled together into a program whose time is shorter than that of the larger media segment. For example, a 25-minute news broadcast contains a 4-minute segment on sports, a 5-minute segment on weather, an 8-minute segment on local politics, a 4-minute segment on a special interest story, two 1-minute transitional segments, and a 1-minute conclusion segment. For an expected time of 18 minutes, the 4-minute sports, 5-minute weather, 8-minute politics, and 1-minute conclusion segments may be played back. For an expected time of 10 minutes, the system plays the 4-minute sports, one 1-minute transitional, and the 5-minute weather segments. For an expected time of 6 minutes, the system plays a 1-minute transition, the 4-minute sports, and a 1-minute conclusion. Similarly, if the expected time increases, additional segments are added.
If the expected time decreases, the current segment is brought to an end gracefully, possibly with the help of a transition or conclusion segment. In this regard, the system completes a sentence in a current segment and then stops and inserts or plays a concluding or transition segment.
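Assembling sub-segments of the 25-minute broadcast into a program of an exact expected length can be sketched as a small search. The segment lengths come from the example above; the brute-force strategy, and the preference for combinations with more segments, are illustrative assumptions.

```python
# Sketch: find a combination of sub-segments whose lengths sum exactly to
# the expected hold time, trying larger combinations (more content) first.
from itertools import combinations

SEGMENTS = {"sports": 4, "weather": 5, "politics": 8,
            "special": 4, "transition1": 1, "transition2": 1, "conclusion": 1}

def assemble(target):
    """Return one set of segment names whose lengths sum to `target`
    minutes, or None if no combination fits exactly."""
    names = list(SEGMENTS)
    for r in range(len(names), 0, -1):
        for combo in combinations(names, r):
            if sum(SEGMENTS[n] for n in combo) == target:
                return set(combo)
    return None

print(assemble(18))   # e.g. a program mixing sports, politics, and fillers
print(assemble(6))    # e.g. a transition, sports, and a conclusion
```

Exact-fit search is only one policy; the graceful early-ending described above corresponds to falling back to a short conclusion segment when no combination fits.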


Returning to the telephone holding queue example, if the queue time continues longer than expected, then media segments are selected for playback which occupy the estimated remaining time until the queue time is finished. Conversely, if queue time is abruptly cut short, then the currently playing media segment is cut short and transitioned to a close with a brief conclusion media segment. If the hold time in the queue is getting shorter and advance notice is provided, a better or less awkward transition to a conclusion is planned and made available for playback. A benefit therefore is that a user may be less inclined to hang up frustrated with the wait if they know they will receive a complete report or full content while on hold.


Another exemplary situation relates to a commuter driving home from work. Assume that media content is selected and arranged to play back for the user and fill the projected remaining commute time of 45 minutes. Updated traffic information then indicates that a traffic jam has effectively blocked the route home and the commuter will remain in traffic for 80 minutes. Those of skill in the art will understand how traffic information may be gathered. As discussed above, additional media content is selected from various sources to fill the updated time of 80 minutes. An alert may be sent automatically to the commuter's home to notify family of the anticipated delay and updated estimated arrival time. The alert may be a prerecorded message, a synthetic speech message, an SMS message, an email, or any other form of automatic communication.


Embodiments within the scope of the present invention may also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. A “tangible” computer-readable medium expressly excludes software per se (not stored on a tangible medium) and a wireless, air interface. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable media.


Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, objects, components, and data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.


Those of skill in the art will appreciate that other embodiments of the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.


The various embodiments described above are provided by way of illustration only and should not be construed to limit the invention. For example, the processes described herein may have application in cable broadcast systems, television programming in waiting rooms, XM Satellite Radio or similar digital audio broadcasts, hotel, resort, airplane or cruise ship media content programming, media playback while on roadtrips, etc. Those skilled in the art will readily recognize various modifications and changes that may be made to the present invention without following the example embodiments and applications illustrated and described herein, and without departing from the true spirit and scope of the present invention.

Claims
  • 1. A method comprising: playing, via an output device, a first portion of a media content and a second portion of the media content; determining, via a processor, a change in a timing of an event to yield a changed time; based on the changed time, altering the first portion of the media content, to yield an altered first portion of the media content, wherein the altered first portion of the media content has an altered run-time based on the changed time; replacing the first portion of the media content being played with the altered first portion of the media content; and generating a notification of the changed time.
  • 2. The method of claim 1, wherein the event is related to a route to a destination, and wherein the first portion of the media content comprises an advertisement content.
  • 3. The method of claim 1, further comprising: estimating, via the processor, an amount of time between a first event and a second event; receiving, from a user and via a user interface, a preference; and selecting, based on the preference of the user and the amount of time, the first portion of the media content and the second portion of the media content to fill the amount of time between the first event and the second event.
  • 4. The method of claim 3, wherein the preference of the user is stored with a list of disliked content.
  • 5. The method of claim 3, wherein the first event and the second event are one of destination-based and temporal-based.
  • 6. The method of claim 1, wherein the first portion of the media content comprises audio, and wherein the altering further comprises playing the first portion of the media content at a different rate based on the changed time.
  • 7. The method of claim 6, wherein the second portion of the media content is not altered.
  • 8. The method of claim 1, wherein the first portion of the media content comprises one of entertainment content or news content.
  • 9. A system comprising: a processor; an output device; and a computer-readable storage medium having instructions stored which, when executed by the processor, result in performing operations comprising: playing, via the output device, multiple portions of a media content; determining, via the processor, a change in a timing of an event to yield a changed time; based on the changed time, altering one of the multiple portions of the media content, to yield an altered first portion of the media content, wherein the altered first portion of the media content has an altered run-time based on the changed time; replacing the one of the multiple portions of the media content being played with the altered first portion of the media content; and generating a notification of the changed time.
  • 10. The system of claim 9, wherein the event is related to a route to a destination, and wherein the altered first portion of the media content comprises an advertisement content.
  • 11. The system of claim 9, wherein the altering further comprises playing the altered first portion of the media content at a different rate.
  • 12. The system of claim 11, wherein a preference of a user is stored with a list of disliked content.
  • 13. The system of claim 11, wherein the event comprises a first event and a second event that are one of destination-based and temporal-based.
  • 14. The system of claim 9, wherein the one of the multiple portions of the media content comprises audio.
  • 15. The system of claim 14, wherein a second portion of the media content is not altered.
  • 16. The system of claim 9, wherein the altered first portion of the media content comprises one of entertainment content or news content.
  • 17. A non-transitory computer-readable storage device having instructions stored which, when executed by a computing device, result in the computing device performing operations comprising: playing, via an output device, a first portion of a media content and a second portion of the media content; determining, via a processor, a change in a timing of an event to yield a changed time; based on the changed time, altering the first portion of the media content, to yield an altered first portion of the media content, wherein the altered first portion of the media content has an altered run-time based on the changed time, and wherein the altered first portion of the media content comprises an advertisement content; replacing the first portion of the media content being played with the altered first portion of the media content; and generating a notification of the changed time.
  • 18. The non-transitory computer-readable storage device of claim 17, wherein the operations further comprise: estimating an amount of time between a first event and a second event; receiving, from a user and via a user interface, a preference; and selecting, based on the preference of the user and the amount of time, the first portion of the media content and the second portion of the media content to fill the amount of time between the first event and the second event.
  • 19. The non-transitory computer-readable storage device of claim 18, wherein the preference of the user is stored with a list of disliked content.
  • 20. The non-transitory computer-readable storage device of claim 19, wherein the first event and the second event are one of destination-based and temporal-based.
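Outside the formal claim language, the operations recited in claims 9 and 17 (play portions of media, detect a change in an event's timing, alter one portion's run-time by changing its playback rate, replace the portion being played, and generate a notification) can be illustrated with a rough sketch. This is not the patented implementation; all names and data structures below are hypothetical, chosen only to make the claimed steps concrete.

```python
# Illustrative sketch of the claimed operations (hypothetical names,
# not the patent's implementation): a portion of media is sped up or
# slowed down so its run-time fits a changed time interval, the
# playing portion is replaced, and a notification is generated.
from dataclasses import dataclass, replace as dc_replace

@dataclass(frozen=True)
class Portion:
    name: str
    run_time_s: float   # scheduled run-time in seconds
    rate: float = 1.0   # playback-rate multiplier (1.0 = normal speed)

def alter_to_fit(portion: Portion, new_time_s: float) -> Portion:
    """Alter one portion so its run-time fills new_time_s seconds."""
    return dc_replace(portion,
                      run_time_s=new_time_s,
                      rate=portion.run_time_s / new_time_s)

def on_timing_change(playlist: list[Portion], index: int,
                     changed_time_s: float) -> tuple[list[Portion], str]:
    """Replace the portion at `index` with an altered copy and
    generate a notification of the changed time."""
    altered = alter_to_fit(playlist[index], changed_time_s)
    updated = playlist[:index] + [altered] + playlist[index + 1:]
    note = (f"'{altered.name}' rescheduled to {changed_time_s:.0f}s "
            f"(rate x{altered.rate:.2f})")
    return updated, note

playlist = [Portion("news", 300.0), Portion("advertisement", 60.0)]
# The route to the destination shortens: the 60 s ad slot is now 45 s,
# so only that portion is altered; the news portion is untouched.
playlist, note = on_timing_change(playlist, 1, 45.0)
print(note)
```

Note that, as in claims 7 and 15, only the affected portion is altered; the other portions of the playlist keep their original run-times and rates.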
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 16/550,887 filed on Aug. 26, 2019, which is a continuation of U.S. patent application Ser. No. 15/206,755 (now U.S. Pat. No. 10,397,665), filed on Jul. 11, 2016, which is a continuation of U.S. patent application Ser. No. 14/666,511 (now U.S. Pat. No. 9,392,345), filed Mar. 24, 2015, which is a continuation of U.S. patent application Ser. No. 12/177,551 (now U.S. Pat. No. 8,990,848), filed Jul. 22, 2008. All sections of the aforementioned application(s) and/or patent(s) are incorporated herein by reference in their entirety.

US Referenced Citations (189)
Number Name Date Kind
1773980 Farnsworth Aug 1930 A
3795902 Russell Mar 1974 A
4297732 Freudenschuss Oct 1981 A
4627079 Von Dec 1986 A
5014125 Pocock et al. May 1991 A
5333176 Burke et al. Jul 1994 A
5457780 Shaw et al. Oct 1995 A
5802492 Delorme et al. Sep 1998 A
5948040 Delorme et al. Sep 1999 A
6055619 North et al. Apr 2000 A
6071229 Rubins Jun 2000 A
6188905 Rudrapatna et al. Feb 2001 B1
6208799 Marsh et al. Mar 2001 B1
6240183 Marchant May 2001 B1
6321158 Delorme et al. Nov 2001 B1
6404441 Chailleux Jun 2002 B1
6405166 Huang et al. Jun 2002 B1
6526335 Treyz et al. Feb 2003 B1
6587404 Keller et al. Jul 2003 B1
6711474 Treyz et al. Mar 2004 B1
6731625 Eastep et al. May 2004 B1
6782553 Ogawa et al. Aug 2004 B1
6812994 Bubie et al. Nov 2004 B2
6847885 Sato et al. Jan 2005 B2
6868292 Ficco et al. Mar 2005 B2
6975873 Banks et al. Dec 2005 B1
7080392 Geshwind Jul 2006 B1
7107045 Knoop Sep 2006 B1
7145898 Elliott Dec 2006 B1
7149961 Harville et al. Dec 2006 B2
7164410 Kupka Jan 2007 B2
7181321 Schlicker et al. Feb 2007 B2
7205471 Looney et al. Apr 2007 B2
7287032 Attili et al. Oct 2007 B2
7295608 Reynolds et al. Nov 2007 B2
7500010 Harrang et al. Mar 2009 B2
7536705 Boucher et al. May 2009 B1
7546625 Kamangar et al. Jun 2009 B1
7634484 Murata Dec 2009 B2
7664882 Mohammed et al. Feb 2010 B2
7685204 Rogers Mar 2010 B2
7769827 Girouard Aug 2010 B2
7805373 Issa et al. Sep 2010 B1
7827227 Iijima et al. Nov 2010 B2
7849487 Vosseller et al. Dec 2010 B1
7877774 Basso et al. Jan 2011 B1
7895617 Pedlow et al. Feb 2011 B2
7996422 Shahraray et al. Aug 2011 B2
8055688 Giblin Nov 2011 B2
8082279 Weare Dec 2011 B2
8126936 Giblin Feb 2012 B1
8327270 Jones et al. Dec 2012 B2
8401901 Des Jardins et al. Mar 2013 B2
8611428 Ludewig et al. Dec 2013 B1
9047235 Barraclough et al. Jun 2015 B1
9225761 Ficco Dec 2015 B2
20010021995 Hatano Sep 2001 A1
20010029425 Myr et al. Oct 2001 A1
20020013897 Mcternan et al. Jan 2002 A1
20020026281 Shibata et al. Feb 2002 A1
20020100041 Rosenberg et al. Jul 2002 A1
20020118799 Detlef Aug 2002 A1
20020124250 Proehl et al. Sep 2002 A1
20020124258 Fritsch et al. Sep 2002 A1
20020144262 Plotnick et al. Oct 2002 A1
20020144283 Headings et al. Oct 2002 A1
20020183072 Steinbach et al. Dec 2002 A1
20030018714 Mikhailov et al. Jan 2003 A1
20030067554 Klarfeld et al. Apr 2003 A1
20030093790 Logan et al. May 2003 A1
20030097571 Hamilton et al. May 2003 A1
20030101449 Bentolila et al. May 2003 A1
20030106054 Billmaier et al. Jun 2003 A1
20030108331 Plourde et al. Jun 2003 A1
20030110504 Plourde et al. Jun 2003 A1
20030110513 Plourde et al. Jun 2003 A1
20030114968 Sato et al. Jun 2003 A1
20030115150 Hamilton et al. Jun 2003 A1
20030115349 Brinkman et al. Jun 2003 A1
20030149975 Eldering et al. Aug 2003 A1
20030188308 Kizuka Oct 2003 A1
20030208760 Sugai et al. Nov 2003 A1
20030212996 Wolzien Nov 2003 A1
20030221191 Khusheim et al. Nov 2003 A1
20030229900 Reisman et al. Dec 2003 A1
20040003398 Donian Jan 2004 A1
20040030798 Andersson et al. Feb 2004 A1
20040064567 Doss et al. Apr 2004 A1
20040068752 Parker Apr 2004 A1
20040068754 Russ Apr 2004 A1
20040088392 Barrett et al. May 2004 A1
20040117442 Thielen Jun 2004 A1
20040123321 Striemer Jun 2004 A1
20040198386 Dupray et al. Oct 2004 A1
20040226034 Kaczowka et al. Nov 2004 A1
20040230655 Li et al. Nov 2004 A1
20050001940 Layne Jan 2005 A1
20050005292 Kimata et al. Jan 2005 A1
20050010420 Russlies et al. Jan 2005 A1
20050013462 Rhoads Jan 2005 A1
20050022239 Meuleman et al. Jan 2005 A1
20050069225 Schneider et al. Mar 2005 A1
20050071874 Elcock et al. Mar 2005 A1
20050080788 Murata Apr 2005 A1
20050120866 Brinkman et al. Jun 2005 A1
20050143915 Odagawa et al. Jun 2005 A1
20050226601 Cohen et al. Oct 2005 A1
20050245241 Durand et al. Nov 2005 A1
20050249080 Foote et al. Nov 2005 A1
20050257242 Montgomery et al. Nov 2005 A1
20060029109 Moran et al. Feb 2006 A1
20060064716 Sull et al. Mar 2006 A1
20060064721 Del Val et al. Mar 2006 A1
20060068822 Kal Mar 2006 A1
20060100779 Vergin May 2006 A1
20060143560 Gupta et al. Jun 2006 A1
20060156209 Matsuura et al. Jul 2006 A1
20060173974 Tang Aug 2006 A1
20060174293 Ducheneaut et al. Aug 2006 A1
20060174311 Ducheneaut et al. Aug 2006 A1
20060174312 Ducheneaut et al. Aug 2006 A1
20060174313 Ducheneaut et al. Aug 2006 A1
20060184538 Randall et al. Aug 2006 A1
20060195880 Horiuchi et al. Aug 2006 A1
20060218585 Isobe et al. Sep 2006 A1
20060230350 Baluja Oct 2006 A1
20060271658 Beliles et al. Nov 2006 A1
20060276201 Dupray Dec 2006 A1
20070014536 Hellman Jan 2007 A1
20070061835 Klein et al. Mar 2007 A1
20070067315 Hegde et al. Mar 2007 A1
20070073725 Klein et al. Mar 2007 A1
20070073726 Klein et al. Mar 2007 A1
20070118873 Houh et al. May 2007 A1
20070138347 Ehlers Jun 2007 A1
20070150188 Rosenberg Jun 2007 A1
20070173266 Barnes Jul 2007 A1
20070177558 Ayachitula et al. Aug 2007 A1
20070277108 Orgill et al. Nov 2007 A1
20070280638 Aoki et al. Dec 2007 A1
20070283380 Aoki et al. Dec 2007 A1
20080033990 Hutson et al. Feb 2008 A1
20080040328 Verosub Feb 2008 A1
20080040501 Harrang et al. Feb 2008 A1
20080060001 Logan et al. Mar 2008 A1
20080060084 Gappa et al. Mar 2008 A1
20080066111 Ellis et al. Mar 2008 A1
20080072272 Robertson et al. Mar 2008 A1
20080103686 Alberth et al. May 2008 A1
20080103689 Graham et al. May 2008 A1
20080132212 Lemond et al. Jun 2008 A1
20080133705 Lemond et al. Jun 2008 A1
20080177793 Epstein et al. Jul 2008 A1
20080184127 Rafey et al. Jul 2008 A1
20080195744 Bowra et al. Aug 2008 A1
20080195746 Bowra et al. Aug 2008 A1
20080235286 Hutson et al. Sep 2008 A1
20080235741 Ljolje et al. Sep 2008 A1
20080250095 Mizuno Oct 2008 A1
20080270905 Goldman Oct 2008 A1
20080281687 Hurwitz et al. Nov 2008 A1
20080301304 Chitsaz et al. Dec 2008 A1
20080318518 Coutinho et al. Dec 2008 A1
20090030775 Vieri Jan 2009 A1
20090031339 Pickens et al. Jan 2009 A1
20090037947 Patil et al. Feb 2009 A1
20090058683 Becker Mar 2009 A1
20090074003 Accapadi et al. Mar 2009 A1
20090074012 Shaffer et al. Mar 2009 A1
20090109959 Elliott et al. Apr 2009 A1
20090119696 Chow et al. May 2009 A1
20090150925 Henderson et al. Jun 2009 A1
20090158342 Mercer et al. Jun 2009 A1
20090216433 Griesmer et al. Aug 2009 A1
20090217316 Gupta et al. Aug 2009 A1
20090234815 Boerries et al. Sep 2009 A1
20090271819 Cansler et al. Oct 2009 A1
20090287656 Bennett et al. Nov 2009 A1
20100031291 Iwata et al. Feb 2010 A1
20100162330 Herlein et al. Jun 2010 A1
20100208070 Rogers et al. Aug 2010 A2
20100287581 Kamen et al. Nov 2010 A1
20100293598 Collart et al. Nov 2010 A1
20110072466 Basso et al. Mar 2011 A1
20110258049 Ramer et al. Oct 2011 A1
20110296287 Shahraray et al. Dec 2011 A1
20120030702 Joao et al. Feb 2012 A1
20120096490 Barnes, Jr. Apr 2012 A1
20140068662 Kumar Mar 2014 A1
Foreign Referenced Citations (1)
Number Date Country
1146739 May 2006 EP
Non-Patent Literature Citations (6)
Entry
Brassil, Jack et al., “Structuring Internet Media Streams With Cueing Protocols”, IEEE/ACM Transactions on Networking, vol. 10, No. 4, Aug. 2002, pp. 466-476.
Burcli, Ronald C et al., “Automatic Vehicle Location System Implementation”, Published in Position Location and Navigation Symposium, IEEE 1996, Apr. 22-26, 1996, pp. 689-696.
Chi, Huicheng et al., “Efficient Search and Scheduling in P2P-Based Media-On-Demand Streaming Service”, Published in Selected Areas in Communications, IEEE Journal, vol. 25, Issue 1, Jan. 15, 2007, pp. 119-130.
Spinellis, Diomidis D. et al., “The Information Furnace: Consolidated Home Control”, Pers Ubiquit Comput (Springer-Verlag London Limited 2003) vol. 7, 2003, pp. 53-69.
Yang, Chun-Chuan et al., “Synchronization Modeling and Its Application for SMIL2.0 Presentations”, Journal of Systems and Software, vol. 80, Issue 7, Jul. 2007, pp. 1142-1155.
Zhang, Xinyan et al., “CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-To-Peer Live Media Streaming”, INFOCOM 2005, 24th Annual Joint Conference of the IEEE Computer and Communications Societies, Proceedings IEEE, vol. 3, Mar. 13-17, 2005, pp. 2102-2111.
Related Publications (1)
Number Date Country
20200413158 A1 Dec 2020 US
Continuations (4)
Number Date Country
Parent 16550887 Aug 2019 US
Child 17017205 US
Parent 15206755 Jul 2016 US
Child 16550887 US
Parent 14666511 Mar 2015 US
Child 15206755 US
Parent 12177551 Jul 2008 US
Child 14666511 US