Altering presentation of received content based on use of closed captioning elements as reference locations

Information

  • Patent Grant
  • Patent Number
    9,635,436
  • Date Filed
    Friday, May 8, 2015
  • Date Issued
    Tuesday, April 25, 2017
Abstract
A content receiver receives a captioning element and positional information regarding segments of a content instance. The captioning element corresponds to a component of captioning data included in content that can be utilized with the positional information to locate where the segments stop and/or start. The content receiver analyzes the content based on the captioning element and the positional information and alters how the content will be presented. Such alteration may involve skipping and/or deleting segments, starting/stopping presentation of content other than at the beginning and/or end of the content, altering recording timers, and/or replacing segments with alternative segments. In some implementations, the content may be recorded as part of recording multiple content instances received via at least one broadcast from a content provider wherein the multiple content instances are all included in a same frequency band of the broadcast and are all encoded utilizing a same control word.
Description
FIELD OF THE INVENTION

This disclosure relates generally to presentation of received content, and more specifically to altering how received content will be presented based on received positional information regarding segments (or portions) of the content relative to closed captioning elements.


SUMMARY OF THE INVENTION

The present disclosure discloses systems and methods for altering presentation of received content based on relative position of closed captioning elements. A content receiver may receive one or more closed captioning elements along with positional information regarding one or more segments of an instance of content. The closed captioning element may correspond to a component of the closed captioning data included in the content that can be utilized with the positional information to locate where the segments (or portions of the content) stop and/or start relative to the component of the closed captioning data. The content receiver may analyze the instance of content based at least on the closed captioning element and the positional information. Based on this analysis, the content receiver may alter how the instance of content will be presented.


In some cases, such alteration may involve skipping and/or deleting one or more segments (portions of the content), such as one or more commercials and/or one or more segments that are designated with a content rating above a content rating threshold set for the content receiver. In other cases, alteration may involve starting presentation of content at a location other than the beginning of the recorded content and/or stopping presentation of content at a location other than the end of the recorded content. In still other cases, alteration may include altering recording timers if the closed captioning element and position information are received before recordation completes. In yet other cases, alteration may involve replacing one or more commercials with alternative commercials, such as commercials specifically targeted to the user of the content receiver.


In various implementations, the instance of content may be recorded as part of recording a plurality of instances of content received via at least one broadcast from one or more content providers. In such implementations, the plurality of instances of content may all be included in a same frequency band of the broadcast and may all be encoded utilizing a same control word. However, in other implementations the instance of content may be recorded as part of recording a single audio/visual stream.


In one or more cases, the component of the closed captioning data included in the instance of content corresponding to the closed captioning element may be unique within the total closed captioning data included in the instance of content. As such, if the content receiver locates the unique component of the closed captioning data, the content receiver may then utilize the positional information to determine the locations of the segments. However, in some cases, the component of the closed captioning data included in the instance of content corresponding to the closed captioning element may not be unique as it may occur multiple times during the total closed captioning data included in the instance of content. In such cases, the positional information may be selected based on relative temporal position of the segments with respect to the first occurrence of the component of the closed captioning data included in the instance of content corresponding to the closed captioning element. As such, if the content receiver locates the first occurrence of the component of the closed captioning data included in the instance of content corresponding to the closed captioning element, the content receiver may then utilize the positional information to determine the locations of the segments.


In still other cases where the component of the closed captioning data included in the instance of content corresponding to the closed captioning element may not be unique as it occurs multiple times during the total closed captioning data included in the instance of content, the closed captioning element may correspond to the first component of the closed captioning data and an additional component of the closed captioning data that is located within temporal proximity to the first component in the closed captioning data. Although the first component of the closed captioning data included in the instance of content corresponding to the closed captioning element may occur multiple times, there may be only one occurrence of the first component of the closed captioning data that is temporally located proximate to the additional component in the closed captioning data included in the instance of content. As such, if the content receiver locates the occurrence of the first component of the closed captioning data that occurs within the temporal proximity of the additional component of the closed captioning data in the closed captioning data included in the instance of content, the content receiver may then utilize the positional information to determine the locations of the segments.
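The two-component disambiguation described above can be sketched as follows. This is a minimal illustration, and the caption-track representation (a list of timestamped caption lines), the function name, and the proximity parameter are assumptions for illustration rather than the patent's actual data model:

```python
# Hypothetical sketch: when the anchor phrase occurs multiple times in the
# caption track, pick the occurrence that lies within a temporal proximity
# window of a second, "additional" caption component.
def locate_anchor(captions, first_phrase, additional_phrase, proximity_secs):
    """Return the timestamp of the occurrence of `first_phrase` that lies
    within `proximity_secs` of an occurrence of `additional_phrase`.

    `captions` is a list of (timestamp_seconds, text) tuples in playback
    order. Returns None if no qualifying occurrence exists.
    """
    first_times = [t for t, text in captions if first_phrase in text]
    additional_times = [t for t, text in captions if additional_phrase in text]
    for t in first_times:
        # Only one occurrence of the first component is expected to fall
        # within the proximity window of the additional component.
        if any(abs(t - a) <= proximity_secs for a in additional_times):
            return t
    return None
```

Once the qualifying occurrence is located, the receiver would apply the positional information's offsets relative to the returned timestamp.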


It is to be understood that both the foregoing general description and the following detailed description are for purposes of example and explanation and do not necessarily limit the present disclosure. The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate subject matter of the disclosure. Together, the descriptions and the drawings serve to explain the principles of the disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram illustrating a system for automatically recording multiple instances of content from one or more programming providers.



FIG. 2 is a block diagram illustrating a system for altering presentation of received content based on relative position of closed captioning elements. This system may be interrelated with the system of FIG. 1.



FIG. 3 is a flow chart illustrating a method for altering presentation of received content based on relative position of closed captioning elements. This method may be performed by the system of FIG. 1.



FIGS. 4A-4D are diagrams illustrating alteration of presentation of received content by a system based on relative position of closed captioning elements. The system may be the system of FIG. 2.





DETAILED DESCRIPTION OF THE EMBODIMENTS

The description that follows includes sample systems, methods, and computer program products that embody various elements of the present disclosure. However, it should be understood that the described disclosure may be practiced in a variety of forms in addition to those described herein.


Content receivers (such as set top boxes, television receivers, digital video recorders, mobile computers, cellular telephones, smart phones, tablet computers, desktop computers, and so on) may receive content from one or more programming providers (such as satellite television programming providers, cable television programming providers, Internet service providers, video on demand providers, pay-per-view movie providers, digital music providers, and so on) via one or more communication connections (such as satellite communication connections, coaxial cable communication connections, Internet communication connections, radio-frequency connections, and so on). Such content receivers may transmit such received content to one or more presentation devices and/or store the received content for later presentation.


In some cases, content receivers may be configured utilizing one or more recording timers to automatically record content that is broadcast by one or more programming providers. Content receivers may be configured to automatically record broadcast content directly by user input, in response to instructions received from content providers, and so on. Such configuration may involve a designation of the source from which to obtain the specified content as well as a time to begin recording and a time to stop recording.


However, the content that may be automatically recorded between the time to begin recording and the time to stop recording may not completely correspond to the content that a user may actually desire to access. For example, broadcast television programs may not start exactly at their designated start time, may finish after their designated end time, and/or may be delayed due to previous broadcast programs overrunning their time slots. In some cases, recording timers may be set to include recording buffers at the beginning and/or end of recording (such as starting two minutes before the television program is supposed to start and/or ending three minutes after the television program is supposed to finish broadcasting) in an attempt to ensure that the entire television program is recorded even if the television program does not start and/or end precisely on time. However, such a buffer (though possibly ensuring that the entire television program may be recorded) increases the amount of recorded content that may be presented which is not the television program.


Additionally, broadcast television programs may include one or more commercials and/or one or more objectionable scenes (such as extreme violence, nudity, adult language, and so on). As a result of such commercials, objectionable scenes, altered start and/or end times, delays, and/or recording buffers, a significant portion of the content that is recorded may be content other than the content which users wish to access. As a result, users may have to spend more time fast forwarding, rewinding, and/or performing other operations in order to access the desired content and may become frustrated, particularly when accessed content does not immediately present the desired content and/or when the entire desired content was not recorded at all.


The present disclosure discloses systems and methods for altering presentation of received content based on relative position of closed captioning elements. A content receiver may receive one or more closed captioning elements along with positional information regarding one or more segments (or portions) of an instance of content. The closed captioning element may correspond to a component of the closed captioning data included in the content that can be utilized with the positional information to locate where the segments stop and/or start relative to the component of the closed captioning data. For example, a closed captioning element and positional information may specify that the start of a medical drama show included in an instance of content begins exactly five minutes prior to the occurrence of the phrase “spinal meningitis” in the closed captioning data included in the content.


Based at least on the closed captioning element and the positional information, the content receiver may analyze the instance of content and may alter how the instance of content will be presented. Such alteration may involve skipping and/or deleting one or more segments, starting presentation of content at a location other than the beginning of the recorded content, altering recording timers if the closed captioning element and positional information are received before recordation completes, replacing one or more commercials with alternative commercials, and so on. As a result of the content receiver altering how the content will be presented, the content presented to users when accessed may more closely correspond to the content the users desire to access and the users may be more satisfied with their content accessing experiences.


In some cases, users of content receivers may desire to access different instances of content that are broadcast simultaneously and/or substantially contemporaneously by content providers. For example, many television programming viewers wish to watch different television programs that occupy the same broadcast time slot, such as the different television programs associated with the major television networks that are broadcast between seven PM and ten PM mountain time. Content receivers may attempt to address this issue by utilizing multiple tuners that can each separately present and/or record different, simultaneously broadcast instances of content. However, a separate tuner may still be required for each simultaneous or substantially contemporaneous instance of broadcast or otherwise received content that a content receiver user wishes to view and/or record. Further, in addition to separate tuners required for each instance of content, the content receiver may require sufficient resources to descramble and store each of the instances of content desired by the user.



FIG. 1 is a block diagram illustrating a system 100 for automatically recording multiple instances of content from one or more programming providers. The automatic recording of multiple instances of content provided by the system 100 may enable users of content receivers to access different instances of content that are broadcast simultaneously and/or substantially contemporaneously by content providers.


In various broadcast systems, content providers may broadcast content to a plurality of different content receivers via one or more frequency bands utilizing one or more satellites. Each multiplexed signal contained in the frequency band (sometimes referred to as a transponder) may be configured to include data related to one or more instances of content, such as one or more television programming channels. The data related to each of the programs may include multiple PIDs (packet identifiers), such as a video PID and one or more audio PIDs for a particular instance of content. The data related to each of the instances of content included in each frequency may be scrambled utilizing one or more CWs (control words), which may then be encrypted to generate one or more ECMs (entitlement control messages) which may in turn be included with the data. A content receiver may typically tune to one or more of the frequency bands to receive the multiplexed signal that contains data for a particular programming channel utilizing one or more tuners. The content receiver may process only a subset of the programming channels by keeping the data associated with the particular programming channel and discarding data received via the tuned frequency band and multiplexed signal associated with other programming channels, such as by utilizing a PID filter to keep data identified by PIDs related to the particular programming channel and discard data identified by PIDs not related to that particular programming channel. The content receiver may decrypt the ECM included with the data associated with the particular programming channel to obtain the CW, descramble the data utilizing the CW, and store and/or transmit the data (e.g., decompressed, reconstructed audio and video data) to one or more presentation devices.
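The PID filtering step described above can be sketched minimally as follows. Packets are simplified here to (PID, payload) tuples rather than real MPEG transport stream packets, and the function name and structure are illustrative assumptions:

```python
# Minimal sketch of PID-based filtering of a multiplexed transport stream:
# keep the packets whose PIDs belong to the desired programming channel and
# discard the packets belonging to other channels in the same multiplex.
def filter_by_pids(packets, wanted_pids):
    """Return only the packets identified by a PID in `wanted_pids`.

    `packets` is a list of (pid, payload) tuples; `wanted_pids` is the set
    of PIDs (e.g., the video PID and audio PIDs) for the kept channel.
    """
    return [pkt for pkt in packets if pkt[0] in wanted_pids]
```

A receiver keeping a channel whose video PID is 0x100 and audio PID is 0x101 would call `filter_by_pids(packets, {0x100, 0x101})` and discard everything else received on the tuned frequency band.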


As illustrated in FIG. 1, in this implementation, one or more content providers may select multiple instances of content 101 to be automatically recorded such as by utilizing predefined recording parameters. For example, a content provider may select all of the television events defined as “primetime events” associated with all channels defined as “primetime television channels” for a particular period of time defined as “prime time” to be automatically recorded. In other examples, the content provider may select television events associated with programming channels for a particular time period (such as a half hour, multiple hours, and/or an entire programming day) in response to user selections. After the content provider selects the multiple instances of content, the multiple instances of content may be multiplexed utilizing a multiplexer 102. The multiplexed signal (which includes the multiplexed selected multiple instances of content) may then be scrambled by a scrambler 105 utilizing one or more CWs 103. The CW may be encrypted to generate an ECM by an ECM generator 112 which may take the CW as an input (and may also include other information such as access criteria) and output the ECM, which may be included with the multiplexed signal. The scrambled multiplexed signal may then be included in a broadcast on a frequency band (e.g., cable, satellite), which may then be transmitted to one or more satellites 106 for broadcast. The satellite 106 may receive the frequency band (uplink frequency band) and then broadcast the multiplexed signal to a number of content receivers on a translated frequency band (downlink frequency band), such as a content receiver that includes a tuner 107.


The tuner 107 may tune to the frequency band that includes the multiple instances of content (which may be performed in response to one or more recording instructions received by the content receiver that includes the tuner from the content provider). The data received via the tuned frequency (which may be filtered by a PID filter, not shown) may be demultiplexed by a demultiplexer 109 and then descrambled by a descrambler 110 utilizing the CW before being stored in a non-transitory storage medium 111 (which may take the form of, but is not limited to, a magnetic storage medium; optical storage medium; magneto-optical storage medium; read only memory; random access memory; erasable programmable memory; flash memory; and so on) based on recording parameters, such as predefined recording parameters. The demultiplexer 109 may obtain the included ECM 104, and the ECM may be provided to a smart card 108 that may decrypt the ECM 104 to obtain the CW 103 for the descrambler 110. Hence, the multiple instances of content may subsequently all be available to a user of the content receiver (until such time as they are removed from the non-transitory storage medium) without requiring multiple tuners to receive each of the multiple instances of content, and without requiring the smart card to decrypt multiple ECMs. In some implementations, the multiple instances of content may be stored in a single file.


Although the system 100 is illustrated in FIG. 1 and is described above as including a number of specific components configured in a specific arrangement, it is understood that this is for the purposes of example and other arrangements involving fewer and/or additional components are possible without departing from the scope of the present disclosure. For example, in various implementations, the multiple instances of content may be individually scrambled utilizing the CW prior to multiplexing. In another example, in some implementations, the data received via the tuned frequency may be demultiplexed before being individually descrambled utilizing the CW.


In some implementations of the system of FIG. 1, multiple instances of content may be recorded simultaneously from a single transponder and stored in the non-transitory storage medium 111 of the content receiver as a single file of multiple recorded instances of content. Upon playback of one instance of content from the single file of the multiple recorded instances of content, the content receiver may read the file incrementally so as to play the one instance of content while filtering out the other file contents (e.g., the other instances of content within the file).



FIG. 2 is a block diagram illustrating a system 200 for altering presentation of received content based on relative position of closed captioning elements. The system 200 includes a content receiver 201 that receives content from one or more content providers 202. The content receiver may be any kind of content receiver such as a set top box, a television receiver, a digital video recorder, a mobile computer, a cellular telephone, a smart phone, a tablet computer, a desktop computer, and/or any other kind of electronic device that is capable of receiving content from the content provider. The content provider may be any kind of content provider such as a satellite television programming provider, a cable television programming provider, an Internet service provider, a video on demand provider, a pay-per-view movie provider, a digital music provider, and/or any other kind of entity capable of transmitting content to the content receiver.


The content receiver 201 may include one or more processing units 204, one or more communication components 205, one or more non-transitory storage media 206 (which may take the form of, but is not limited to, a magnetic storage medium; optical storage medium; magneto-optical storage medium; read only memory; random access memory; erasable programmable memory; flash memory; and so on), one or more output components 207, and one or more user interface components 208. The processing unit may execute instructions stored in the non-transitory storage medium to receive content from the content provider 202 via the communication component, store content received from the content provider in the non-transitory storage medium, and/or present content received from the content provider and/or stored in the non-transitory storage medium to one or more presentation devices 203 via the output component. The processing unit may execute such instructions and/or perform various other operations at the direction of instructions received from a user via the user interface component and/or instructions received from the content provider via the communication component.


The processing unit 204 may also receive one or more closed captioning elements corresponding to one or more instances of content from the content provider 202. Along with the closed captioning element, the processing unit may also receive positional information from the content provider regarding one or more segments (or portions) of the instance of content relative to the closed captioning element. In various cases, such closed captioning elements and/or positional information may be received prior to receipt of the instance of content, while the instance of content is being received, and/or subsequent to receipt of the instance of content. The positional information may identify the start and/or stop locations of the segments in the instance of content relative to one or more components of closed captioning data included in the instance of content corresponding to the closed captioning element. In some cases, the processing unit may receive one or more redundant copies of the closed captioning element and/or the positional information in case the first received closed captioning element and/or positional information are corrupt, dropped, and/or otherwise unusable.


For example, a closed captioning element may correspond to the phrase “I want to order chicken waffles” in closed captioning data included in an instance of content and the positional information may identify that a commercial segment begins in the instance of content five minutes after the occurrence of the phrase “I want to order chicken waffles” and ends ten minutes after the occurrence of the phrase “I want to order chicken waffles.” As such, regardless of how the duration of the instance of content may be changed by buffers during recordation, the precise position of the commercial segment can be identified relative to the located phrase “I want to order chicken waffles.”
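Resolving a segment's absolute boundaries from a caption anchor plus relative offsets, as in the example above, might look like the following minimal sketch; the caption-track representation and all names are illustrative assumptions:

```python
# Hedged sketch: compute a segment's absolute (start, end) position in the
# recording from the first occurrence of an anchor phrase in the caption
# track plus the segment's relative offsets, in seconds.
def resolve_segment(captions, anchor_phrase, start_offset, end_offset):
    """Find the first caption line containing `anchor_phrase` and return the
    segment's absolute (start, end) times, or None if the phrase is absent.

    `captions` is a list of (timestamp_seconds, text) tuples in playback
    order; offsets are measured from the anchor's timestamp.
    """
    for timestamp, text in captions:
        if anchor_phrase in text:
            return timestamp + start_offset, timestamp + end_offset
    return None
```

If the anchor phrase were observed ten minutes (600 seconds) into the recording, offsets of +300 and +600 seconds would place the commercial segment at 900–1200 seconds, regardless of any recording buffer that shifted the program within the file.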


The closed captioning element and/or the positional information may be created by the content provider 202. The content provider may create the closed captioning element and/or the positional information by analyzing a content feed that is subsequently broadcast, substantially simultaneously being broadcast, and/or was previously broadcast. In various implementations, the content feed may be analyzed by content provider staff utilizing video and/or audio editing software to select one or more components of closed captioning data included in the instance of content to correspond to the closed captioning element, determine positional information for one or more segments relative to the selected component of the closed captioning data, create and/or transmit the closed captioning element and/or positional information, and so on. In other implementations, these activities may be performed by automated video and/or audio editing software.


In some implementations, the instance of content may be recorded by the processing unit 204 as part of recording a plurality of instances of content received via at least one broadcast from the content provider 202 as described above with respect to FIG. 1. Further, as discussed above with respect to FIG. 1, the plurality of instances of content may all be included in a same frequency band of the broadcast and may all be encoded utilizing a same control word.


In response to receiving such closed captioning elements and positional information, the content receiver 201 may analyze the instance of content. The closed captioning element may be compared against the closed captioning data included in the instance of content. In analyzing the instance of content, the content receiver may identify the location of the component of the closed captioning data corresponding to the closed captioning element and may additionally identify the locations of the segments relative to the component of the closed captioning data. Based on the analysis, the content receiver 201 may alter how the instance of content will be presented via the output component 207. Exactly how the content receiver alters the instance of content may be based on user input received via the user interface component 208, instructions received from the content provider 202, and/or one or more configuration settings and/or defaults stored in the non-transitory storage medium.


By way of a first example, the content receiver 201 may alter how the instance of content will be presented by configuring the instance of content such that a particular segment is not presented, or skipped, when the instance of content is presented. Such a segment may be a portion of the instance of content preceding the start of an event included in the instance of content, a portion of the instance of content subsequent to the end of an event included in the instance of content, one or more commercials, a portion of the instance of content that includes a content rating (such as a content rating indicating the presence of nudity, graphic violence, adult language, and so on) exceeding a content rating setting of the content receiver, and so on. Thus, although the skipped segment is still present in the instance of content, the skipped segment will not be presented to a user when the instance of content is presented and the user may be unaware that the skipped segment is still present in the instance of content.
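Skipping segments during presentation, while leaving them in the recording, amounts to computing the spans of the timeline that will actually be played. A minimal sketch, assuming segment boundaries have already been resolved from the caption-anchored positional information (names and representation are illustrative):

```python
# Illustrative sketch: derive the presentation plan for a recording of the
# given duration, omitting the designated skip segments without deleting
# them from the stored content.
def playback_plan(duration, skip_segments):
    """Return the list of (start, end) spans to present, i.e. the timeline
    of the recording minus the skipped segments (each a (start, end) pair).
    """
    plan, cursor = [], 0
    for start, end in sorted(skip_segments):
        if start > cursor:
            # Present everything between the cursor and the next skip.
            plan.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        plan.append((cursor, duration))
    return plan
```

For a one-hour recording with a commercial segment at 900–1200 seconds, the plan would present 0–900 and then jump to 1200–3600, leaving the user unaware the skipped segment remains stored.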


In a second example, the content receiver 201 may alter how the instance of content will be presented by removing a particular segment from the instance of content. As such, when the instance of content is presented, the removed segment is not presented because the removed segment is no longer part of the instance of content. Such a segment may be a portion of the instance of content preceding the start of an event included in the instance of content, a portion of the instance of content subsequent to the end of an event included in the instance of content, one or more commercials, a portion of the instance of content that includes a content rating (such as a content rating indicating the presence of nudity, graphic violence, adult language, and so on) exceeding a content rating setting of the content receiver, and so on. Thus, when the content is presented, the content does not include the removed segment and the removed segment is not presented to the user.


In cases where the content receiver 201 alters how the instance of content will be presented by removing one or more segments, the content receiver may replace the one or more removed segments with alternative segments and/or otherwise insert the alternative segments. For example, the content receiver may replace the commercials included in the instance of content with an alternative commercial that is targeted to the user and cannot be fast forwarded or otherwise skipped when the instance of content is presented.


In a third example, the content receiver 201 may receive the closed captioning element and positional information prior to completing recordation of the instance of content. Further, based on the analysis of the instance of content, the closed captioning element, and/or the positional information, the content receiver may determine that an event included in the instance of content will complete after the original recording stop time. For example, a previously broadcast instance of content may have overrun a time slot allocated to the previously broadcast instance of content and the air time for the event may have been pushed back beyond any buffer set for recording the instance of content. Based on this determination, the content receiver may extend the original recording stop time so that the entirety of the event is included in the instance of content despite the delayed air time.
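Extending the recording stop time once the caption anchor has been observed could be sketched as follows; all names and the optional padding parameter are illustrative assumptions rather than the patent's actual interface:

```python
# Hedged sketch: if caption-anchored positional information indicates the
# event ends after the scheduled stop time, extend the recording timer so
# the entire event is captured despite the delayed air time.
def adjusted_stop_time(scheduled_stop, anchor_time, end_offset, pad=0):
    """Return the (possibly extended) stop time, in seconds from the start
    of recording.

    `anchor_time` is when the caption anchor was observed; `end_offset` is
    the event end's offset from that anchor per the positional information;
    `pad` is an optional safety buffer.
    """
    event_end = anchor_time + end_offset
    return max(scheduled_stop, event_end + pad)
```

If the anchor appears at 3000 seconds and the event ends 900 seconds later, a timer scheduled to stop at 3600 seconds would be extended to 3900; a timer already covering the event would be left unchanged.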



FIG. 3 illustrates a method 300 for altering presentation of received content based on relative position of closed captioning elements. The method 300 may be performed by the content receiver 201 of FIG. 2. The flow begins at block 301 and proceeds to block 302 where the content receiver operates. The flow then proceeds to block 303 where the processing unit 204 determines whether or not content is being received. If so, the flow proceeds to block 304. Otherwise, the flow continues to block 311.


At block 304, after the processing unit 204 determines that content is being received, the processing unit determines whether or not to store the content being received. In some implementations, the processing unit may determine whether or not to store the content being received based on one or more recording timers stored in the non-transitory storage medium 206. If not, the flow returns to block 302 and the content receiver 201 continues to operate. Otherwise, the flow proceeds to block 305.


At block 305, after the processing unit 204 determines to store the content being received, the processing unit determines whether or not closed captioning elements and positional information related to the content being received are stored in the non-transitory storage medium 206. If not, the flow proceeds to block 306 where the processing unit records the content being received in the non-transitory storage medium before the flow proceeds to block 309. Otherwise, the flow proceeds to block 307.


At block 307, after the processing unit 204 determines that closed captioning elements and positional information related to the content being received are stored in the non-transitory storage medium 206, the processing unit determines whether or not analysis of the content being received, the closed captioning elements, and the positional information indicates that a recording time associated with the content being received should be altered. If not, the flow proceeds to block 306 where the processing unit records the content being received in the non-transitory storage medium 206 before the flow proceeds to block 309. Otherwise, the flow proceeds to block 308 where the processing unit alters the recording time accordingly before the flow proceeds to block 306.


At block 309, the processing unit 204 determines whether or not analysis of the content being received, the closed captioning elements, and the positional information indicates that how the recorded content will be presented should be altered. If not, the flow returns to block 302 and the content receiver 201 continues to operate. Otherwise, the flow proceeds to block 310 where the processing unit alters how the recorded content will be presented, such as by altering one or more indexes that may be utilized to present the recorded content. Then, the flow returns to block 302 and the content receiver 201 continues to operate.
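The decision path of blocks 304 through 310 can be traced as follows. This is a simplified sketch for illustration; the boolean inputs stand in for the determinations the processing unit 204 makes, and the action labels are hypothetical:

```python
def handle_received_content(store, have_elements, alter_timer, alter_presentation):
    """Trace blocks 304-310 for content that is being received, returning
    the ordered list of actions the receiver would take."""
    actions = []
    if not store:                                   # block 304: no timer matches
        return actions
    if have_elements and alter_timer:               # blocks 305 and 307
        actions.append("alter recording time")      # block 308
    actions.append("record content")                # block 306
    if have_elements and alter_presentation:        # block 309
        actions.append("alter presentation index")  # block 310
    return actions
```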


At block 311, after the processing unit 204 determines that content is not being received, the processing unit determines whether one or more closed captioning elements and/or positional information has been received. If not, the flow proceeds to block 317. Otherwise, the flow proceeds to block 312.


At block 312, after the processing unit 204 determines that one or more closed captioning elements and/or positional information have been received, the processing unit determines whether or not the received closed captioning elements and/or positional information are related to content stored in the non-transitory storage medium 206. If so, the flow proceeds to block 313. Otherwise, the flow proceeds to block 315.


At block 313, after the processing unit determines that the received closed captioning elements and/or positional information are related to content stored in the non-transitory storage medium 206, the processing unit determines whether or not analysis of the related stored content, the closed captioning elements, and the positional information indicates that how the stored content will be presented should be altered. If not, the flow returns to block 302 and the content receiver 201 continues to operate. Otherwise, the flow proceeds to block 314 where the processing unit alters how the stored content will be presented (such as by altering one or more indexes that may be utilized to present the stored content) before the flow returns to block 302 and the content receiver 201 continues to operate.


At block 315, after the processing unit determines that the received closed captioning elements and/or positional information are not related to content stored in the non-transitory storage medium 206, the processing unit determines whether or not to store the closed captioning elements and/or positional information. If not, the flow returns to block 302 and the content receiver 201 continues to operate. Otherwise, the flow proceeds to block 316 where the processing unit stores the closed captioning elements and/or positional information in the non-transitory storage medium 206 before the flow returns to block 302 and the content receiver 201 continues to operate.
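The complementary path of blocks 311 through 316, taken when elements arrive while no content is being received, can be sketched the same way (again with hypothetical names standing in for the processing unit's determinations):

```python
def handle_received_elements(relates_to_stored, alter_presentation, want_to_store):
    """Trace blocks 312-316 for closed captioning elements and positional
    information received while no content is being received."""
    if relates_to_stored:                       # block 312
        if alter_presentation:                  # block 313
            return "alter stored presentation"  # block 314
        return "no change"
    if want_to_store:                           # block 315
        return "store elements"                 # block 316
    return "discard"
```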


Returning to FIG. 2, in some implementations, the component of the closed captioning data included in the instance of content corresponding to the closed captioning element may be unique within the total closed captioning data included in the instance of content. For example, if the component of the closed captioning data included in the instance of content corresponding to the closed captioning element constitutes the phrase “Frank said he's driving to Tulsa,” the phrase may be unique if the phrase only occurs once during the total closed captioning data included in the instance of content. As such, if the content receiver 201 locates the phrase “Frank said he's driving to Tulsa” anywhere in the closed captioning data included in the instance of content, the content receiver may then utilize the positional information to determine the locations of the segments.
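Resolving segment locations from a unique anchor phrase might look like the following sketch, which assumes captions are available as (time, text) cues and that the positional information is expressed as offsets in seconds relative to the anchor cue (a representation chosen here for illustration):

```python
def locate_segments(captions, element, offsets):
    """Find the cue whose text contains the unique element phrase, then
    resolve each (start_offset, stop_offset) pair -- given relative to
    that cue's time -- into absolute (start, stop) times in seconds."""
    anchor = next(t for t, text in captions if element in text)
    return [(anchor + start, anchor + stop) for start, stop in offsets]

captions = [(10.0, "Previously..."),
            (605.0, "Frank said he's driving to Tulsa."),
            (1200.0, "To be continued.")]
# One segment (e.g. a commercial block) starting 55 seconds before the
# anchor phrase and ending 5 seconds after it.
segments = locate_segments(captions, "Frank said he's driving to Tulsa",
                           [(-55.0, 5.0)])
```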


However, in various other implementations, the phrase “Frank said he's driving to Tulsa” may not be unique as it may occur multiple times during the total closed captioning data included in the instance of content. In such cases, the positional information may be selected based on relative temporal position of the segments with respect to the first occurrence of the phrase “Frank said he's driving to Tulsa” in the closed captioning data included in the instance of content. As such, if the content receiver 201 locates the first occurrence of the phrase “Frank said he's driving to Tulsa” in the closed captioning data included in the instance of content, the content receiver may then utilize the positional information to determine the locations of the segments.


In still other implementations where the phrase “Frank said he's driving to Tulsa” may not be unique as it occurs multiple times during the total closed captioning data included in the instance of content, the closed captioning element may correspond to the first component of the closed captioning data and an additional component of the closed captioning data that is located within temporal proximity to the phrase “Frank said he's driving to Tulsa.” Although the phrase “Frank said he's driving to Tulsa” may occur multiple times, there may be only one occurrence of the phrase “Frank said he's driving to Tulsa” that is temporally located exactly thirty seconds after the occurrence of the phrase “Bob asked what Frank plans to do about losing the farm” in the closed captioning data included in the instance of content. As such, if the content receiver 201 locates the occurrence of the phrase “Frank said he's driving to Tulsa” that occurs exactly thirty seconds after the occurrence of the phrase “Bob asked what Frank plans to do about losing the farm” in the closed captioning data included in the instance of content, the content receiver may then utilize the positional information to determine the locations of the segments.
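Disambiguating among multiple occurrences with a second reference phrase could be sketched as below. The tolerance parameter is an assumption added here (caption cue timestamps rarely align to the exact second); all names are illustrative:

```python
def find_anchor(captions, element, reference, gap_seconds, tolerance=0.5):
    """When the element phrase occurs more than once, return the time of
    the occurrence that sits gap_seconds after the reference phrase,
    within the given tolerance; None if no occurrence qualifies."""
    ref_times = [t for t, text in captions if reference in text]
    for t, text in captions:
        if element in text:
            if any(abs((t - r) - gap_seconds) <= tolerance for r in ref_times):
                return t
    return None

captions = [(100.0, "Frank said he's driving to Tulsa."),
            (400.0, "Bob asked what Frank plans to do about losing the farm."),
            (430.0, "Frank said he's driving to Tulsa.")]
anchor = find_anchor(captions,
                     "Frank said he's driving to Tulsa",
                     "Bob asked what Frank plans to do about losing the farm",
                     gap_seconds=30.0)
```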



FIGS. 4A-4D illustrate alteration of presentation of received content 401a-401d by a system based on relative position of closed captioning elements. The system may be the system of FIG. 2. FIG. 4A is a conceptual diagram of an instance of received content 401a. As shown, the instance of received content includes part I of a show 403, part II of the show 405, and part III of the show 407. As also shown, in addition to the parts of the show, the instance of received content also includes a portion of a previous show 402, a commercial block A 404, a commercial block B 406, and a portion of a next show 408. As further illustrated, the instance of received content further includes a component 409 of closed captioning data included in the received content that corresponds to a closed captioning element.


A content receiver that has received the instance of content 401a may also receive the closed captioning element as well as positional information that specifies the start and stop of each of the segments included in the instance of content. The content receiver may then analyze the closed captioning data included in the instance of content based on the closed captioning element and the positional information and may thereupon alter how the instance of content will be presented.


In some cases, the content receiver may configure the instance of content 401a such that one or more of the segments are skipped when the instance of content is presented. For example, the content receiver may configure the instance of content such that playing the instance of content starts at 2:00 and finishes at 32:00. Thus, the portion of the previous show 402 and the portion of the next show 408 would not be presented when the instance of content is played even though both are still present. By way of another example, the content receiver may configure the instance of content such that playing the instance of content starts at 2:00, jumps from 10:00 to 13:00, jumps from 20:00 to 25:00, and finishes at 32:00. Thus, the portion of the previous show 402, the commercial block A 404, the commercial block B 406, and the portion of the next show 408 would not be presented when the instance of content is played even though all are still present.
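The skip configuration in the second example above can be expressed as a list of playback ranges, as in this sketch (the representation is illustrative; the disclosure's indexes are not specified at this level of detail):

```python
def play_ranges(start, stop, jumps):
    """Build the (begin, end) ranges, in minutes, actually presented when
    playback starts at `start`, applies each (out_point, in_point) jump,
    and finishes at `stop`. Skipped segments remain in the file but are
    never shown."""
    ranges, pos = [], start
    for out_point, in_point in jumps:
        ranges.append((pos, out_point))
        pos = in_point
    ranges.append((pos, stop))
    return ranges

# Start at 2:00, jump from 10:00 to 13:00 and from 20:00 to 25:00,
# finish at 32:00.
ranges = play_ranges(2, 32, [(10, 13), (20, 25)])
```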


In other cases, the content receiver may remove one or more segments from the instance of content 401a such that removed segments are not included in the instance of content and are hence not played when the instance of content is presented. For example, as illustrated by FIG. 4B, the content receiver may remove the portion of the previous show 402 and the portion of the next show 408 from the instance of content 401b. As the portion of the previous show 402 and the portion of the next show 408 are no longer a component of the instance of content 401b, the portion of the previous show 402 and the portion of the next show 408 will not be played when the instance of content 401b is presented.


By way of a second example, as illustrated by FIG. 4C, the content receiver may remove the portion of the previous show 402, the commercial block A 404, the commercial block B 406, and the portion of the next show 408 from the instance of content 401c. As the portion of the previous show 402, the commercial block A 404, the commercial block B 406, and the portion of the next show 408 are no longer a component of the instance of content 401c, none of these segments will be played when the instance of content 401c is presented.


In still other cases, as illustrated by FIG. 4D, the content receiver may insert one or more alternative segments into the instance of content 401d as well as remove one or more segments from the instance of content 401d such that removed segments are not included in the instance of content and are hence not played when the instance of content is presented. As illustrated, the portion of the previous show 402, the commercial block A 404, the commercial block B 406, and the portion of the next show 408 are no longer a component of the instance of content 401d and will not be played when the instance of content 401d is presented. Further, as illustrated, a commercial 410 targeted to the user of the set top box has been inserted into the instance of content 401d and will be played prior to part I of the show 403 when the instance of content 401d is played.
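The removals of FIGS. 4B-4C and the insertion of FIG. 4D can be modeled together as rebuilding the segment list, as in this sketch (segment labels and function names are hypothetical):

```python
def rebuild_content(segments, keep, insert_before=None):
    """Rebuild an instance of content: retain only the segments in `keep`
    (dropping, e.g., adjacent-show portions and commercial blocks),
    optionally inserting a replacement segment before the first kept one."""
    kept = [s for s in segments if s in keep]
    if insert_before is not None:
        kept.insert(0, insert_before)
    return kept

original = ["previous show", "part I", "commercial A",
            "part II", "commercial B", "part III", "next show"]
show_only = rebuild_content(original, keep={"part I", "part II", "part III"})
with_ad = rebuild_content(original,
                          keep={"part I", "part II", "part III"},
                          insert_before="targeted commercial")
```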


In the present disclosure, the methods disclosed may be implemented as sets of instructions or software readable by a device. Further, it is understood that the specific order or hierarchy of steps in the methods disclosed are examples of sample approaches. In other embodiments, the specific order or hierarchy of steps in the method can be rearranged while remaining within the disclosed subject matter. The accompanying method claims present elements of the various steps in a sample order, and are not necessarily meant to be limited to the specific order or hierarchy presented.


The described disclosure may be provided as a computer program product, or software, that may include a non-transitory machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A non-transitory machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The non-transitory machine-readable medium may take the form of, but is not limited to, a magnetic storage medium (e.g., floppy diskette, video cassette, and so on); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; and so on.


It is believed that the present disclosure and many of its attendant advantages will be understood by the foregoing description, and it will be apparent that various changes may be made in the form, construction and arrangement of the components without departing from the disclosed subject matter or without sacrificing all of its material advantages. The form described is merely explanatory, and it is the intention of the following claims to encompass and include such changes.


While the present disclosure has been described with reference to various embodiments, it will be understood that these embodiments are illustrative and that the scope of the disclosure is not limited to them. Many variations, modifications, additions, and improvements are possible. More generally, embodiments in accordance with the present disclosure have been described in the context of particular embodiments. Functionality may be separated or combined in blocks differently in various embodiments of the disclosure or described with different terminology. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure as defined in the claims that follow.

Claims
  • 1. A television receiver, comprising: a tuner; one or more processors; and a non-transitory computer-readable storage media communicatively readable by the one or more processors and having stored therein processor-readable instructions which, when executed by the one or more processors, cause the one or more processors to: receive multiple instances of content via a tuner; record the multiple instances of content to a single file; receive closed captioning elements that correspond to the multiple instances of content via the tuner; record the closed captioning elements that correspond to the multiple instances of content; receive an indication of a closed captioning element and corresponding positional data for an instance of content of the multiple instances of content, wherein the closed captioning element and the corresponding positional data are used to identify a start location of the instance of content in the single file; and output, for presentation, the instance of content such that playback begins from the start location based on the received indication of the closed captioning element and the received corresponding positional data.
  • 2. The television receiver of claim 1, wherein the processor-readable instructions, when executed, further cause the one or more processors to: receive a second indication of a second closed captioning element and a second corresponding positional data for the instance of content, wherein the second closed captioning element and the second corresponding positional data are used to identify an end location of the instance of content in the single file.
  • 3. The television receiver of claim 2, wherein the processor-readable instructions, when executed, further cause the one or more processors to: cease outputting for presentation the instance of content when the end location within the single file is reached.
  • 4. The television receiver of claim 1, wherein the processor-readable instructions, when executed, further cause the one or more processors to: receive a second indication of a second closed captioning element and a second corresponding positional data for the instance of content, wherein the received second indication of the second closed captioning element and the second corresponding positional data are used to identify a portion of the single file to be skipped from playback.
  • 5. The television receiver of claim 4, wherein the processor-readable instructions, when executed, further cause the one or more processors to: output, for presentation, the instance of content such that playback skips the portion of the single file based on the received second indication of the second closed captioning element and the received second corresponding positional data.
  • 6. The television receiver of claim 1, wherein a string of text was identified as unique among a total closed captioning dataset that corresponds with the instance of content.
  • 7. The television receiver of claim 1, wherein the corresponding positional data indicates an amount of time.
  • 8. A method for selecting a portion of content to be skipped from presentation, the method comprising: recording, by a television receiver, multiple instances of content received via a single tuner to a single file; recording, by the television receiver, closed captioning elements that correspond to the multiple instances of content; receiving, by the television receiver, an indication of a closed captioning element and corresponding positional data for an instance of content of the multiple instances of content, wherein the closed captioning element and the corresponding positional data are used to identify a start location of the instance of content at which to begin playback in the single file; and outputting, for presentation, by the television receiver, the instance of content such that playback begins at the start location based on the received indication of the closed captioning element and the received corresponding positional data.
  • 9. The method of claim 8, further comprising: receiving, by the television receiver, a second indication of a second closed captioning element and a second corresponding positional data for the instance of content, wherein the received second indication of the second closed captioning element and the second corresponding positional data are used to identify an end location of the instance of content in the single file.
  • 10. The method of claim 9, further comprising: ceasing, by the television receiver, to output for presentation the instance of content when the end location within the single file is reached.
  • 11. The method of claim 8, further comprising: receiving, by the television receiver, a second indication of a second closed captioning element and a second corresponding positional data for the instance of content, wherein the received second indication of the second closed captioning element and the second corresponding positional data are used to identify a portion of the single file to be skipped from playback.
  • 12. The method of claim 11, wherein outputting, for presentation, the instance of content comprises skipping the portion of the single file based on the received second indication of the second closed captioning element and the received second corresponding positional data.
  • 13. The method of claim 8, wherein the closed caption element is a string of text.
  • 14. The method of claim 12, wherein the content comprises a television program and one or more commercials and the portion comprises the one or more commercials.
  • 15. The method of claim 13, wherein the string of text was identified as unique among a total closed captioning dataset that corresponds with the instance of content.
  • 16. The method of claim 8, wherein the corresponding positional data indicates an amount of time.
  • 17. A non-transitory processor-readable medium comprising processor-readable instructions configured to cause one or more processors to: receive multiple instances of content via a single tuner; record the multiple instances of content to a single file; receive closed captioning elements that correspond to the multiple instances of content via the tuner; record the closed captioning elements that correspond to the multiple instances of content; receive an indication of a closed captioning element and corresponding positional data for an instance of the multiple instances of content, wherein the closed captioning element and the corresponding positional data are used to identify a start location for the instance of content within the single file; and output, for presentation, the instance of content such that playback begins at the start location based on the received indication of the closed captioning element and the received corresponding positional data.
  • 18. The non-transitory processor-readable medium of claim 17 wherein the processor-readable instructions are further configured to cause the one or more processors to: receive a second indication of a second closed captioning element and a second corresponding positional data for the instance of content, wherein the received second indication of the second closed captioning element and the second corresponding positional data are used to identify an end location for playback of the instance of content within the single file; and cease playback of the instance of content at the end location within the single file based on the received second indication of the second closed captioning element and the second corresponding positional data for the instance of content.
  • 19. The non-transitory processor-readable medium of claim 17 wherein the processor-readable instructions are further configured to cause the one or more processors to: receive a second indication of a second closed captioning element and a second corresponding positional data for the instance of content, wherein the received second indication of the second closed captioning element and the second corresponding positional data are used to identify a portion of the single file to be skipped from playback.
  • 20. The non-transitory processor-readable medium of claim 19, wherein the processor-readable instructions are further configured to cause the one or more processors to: skip from output the portion of the single file based on the received second indication of the second closed captioning element and the received second corresponding positional data.
CROSS-REFERENCES TO RELATED APPLICATIONS

This Application is a continuation of U.S. application Ser. No. 13/856,752, filed Apr. 4, 2013, entitled “Altering Presentation Of Received Content Based On Use Of Closed Captioning Elements As Reference Locations,” which is a continuation of U.S. application Ser. No. 13/215,916, filed Aug. 23, 2011, entitled “Altering Presentation Of Received Content Based On Use Of Closed Captioning Elements As Reference Locations”, the entire disclosure of which is hereby incorporated by reference for all purposes.

20140376884 Lovell Dec 2014 A1
20150040166 Tamura et al. Feb 2015 A1
20150095948 Kummer et al. Apr 2015 A1
20150104146 Higuchi et al. Apr 2015 A1
20150121430 Templeman Apr 2015 A1
20150208119 Casagrande et al. Jul 2015 A1
20150208125 Robinson Jul 2015 A1
20150228305 Templeman et al. Aug 2015 A1
20150245089 Potrebic Aug 2015 A1
20150319400 Golyshko Nov 2015 A1
20160073144 Robinson Mar 2016 A1
20160080800 Casagrande Mar 2016 A1
20160105711 Martch et al. Apr 2016 A1
20160134926 Casagrande et al. May 2016 A1
Foreign Referenced Citations (42)
Number Date Country
101202600 Jun 2008 CN
101310532 Nov 2008 CN
101404780 Apr 2009 CN
101978690 Jan 2011 CN
0 973 333 Jan 2000 EP
1 001 631 May 2000 EP
1 168 347 Feb 2002 EP
1 372 339 Dec 2003 EP
1 447 983 Aug 2004 EP
1 667 452 Jul 2006 EP
1 742 467 Jan 2007 EP
2 018 059 Jan 2009 EP
0 903 743 Mar 2009 EP
2 317 767 May 2011 EP
2 357 563 Aug 2011 EP
2 541 929 Jan 2013 EP
2 826 197 Jan 2015 EP
2 826 238 Jan 2015 EP
2 459 705 Nov 2009 GB
9740CHENP2013 Sep 2014 IN
2007116525 May 2007 JP
2010165058 Jul 2010 JP
9812872 Mar 1998 WO
0241625 May 2002 WO
2004057610 Aug 2004 WO
2007047410 Apr 2007 WO
2008010118 Jan 2008 WO
2008010689 Jan 2008 WO
2008060486 May 2008 WO
2011081729 Jul 2011 WO
2011027236 Oct 2011 WO
2012003693 Jan 2012 WO
2013028824 Feb 2013 WO
2013028829 Feb 2013 WO
2013028835 Feb 2013 WO
2013138606 Sep 2013 WO
2013138608 Sep 2013 WO
2013138610 Sep 2013 WO
2013138638 Sep 2013 WO
2013138689 Sep 2013 WO
2013138740 Sep 2013 WO
2016066443 May 2016 WO
Non-Patent Literature Citations (169)
Entry
Supplementary European Search Report for EP 13760902, mailed Oct. 20, 2015, all pages.
Supplementary European Search Report for EP 13761427, mailed Oct. 19, 2015, all pages.
Office Action dated Jul. 31, 2015 for Mexican Patent Application No. MX/a/2014/009919, 2 pages.
U.S. Appl. No. 13/786,915 filed Mar. 6, 2013, Non Final Rejection mailed Oct. 15, 2015, 59 pages.
U.S. Appl. No. 13/801,994, Non Final Office Action mailed Oct. 7, 2015, 55 pages.
U.S. Appl. No. 14/095,860 filed Dec. 3, 2013, Notice of Allowance mailed Oct. 19, 2015, 14 pages.
U.S. Appl. No. 14/338,114 filed Jul. 22, 2014, Non-Final Office Action mailed Sep. 30, 2015, all pages.
U.S. Appl. No. 14/529,989 filed Oct. 31, 2014, Non-Final Office Action mailed Nov. 4, 2015, all pages.
U.S. Appl. No. 14/043,617 filed Oct. 1, 2013, Non-Final Office Action mailed Oct. 23, 2015, all pages.
U.S. Appl. No. 14/676,137 filed Apr. 1, 2015, Notice of Allowance mailed Sep. 28, 2015, 35 pages.
U.S. Appl. No. 14/340,190 filed Jul. 24, 2014, Final Rejection mailed Feb. 19, 2016, 54 pages.
U.S. Appl. No. 14/154,887 filed Jan. 14, 2014 Notice of Allowance mailed Jan. 21, 2016, 26 pages.
U.S. Appl. No. 13/288,002 filed Nov. 2, 2011 Final Rejection mailed Jan. 13, 2016, 22 pages.
U.S. Appl. No. 13/292,047 filed Nov. 8, 2011 Notice of Allowance mailed Jan. 29, 2016, 45 pages.
U.S. Appl. No. 13/215,598 filed Aug. 23, 2011 Non Final Office Action mailed Dec. 15, 2015, all pages.
U.S. Appl. No. 13/801,968 filed Mar. 13, 2013 Final Office Action mailed Nov. 19, 2015, all pages.
U.S. Appl. No. 14/589,090, Notice of Allowance mailed Feb. 9, 2016, 47 pages.
U.S. Appl. No. 14/591,549, Non Final Office Action mailed Dec. 31, 2015, 19 pages.
U.S. Appl. No. 14/338,114 filed Jul. 22, 2014 Notice of Allowance mailed Feb. 3, 2016, all pages.
Second Office Action for CN 201280031434.7, issued Dec. 23, 2015, 6 pages.
First Office Action issued by State Intellectual Property Office (SIPO) for CN 201280028697.2, issued Dec. 16, 2015, 11 pages.
Notice of Allowance received for Mexican Patent Appln. MX/a/2013/014991, mailed on Dec. 9, 2015, 1 page.
Notice of Allowance mailed Dec. 4, 2015 for Mexican Patent Application No. MX/a/2014/009723, 1 page.
International Search Report and Written Opinion of PCT/US2015/065934 mailed Apr. 8, 2016, all pages.
International Search Report and Written Opinion of PCT/EP2015/073937 mailed Apr. 15, 2016, all pages.
U.S. Appl. No. 14/757,606 filed Dec. 23, 2015, Non Final Rejection mailed Mar. 24, 2016, 33 pages.
U.S. Appl. No. 13/801,968 filed Mar. 13, 2013 Notice of Allowance mailed Apr. 7, 2016, 33 pages.
Notice of Allowance dated Jan. 15, 2016 for Mexican Patent Application No. MX/a/2014/009928, 1 page.
Notice of Allowance dated Dec. 16, 2015 for Mexican Patent Application No. MX/a/2014/009919, 1 page.
U.S. Appl. No. 13/786,915 filed Mar. 6, 2013, Final Rejection mailed May 12, 2016, 27 pages.
U.S. Appl. No. 13/215,598 filed Aug. 23, 2011 Notice of Allowance mailed May 24, 2016, all pages.
U.S. Appl. No. 13/801,994, Final Office Action mailed May 4, 2016, 37 pages.
U.S. Appl. No. 14/529,989 filed Oct. 31, 2014, Final Rejection mailed May 6, 2016, 27 pages.
U.S. Appl. No. 14/043,617 filed Oct. 1, 2013, Final Office Action mailed May 6, 2016, 56 pages.
U.S. Appl. No. 14/340,190 filed Jul. 24, 2014, Non-Final Rejection mailed Aug. 31, 2015, 74 pages.
U.S. Appl. No. 14/154,887 filed Jan. 14, 2014 Non-Final Rejection mailed Jul. 17, 2015, 33 pages.
U.S. Appl. No. 14/467,959 filed Aug. 25, 2014 Notice of Allowance mailed Jun. 22, 2015, 36 pages.
U.S. Appl. No. 13/888,012 filed May 6, 2013 Notice of Allowance mailed Jul. 14, 2015, 18 pages.
U.S. Appl. No. 13/799,604 filed Mar. 13, 2013, Notice of Allowance mailed Jul. 24, 2015, 34 pages.
U.S. Appl. No. 13/288,002 filed Nov. 2, 2011 Non Final Rejection mailed Jul. 28, 2015, 29 pages.
U.S. Appl. No. 13/302,852 filed Nov. 22, 2011, Notice of Allowance mailed Jun. 19, 2015, 26 pages.
U.S. Appl. No. 13/292,047 filed Nov. 8, 2011 Non-Final Office Action mailed Jul. 7, 2015, 28 pages.
U.S. Appl. No. 13/829,350 filed Mar. 14, 2013 Notice of Allowance mailed Jul. 24, 2015, 29 pages.
U.S. Appl. No. 14/095,860 filed Dec. 3, 2013 Notice of Allowance mailed Jul. 13, 2015, 31 pages.
U.S. Appl. No. 14/043,617 filed Oct. 1, 2013 Final Office Action mailed Jul. 16, 2015, 45 pages.
Supplementary European Search Report for EP 13761291.7 mailed Jul. 9, 2015, 8 pages.
Extended European Search Report for EP 13760237.1 received Jul. 21, 2015, 8 pages.
First Office Action and Search Report from the State Intellectual Property Office (SIPO) for CN 201280031434.7, issued Jul. 17, 2015, 12 pages.
Office Action dated May 18, 2015 for Mexican Patent Application No. MX/a/2014/009776, 2 pages.
Office Action dated May 12, 2015 for Mexican Patent Application No. MX/a/2014/009723, 2 pages.
Office Action dated Jul. 31, 2015 for Mexican Patent Application No. MX/a/2014/009928, 2 pages.
Author Unknown, “Move Networks is Delivering the Next Generation of Television,” Move Networks, 2010, obtained online at http://movenetworks.com/, 2 pages.
Author Unknown, “EE Launches home TV service in UK,” dated Oct. 8, 2014, 3 pages. Retrieved on Oct. 13, 2014 from http://www.bbc.com/news/technology-29535279.
Author Unknown, “EE TV It's simply great television,” Accessed on Oct. 13, 2014, 11 pages. Retrieved from https//ee.co.uk/ee-and-me/ee-tv.
Jung, Joonyoung, Ohyung Kwon, and Sooin Lee, “Design and implementation of a multi-stream cableCARD with a high-speed DVB-common descrambler,” in Proceedings of the 14th ACM International Conference on Multimedia, Santa Barbara, CA, USA, Oct. 23-27, 2006, 4 pages.
Jensen, Craig, “Fragmentation: the condition, the cause, the cure” [Online], Executive Software International, 1994, ISBN: 0964004909; retrieved from Internet: <URL: www.executive.com/fragbook/fragbook.htm>, Chapter: “How a disk works,” Section: “The original problem.” Retrieved on Jan. 9, 2014, 70 pages.
McCann, John, “EE TV set top takes aim at Sky, Virgin Media and YouView,” dated Oct. 8, 2014, 5 pages. Retrieved on Oct. 13, 2014 from http://www.techradar.com/news/television/ee-tv-set-top-box-takes-aim-at-sky-virgin-media-and-youview-1268223.
Williams, Christopher, “EE to launch TV set-top box,” dated Oct. 7, 2014, 2 pages. Retrieved on Oct. 13, 2014 from http://www.telegraph.co.uk/finance/newsbysector/mediatechnologyandtelecoms/telecoms/11147319/EE-to-launch-TV-set-top-box.html.
European Search Report for EP 12825653 dated Mar. 11, 2015, 7 pages.
Extended European Search Report for EP 12825080 mailed Sep. 11, 2014, 10 pages.
Extended European Search Report for EP 12825521 mailed Nov. 24, 2014, 7 pages.
Extended European Search Report for EP 12825474 mailed Jan. 7, 2015, 6 pages.
Extended European Search Report for EP 12825430 mailed Feb. 3, 2015, 9 pages.
International Search Report and Written Opinion of PCT/US2012/51992 mailed Nov. 2, 2012, 15 pages.
International Search Report and Written Opinion of PCT/US2012/51987 mailed Oct. 23, 2012, 20 pages.
International Search Report and Written Opinion of PCT/US2012/051984 mailed Nov. 5, 2012, 13 pages.
International Search Report and Written Opinion of PCT/US2012/52002 mailed Oct. 16, 2012, 17 pages.
International Search Report and Written Opinion of PCT/US2013/031432 mailed May 28, 2013, 10 pages.
International Preliminary Report on Patentability for PCT/US2013/031432 issued Sep. 16, 2014, 9 pages.
International Search Report and Written Opinion of PCT/US2013/031445 mailed May 24, 2013, 11 pages.
International Preliminary Report on Patentability for PCT/US2013/031445 issued Sep. 16, 2014, 10 pages.
International Preliminary Report on Patentability for PCT/US2012/052002 mailed on Apr. 17, 2014, 10 pages.
International Search Report and Written Opinion of PCT/US2012/51964 mailed Nov. 2, 2012, 13 pages.
International Search Report and Written Opinion of PCT/US2012/052011 mailed Dec. 17, 2012, 44 pages.
International Preliminary Report on Patentability, PCT/US2012/052011, mailed on Mar. 6, 2014, 6 pages.
International Preliminary Report on Patentability, PCT/US2012/051984, mailed on Mar. 6, 2014, 8 pages.
International Preliminary Report on Patentability, PCT/US2012/051964, mailed on Apr. 10, 2014, 7 pages.
International Preliminary Report on Patentability, PCT/US2012/051992, mailed on Apr. 3, 2014, 7 pages.
International Preliminary Report on Patentability, PCT/US2012/051987, mailed on Mar. 6, 2014, 7 pages.
International Search Report of PCT/KR2007/003521 mailed on Oct. 23, 2007, 22 pages.
International Search Report of PCT/IB2003/005737 mailed on Mar. 2, 2004, 21 pages.
International Preliminary Report on Patentability for PCT/US2013/032176 mailed Sep. 25, 2014, 7 pages.
International Search Report and Written Opinion of PCT/US2013/32176 mailed on Jun. 25, 2013, 15 pages.
International Search Report and Written Opinion of PCT/US2013/031565 mailed on May 31, 2013, 82 pages.
International Preliminary Report on Patentability for PCT/US2013/031565 issued Sep. 16, 2014, 18 pages.
International Preliminary Report on Patentability for PCT/US2013/031915 issued Sep. 16, 2014, 5 pages.
International Search Report and Written Opinion of PCT/US2013/031915 mailed on Jun. 3, 2013, 7 pages.
International Search Report and Written Opinion of PCT/US2013/031440 mailed May 30, 2013, 14 pages.
International Preliminary Report on Patentability for PCT/US2013/031440 mailed Sep. 25, 2014, 8 pages.
Supplementary European Search Report for Application No. EP 12825147 dated Mar. 27, 2015, 9 pages.
The Notice of Allowance by the Mexican Institute of Industrial Property for Mexican Patent Application No. MX/a/2013/014907 dated Feb. 20, 2015 is not translated into English, 1 page.
The Notice of Allowance by the Mexican Institute of Industrial Property for Mexican Patent Application No. MX/a/2013/014671 dated Apr. 17, 2015 is not translated into English, 1 page.
The Notice of Allowance by the Mexican Institute of Industrial Property for Mexican Patent Application No. MX/a/2013/014677 dated Mar. 19, 2015 is not translated into English, 1 page.
The Office Action dated Nov. 6, 2014 for Mexican Patent Application No. MX/a/2013/014677 is not translated into English, 2 pages.
The Second Office Action dated Feb. 26, 2015 for Mexican Pat. Appln. No. MX/a/2013/014217 is not translated into English, 3 pages.
The Office Action dated Nov. 7, 2014 for Mexican Patent Application No. MX/a/2013/014907 is not translated into English, 3 pages.
The Office Action dated Jan. 23, 2015 for Mexican Patent Application No. MX/a/2013/014671 is not translated into English, 3 pages.
U.S. Appl. No. 14/095,860 filed Dec. 3, 2013, Non-Final Office Action mailed Dec. 26, 2014, 45 pages.
U.S. Appl. No. 14/095,860 filed Dec. 3, 2013, Final Office Action mailed May 1, 2015, 18 pages.
U.S. Appl. No. 14/064,423 filed Oct. 28, 2013, Non-Final Office Action mailed Dec. 20, 2013, 18 pages.
U.S. Appl. No. 14/064,423 filed Oct. 28, 2013, Notice of Allowance mailed Mar. 4, 2014, 37 pages.
U.S. Appl. No. 14/060,388 filed Oct. 22, 2013, Notice of Allowance mailed Apr. 13, 2015, 44 pages.
U.S. Appl. No. 14/043,617 filed Oct. 1, 2013, Non-Final Office Action mailed Jan. 5, 2015, 45 pages.
U.S. Appl. No. 13/888,012 filed May 6, 2013, Non-Final Rejection mailed Apr. 6, 2015, 36 pages.
U.S. Appl. No. 13/856,752 filed Apr. 4, 2013, Non Final Office Action mailed Nov. 5, 2014, 34 pages.
U.S. Appl. No. 13/856,752 filed Apr. 4, 2013, Notice of Allowance mailed Feb. 10, 2015, 20 pages.
U.S. Appl. No. 13/829,350 filed Mar. 14, 2013, Non Final Office Action mailed Feb. 28, 2014, 29 pages.
U.S. Appl. No. 13/829,350 filed Mar. 14, 2013, Non Final Office Action mailed Jul. 29, 2014, 24 pages.
U.S. Appl. No. 13/829,350 filed Mar. 14, 2013, Final Office Action mailed Jan. 23, 2015, 18 pages.
U.S. Appl. No. 13/828,001 filed Mar. 14, 2013, Notice of Allowance mailed Apr. 25, 2014, 43 pages.
U.S. Appl. No. 13/801,968 filed Mar. 13, 2013, Non Final Office Action mailed May 21, 2015, 49 pages.
U.S. Appl. No. 13/800,477 filed Mar. 13, 2013, Non-Final Office Action mailed Sep. 11, 2014, 34 pages.
U.S. Appl. No. 13/800,477 filed Mar. 13, 2013, Notice of Allowance mailed Feb. 18, 2015, 18 pages.
U.S. Appl. No. 13/799,719 filed Mar. 13, 2013, Non Final Office Action mailed Oct. 25, 2013, 79 pages.
U.S. Appl. No. 13/799,719 filed Mar. 13, 2013, Notice of Allowance mailed Apr. 23, 2014, 141 pages.
U.S. Appl. No. 13/799,604 filed Mar. 13, 2013, Notice of Allowance mailed May 29, 2015, 46 pages.
U.S. Appl. No. 13/799,604 filed Mar. 13, 2013, Final Office Action mailed Jan. 14, 2015, 36 pages.
U.S. Appl. No. 13/799,604 filed Mar. 13, 2013, Non Final Office Action mailed Jun. 6, 2014, 24 pages.
U.S. Appl. No. 13/799,653 filed Mar. 13, 2013, Notice of Allowance mailed Nov. 26, 2014, 32 pages.
U.S. Appl. No. 13/799,653 filed Mar. 13, 2013, Non Final Office Action mailed May 8, 2014, 24 pages.
U.S. Appl. No. 13/797,173 filed Mar. 12, 2013, Notice of Allowance mailed Nov. 24, 2014, 37 pages.
U.S. Appl. No. 13/797,173 filed Mar. 12, 2013, Notice of Allowance mailed Feb. 26, 2015, 19 pages.
U.S. Appl. No. 13/797,173 filed Mar. 12, 2013, Non Final Office Action mailed May 15, 2014, 28 pages.
U.S. Appl. No. 13/795,914 filed Mar. 6, 2013, Notice of Allowance mailed Jul. 21, 2014, 13 pages.
U.S. Appl. No. 13/795,914 filed Mar. 6, 2013, Final Office Action mailed Apr. 3, 2014, 17 pages.
U.S. Appl. No. 13/795,914 filed Mar. 6, 2013, Non-Final Office Action mailed Oct. 11, 2013, 17 pages.
U.S. Appl. No. 13/793,636 filed Mar. 11, 2013, Non-Final Office Action mailed Sep. 29, 2014, 27 pages.
U.S. Appl. No. 13/793,636 filed Mar. 11, 2013, Notice of Allowance mailed Jan. 28, 2015, 43 pages.
U.S. Appl. No. 13/757,168 filed Feb. 1, 2013, Notice of Allowance mailed Oct. 14, 2014, 28 pages.
U.S. Appl. No. 13/757,168 filed Feb. 1, 2013, Non Final Office Action mailed Jun. 4, 2014, 23 pages.
U.S. Appl. No. 13/614,899 filed Sep. 13, 2012, Non-Final Office Action mailed Feb. 5, 2013, 17 pages.
U.S. Appl. No. 13/614,899 filed Sep. 13, 2012, Non-Final Office Action mailed May 20, 2014, 25 pages.
U.S. Appl. No. 13/614,899 filed Sep. 13, 2012, Non-Final Office Action mailed Sep. 17, 2013, 17 pages.
U.S. Appl. No. 13/614,899 filed Sep. 13, 2012, Final Office Action mailed Mar. 17, 2014, 41 pages.
U.S. Appl. No. 13/614,899 filed Sep. 13, 2012, Notice of Allowance mailed Mar. 13, 2015, 35 pages.
U.S. Appl. No. 13/592,976 filed Aug. 23, 2012, Notice of Allowance mailed Oct. 7, 2013, 18 pages.
U.S. Appl. No. 13/324,831 filed Dec. 13, 2011, Non-Final Office Action mailed Feb. 28, 2013, 23 pages.
U.S. Appl. No. 13/324,831 filed Dec. 13, 2011, Notice of Allowance mailed Sep. 4, 2013, 22 pages.
U.S. Appl. No. 13/302,852 filed Nov. 22, 2011 Non-Final Rejection mailed May 23, 2013, 19 pages.
U.S. Appl. No. 13/302,852 filed Nov. 22, 2011, Final Rejection mailed Dec. 9, 2013, 23 pages.
U.S. Appl. No. 13/302,852 filed Nov. 22, 2011, Non-Final Rejection mailed Sep. 2, 2014, 28 pages.
U.S. Appl. No. 13/302,852 filed Nov. 22, 2011, Final Rejection mailed Mar. 30, 2015, 29 pages.
U.S. Appl. No. 13/294,005 filed Nov. 11, 2011, Notice of Allowance mailed Oct. 31, 2014, 24 pages.
U.S. Appl. No. 13/294,005 filed Nov. 11, 2011, Non-Final Office Action mailed May 20, 2014, 33 pages.
U.S. Appl. No. 13/294,005 filed Nov. 11, 2011, Non-Final Office Action mailed Aug. 14, 2013, 32 pages.
U.S. Appl. No. 13/294,005 filed Nov. 11, 2011, Final Office Action mailed Jan. 3, 2014, 29 pages.
U.S. Appl. No. 13/292,047 filed Nov. 8, 2011, Non-Final Office Action mailed Jan. 18, 2013, 17 pages.
U.S. Appl. No. 13/292,047 filed Nov. 8, 2011, Final Office Action mailed Aug. 19, 2013, 17 pages.
U.S. Appl. No. 13/292,047 filed Nov. 8, 2011, Final Office Action mailed Jan. 13, 2015, 22 pages.
U.S. Appl. No. 13/291,014 filed Nov. 7, 2011, Non-Final Office Action mailed Mar. 29, 2013, 21 pages.
U.S. Appl. No. 13/291,014 filed Nov. 7, 2011, Notice of Allowance mailed Aug. 7, 2013, 16 pages.
U.S. Appl. No. 13/288,002 filed Nov. 2, 2011, Non-final Office Action mailed Sep. 26, 2013, 15 pages.
U.S. Appl. No. 13/288,002 filed Nov. 2, 2011, Final Office Action mailed Mar. 27, 2014, 20 pages.
U.S. Appl. No. 13/286,157 filed Oct. 31, 2011, Non-Final Office Action mailed Jan. 17, 2013, 20 pages.
U.S. Appl. No. 13/286,157 filed Oct. 31, 2011, Non-Final Office Action mailed Jul. 25, 2013, 49 pages.
U.S. Appl. No. 13/286,157 filed Oct. 31, 2011, Notice of Allowance mailed Feb. 3, 2014, 81 pages.
U.S. Appl. No. 13/215,916 filed Aug. 23, 2011, Notice of Allowance mailed Jan. 4, 2013, 10 pages.
U.S. Appl. No. 13/215,655 filed Aug. 23, 2011, Non-Final Office Action mailed Sep. 6, 2013, 27 pages.
U.S. Appl. No. 13/215,655 filed Aug. 23, 2011, Final Office Action mailed Dec. 18, 2013, 20 pages.
U.S. Appl. No. 13/215,702 filed Aug. 23, 2011, Notice of Allowance mailed Feb. 11, 2013, 13 pages.
U.S. Appl. No. 13/215,598 filed Aug. 23, 2011, Non-Final Office Action mailed Jun. 20, 2013, 15 pages.
U.S. Appl. No. 13/215,598 filed Aug. 23, 2011, Final Office Action mailed Nov. 21, 2013, 23 pages.
U.S. Appl. No. 13/215,598 filed Aug. 23, 2011, Non-Final Office Action mailed Feb. 6, 2014, 12 pages.
U.S. Appl. No. 13/215,598 filed Aug. 23, 2011, Non-Final Office Action mailed Nov. 25, 2014, 18 pages.
U.S. Appl. No. 13/215,598 filed Aug. 23, 2011, Final Office Action mailed Jul. 2, 2014, 22 pages.
U.S. Appl. No. 13/215,598 filed Aug. 23, 2011, Final Office Action mailed May 5, 2015, 17 pages.
U.S. Appl. No. 13/149,852 filed May 31, 2011, Non-Final Office Action mailed Dec. 12, 2012, 9 pages.
U.S. Appl. No. 13/149,852 filed May 31, 2011, Final Office Action mailed Mar. 26, 2013, 13 pages.
U.S. Appl. No. 13/149,852 filed May 31, 2011, Notice of Allowance mailed Jul. 11, 2013, 13 pages.
Related Publications (1)
Number Date Country
20150245113 A1 Aug 2015 US
Continuations (2)
Number Date Country
Parent 13856752 Apr 2013 US
Child 14707748 US
Parent 13215916 Aug 2011 US
Child 13856752 US