A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
1. Field of Invention
The present invention relates generally to the field of data and content distribution networks. More specifically, the present invention relates in one exemplary aspect to methods and apparatus for automated creation of targeted or focused content extractions and/or compilations (e.g., highlight reel creation).
2. Description of Related Technology
The manual aggregation of highlights or other video shorts associated with various events or content elements (e.g., sports events, news, politics, reality television, etc.) into a series for quick review or montage is well known in the art. Such a manually composed series can be used to summarize the events of note from a past period of time. These series are common in sports broadcasts such as “Sportscenter®” on ESPN®, or for instance in “year in review” type pieces offered by news organizations at the end of a year, decade, etc. Typically, these collections of video shorts are used to inform viewers of events occurring earlier in the day, previous day, previous year, etc.
In the particular context of professional football, NFL (National Football League) RedZone offers game-day highlights and touchdowns on Sunday afternoons. Specifically, NFL RedZone promises to show every touchdown from every game, as well as other highlights. In addition, the service also presents viewers with a live broadcast whenever a team enters the “red zone” (i.e., advances inside the opposing team's 20-yard line). However, this service applies only to football played in the NFL, and the only definable metric the service relies upon is the distance to the goal line.
Further, services like NFL Sunday Ticket™ offer wide-ranging access to a specific set of content (i.e., NFL football games being played on any given Sunday). However, such services fail to identify which of the available content may be of particular interest to a user. Thus, a user is given access to more content than can be consumed (multiple games may air simultaneously) without guidance as to the optimal content to view. Again, this system only applies to football games played in the NFL.
News services also create their own compilations or montages, such as to fit within a specified time slot in a news broadcast. Again, such compilations are manually created, and based on selection and placement and editing by one or more humans. However, there is an inherent delay associated with human identification of such exciting clips. Thus, moments of excitement are often missed by content consumers with interest in experiencing such content live or in near-real-time.
Further, it is impractical for an individual content provider to maintain the staffing necessary to monitor all incoming content from all of its content sources at all times for excitement. In addition, even more staff would be needed to tailor collections or series of these clips to the interests and viewing desires of individual subscribers.
Thus, methods and architectures for automated means of identifying content of interest are needed to overcome these impracticalities.
The present invention provides, inter alia, apparatus and methods for substantially automated creation of targeted or focused content extractions and/or compilations (e.g., highlight reels).
In a first aspect of the invention, a method for identification of exciting content in a content delivery network is disclosed. In one embodiment, the method includes: receiving metadata from one or more sources, comparing the metadata to data related to a plurality of available content, and based at least in part on the comparison, identifying individual ones of the plurality of available content related to the metadata. In one variant, the metadata includes information identifying an exciting event.
In a second aspect of the invention, a method for identification of exciting content in a content delivery network is disclosed. In one embodiment, the method includes measuring an excitement metric, generating metadata based on results from the act of measuring, adding time data associated with the act of measuring to the metadata, and adding, to the metadata, information enabling identification of one or more of (a) a particular one of a plurality of content, and (b) a content source.
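By way of illustration only, the acts of the foregoing method may be sketched as follows in Python; the function name, data structure, and field names are hypothetical and do not form part of the disclosed method:

```python
import time

def generate_excitement_metadata(measure_fn, content_id, source_id, threshold=0.0):
    """Measure an excitement metric and wrap the result in metadata that
    carries time data and identifies the content element and its source.
    Illustrative sketch only; names and fields are hypothetical."""
    value = measure_fn()  # the excitement metric (e.g., crowd noise, score swing)
    return {
        "excitement": value,
        "timestamp": time.time(),   # time data for later alignment with a recording
        "content_id": content_id,   # identifies a particular one of a plurality of content
        "source_id": source_id,     # identifies the content source (feed)
        "exceeds_threshold": value >= threshold,
    }
```

In this sketch, the caller supplies the measurement function, so the same wrapper serves any excitement metric.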
In a third aspect of the invention, a content server apparatus operative to generate content having a particular attribute for delivery over a content distribution network is disclosed. In one embodiment the apparatus includes a storage device, a plurality of interfaces, and a processing entity in data communication with the plurality of interfaces and the storage device. The plurality of interfaces is configured to receive one or more feeds from one or more content sources; receive metadata from an identification source, the identification source configured to identify content having the particular attribute; and transmit a content stream to a target device.
In one variant, the processing entity is configured to run an application thereon configured to: record a portion of the one or more feeds, store the portion on the storage device, compare time data present within the metadata to a time associated with the recorded portion, based on the comparison, associate the metadata with the recorded portion, evaluate at least one criterion based at least in part on the associated metadata, and if the evaluation indicates that the at least one criterion is met, add the recorded portion to the content stream.
In a fourth aspect of the invention, a mobile device enabled for receipt and management of exciting content is disclosed. In one embodiment the device includes a user interface configured to: (a) accept input from a user identifying an excitement level threshold, (b) present a list containing a plurality of content with an excitement level above the threshold, (c) allow the user to browse the list, and (d) receive from the user a request for a particular one of the plurality of content, and a wireless interface configured to transmit the user request to a content server apparatus.
In a fifth aspect of the invention, an apparatus for generation of highlight reels in a content delivery network is disclosed. In one embodiment, the apparatus includes one or more interfaces for receiving metadata from one or more sources, one or more interfaces for receiving content from one or more feeds, and a content server entity with a processing unit associated therewith, the processing unit in data communication with at least one storage device, the processing unit configured to run a computer program thereon. In one variant, the computer program is configured to, when executed: (a) record at least one content element received over the one or more feeds, (b) store the at least one recorded feed on the storage device, (c) identify metadata related to the at least one recorded feed, the identified metadata having been received over at least one of the one or more sources, (d) compare the metadata to information related to the at least one recorded feed, and (e) based at least in part on the comparison, cause provision of at least a portion of the at least one recorded feed to at least one client device of the network. The metadata includes data related to an occurrence of an exciting event.
In a sixth aspect of the invention, a non-transitory computer readable apparatus configured to store a computer program thereon is disclosed. In one embodiment, the computer program includes a plurality of instructions configured to, when executed: measure an excitement metric, generate metadata from a value associated with the measurement, add time data to the metadata, and compare the value to a set of criteria. In one variant, the set of criteria includes at least one threshold value.
In a seventh aspect of the invention, a method for identifying content is disclosed. In one embodiment, the content is exciting content, and the identification is performed using social media by at least: monitoring user posts, translating the user posts into machine readable data, developing an aggregate opinion from the machine readable data, and generating metadata from the aggregate opinion. The aggregate opinion is related to interest expressed, within the user posts, for an individual one of a plurality of available content.
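A minimal sketch of the foregoing social-media-based identification follows; it assumes a caller-supplied translation function that maps a user post to a (content identifier, interest score) record, and all names and scoring conventions are hypothetical:

```python
def aggregate_opinion(posts, translate_fn):
    """Translate user posts into machine-readable (content_id, score) records
    and develop an aggregate opinion (mean expressed interest) per content
    element. Illustrative sketch only."""
    totals, counts = {}, {}
    for post in posts:
        record = translate_fn(post)  # e.g., keyword or sentiment extraction
        if record is None:
            continue  # post not about any available content
        cid, score = record
        totals[cid] = totals.get(cid, 0.0) + score
        counts[cid] = counts.get(cid, 0) + 1
    return {cid: totals[cid] / counts[cid] for cid in totals}

def generate_metadata(opinions, threshold):
    """Generate excitement metadata only for content whose aggregate
    interest crosses the threshold."""
    return [{"content_id": cid, "excitement": v}
            for cid, v in opinions.items() if v >= threshold]
```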
In an eighth aspect of the invention, a system for substantially automated creation of targeted or focused content extractions and/or compilations is disclosed.
These and other aspects of the invention shall become apparent when considered in light of the disclosure provided herein.
All Figures © Copyright 2011-2012 Time Warner Cable, Inc. All rights reserved.
Reference is now made to the drawings, wherein like numerals refer to like parts throughout.
As used herein, the term “application” refers generally to a unit of executable software that implements a certain functionality or theme. The themes of applications vary broadly across any number of disciplines and functions (such as on-demand content management, e-commerce transactions, brokerage transactions, home entertainment, calculators, etc.), and one application may have more than one theme. The unit of executable software generally runs in a predetermined environment; for example, the unit could comprise a downloadable Java Xlet™ that runs within the JavaTV™ environment.
As used herein, the terms “client device” and “end user device” include, but are not limited to, set-top boxes (e.g., DSTBs), gateways, personal computers (PCs), and minicomputers, whether desktop, laptop, or otherwise, and mobile devices such as handheld computers, PDAs, personal media devices (PMDs), and smartphones.
As used herein, the term “codec” refers to a video, audio, or other data coding and/or decoding algorithm, process or apparatus including, without limitation, those of the MPEG (e.g., MPEG-1, MPEG-2, MPEG-4, MPEG-4 Part 2, MPEG-4 Part 10, etc.), Real (RealVideo, etc.), AC-3 (audio), DiVX, XViD/ViDX, Windows Media Video (e.g., WMV 7, 8, or 9), ATI Video codec, H.263, H.264, Sorenson Spark, FFmpeg, 3ivx, x264, VP6, VP6-E, VP6-S, VP7, Sorenson 3, Theora, Cinepack, Huffyuv, Lagarith, SheerVideo, Mobiclip or VC-1 (SMPTE standard 421M) families.
As used herein, the term “computer program” or “software” is meant to include any sequence of human or machine cognizable steps which performs a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.), Binary Runtime Environment (e.g., BREW), and the like.
The terms “Consumer Premises Equipment (CPE)” and “host device” refer to any type of electronic equipment located within a consumer's or user's premises and connected to a network. The term “host device” refers generally to a terminal device that has access to digital television content via a satellite, cable, or terrestrial network. The host device functionality may be integrated into a digital television (DTV) set. The term “consumer premises equipment” (CPE) includes electronic equipment such as set-top boxes, televisions, Digital Video Recorders (DVR), gateway storage devices (Furnace), and ITV Personal Computers, as well as client devices.
As used herein, the term “display” means any type of device adapted to display information, including without limitation: CRTs, LCDs, TFTs, plasma displays, LEDs, incandescent and fluorescent devices. Display devices may also include less dynamic rendering devices such as, for example, printers, e-ink devices, and the like.
As used herein, the term “DOCSIS” refers to any of the existing or planned variants of the Data Over Cable Services Interface Specification, including for example DOCSIS versions 1.0, 1.1, 2.0 and 3.0.
As used herein, the term “DVR” (digital video recorder) refers generally to any type of recording mechanism and/or software environment, located in the headend, the user premises or anywhere else, whereby content sent over a network can be recorded and selectively recalled. Such DVR may be dedicated in nature, or part of a non-dedicated or multi-function system.
As used herein, the term “headend” refers generally to a networked system controlled by an operator (e.g., an MSO or multiple systems operator) that distributes programming to MSO clientele using client devices. Such programming may include literally any information source/receiver including, inter alia, free-to-air TV channels, pay TV channels, interactive TV, and the Internet. Multiple regional headends may be in the same or different cities.
As used herein, the terms “Internet” and “internet” are used interchangeably to refer to inter-networks including, without limitation, the Internet.
As used herein, the term “memory” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), and PSRAM.
As used herein, the terms “microprocessor” and “digital processor” are meant generally to include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., FPGAs), PLDs, reconfigurable compute fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs). Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.
As used herein, the terms “MSO” or “multiple systems operator” refer to a cable, fiber to the home (FTTH), fiber to the curb (FTTC), satellite, or terrestrial network provider having infrastructure required to deliver services including programming and data over those mediums.
As used herein, the terms “network” and “bearer network” refer generally to any type of telecommunications or data network including, without limitation, hybrid fiber coax (HFC) networks, fiber networks (e.g., FTTH, Fiber-to-the-curb or FTTC, etc.), satellite networks, telco networks, and data networks (including MANs, WANs, LANs, WLANs, internets, and intranets).
As used herein, the term “network interface” refers to any signal, data, or software interface with a component, network or process including, without limitation, those of the Firewire (e.g., FW400, FW800, etc.), USB (e.g., USB 2.0 or 3.0), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, etc.), Thunderbolt, MoCA, Serial ATA (e.g., SATA, e-SATA, SATAII), Ultra-ATA/DMA, Coaxsys (e.g., TVnet™), radio frequency tuner (e.g., in-band or out-of band, cable modem, etc.), Wi-Fi (e.g., 802.11a,b,g,n,v), WiMAX (802.16), PAN (802.15), or IrDA families.
As used herein, the term “node” refers without limitation to any location, functional entity, or component within a network.
As used herein, the term “QAM” refers to modulation schemes used for sending signals over cable networks. Such a modulation scheme might use any constellation level (e.g., QPSK, QAM-16, QAM-64, QAM-256, etc.) depending on the details of a cable network. A QAM may also refer to a physical channel modulated according to such schemes.
As used herein, the term “server” refers to any computerized component, system or entity regardless of form which is adapted to provide data, files, applications, content, or other services to one or more other devices or entities on a computer network.
As used herein, the terms “service”, “content”, “program” and “stream” are sometimes used synonymously to refer to a sequence of packetized data that is provided in what a subscriber may perceive as a service. A “service” (or “content”, or “stream”) in the former, specialized sense may correspond to different types of services in the latter, non-technical sense. For example, a “service” in the specialized sense may correspond to, among others, video broadcast, audio-only broadcast, pay-per-view, or video-on-demand. The perceivable content provided on such a “service” may be live, pre-recorded, delimited in time, undelimited in time, or of other descriptions. In some cases, a “service” in the specialized sense may correspond to what a subscriber would perceive as a “channel” in traditional broadcast television.
As used herein, the term “service group” refers either to a group of service users (e.g., subscribers), or to the resources shared by them (e.g., an entire cable RF signal, or only the RF channels used to receive the service), or to resources otherwise treated as a single logical unit by the network for resource assignment.
As used herein, the term “storage device” refers without limitation to computer hard drives, DVR devices, memory, RAID devices or arrays, optical media (e.g., CD-ROMs, Laserdiscs, Blu-Ray, etc.), or any other devices or media capable of storing content or other information.
As used herein, the term “user interface” refers to, without limitation, any visual, graphical, tactile, audible, sensory, or other means of providing information to and/or receiving information from a user or other entity.
Overview
In one salient aspect, the present invention discloses methods and apparatus for the automated creation of targeted or focused content extractions and/or compilations (e.g., highlight reels). In one embodiment, the extractions or compilations are created for use in a managed content delivery network such as a cable or satellite television network. In one variant, incoming live feeds are recorded. Time-stamped metadata from sources (either internal or external) able to identify moments and/or “events” of interest is used to parse or select portions of the live feeds to generate clips related thereto. Those clips are then sent to users (including optionally their mobile devices) for viewing. In some embodiments, a recommendation engine is used to select clips matching interests of a particular user or group of users.
Varied sources of metadata may be used consistent with the invention, and networked resources may be utilized in the implementation of internal “excitement” or other types of monitoring systems. Myriad configurations may be implemented including server or head-end based configurations, consumer premises based deployments, and/or distributed implementations.
Further, clips may be selected, recommended, and/or provisioned to mobile devices (e.g. cell phones, smartphones, tablet computers, etc.) over wired or wireless interfaces.
Service agreements may allow access to clips of limited duration, or expanded agreements may allow access to all material recommended by a given excitement data source.
Detailed Description of Exemplary Embodiments
Exemplary embodiments of the apparatus and methods of the present invention are now described in detail. While these exemplary embodiments are described in the context of the aforementioned hybrid fiber/coax (HFC) cable system architecture having a multiple systems operator (MSO), digital networking capability, IP delivery capability, and plurality of client devices/CPE, the general principles and advantages of the invention may be extended to other types of networks and architectures, whether broadband, narrowband, wired or wireless, or otherwise (including e.g., managed satellite or hybrid fiber/copper (HFCu) networks, unmanaged networks such as the Internet or WLANs or PANs, etc.), the following therefore being merely exemplary in nature. It will also be appreciated that while described generally in the context of a consumer (i.e., home) end user domain, the present invention may be readily adapted to other types of environments (e.g., commercial/enterprise, government/military, etc.) as well. Myriad other applications are possible.
It is further noted that while exemplary embodiments are described primarily in the context of a hybrid fiber/conductor (e.g., cable) system with legacy 6 MHz RF channels, the present invention is applicable to literally any network topology or paradigm, and any frequency/bandwidth. Furthermore, as referenced above, the invention is in no way limited to traditional cable system frequencies (i.e., below 1 GHz), and in fact may be used with systems that operate above the 1 GHz band in center frequency or bandwidth, to include without limitation so-called ultra-wideband systems.
Other features and advantages of the present invention will immediately be recognized by persons of ordinary skill in the art given the attached drawings and detailed description of exemplary embodiments as given below.
Network—
The data/application origination point 102 comprises any medium that allows data and/or applications (such as a VOD-based or “Watch TV” application) to be transferred to a distribution server 104. This can include for example a third party data source, application vendor website, CD-ROM, external network interface, mass storage device (e.g., RAID system), etc. Such transference may be automatic, initiated upon the occurrence of one or more specified events (such as the receipt of a request packet or ACK), performed manually, or accomplished in any number of other modes readily recognized by those of ordinary skill.
The application distribution server 104 comprises a computer system where such applications can enter the network system. Distribution servers are well known in the networking arts, and accordingly not described further herein.
The VOD server 105 comprises a computer system where on-demand content can be received from one or more of the aforementioned data sources 102 and enter the network system. These servers may generate the content locally, or alternatively act as a gateway or intermediary from a distant source.
The CPE 106 includes any equipment in the “customers' premises” (or other locations, whether local or remote to the distribution server 104) that can be accessed by a distribution server 104.
Referring now to
The exemplary architecture 150 of
It will also be recognized, however, that the multiplexing operation(s) need not necessarily occur at the headend 150 (e.g., in the aforementioned MEM 162). For example, in one variant, at least a portion of the multiplexing is conducted at a BSA switching node or hub (see discussion of
Content (e.g., audio, video, data, files, etc.) is provided in each downstream (in-band) channel associated with the relevant service group. To communicate with the headend or intermediary node (e.g., hub server), the CPE 106 may use the out-of-band (OOB) or DOCSIS channels and associated protocols. The OCAP 1.0, 2.0, 3.0 (and subsequent) specification provides for exemplary networking protocols both downstream and upstream, although the invention is in no way limited to these approaches.
It will also be recognized that multiple servers (broadcast, VoD, or otherwise) can be used, and disposed at two or more different locations if desired, such as being part of different server “farms”. These multiple servers can be used to feed one service group, or alternatively different service groups. In a simple architecture, a single server is used to feed one or more service groups. In another variant, multiple servers located at the same location are used to feed one or more service groups. In yet another variant, multiple servers disposed at different locations are used to feed one or more service groups.
“Switched” Networks—
Switching architectures allow improved efficiency of bandwidth use for ordinary digital broadcast programs. Ideally, the subscriber is unaware of any difference between programs delivered using a switched network and ordinary streaming broadcast delivery.
Co-owned U.S. patent application Ser. No. 09/956,688 filed Sep. 20, 2001, entitled “Technique for Effectively Providing Program Material in a Cable Television System”, and issued as U.S. Pat. No. 8,713,623 on Apr. 29, 2014, incorporated herein by reference in its entirety, describes one exemplary broadcast switched digital architecture useful with the present invention, although it will be recognized by those of ordinary skill that other approaches and architectures may be substituted.
In addition to “broadcast” content (e.g., video programming), the systems of
Referring again to
The edge switch 194 forwards the packets received from the CMTS 199 to the QAM modulator 189, which transmits the packets on one or more physical (QAM-modulated RF) channels to the CPE. The IP packets are typically transmitted on RF channels that are different from the RF channels used for the broadcast video and audio programming, although this is not a requirement. The CPE 106 are each configured to monitor the particular assigned RF channel (such as via a port or socket ID/address, or other such mechanism) for IP packets intended for the subscriber premises/address that they serve.
“Packetized” Networks—
While the foregoing network architectures described herein can (and in fact do) carry packetized content (e.g., IP over MPEG for high-speed data or Internet TV, MPEG2 packet content over QAM for MPTS, etc.), they are often not optimized for such delivery. Hence, in accordance with another embodiment of the present invention, a “packet optimized” delivery network is used for carriage of the packet content (e.g., IPTV content).
Automated Highlight Reel Creation Architectures—
Referring now to
In the exemplary embodiment, metadata generated by an identification entity 202 (hereinafter colloquially referred to as the “Excitement Identification Source” (EIS)) is used by the apparatus 200. The metadata may be accompanied by supplemental data supplied by the content source 204. Feeds 206 are fed through a media catcher 208 which is in data communication with an incoming content storage unit (ICS) 210. The media catcher and storage unit are configured to hold an archive of any and all feeds 206 coming through the media catcher 208. The ICS 210 stores feed content from as little as the last few minutes or seconds, up to even multiple years, for feed archiving. A media clipper 212, being in data communication with the EIS 202, the content source 204, and the media catcher 208, parses the feeds 206 and the archived feeds present in the ICS 210 into clips, the parsing based on the metadata from the EIS 202 (and the supplemental data from the content source 204 when present). It can be appreciated that a given piece of EIS metadata may have a number of associated feeds and clips (i.e., multiple related commentaries, camera shots/angles, media types, or network presentations, etc.). The clips (or some portion of the clips) are then forwarded to a content repository (CR) 214, where they are maintained in A/V storage 216 until needed for viewing.
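The parsing performed by the media clipper 212 can be sketched as follows; the sketch assumes a recording indexed per second of media time and time-stamped EIS events, with all names and window sizes being hypothetical:

```python
def clip_feed(recording, events, pre_s=10, post_s=20):
    """Parse a recorded feed into clips around time-stamped EIS events.
    `recording` holds a start time and per-second media samples; each
    event carries a `timestamp`. Illustrative sketch only."""
    start, data = recording["start"], recording["data"]
    clips = []
    for ev in events:
        offset = ev["timestamp"] - start       # align EIS time with the recording
        lo = max(0, int(offset - pre_s))       # lead-in before the exciting moment
        hi = min(len(data), int(offset + post_s))
        clips.append({"media": data[lo:hi], "metadata": ev})
    return clips
```

Note that a single event yields one clip per recording, so multiple camera angles or network presentations of the same moment naturally produce multiple associated clips.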
The media clipper sends the metadata associated with the clips to a Highlight Builder (HB) 218 for holding in metadata storage 220. The HB then uses user preferences and data from a recommendation engine 222 to build a series of highlights for a user or group of users. The HB compares the user preferences and recommendations to the metadata associated with the clips. Thus, the HB selects appropriate clips based on their associated metadata from the EIS 202. The clips are then ordered accordingly and presented in a video stream to a client device 224 for display.
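The selection performed by the HB 218 may be illustrated by the following sketch, which assumes a simple category-and-threshold matching policy; the names are hypothetical, and an actual recommendation engine 222 may apply far richer criteria:

```python
def build_highlights(clips, preferred_categories, min_excitement=0.5):
    """Select clips whose associated metadata matches the user preferences,
    then order them by descending excitement to form a highlight series.
    Illustrative selection policy only."""
    selected = [
        c for c in clips
        if c["metadata"].get("excitement", 0) >= min_excitement
        and c["metadata"].get("category") in preferred_categories
    ]
    return sorted(selected,
                  key=lambda c: c["metadata"]["excitement"],
                  reverse=True)
```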
The apparatus of
In other embodiments, a dedicated RSS feed is used as the EIS. The RSS feed generates specific messages to alert the system of content of interest. For example, an RSS feed dedicated to sports, such as THUUZ® or FanVibe®, or a web application such as “Are You Watching This?!” running on the RUWTbot engine, provides updates that alert the system that a particular game is exciting, or is likely to be exciting at least momentarily. Thus, upon receipt of such an update, the exemplary video clipper begins generating clips from the feed showing the appropriate game, so as to capture the points of interest. It will be appreciated that the combination of the media catcher 208, the ICS 210, and the media clipper 212 allows the system to generate clips both from live feeds and archived feeds. Thus, consistent with the present invention, both real-time (or near-real-time) and non-real-time EISs can be used, including in conjunction with one another.
The metadata from the EIS 202 is then used to properly classify the clips. The classification of the clips assists the HB 218 in selecting appropriate clips based on user preferences and/or user targeted recommendations. These user (or group of users) targeted recommendations can be generated using for example the exemplary methods and apparatus discussed in co-owned and co-pending U.S. patent application Ser. No. 12/414,554 filed Mar. 30, 2009 and entitled “PERSONAL MEDIA CHANNEL APPARATUS AND METHODS” previously incorporated herein. As discussed therein, methods and apparatus for “fused” targeted content delivery are presented. Specifically, a substantially user-friendly mechanism for viewing content compiled from various sources selected to align with a user's preferences is disclosed; the content is displayed as a substantially continuous stream as part of a “virtual” user-based channel or a virtual private media channel (VPMC). In one embodiment, a user profile is constructed and targeted content gathered without requiring any user intervention whatsoever; e.g., based on a user's past or contemporaneous interactions with respect to particular types of content. The “virtual channel” acts as a centralized interface for the user and their content selections and preferences, as if the content relevant to a given user were in fact streamed over one program channel.
In another aspect of the present invention, the compiled content is presented to the user in the form of a “playlist” from which a user may select desired content for viewing and/or recording. In one variant, a user is also presented with content having varying degrees or aspects of similarity to that presented in the “playlist” or elsewhere, including content listed in the EPG.
In yet another variant, the user's purchase of recommended (and non-recommended) content is enabled directly from the aforementioned playlist and/or the virtual channel.
Client applications (e.g., those disposed on a subscriber's CPE, mobile devices, and/or network servers) are utilized to compile the playlist based on user-imputed as well as pre-programmed user profiles. Various feedback mechanisms may also be utilized to enable the client application to “learn” from the user's activities in order to update the user profile, and generate more finely-tuned and cogent recommendations. Client applications may also be utilized to manage the seamless presentation of content on the aforementioned virtual channel, and locate/flag various scenes inside selected content for user viewing or editing.
The foregoing disclosures further describe methods for combining multiple profiles to create a composite set of preferences for different “moods” of a single user (or for a group of users desiring content of interest to all parties), which may be utilized consistent with the present invention. These techniques may be applied to both primary content and advertising (or other forms of secondary content), or each individually.
In addition, complementary user-based recommendation techniques are also discussed in co-owned U.S. patent application Ser. No. 12/414,576 filed Mar. 30, 2009, entitled “RECOMMENDATION ENGINE APPARATUS AND METHODS”, and issued as U.S. Pat. No. 9,215,423 on Dec. 15, 2015, previously incorporated herein. As discussed therein, methods and apparatus are presented for the identification and recommendation of content targeted to a particular user (or group of users) within a network. A mechanism for particularly selecting content to align with a user's preferences (the latter which the viewer need not enter manually) is discussed. The content is selected through a mechanism to learn (and unlearn) the user's preferences and which content they are likely to enjoy based on actions taken with regard to the content, so as to more finely tune the recommendations.
Referring again to
It can be appreciated that the system, as discussed, may be practiced in multiple architectures. In some embodiments, the system is substantially located at the headend of a content distribution network. In these cases, the live feeds and the content source data are externally provided, and the client device may include an intermediary CPE 106 or be disposed on a CPE. Further, the EIS may either be an external entity or monitoring-type application(s) disposed on the headend server. In other embodiments, the system may be substantially disposed on a CPE (or premises network of CPE). In these cases, the live feeds, the content source data, and the EIS are all external to the CPE. The system may also be distributed across a number of components on a content delivery network.
Headend Server Architecture—
Referring now to
The highlight assembly device 300 is a module adapted to be disposed on a headend server or other network-side entity, and provide content streams to CPE 106 and client devices 224 at the user's location. The feeds and content source metadata are delivered to the apparatus via the network interface 302. In some embodiments, the EIS metadata also arrives via the network interface. The network interface is in data communication with the processing unit 304. The processing unit 304 is in data communication with a memory or storage device 306 to provide the programs running on the processing unit fast memory access. The processing unit stores the incoming feeds on incoming feed storage 310 on the mass storage unit 308 for subsequent processing. In one embodiment, if not received from one or more external sources, the EIS metadata is generated by the excitement monitoring application (EMA) 316. The EIS metadata (whether internally or externally created) and the content source metadata are provided to the metadata processing application 318.
In operation, the metadata processing application splits the live feeds into clips, and associates metadata with the clips. The clips are sent to the clip repository storage 312 and the associated metadata is stored in the metadata storage 314. Alternatively, in some embodiments, the clips and metadata are stored together in the same storage mode, and in some cases, in the same file.
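The split-and-associate operation described above can be illustrated with the following minimal sketch. All identifiers here (e.g., `split_feed`, `Clip`) are hypothetical and are not taken from the disclosure; a production implementation would operate on actual media containers rather than time offsets.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    feed_id: str
    start: float    # seconds from the start of the recorded feed
    end: float
    metadata: dict  # associated EIS/content-source metadata

def split_feed(feed_id, feed_duration, events):
    """Split a recorded feed into clips, one per time-stamped event.

    `events` is a list of dicts with 'start', 'end', and descriptive
    fields; clip boundaries are clamped to the recorded feed's extent.
    """
    clips = []
    for ev in events:
        start = max(0.0, ev["start"])
        end = min(feed_duration, ev["end"])
        if end > start:
            # Carry all descriptive fields over as clip metadata
            clips.append(Clip(feed_id, start, end,
                              {k: v for k, v in ev.items()
                               if k not in ("start", "end")}))
    return clips
```

The clip objects and their metadata could then be persisted separately (clip repository storage 312 and metadata storage 314) or together, per the alternative embodiment above.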
The highlight reel assembly application (HRAA) 320 reviews the metadata, and compares the metadata to user preferences and recommendation engine input in order to create series of clips (or single clips) to be played back to the user, either automatically or upon user request. The HRAA 320 generates a playlist containing the series. Then, the content provision management application generates a content stream for provision to the client device 224 (or, in some embodiments, an intermediary CPE 106). The provision of the content stream can occur either through the network interface, or through any of the multitude of other interfaces 326 disposed on the highlight assembly device 300.
Further, the content rules enforcement application 322 may monitor the content series assembly performed by the highlight reel assembly application 320 to enforce restrictions on the latter; e.g., that the assembly application does not use content for which the target user/subscriber/CPE lacks the rights (or subscriber level) to consume. However, it is envisioned that, in some cases, the MSO may provide the target user with a "teaser" (e.g. content from a higher subscriber level), or yet-to-be-released content, to entice the user to increase their associated subscriber level (or for other promotional reasons). Once assembled, the content may be provided to a user via the content provision management application 324.
The network interface 302 comprises any number of high-speed data connections to facilitate the inclusion of the highlight reel assembly device at the headend of the network. The connections must supply bandwidth sufficient to support the incoming feeds, and include any of the interfaces necessary to support any of the network architectures discussed above. Such connections or interfaces may include for instance Gigabit Ethernet/10G, IEEE-1394, Thunderbolt™, or other well known networking technologies.
In an exemplary embodiment, the processing unit 304 and memory 306 are specifically configured to support the applications running on the processing unit. Thus, the processing unit is connected via high bandwidth channels to each of the memory, mass storage, network interface, and any other incoming/outgoing interfaces.
As noted above, in some embodiments the EMA 316 monitors various information sources (including Twitter posts, blogs, sports site comments, content source webpage posts, and live commentary, etc.) to gauge user interest in particular events. Further, the MSO may provide its customers with remotes or other interface options (e.g. on-screen display options, telescoping advertisements, web-links, phone/smartphone apps, etc.) to allow users to identify for a MSO when a particular event is exciting. Machine intelligence techniques can be applied to the various sources. In an exemplary embodiment, the system, upon reviewing a Twitter post, attempts to gauge if the poster of the “tweet” is currently finding the event particularly exciting. Then, the system attempts to corroborate these findings with other such posts (including posts or information from other sources). Once a finding is made, a time-stamped metadata file is created and passed to the metadata processing application. The exemplary implementation of the EMA 316 must properly calibrate its analysis for a number of variables, as will be discussed below.
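The EMA's gauge-then-corroborate behavior can be sketched as follows. This is a deliberately crude illustration under stated assumptions: the function names (`excitement_score`, `corroborated`) and the keyword heuristic are hypothetical, and an actual EMA would apply the machine intelligence techniques referenced above rather than simple keyword matching.

```python
def excitement_score(posts, keywords=("amazing", "wow", "incredible", "unbelievable")):
    """Crude excitement gauge: fraction of recent posts (tweets,
    comments, etc.) containing an excitement keyword."""
    if not posts:
        return 0.0
    hits = sum(1 for p in posts if any(k in p.lower() for k in keywords))
    return hits / len(posts)

def corroborated(score_by_source, threshold=0.5, min_sources=2):
    """Require agreement across independent sources (Twitter, blogs,
    site comments, etc.) before a time-stamped excitement metadata
    record is emitted to the metadata processing application."""
    return sum(1 for s in score_by_source.values() if s >= threshold) >= min_sources
```

Only when multiple sources corroborate the finding would the time-stamped metadata file be created and passed downstream, which reflects the calibration concern noted above.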
In some embodiments, all incoming feeds are recorded and placed in mass storage 308. In other embodiments only certain events and/or channels are recorded. This selection or filtration may be based on past use of the recorded feeds, user preferences, and/or ratings, etc. The feeds are time stamped during the recording process for later reference to the EIS and content source metadata.
Once passed to the metadata processing application 318 (either from the EMA 316 or an external EIS), the metadata file is processed for storage. This process includes using the time stamp to identify the portion of the captured incoming feed that pertains to the metadata. Thus, the associated source and time of the event must be identified. This is achieved through comparisons with content source metadata. Alternatively, this may also be achieved through a pre-compiled list of content on each of the various incoming feeds. Once the related portion of an incoming feed is identified, other metadata assisting in the classification of the clip is appended to the metadata file. This may include subject matter, source, genre, title of the event, date/time information, a brief description of the content, and/or the overall likelihood that the clip is “newsworthy” (such as by virtue of being associated with a known exciting sports player, popular personality, etc.). The clips and metadata are then placed in mass storage 308.
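The time-stamp-to-feed matching step can be illustrated with a short sketch against a pre-compiled list of content on each incoming feed, per the alternative noted above. The name `locate_feed_portion` and the `(source, start, end)` schedule format are hypothetical.

```python
def locate_feed_portion(event_time, feed_schedule):
    """Map an event's time stamp to the feed carrying it.

    `feed_schedule` is a pre-compiled list of (source, start, end)
    tuples for the captured incoming feeds; returns the matching
    source and the offset into that feed, or None if unmatched.
    """
    for source, start, end in feed_schedule:
        if start <= event_time < end:
            return source, event_time - start
    return None
```

Once the related portion is identified, the classification metadata (subject matter, genre, title, "newsworthiness", etc.) would be appended before storage.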
The content rules enforcement application 322 maintains sets of rules associated with content and content sources. For example, some content sources may require that none of their content be replayed without explicit permission from that source. In the case when such permission is not granted, the content rules enforcement application bars the HRAA 320 and the content provision management application from including and replaying clips from that source. Conversely, if the permission is granted, the content rules enforcement application lifts the bar on inclusion and replay. Further, the content rules enforcement application 322 may contact the content source to obtain permissions related to a clip. Similarly, sources may have varying preferences for individual pieces of content. In some cases, the content source may limit the time after the initial broadcast for which clips may be replayed, or limit their playback to only certain prescribed periods of the day or week. In these cases, the content rules enforcement application enforces an expiration date or other appropriate use restrictions for clips associated with that content source.
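The permission and expiration checks described above might be sketched as below. The rule keys (`replay_permitted`, `replay_window_s`) and the function name `clip_playable` are hypothetical, chosen only to mirror the two example restrictions (explicit replay permission and a post-broadcast replay window).

```python
import time

def clip_playable(clip_meta, source_rules, now=None):
    """Check a clip against its source's rules: explicit replay
    permission, and an optional replay window (seconds) measured
    from the initial broadcast time."""
    now = time.time() if now is None else now
    rules = source_rules.get(clip_meta["source"], {})
    if not rules.get("replay_permitted", True):
        return False  # source bars all replay absent permission
    window = rules.get("replay_window_s")
    if window is not None and now - clip_meta["broadcast_ts"] > window:
        return False  # clip has expired
    return True
```

The HRAA and content provision management application would consult such a check before including a clip in any compiled series.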
Once the clips and metadata are available in mass storage, the HRAA 320 begins building series of clips (or single clips) for provision to specific users or groups of users. The HRAA takes pre-set preferences or those from recommendation engines (based on techniques discussed supra) to generate the highlight collections. In an exemplary embodiment, a user requests clips pertaining to a specific sports team. Then, the HRAA readies clips related to that sports team. Further, the user may request clips for that sports team over a given period of time (e.g. the user requests an update on the past week of Chicago Cubs games). In this case, the HRAA automatically locates clips for that team and time period, and compiles reels for play upon an explicit user request, a teaser suggesting interesting content, or as part of a VPMC.
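The team-and-time-window selection in the Chicago Cubs example can be sketched as follows. The name `build_reel`, the flat dict clip records, and the newest-first ordering are all illustrative assumptions, not details taken from the disclosure.

```python
def build_reel(clips, team, since_ts, until_ts, max_clips=10):
    """Select clips matching a requested team within a time window,
    order them newest first, and cap the reel length so the result
    remains a short highlight compilation."""
    matched = [c for c in clips
               if c["team"] == team and since_ts <= c["ts"] <= until_ts]
    matched.sort(key=lambda c: c["ts"], reverse=True)
    return matched[:max_clips]
```

A reel built this way could then be played upon explicit user request, surfaced as a teaser, or scheduled as part of a VPMC.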
It can also be appreciated that the clips need not be assembled only for a specific user or set of users. In some embodiments, a dedicated clip channel is created. Thus, any user tuning to the logical or physical channel associated with the clip channel may view it. These clip channels may be subject-specific. For these dedicated clip channels, the HRAA uses a general set of preferences rather than one specific to a set of users. In some embodiments, the HRAA also accepts input from the content rules enforcement application. This input ensures that only clips available to the target user or users are included in the series.
After the clip series is compiled by the HRAA 320, the content provision application 324 provides the clips to the user when requested. The content provision application provides the clips through a media stream to a client device operated by the user. In some embodiments, the content provision application merely directs a separate VOD server to provide the clips in the order set forth by the HRAA. In other embodiments, the content provision application itself creates the media stream via the onboard interfaces in the highlight assembly device 300. The content provision application also enables and/or enforces rules related to "trick mode" operation (e.g. fast forward, rewind, play, pause, change camera angle, etc.). In some embodiments, the content rules enforcement application provides these rules (as they pertain to the user(s) in question) to the content provision application. The content provision application may further make use of A/V codecs in the creation of the media stream.
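The per-user trick-mode gating mentioned above amounts to a simple policy check, sketched here. The mode names and the default set are assumptions for illustration only.

```python
# Hypothetical default: playback controls only, no seeking
DEFAULT_TRICK_MODES = {"play", "pause"}

def trick_mode_allowed(op, allowed_modes=None):
    """Gate a requested trick-mode operation ('fast_forward',
    'rewind', 'pause', etc.) against the per-user rule set supplied
    by the content rules enforcement application."""
    modes = DEFAULT_TRICK_MODES if allowed_modes is None else allowed_modes
    return op in modes
```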
It is envisioned that in some embodiments, the compilation of a clip or series of clips may involve some level of processing to generate a playable media segment (e.g. more processing than simply playing the clips in the designated order), such as transcoding, transrating, and so forth. Co-owned U.S. patent application Ser. No. 10/970,429 filed Oct. 21, 2004 and entitled "PROGRAMMING CONTENT CAPTURING AND PROCESSING SYSTEM AND METHOD", incorporated by reference herein in its entirety, details methods and apparatus for compiling playable segments from blocks of media on the fly. As discussed therein, program streams are broken into segments, and further into blocks, to allow for customizable playback. In an exemplary embodiment, when a specific segment is requested, the associated blocks are identified and then joined. The media is then presented to the user. In the case that all blocks associated with a given segment are not yet available (e.g. not yet processed or broadcast), the available blocks may be joined to the unavailable blocks and playback may commence prior to the availability of all blocks. For example, consistent with the present invention, a user may be enticed to view a series of exciting clips from a sporting event by a short initial clip. However, the full series of clips may include footage not yet broadcast. Thus, using the techniques described in the above-identified application, the user may view the series of clips in a single coherent session despite the fact that not all clips were available (or even in existence) at the time the user began to view the series.
Exemplary CPE—
Referring now to
Once the clips have been parsed, the highlight reel assembly application (HRAA) 420 begins compiling series of clips and/or individual clips for display to a user (or group of users) associated with the CPE. The HRAA 420 uses preferences obtained from the user profile management application 426 to select the clips for inclusion. Furthermore, in some embodiments, the HRAA ensures that the clips included in the compiled series comply with the content rules based on input from the content rules enforcement application.
The content rules enforcement application 422 on the CPE has similar function to that discussed above with respect to the headend embodiments. The series compiled by the HRAA 420 is used by the media stream creation application to create a playlist of clips for the user or users of the CPE. Playback of the clips occurs upon request, automatically, or in response to a request to tune to a VPMC.
It is envisioned that when used in conjunction with the above-discussed headend apparatus, one or more of the functionalities of the CPE may be obviated. Thus, in some embodiments, functionality may be removed from the CPE and instead provided by the headend. Conversely, functionality not supplied by the headend may be present on the CPE. It will also be appreciated that users may desire duplicate functionality on their own associated CPE for premium performance and customizability that may not be offered by the headend-based device.
The interfaces 402 comprise one or more incoming and/or outgoing ports (e.g. serial, network, radio frequency, and/or display, etc.). The live feeds are sent to the CPE via one or more of these ports. In some embodiments, interface resources such as tuners and/or bandwidth may be limited on any given CPE. Therefore, CPE on a premises network (or other network) may share such resources. Methods and apparatus for sharing these resources are discussed in co-owned U.S. patent application Ser. No. 12/480,597 filed Jun. 8, 2009 and entitled "MEDIA BRIDGE APPARATUS AND METHODS," and issued as U.S. Pat. No. 9,602,864 on Mar. 21, 2017, incorporated by reference herein in its entirety. As discussed therein, a media bridging apparatus acts as a connection between a portable media device (PMD) and a user's home network. This bridging apparatus may be used, for example, to convert content stored on the PMD to a format capable of being presented on a user's set-top box or other client device. Control of the presentation is also provided by the bridging apparatus. In one embodiment, the apparatus enables a user to access and control playback of media from a PMD via a user interface associated with a television, personal computer or other user device. The apparatus may also enable content stored on the PMD to be copied and stored on a user's digital video recorder (DVR) or other storage apparatus, optionally while maintaining appropriate copyright and digital rights management (DRM) requirements associated with the content being manipulated.
Furthermore, techniques and systems for intra-network resource sharing are also discussed in co-owned U.S. patent application Ser. No. 12/480,591 filed Jun. 8, 2009, entitled “METHODS AND APPARATUS FOR PREMISES CONTENT DISTRIBUTION,” and issued as U.S. Pat. No. 9,866,609 on Dec. 18, 2018, incorporated by reference herein in its entirety. As discussed therein, an apparatus manages content within a single device and for several devices connected to a home network to transfer protected content (including for example audiovisual or multimedia content, applications or data) in a substantially “peer-to-peer” fashion and without resort to a central security server or other such entity. The management, transfer, and “browsing” of content on a single device or on a plurality of devices is accomplished via a content handler. The content handler utilizes various algorithms in conjunction with several “buckets” to make management, transfer, and browsing possible. Further, a scheduling apparatus allows for the reservation of resources on multiple devices within a network. Thus, resources may be treated as if available to any of the network devices. The interfaces 402 which are bidirectional or exclusively outgoing are used by the CPE in the provision of media streams. In some embodiments, the CPE may be integrated in a display device and supply the media stream via digital or analog audio/visual (A/V) ports or via serial transfer. In other embodiments, the CPE is external to the display device, and the media stream may also be provided over a network interface or RF port.
The memory unit 406 provides high-speed memory access to support applications running on the processing unit.
As previously mentioned, the mass storage unit 408 is configured to store the live feeds, metadata, and clips. Further, the storage unit must also be configured to support the provision of a media stream from the stored clips. Thus, in some embodiments the mass storage unit is multi-modal. In some cases, this comprises archival space in lower-speed, low-cost storage (such as optical or magnetic storage systems, etc.) and higher-speed (e.g. in terms of seeking and/or transfer operations, etc.), higher-cost, and/or higher-power-consumption storage (e.g. flash memory, high-RPM hard drives, and/or high-speed volatile memory, etc.). In these embodiments, the higher-speed memory would be used as a cache for active operations (e.g. clip parsing or media streaming, etc.).
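The two-tier storage arrangement described above behaves like an archival store fronted by a small high-speed cache. A minimal sketch follows; the class name and least-recently-used eviction policy are assumptions for illustration, not details from the disclosure.

```python
from collections import OrderedDict

class TieredClipStore:
    """Two-tier clip store: a small high-speed LRU cache in front of
    a large archival tier, mirroring the multi-modal mass storage."""

    def __init__(self, cache_slots):
        self.archive = {}            # lower-speed, low-cost tier
        self.cache = OrderedDict()   # higher-speed tier (LRU order)
        self.cache_slots = cache_slots

    def put(self, clip_id, data):
        self.archive[clip_id] = data

    def get(self, clip_id):
        if clip_id in self.cache:           # cache hit: refresh LRU order
            self.cache.move_to_end(clip_id)
            return self.cache[clip_id]
        data = self.archive[clip_id]        # miss: fetch from archive
        self.cache[clip_id] = data          # promote into fast tier
        if len(self.cache) > self.cache_slots:
            self.cache.popitem(last=False)  # evict least recently used
        return data
```

Active operations such as clip parsing or media streaming would thus read from the fast tier whenever possible.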
The processing unit 404 runs a number of applications to manage the parsing of live feeds into clips, user profiles, the selection of clips, the compilation of series of clips, and the provision of media streams. To facilitate these operations the processing unit is in data communication with the memory unit, mass storage units, and incoming/outgoing interfaces using any number of well known bus or other data interface architectures.
The metadata processing application 418 reviews the metadata from the EIS and content source. Based on this metadata, the metadata processing application parses the live feeds into media clips as discussed above with respect to the headend embodiments.
As discussed above, the HRAA 420 is responsible for applying user preferences and other profile data to compile series of clips (or single clips). In some embodiments, the HRAA accepts inputs from the content rules enforcement application 422.
The media stream creation application 424 uses the clips selected by the HRAA 420 to generate a media stream. The provision of these streams may occur via any of a number of methods. In some cases, the media is streamed over a local IP network to a second display device. Alternatively, a peer-to-peer method may be used over the IP network. In other embodiments, an analog or digital A/V protocol may be used to send the media to a directly connected or integrated display device, or alternatively a high-speed serial communication protocol is utilized. It can be appreciated that literally any method or protocol for provisioning media between two (or more) operatively connected devices may be used consistent with the present invention.
The user profile management application 426 maintains user preferences for use with the HRAA. In one variant, the user profile management application collects preferences from a user as either solicited and/or volunteered preferences. The user profile management application runs a user interface to collect these preferences from the user. Further, in some embodiments, the user profile management application may also glean or passively obtain preferences by monitoring user actions with respect to content (or other related user actions).
The user profile management application may also include a recommendation engine. This recommendation engine may serve other external network-connected devices. In other embodiments, the user profile management application collects data for a recommendation engine running on an external server. Thus, a single profile may be used on multiple devices in either case. Whether the recommendation engine runs locally or on an external network entity, the engine still provides input to the HRAA.
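The passive "learn and unlearn" preference updating described above can be sketched as a simple weighted tally. The action names, weights, and function names are hypothetical placeholders for whatever learning mechanism an actual recommendation engine employs.

```python
def update_profile(profile, action, subject, weights=None):
    """Passively adjust a subject's preference weight based on an
    observed user action; negative weights 'unlearn' a preference."""
    weights = weights or {"watched": 1.0, "requested": 2.0, "skipped": -1.0}
    profile[subject] = profile.get(subject, 0.0) + weights.get(action, 0.0)
    return profile

def top_interests(profile, n=3):
    """Return the n highest-weighted subjects for clip selection."""
    return sorted(profile, key=profile.get, reverse=True)[:n]
```

The resulting ranked interests would then feed the HRAA's clip-selection step, whether the engine runs locally or on an external network entity.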
Distributed Embodiments—
Referring now to
In other configurations, the individual applications discussed above are run in a distributed fashion. In one embodiment, the portion of the HRAA that selects clips is run on a server, and the portion of the HRAA that orders the clips into a series for playback is run on the user's CPE.
Further, in some embodiments, the network entities may simply provision the clips in a video stream via the network directly to a client device 224. It will be appreciated by those skilled in the art given the present disclosure that myriad other divisions of application can be made in various distributed embodiments according to the invention.
Referring now to
Further, referring back to
Mobile Devices—
In some embodiments, a user may receive, view, or request the clips on a wireless mobile device. In these embodiments, the user connects to a media server using a browser, guide application, or other application, and the clips are provisioned to the user via the network to which the wireless device is connected. The wireless network associated with the device may be any of those mentioned above, and may further include WLAN (e.g., Wi-Fi) or cellular technologies such as 3G and 4G technologies (e.g. GSM, UMTS, CDMA, CDMA2000, WCDMA, EV-DO, 3GPP standards, LTE, LTE-A, EDGE, GPRS, HSPA, HSPA+, HSDPA, and/or HSUPA, etc.). Also, the wireless link of the mobile device may be based on a combination of such networks used in parallel, such as for example where the mobile device is a hybrid phone capable of utilizing multiple air interfaces (even simultaneously). Further, in some embodiments, the mobile device may forward clips or other content provisioned through the network to other client devices via the same or a second network (e.g., receive via LTE, and forward via WLAN or PAN such as IEEE 802.15).
In some embodiments, the mobile device comprises a display unit such as a touch-screen display and input device. The display unit is used for, inter alia, displaying requested clips and presenting information and options to the user. Further, audio and even tactile (e.g., vibrational) hardware may be included for presentation of content through other physical sensations.
The mobile device includes a user interface for managing direct user requests, user management of applications, and other functions. In some embodiments, the user interface comprises one or more soft-function keys on the aforementioned display device to allow for contextual input from a user. However, in light of this disclosure, it can be appreciated that a multitude of user interface implementations (including physical buttons, switches, touchpads, scroll wheels/balls, etc.) can be used in accordance with the present invention.
Exemplary methods and apparatus for providing media to IP-based mobile devices are presented in co-owned U.S. patent application Ser. No. 13/403,802 filed Feb. 23, 2012 and entitled "APPARATUS AND METHODS FOR PROVIDING CONTENT TO AN IP-ENABLED DEVICE IN A CONTENT DISTRIBUTION NETWORK", previously incorporated herein. As discussed therein, content is provided to a plurality of IP-enabled devices serviced by a content distribution network. In one embodiment, a network architecture is disclosed which enables delivery of content to such IP-enabled devices without the use of a high-speed data connection; i.e., via another distribution platform (such as, for example, a traditional CATV or other managed distribution network using DOCSIS or in-band QAMs). This capability allows the managed network operator to provide audio/video content services to an IP-enabled device (e.g., mobile wireless content rendering device such as a smartphone or tablet computer) associated with a non-data subscriber of the operator's network. For example, an MSO is able to make content delivery services available to a subscriber's tablet computer (e.g., iPad™) when the owner thereof does not subscribe to the MSO's high-speed data network or services, and instead only subscribes to the MSO's video services. This approach advantageously enables a user to receive content on IP-enabled devices, which are generally more mobile than non-IP devices, thereby enhancing the user experience by allowing the user to receive the content at various locations (as well as that specified in the subscription agreement; e.g., at the subscriber's premises).
See also co-owned U.S. patent application Ser. No. 13/403,814 filed Feb. 23, 2012 and entitled "APPARATUS AND METHODS FOR CONTENT DISTRIBUTION TO PACKET-ENABLED DEVICES VIA A NETWORK BRIDGE", and issued as U.S. Pat. No. 9,426,123 on Aug. 23, 2016, previously incorporated herein, which may be used consistent with the present invention. As discussed therein, content is provided to a plurality of IP-enabled devices serviced by a content distribution network. In an exemplary implementation, extant network structure and function are utilized to the maximum extent in order to minimize MSO investment in providing such services, thereby also enhancing rapid incorporation of the technology and provision of services to the users/subscribers. Given that in some embodiments non-managed or third party networks are used to provide clips to devices consistent with the present invention, it will be appreciated that connectivity with a wide variety of devices (including wireless mobile devices) is achieved. Further, in some implementations, a recommendation engine runs on the mobile device, which receives metadata related to the clips from a media server (e.g. VOD server, headend, or CPE, etc.), and recommends clips or other content. If requested, the clips are then forwarded to the mobile device from a media server (either the same media server or another).
The use of mobile devices further facilitates the “social” aspects of the present invention. Some embodiments include applications for recommending videos to e.g., friends or family and providing incentives to users (e.g., coupons, discounts or exclusive media, etc.) for encouraging others to participate. The invention-enabled mobile device may also be equipped with near-field communication (NFC) or other wireless payment technology such as for example that set forth in ISO Std. 14443 for contactless payment, so as to facilitate use of such coupons, discounts, etc., such as via the well known Google Wallet approach.
Methods of Highlight Generation and Provision—
Methods useful with the exemplary systems and apparatus discussed above are now described.
In a salient aspect of the invention, incoming live feeds are recorded, and time-stamped metadata from sources (either internal or external) able to identify exciting moments and events is used to parse or select portions of the live feeds, so as to permit generation of clips related to the exciting moments and events. Those clips are then sent to users for viewing. In some embodiments, a recommendation engine is used to select clips matching the interests of a particular user or group of users. Single clips or series of clips may be sent to the matched users.
Referring now to
The clips may be presented to the user through recommendation by a recommendation engine, or by recommendation by another user or network entity. Alternatively, the user may request clips be played. The system may also be used to view series of clips whether related or otherwise.
It will be appreciated that the aforementioned metadata need not account for all periods of the live feeds, but rather only certain portions of interest. Further, during clip generation, the period covered by a clip may correspond to a shorter, longer, or identical duration to that specified in its associated metadata. In some embodiments, the metadata works as a lagging indicator of excitement. For example, an exciting event occurs and shortly thereafter it is identified. Metadata may also be used in an anticipatory or predictive fashion, such as where the excitement level associated with a sporting event is predicted to increase under certain scenarios, such as an NFL team getting within the "red zone", the start or finish of the Indy 500, the kickoff of the Super Bowl, etc.
In some embodiments, the metadata may only include the time at which that particular event was identified, rather than the time of the event itself. Thus, in such embodiments, the clip is selected so that it includes a period leading up to the time identified in the metadata. In other embodiments, the metadata provides time detail about the actual event, in addition to the time of identification. Thus, the duration of a clip may correspond directly to the times provided in the metadata. However, it should be noted that a clip may still be extended or truncated in relation to the times provided by the metadata. An event may be identified in a time-stamped metadata file, but the MSO may wish to include surrounding material based on user preferences or content source rules. For example, an exciting play in a sporting event may be identified, but the MSO provides a clip of the entire quarter comprising the event. Alternatively, an entire quarter is highlighted as exciting by the metadata, but the MSO only generates a single clip or series of clips from the exciting quarter. Thus, a "video snack" is presented to the user, as opposed to a long piece of content.
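The two clip-boundary cases above (identification time only, versus an actual event span) reduce to a small windowing rule, sketched below. The function name and the default lead/tail padding values are illustrative assumptions.

```python
def clip_window(identified_at, event_span=None, lead_s=30.0, tail_s=10.0):
    """Derive a clip's (start, end) in feed time.

    If the metadata carries only the time the event was *identified*
    (a lagging indicator), back up by `lead_s` seconds to capture the
    event itself; if it carries the actual event span, pad that span
    on both sides so surrounding material is included.
    """
    if event_span is None:
        return identified_at - lead_s, identified_at + tail_s
    start, end = event_span
    return start - lead_s, end + tail_s
```

An MSO could tune the padding per user preferences or content source rules, extending a clip toward a full quarter or truncating it toward a "video snack" as discussed.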
In embodiments involving previously broadcast material, including material only broadcast minutes or seconds prior, the technology to “lookback” at this material for review is important. Methods and apparatus for subsequent review and manipulation of previously broadcast content are discussed in commonly owned U.S. patent application Ser. No. 10/913,064 filed Aug. 6, 2004 and entitled “TECHNIQUE FOR DELIVERING PROGRAMMING CONTENT BASED ON A MODIFIED NETWORK PERSONAL VIDEO RECORDER SERVICE” previously incorporated herein by reference. As discussed therein, selected programs or program channels may be afforded a network personal video recorder (NPVR) service to enhance a user's enjoyment of programming content. In accordance with the NPVR service, broadcast programs (or at least those broadcast programs afforded the NPVR service) are recorded at a headend of a content delivery network when they are delivered to a user at a set-top terminal. Thus, the user not only may “reserve” for review a future program and a previously broadcast program, but also restart an in-progress program since it has been recorded at the headend regardless of any user request. That is, the NPVR service obviates the need of a proactive effort otherwise required of a typical DVR user, which includes deciding and actively electing in advance what shows to record. In addition, the NPVR service furnishes trick mode functions (e.g., rewind, pause and fast-forward functions) for manipulating a presentation of recorded programming content. Further, the lookback service may be implemented on any device within the content delivery network, including at a CPE capable of receiving and recording the live feeds. Thus, the feeds recorded without specific (or in some cases any) user intervention may be compared to the received excitement metadata for the generation of clips, consistent with the present invention.
This system may be used for review or catch-up. A user may wish to review all exciting clips related to a particular subject to provide them with a quick update or chance to review recent events. For example, a user may want to review an entire season of a sports team. Highlights from that season that were recorded in the lookback material would then be identified and compiled into a series of clips. Naturally, similar procedures are used with other events (single games, news coverage, political campaigns, reality TV, late-night shows, user-created material, TV serials, radio programs, etc.).
In one implementation of the invention, the system utilizes past recordings by the user at the CPE (or NPVR) to generate recommendations. Time-stamp and source data related to past recordings (or the actual recordings themselves) may be uploaded to the headend. The EIS metadata may then be cross-referenced with the recordings made by the user to identify patterns. For example, if a user has recorded the majority of plays made by Peyton Manning, the system identifies the user as a Peyton Manning fan. The system then provides other/future clips or reels related to Peyton Manning (or perhaps individuals or events having less direct relationship, such as Eli Manning, Archie Manning, the Indianapolis Colts, etc.).
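By way of illustration only, the cross-referencing of a user's recordings against EIS subject metadata described above may be sketched as follows; the function name, data shapes, and 50% threshold are hypothetical choices, not part of the invention as claimed:

```python
from collections import Counter

def infer_interests(recordings, eis_metadata, threshold=0.5):
    """Cross-reference a user's recordings with EIS subject metadata.

    recordings   -- list of recording identifiers made by the user
    eis_metadata -- mapping of recording id -> list of subject tags
    Returns the subjects tagged on more than `threshold` of the recordings.
    """
    tags = Counter()
    for rec in recordings:
        for subject in eis_metadata.get(rec, []):
            tags[subject] += 1
    n = len(recordings)
    return {s for s, c in tags.items() if n and c / n > threshold}
```

Under this sketch, a user whose recordings are predominantly tagged with a given player (e.g., the Peyton Manning example above) would be identified as a fan of that player, and related clips or reels could then be recommended.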
The system may also offer “collection completion.” For instance, in one such implementation, if a given user (or device) has recorded most of the content related to a given subject but has missed certain pieces, the system identifies the missing elements directly (and/or indirectly) related to that subject, and offers them to the subscriber as recommendations, as part of a subscription, and/or for individual purchase.
In some embodiments, the lookback feature may be used with a picture-in-picture (PIP) function to display exciting moments from content while a user is viewing another piece of content. For example, a recent exciting clip from a live sporting event may be displayed in such a PIP window on a user's screen while the user is watching a concurrent show. This is useful in a variety of circumstances. For instance, the system may be used to ensure that a user sees all the exciting moments of a particular piece of content without viewing the entire program. A user may not want to watch the entirety of an awards show, but may want to receive updates of excitement in near-real-time.
Also, the system may be used to alert a user that a particular piece of content is becoming interesting. For example, an identified fan of a particular sports team may wish to be notified when a game gets interesting. Further, a viewer may be identified as having previously tuned away from a “boring game” and may wish to see clips or tune back if exciting moments are occurring.
The excitement metadata may also be used to control functions other than the generation of clips. In some embodiments, rather than generate a clip for an event, the system may simply tune the display device to the live feed associated with the metadata. For example, a user may be uninterested in a clip comprising an entire sporting event, but may be interested in watching an exciting game currently in progress. Further, if a user tuned away from an event at a particular excitement level, they may wish to be automatically tuned back if the excitement level increases from that particular level (e.g., “delta excitement”).
In addition, a user may want to be automatically tuned to any content above a certain threshold of excitement. In these embodiments, conflicts may be resolved by tuning to the most exciting event among a group of events above the threshold, or other defaults or user preferences may dictate a resolution (e.g., always prioritizing a favorite subject of the user, or a favorite team/player, etc.). Further, in other variants, an on-screen display option to tune to an exciting game is presented in lieu of automatic tuning. In light of the present disclosure, it can also be appreciated that the system may be used to avoid exciting events (e.g., a user may only find games enjoyable/relaxing when their favorite team has a comfortable lead, and may wish to be warned before tuning to a pitched battle). Anecdotal evidence indicates that regularly watching exciting sporting events, especially where the viewer has a vested interest or emotional connection to the team(s), actually tends to shorten one's life span. Thus, the system may also direct a user away from an event above a certain threshold of excitement, such as to maintain their peace, or in effect extend their life over that which might exist if such events were routinely watched.
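The threshold-based auto-tuning and conflict-resolution logic described above may be sketched as follows; this is a minimal illustration, and the names, the tie-breaking rule favoring a user's favorites, and the normalized score scale are all assumptions:

```python
def select_tune_target(feeds, threshold, favorites=()):
    """Pick a live feed to auto-tune to, per the conflict rules above.

    feeds     -- mapping of feed id -> current excitement score (0..1)
    threshold -- minimum excitement required to trigger auto-tuning
    favorites -- feed ids the user prefers; these win over raw score
    Returns the chosen feed id, or None if nothing exceeds the threshold.
    """
    # Only feeds at or above the threshold are candidates.
    eligible = {f: s for f, s in feeds.items() if s >= threshold}
    if not eligible:
        return None
    # User preferences (e.g., a favorite team) take priority; otherwise
    # default to the most exciting eligible feed.
    preferred = [f for f in eligible if f in favorites]
    pool = preferred if preferred else list(eligible)
    return max(pool, key=lambda f: eligible[f])
```

The “avoidance” variant discussed above could invert the same check, warning the user before tuning to any feed the function would otherwise select.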
In a similar vein, news programming, which frequently and consistently focuses on potentially disturbing events or negative aspects of humanity (e.g., crimes, plane crashes, accidents, etc.), is often associated with inciting depression or other emotional difficulties in those watching it on a routine basis. Hence, metrics similar to “excitement” may be used when evaluating content such as news programs, such as for example a “negativity” index, wherein emotionally upsetting stories such as murders or plane crashes are tuned away from, while more positive portions such as weather forecasts or human interest stories are maintained.
Movies for example might use a “suspense/tension” or “hilarity” metric, so as to cue users to (or away from) suspenseful or humorous portions thereof.
Content rules are also considered when providing clips to users. Users lacking the subscription level to view a particular piece of content may not be able to view all related clips. However, in some embodiments, highlights below a certain time threshold may still be provided. Further, exciting clips may be used as “teasers” to entice a user to upgrade their subscription level or purchase a specific piece of content.
In other variants, a user may be able to subscribe to an “excitement ticket”, where all material highlighted by a specified EIS (or multiple EIS's) is available to that user. In some variants, the ticket is limited to a specific subject (e.g. a certain team or sport, etc.), or may range across multiple different topical areas or genres. Different premiums are charged based on the nature of the specific “tickets”.
In yet other variants, the user is charged on a per-clip and/or per-reel basis. Some subscribers may prefer such a la carte availability of clips or reels. Furthermore, the MSO may see wider initial adoption of the service if users are not forced to commit to larger fees allowing repeated use.
Further, such excitement metadata may be used in the assessment of premiums charged to secondary content providers such as advertisers. In some embodiments, content with increased expectation of excitement may garner a higher fee for the insertion of secondary content. Further, given that exciting clips have greater replay likelihood, attaching secondary content to the replays of that clip warrants higher premiums. Excitement metadata may also be used in real-time in this fashion; in an exemplary embodiment, a secondary content source may select a threshold (or thresholds) at which they want to have their content inserted for maximum effect. Even allowing participation in such a targeted secondary content insertion service may warrant a service fee (separate from that associated with actual insertion of secondary content).
Methods and apparatus for the insertion of secondary content and the use of advertising “avails” are discussed in commonly owned U.S. patent application Ser. No. 12/503,749 filed Jul. 15, 2009, entitled “METHODS AND APPARATUS FOR EVALUATING AN AUDIENCE IN A CONTENT-BASED NETWORK”, which issued as U.S. Pat. No. 9,178,634 on Nov. 3, 2015 and is incorporated herein by reference in its entirety. As discussed therein, the identification, creation, and distribution of audience or viewer qualities to an advertisement management system and/or an advertisement decision maker are disclosed. The system provides viewership data in real-time (or near-real-time), and offers the ability to monitor audience activities regarding, inter alia, broadcast, VOD, and DVR content. This system advantageously allows the content provider to create more targeted advertising campaigns through use of an algorithm that combines advertising placement opportunities with audience qualifiers (i.e., psychographic, geographic, demographic, characteristic, etc. classifications) to create an advertising “inventory” that can be more readily monetized. In different variants, the inventory can be based on historical and/or “real time” data, such that advertising placements can be conducted dynamically based on prevailing audience characteristics or constituency at that time. Methods and apparatus for managing such advertising inventory via a management system are also discussed.
Referring now to
First, at step 702 of the method 700, an excitement or other metric related to a live feed or piece of content is assessed. At step 704, it is determined if the assessed metric meets a given threshold criterion or criteria. If the criterion/criteria is/are not met, no metadata is generated. If the criterion/criteria is/are met, metadata is created at step 706. Information identifying the content or feed is added (step 708), and a time-stamp is created (step 710). At step 712, the metadata is passed on to the automated highlight reel system or other system dependent on the excitement or other metadata. Thus, a method for generating metadata related to exciting events is presented.
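One possible software rendering of steps 702-712 is sketched below; the metric function, threshold, and metadata field names are illustrative assumptions only, not part of the claimed method:

```python
def generate_excitement_metadata(feed_id, assess, threshold, clock):
    """Sketch of method 700: assess a metric, gate on a threshold, and
    emit identified, time-stamped metadata for the highlight reel system."""
    score = assess(feed_id)                  # step 702: assess the metric
    if score < threshold:                    # step 704: threshold check
        return None                          # criteria not met: no metadata
    return {                                 # step 706: create metadata
        "feed": feed_id,                     # step 708: identify content/feed
        "excitement": score,
        "timestamp": clock(),                # step 710: create time-stamp
    }                                        # step 712: pass to reel system
```

The `assess` and `clock` callables stand in for whatever excitement metric and time source a given deployment provides.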
Referring now to
A number of excitement metrics may be implemented consistent with the invention. Simple metrics related to the content may be used. For example, the score of a game is an indicator of excitement. A close game may be more exciting than one with a large differential. However, a record-breaking (or near record-breaking) blowout would be of interest to many users. Thus, interest may be mapped to (i) absolute score; and/or (ii) differential or relative score, but not necessarily in a linear fashion.
Further, lead changes, or large changes in score, often indicate excitement. Thus, for example, points scored per unit of time (e.g., excitement “velocity” or even “acceleration”) can also be used as an excitement metric. Similarly, the number of times the lead changes, or the rate at which lead changes occur per unit time (indicating a pitched battle), also indicates excitement. The temporal placement of such changes may also indicate excitement (i.e., lead changes in the first quarter of a game are generally less exciting than those occurring in the fourth quarter, since the latter have a more immediate relationship to the final outcome of the game).
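As a purely illustrative sketch, a lead-change metric weighted by temporal placement (e.g., the quarter in which each change occurs) might be computed as follows; the event tuple format and weight values are hypothetical:

```python
def lead_change_excitement(events, period_weight):
    """Count lead changes in a game, weighting each by the period in
    which it occurs (late-game changes weigh more, per the text above).

    events        -- chronological list of (period, home_score, away_score)
    period_weight -- mapping of period -> weight
    """
    excitement = 0.0
    leader = 0  # +1: home leads, -1: away leads, 0: no leader yet
    for period, home, away in events:
        sign = (home > away) - (home < away)
        if sign != 0:
            # Only a swap between the two teams counts as a lead change;
            # taking the first lead of the game does not.
            if leader != 0 and sign != leader:
                excitement += period_weight.get(period, 1.0)
            leader = sign
    return excitement
```

A rate-based “velocity” variant could divide this total by elapsed game time.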
The level of noise produced by the crowd (where present) may also indicate excitement. Times of loud cheering or booing indicate excitement. Furthermore, times of unusual silence (or unexpected breaks in cheering) may indicate particularly tense or surprising moments, or at events such as a tennis or golf match, times of intense competition.
At a car race (e.g., the Indy 500), vehicle velocity may be an indicator of excitement level. For example, if the cars are averaging only 100 mph, it is likely that a crash has occurred and the yellow flag is out (which can be viewed as potentially exciting for the more morbid person, or less exciting for those interested in high-speed racing and maneuvering).
In some embodiments, the records of teams and/or players are used to predict excitement. For example, if two teams have won and lost to the same or similar teams it is predicted that their own direct matchup will be a close one. This can also be considered in light of their current standings; two teams who have had exciting matches in the past may be considered of less interest if one of the teams has e.g., already locked up a playoff berth, while the other has been excluded from the playoffs.
Similarly, in the field of politics, the news coverage of an election may be of increased excitement if two candidates have been polling similarly in a given region.
In some embodiments, the number of posts (Twitter, Facebook (likes or posts), blog, website comments, etc.) related to specific content is used as a metric. A large absolute number of posts may indicate high levels of interest. However, some embodiments of the invention account for the expected interest in a certain event (i.e., even a “boring” Superbowl will have a large absolute number of posts, or for instance a large-market baseball team will have more expected posts than a smaller-market team). Thus, the absolute number of posts is weighted against an expected number when gauging excitement.
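Such weighting may be as simple as a ratio of observed to expected post volume, as in the following hypothetical sketch (where how the expectation itself is derived, e.g., from market size or historical averages, is left open):

```python
def post_volume_excitement(observed_posts, expected_posts):
    """Weight the absolute post count against an expected count, so a
    large-market event is not scored as exciting merely for being big.
    Returns a ratio; values well above 1.0 suggest unusual interest."""
    if expected_posts <= 0:
        raise ValueError("expected_posts must be positive")
    return observed_posts / expected_posts
```

Under this sketch, a small-market game drawing twice its expected posts scores higher than a Superbowl drawing slightly fewer posts than expected.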
Web crawling applications can be used to collect and assess such posts from either single or multiple sources. Further, social media sites (e.g., Facebook and Twitter, etc.) may offer live counts, by subject, of posts made on their servers.
The subject matter of posts may also be considered in their assessment. Plain-language machine learning algorithms can be used to glean contextual data from the posts. For example, positive posts about certain content may indicate general excitement, and negative posts may indicate an uninteresting game. Further, a mix of positive and negative posts may indicate an unexciting event (e.g., posts from the fans of the winning and losing teams in a blowout), even in the case in which the absolute number of posts is high. Such plain-language learning and contextual analysis algorithms are useful in classifying the content of a given post.
Further, in some embodiments, metrics directed by the MSO are used. For example, the users of the MSO's network may be presented with an excitement option (on-screen display option, remote button, soft key, or mobile application, etc.) that allows users to highlight exciting moments in the feeds or content provided by the MSO. When the user engages the option, the MSO logs the entry. The excitement associated with the clip is associated with the number of such entries. In some variants, the excitement may also be weighted against the total number of viewers. For example, if 98% of the active viewers find a feed exciting, a higher excitement rating may be assessed, even in comparison to a feed with a higher absolute number of entries but with only 30% of active viewers responding. It can be appreciated by those of ordinary skill, in view of this disclosure, that any number of such “crowd-sourcing” systems to develop aggregate opinions of many users can be applied.
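The fractional weighting in the example above (98% of 100 viewers outranking a larger absolute count from only 30% of 1,000 viewers) can be sketched as follows; the function name is illustrative:

```python
def crowd_excitement(presses, active_viewers):
    """Rate a feed by the fraction of its active viewers who engaged
    the 'exciting' option, rather than by raw press counts alone."""
    if active_viewers == 0:
        return 0.0  # no audience: nothing to rate
    return presses / active_viewers
```

Thus a feed with 98 presses from 100 viewers outranks one with 300 presses from 1,000 viewers, consistent with the percentage-based variant described above.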
In other embodiments, such excitement options are modified to allow users to create their own custom excitement reels for later viewing and/or sharing with other users. Users may identify start and stop times for individual clips. Then, in an editing application, the user arranges those clips into a reel. The editing program may also provide tools for altering the clips themselves (e.g., changing clip duration, zooming, adding audio or text commentary, cropping, filter effects, etc.). The clips/reels are then stored on a CPE or uploaded to a content server. Uploaded content may be shared with other users, or even more broadly via social media such as YouTube or the like.
In addition, the MSO may maintain a presence on social websites, for example by allowing users to “like” or “follow” particular feeds to indicate interest.
In certain variants of such MSO directed schemes, incentives may be used to increase participation or accuracy. Accuracy may be judged by the total number of times a user identifies a feed or piece of content as exciting, versus the number of times enough other users also find that feed or piece of content exciting to exceed the excitement threshold. Honors or discounts may be given to highly active and/or accurate users. For example, if a user is involved in the identification of 100 exciting events, a discount or other incentive on service may be given. Further, in some cases the discount may only be available to users with accuracies above a specific threshold (to discourage “spamming” or random identifications intended to inflate participation).
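One hypothetical way to gate such an incentive on both participation volume and accuracy (the 100-event and 80% figures below are illustrative defaults, the latter not specified in the text):

```python
def earns_incentive(flagged, confirmed, min_confirmed=100, min_accuracy=0.8):
    """Decide whether a user earns a discount or other incentive.

    flagged   -- total times the user flagged content as exciting
    confirmed -- how many of those flags were corroborated by enough
                 other users to exceed the excitement threshold
    """
    if flagged == 0:
        return False
    accuracy = confirmed / flagged
    return confirmed >= min_confirmed and accuracy >= min_accuracy
```

A user with 100 confirmed identifications out of 120 flags qualifies, while a “spamming” user needing 250 flags to reach 100 confirmations does not.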
In other variants of such MSO directed schemes, a futures market may be used to predict excitement. Participants supply/purchase a series of “puts” (option to sell) and/or “calls” (option to buy) as to whether specific shows will rise above or fall below a given excitement level. Users then may do background research on various events (e.g. sporting events, political debates, etc.) to gain an advantage over other users while trading and selling these options. This model applies a market-based approach to prediction. Thus, the system leverages potentially extensive individual user research on a large scale. The system may or may not use money, or other forms of consideration/value. In certain variants, the participants trade credits toward subscription expenses rather than actual funds. Yet other variants include a “contest” with point trading rather than money or credits, and the most successful trader(s) (those with highest point totals) may be given consideration (e.g. prizes, discounts, promotions, notoriety, etc.).
Multiple metrics may be combined to increase the accuracy of an assessment. For instance, an anomalous number of posts about a sporting event may simply be a statistical aberration if there is no scoring data of particular interest. Alternatively, one metric may be faulty for a given reason (one would expect muted cheers if the away team wins a game or makes great plays, even if the game was exciting). Thus, monitoring multiple metrics can ensure that over-reliance on a particular metric does not occur.
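A corroboration requirement of this kind might be sketched as follows; the metric names, weights, and thresholds are illustrative assumptions, and all metric values are presumed normalized to a 0..1 scale:

```python
def combined_excitement(metrics, weights, min_corroborating=2, floor=0.5):
    """Fuse several normalized metrics, but report excitement only when
    at least `min_corroborating` metrics independently exceed `floor`,
    guarding against a single faulty or statistically aberrant signal."""
    corroborating = sum(1 for v in metrics.values() if v >= floor)
    if corroborating < min_corroborating:
        return 0.0  # e.g., a post-count spike with no scoring interest
    total_weight = sum(weights[k] for k in metrics)
    return sum(metrics[k] * weights[k] for k in metrics) / total_weight
```

Here an anomalous post count alone yields no excitement score, while a post spike corroborated by scoring data does.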
In other embodiments, mathematical analyses are applied to the metrics measured. In models similar to those used in decision making in high-frequency trading, an MSO monitors one or more metrics, and such models are used to predict the likelihood that a given trend continues over a given time scale. For example, if one specific minute of an event is found to be exciting, such a model provides probabilities that the next minute will be exciting. Such models may be tailored to time scales ranging from less than one minute up to one month. Functional forms, such as those discussed in “Scaling of the distribution of fluctuations of financial market indices” by P. Gopikrishnan, et al., incorporated herein by reference in its entirety, may be used to generate the aforementioned probabilistic models. As discussed therein, power-law asymptotic functions are used to model the temporal distribution of extreme events (such as those associated with high levels of excitement).
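As a toy illustration only (not the model of the referenced paper), a power-law tail of the form P(X ≥ x) ~ x^(−α), with the exponent α = 3 often reported for financial index fluctuations, could be used to estimate how likely an excitement level at least as extreme is to recur:

```python
def persistence_probability(x, alpha=3.0):
    """Toy power-law tail model: probability of observing an excitement
    level at least as extreme as `x` (in normalized units, x >= 1),
    P(X >= x) ~ x ** -alpha. The alpha = 3 default is an illustrative
    assumption borrowed from the 'cubic law' of market fluctuations."""
    if x < 1:
        return 1.0  # below the tail region, treat recurrence as certain
    return x ** (-alpha)
```

Such a tail function would be one ingredient of a fuller model; calibrating α and the normalization to actual excitement data is outside this sketch.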
It will be appreciated that while the exemplary embodiments of the invention have been discussed primarily in terms of providing content to subscribers, the invention is in no way so limited. The invention may also be applied in other contexts. For example, in the field of sports, when a game is identified as “exciting”, it may be the case that the teams are well matched. Thus, coaches/players may be interested in intelligence related to the opposing team. The present invention may be applied to generate a reel of clips useful in such research. For instance, the system may be utilized to automatically create a reel containing every passing play in a given season for a given team (or when a given quarterback was in play). The reel may then be used to identify patterns and weaknesses in the opposing team's strategy. The system may also be used on proprietary content not publicly released. For example, the present invention may be applied to camera angles that show the position of every player on a football field. Such camera angles are not generally aired on TV, but the system may be applied to the proprietary archive of an organization, rather than exclusively to content in an MSO's archive. The team may then purchase or receive such reels as a subscription service or on a per-reel basis.
As another exemplary application (in the field of politics), a campaign may be interested in highlighting their candidate's best moments (e.g., speeches, town hall meetings, bill passages, leadership in crises, etc.) for an “instant commercial” or other campaign montage, such as for a press release, documentary, website post, or advertisements, etc. The present invention may be applied to automatically locate a collection of such clips, and compose them into a highlight reel. The reel may then be purchased by the campaign. In a similar vein, the present invention may also be applied to opposition research. A campaign may be equally interested in a collection of clips that portray their opponent in the worst light (e.g., gaffes, poor past decisions, critical press coverage, etc.) to make an instant negative advertisement.
Further, using the present invention, such reels may be targeted to or composed for individual viewers or subscribers based on their interests or demographic information. For example, a given subscriber may have very specific views on taxes. In this case, the system composes a reel in which every clip relates to the candidate's position/history on taxes. Thus, campaigns may not only purchase premade reels for general release; using the present invention, a service for making and/or delivering targeted advertising may be provided to a campaign. In some such targeted political advertising embodiments, the campaign may supply a collection of content from its own archive for the creation of the highlight reels.
Hence, various aspects of the present invention may be used in virtually any context in which an individual or group is interested in a “highlight” (or lowlight) reel or other montage of contextually related media. For example, event planners often require such highlight reels for social events (company meetings, conventions, birthdays, concerts, etc.). Reels for these events may be made on a contractual basis, or simply compiled and then made available for purchase. These reels may be created from a content archive maintained by an MSO, or from a private library maintained by the recipient or a third party. For example, cloud or online video/picture upload sites may use the present invention to offer their users instant highlight reels from the uploaded content stored on the site.
It will be recognized that while certain aspects of the invention are described in terms of a specific sequence of steps of a method, these descriptions are only illustrative of the broader methods of the invention, and may be modified as required by the particular application. Certain steps may be rendered unnecessary or optional under certain circumstances. Additionally, certain steps or functionality may be added to the disclosed embodiments, or the order of performance of two or more steps permuted. All such variations are considered to be encompassed within the invention disclosed and claimed herein.
While the above detailed description has shown, described, and pointed out novel features of the invention as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the invention. This description is in no way meant to be limiting, but rather should be taken as illustrative of the general principles of the invention. The scope of the invention should be determined with reference to the claims.
This application is a continuation of and claims priority to co-owned and co-pending U.S. patent application Ser. No. 13/439,683 filed on Apr. 4, 2012 of the same title, issuing as U.S. Pat. No. 9,467,723 on Oct. 11, 2016, which is incorporated herein by reference in its entirety. In addition, the present application is related to commonly owned U.S. patent application Ser. No. 10/913,064 filed Aug. 6, 2004 and entitled “TECHNIQUE FOR DELIVERING PROGRAMMING CONTENT BASED ON A MODIFIED NETWORK PERSONAL VIDEO RECORDER SERVICE”, U.S. patent application Ser. No. 12/414,554 filed Mar. 30, 2009 and entitled “PERSONAL MEDIA CHANNEL APPARATUS AND METHODS”, U.S. patent application Ser. No. 12/414,576 filed Mar. 30, 2009, entitled “RECOMMENDATION ENGINE APPARATUS AND METHODS”, and issued as U.S. Pat. No. 9,215,423 on Dec. 15, 2015, U.S. patent application Ser. No. 13/403,802 filed Feb. 23, 2012 and entitled “APPARATUS AND METHODS FOR PROVIDING CONTENT TO AN IP-ENABLED DEVICE IN A CONTENT DISTRIBUTION NETWORK”, U.S. patent application Ser. No. 13/403,814 filed Feb. 23, 2012 “APPARATUS AND METHODS FOR CONTENT DISTRIBUTION TO PACKET-ENABLED DEVICES VIA A NETWORK BRIDGE”, each of the foregoing being incorporated herein by reference in its entirety.
8750490 | Murtagh et al. | Jun 2014 | B2 |
8750909 | Fan et al. | Jun 2014 | B2 |
8805270 | Maharajh et al. | Aug 2014 | B2 |
8813122 | Montie et al. | Aug 2014 | B1 |
8838149 | Hasek | Sep 2014 | B2 |
8949919 | Cholas et al. | Feb 2015 | B2 |
8995815 | Maharajh et al. | Mar 2015 | B2 |
9021566 | Panayotopoulos et al. | Apr 2015 | B1 |
9124608 | Jin et al. | Sep 2015 | B2 |
9124650 | Maharajh et al. | Sep 2015 | B2 |
9215423 | Kimble et al. | Dec 2015 | B2 |
9473730 | Roy | Oct 2016 | B1 |
20010004768 | Hodge et al. | Jun 2001 | A1 |
20010047516 | Swain et al. | Nov 2001 | A1 |
20010050924 | Herrmann et al. | Dec 2001 | A1 |
20010050945 | Lindsey | Dec 2001 | A1 |
20020002688 | Gregg et al. | Jan 2002 | A1 |
20020024943 | Karaul et al. | Feb 2002 | A1 |
20020026645 | Son et al. | Feb 2002 | A1 |
20020027883 | Belaiche | Mar 2002 | A1 |
20020027894 | Arrakoski et al. | Mar 2002 | A1 |
20020031120 | Rakib | Mar 2002 | A1 |
20020032754 | Logston et al. | Mar 2002 | A1 |
20020042914 | Walker et al. | Apr 2002 | A1 |
20020042921 | Ellis | Apr 2002 | A1 |
20020049755 | Koike | Apr 2002 | A1 |
20020053076 | Landesmann | May 2002 | A1 |
20020056087 | Berezowski et al. | May 2002 | A1 |
20020056125 | Hodge et al. | May 2002 | A1 |
20020059218 | August et al. | May 2002 | A1 |
20020059619 | Lebar | May 2002 | A1 |
20020066033 | Dobbins et al. | May 2002 | A1 |
20020083451 | Gill et al. | Jun 2002 | A1 |
20020087995 | Pedlow et al. | Jul 2002 | A1 |
20020100059 | Buehl et al. | Jul 2002 | A1 |
20020123931 | Splaver et al. | Sep 2002 | A1 |
20020131511 | Zenoni | Sep 2002 | A1 |
20020143607 | Connelly | Oct 2002 | A1 |
20020144267 | Gutta et al. | Oct 2002 | A1 |
20020147771 | Traversat et al. | Oct 2002 | A1 |
20020152091 | Nagaoka et al. | Oct 2002 | A1 |
20020152299 | Traversat et al. | Oct 2002 | A1 |
20020152474 | Dudkiewicz | Oct 2002 | A1 |
20020174430 | Ellis et al. | Nov 2002 | A1 |
20020174433 | Baumgartner et al. | Nov 2002 | A1 |
20020178444 | Trajkovic et al. | Nov 2002 | A1 |
20020188744 | Mani | Dec 2002 | A1 |
20020188869 | Patrick | Dec 2002 | A1 |
20020188947 | Wang et al. | Dec 2002 | A1 |
20020191950 | Wang | Dec 2002 | A1 |
20020194595 | Miller et al. | Dec 2002 | A1 |
20030005453 | Rodriguez et al. | Jan 2003 | A1 |
20030005457 | Faibish et al. | Jan 2003 | A1 |
20030028451 | Ananian | Feb 2003 | A1 |
20030028873 | Lemmons | Feb 2003 | A1 |
20030046704 | Laksono et al. | Mar 2003 | A1 |
20030056217 | Brooks | Mar 2003 | A1 |
20030061618 | Horiuchi et al. | Mar 2003 | A1 |
20030086422 | Klinker et al. | May 2003 | A1 |
20030093790 | Logan et al. | May 2003 | A1 |
20030093794 | Thomas et al. | May 2003 | A1 |
20030097574 | Upton | May 2003 | A1 |
20030115267 | Hinton et al. | Jun 2003 | A1 |
20030118014 | Iyer et al. | Jun 2003 | A1 |
20030123465 | Donahue | Jul 2003 | A1 |
20030135628 | Fletcher et al. | Jul 2003 | A1 |
20030135860 | Dureau | Jul 2003 | A1 |
20030163443 | Wang | Aug 2003 | A1 |
20030165241 | Fransdonk | Sep 2003 | A1 |
20030166401 | Combes et al. | Sep 2003 | A1 |
20030188317 | Liew et al. | Oct 2003 | A1 |
20030200548 | Baran et al. | Oct 2003 | A1 |
20030208767 | Williamson et al. | Nov 2003 | A1 |
20030217137 | Roese et al. | Nov 2003 | A1 |
20030217365 | Caputo | Nov 2003 | A1 |
20030220100 | McElhatten et al. | Nov 2003 | A1 |
20040015986 | Carver et al. | Jan 2004 | A1 |
20040019913 | Wong et al. | Jan 2004 | A1 |
20040031058 | Reisman | Feb 2004 | A1 |
20040034677 | Davey et al. | Feb 2004 | A1 |
20040034877 | Nogues | Feb 2004 | A1 |
20040045032 | Cummings et al. | Mar 2004 | A1 |
20040045035 | Cummings et al. | Mar 2004 | A1 |
20040045037 | Cummings et al. | Mar 2004 | A1 |
20040049694 | Candelore | Mar 2004 | A1 |
20040057457 | Ahn et al. | Mar 2004 | A1 |
20040083177 | Chen et al. | Apr 2004 | A1 |
20040117254 | Nemirofsky et al. | Jun 2004 | A1 |
20040117838 | Karaoguz et al. | Jun 2004 | A1 |
20040133923 | Watson et al. | Jul 2004 | A1 |
20040137918 | Varonen et al. | Jul 2004 | A1 |
20040158870 | Paxton et al. | Aug 2004 | A1 |
20040166832 | Portman et al. | Aug 2004 | A1 |
20040216158 | Blas | Oct 2004 | A1 |
20040230994 | Urdang et al. | Nov 2004 | A1 |
20040250273 | Swix et al. | Dec 2004 | A1 |
20040268398 | Fano et al. | Dec 2004 | A1 |
20040268403 | Krieger et al. | Dec 2004 | A1 |
20050002418 | Yang et al. | Jan 2005 | A1 |
20050002638 | Putterman et al. | Jan 2005 | A1 |
20050005308 | Logan et al. | Jan 2005 | A1 |
20050028208 | Ellis et al. | Feb 2005 | A1 |
20050034171 | Benya | Feb 2005 | A1 |
20050047501 | Yoshida et al. | Mar 2005 | A1 |
20050049886 | Grannan et al. | Mar 2005 | A1 |
20050050579 | Dietz et al. | Mar 2005 | A1 |
20050055220 | Lee et al. | Mar 2005 | A1 |
20050055729 | Atad et al. | Mar 2005 | A1 |
20050071882 | Rodriguez et al. | Mar 2005 | A1 |
20050083921 | McDermott, III | Apr 2005 | A1 |
20050086334 | Aaltonen et al. | Apr 2005 | A1 |
20050086683 | Meyerson | Apr 2005 | A1 |
20050097599 | Plotnick et al. | May 2005 | A1 |
20050108763 | Baran et al. | May 2005 | A1 |
20050114701 | Atkins et al. | May 2005 | A1 |
20050114900 | Ladd et al. | May 2005 | A1 |
20050157731 | Peters | Jul 2005 | A1 |
20050165899 | Mazzola | Jul 2005 | A1 |
20050177855 | Maynard et al. | Aug 2005 | A1 |
20050188415 | Riley | Aug 2005 | A1 |
20050223097 | Ramsayer et al. | Oct 2005 | A1 |
20050228725 | Rao et al. | Oct 2005 | A1 |
20050289616 | Horiuchi et al. | Dec 2005 | A1 |
20050289618 | Hardin | Dec 2005 | A1 |
20060020786 | Helms et al. | Jan 2006 | A1 |
20060021004 | Moran et al. | Jan 2006 | A1 |
20060021019 | Hinton et al. | Jan 2006 | A1 |
20060041905 | Wasilewski | Feb 2006 | A1 |
20060041915 | Dimitrova et al. | Feb 2006 | A1 |
20060047957 | Helms et al. | Mar 2006 | A1 |
20060053463 | Choi | Mar 2006 | A1 |
20060059532 | Dugan et al. | Mar 2006 | A1 |
20060061682 | Bradley et al. | Mar 2006 | A1 |
20060090186 | Santangelo et al. | Apr 2006 | A1 |
20060095940 | Yearwood | May 2006 | A1 |
20060117379 | Bennett et al. | Jun 2006 | A1 |
20060128397 | Choti et al. | Jun 2006 | A1 |
20060130099 | Rooyen | Jun 2006 | A1 |
20060130101 | Wessel van Rooyen | Jun 2006 | A1 |
20060130107 | Gonder et al. | Jun 2006 | A1 |
20060130113 | Carlucci et al. | Jun 2006 | A1 |
20060136964 | Diez et al. | Jun 2006 | A1 |
20060136968 | Han et al. | Jun 2006 | A1 |
20060139379 | Toma et al. | Jun 2006 | A1 |
20060149850 | Bowman | Jul 2006 | A1 |
20060156392 | Baugher et al. | Jul 2006 | A1 |
20060161635 | Lamkin et al. | Jul 2006 | A1 |
20060165082 | Pfeffer et al. | Jul 2006 | A1 |
20060165173 | Kim et al. | Jul 2006 | A1 |
20060171423 | Helms et al. | Aug 2006 | A1 |
20060187900 | Akbar | Aug 2006 | A1 |
20060206712 | Dillaway et al. | Sep 2006 | A1 |
20060209799 | Gallagher et al. | Sep 2006 | A1 |
20060218601 | Michel | Sep 2006 | A1 |
20060218604 | Riedl et al. | Sep 2006 | A1 |
20060221246 | Yoo | Oct 2006 | A1 |
20060224690 | Falkenburg et al. | Oct 2006 | A1 |
20060236358 | Liu et al. | Oct 2006 | A1 |
20060238656 | Chen et al. | Oct 2006 | A1 |
20060248553 | Mikkelson et al. | Nov 2006 | A1 |
20060259927 | Acharya et al. | Nov 2006 | A1 |
20060288366 | Boylan, III | Dec 2006 | A1 |
20060291506 | Cain | Dec 2006 | A1 |
20070014293 | Filsfils et al. | Jan 2007 | A1 |
20070019645 | Menon | Jan 2007 | A1 |
20070022459 | Gaebel et al. | Jan 2007 | A1 |
20070022469 | Cooper et al. | Jan 2007 | A1 |
20070025372 | Brenes et al. | Feb 2007 | A1 |
20070033282 | Mao et al. | Feb 2007 | A1 |
20070033531 | Marsh | Feb 2007 | A1 |
20070038671 | Holm et al. | Feb 2007 | A1 |
20070049245 | Lipman | Mar 2007 | A1 |
20070050822 | Stevens et al. | Mar 2007 | A1 |
20070053513 | Hoffberg et al. | Mar 2007 | A1 |
20070061023 | Hoffberg et al. | Mar 2007 | A1 |
20070067851 | Fernando et al. | Mar 2007 | A1 |
20070073704 | Bowden et al. | Mar 2007 | A1 |
20070076728 | Rieger et al. | Apr 2007 | A1 |
20070081537 | Wheelock | Apr 2007 | A1 |
20070094691 | Gazdzinski | Apr 2007 | A1 |
20070113246 | Xiong | May 2007 | A1 |
20070118848 | Schwesinger et al. | May 2007 | A1 |
20070121578 | Annadata et al. | May 2007 | A1 |
20070121678 | Brooks et al. | May 2007 | A1 |
20070124488 | Baum et al. | May 2007 | A1 |
20070127519 | Hasek et al. | Jun 2007 | A1 |
20070136777 | Hasek et al. | Jun 2007 | A1 |
20070153820 | Gould | Jul 2007 | A1 |
20070154041 | Beauchamp | Jul 2007 | A1 |
20070157228 | Bayer et al. | Jul 2007 | A1 |
20070157234 | Walker | Jul 2007 | A1 |
20070157262 | Ramaswamy et al. | Jul 2007 | A1 |
20070180230 | Cortez | Aug 2007 | A1 |
20070192103 | Sato et al. | Aug 2007 | A1 |
20070204300 | Markley et al. | Aug 2007 | A1 |
20070204308 | Nicholas et al. | Aug 2007 | A1 |
20070204314 | Hasek et al. | Aug 2007 | A1 |
20070209054 | Cassanova | Sep 2007 | A1 |
20070209059 | Moore et al. | Sep 2007 | A1 |
20070217436 | Markley et al. | Sep 2007 | A1 |
20070219910 | Martinez | Sep 2007 | A1 |
20070226365 | Hildreth et al. | Sep 2007 | A1 |
20070245376 | Svendsen | Oct 2007 | A1 |
20070250880 | Hainline | Oct 2007 | A1 |
20070261089 | Aaby et al. | Nov 2007 | A1 |
20070261116 | Prafullchandra et al. | Nov 2007 | A1 |
20070276925 | La Joie et al. | Nov 2007 | A1 |
20070276926 | Lajoie et al. | Nov 2007 | A1 |
20070280298 | Hearn et al. | Dec 2007 | A1 |
20070288637 | Layton et al. | Dec 2007 | A1 |
20070288715 | Boswell et al. | Dec 2007 | A1 |
20070294717 | Hill et al. | Dec 2007 | A1 |
20070294738 | Kuo et al. | Dec 2007 | A1 |
20070299728 | Nemirofsky et al. | Dec 2007 | A1 |
20080021836 | Lao | Jan 2008 | A1 |
20080022012 | Wang | Jan 2008 | A1 |
20080027801 | Walter et al. | Jan 2008 | A1 |
20080036917 | Pascarella et al. | Feb 2008 | A1 |
20080059804 | Shah et al. | Mar 2008 | A1 |
20080066095 | Reinoso | Mar 2008 | A1 |
20080066112 | Bailey et al. | Mar 2008 | A1 |
20080085750 | Yoshizawa | Apr 2008 | A1 |
20080086750 | Yasrebi et al. | Apr 2008 | A1 |
20080091805 | Malaby et al. | Apr 2008 | A1 |
20080091807 | Strub et al. | Apr 2008 | A1 |
20080092058 | Afergan et al. | Apr 2008 | A1 |
20080092181 | Britt | Apr 2008 | A1 |
20080098212 | Helms et al. | Apr 2008 | A1 |
20080098241 | Cheshire | Apr 2008 | A1 |
20080098450 | Wu et al. | Apr 2008 | A1 |
20080101460 | Rodriguez | May 2008 | A1 |
20080109853 | Einarsson et al. | May 2008 | A1 |
20080112405 | Cholas et al. | May 2008 | A1 |
20080133551 | Wensley et al. | Jun 2008 | A1 |
20080134043 | Georgis | Jun 2008 | A1 |
20080134165 | Anderson et al. | Jun 2008 | A1 |
20080137541 | Agarwal et al. | Jun 2008 | A1 |
20080137740 | Thoreau et al. | Jun 2008 | A1 |
20080155059 | Hardin et al. | Jun 2008 | A1 |
20080155614 | Cooper et al. | Jun 2008 | A1 |
20080162353 | Tom et al. | Jul 2008 | A1 |
20080168487 | Chow et al. | Jul 2008 | A1 |
20080170530 | Connors et al. | Jul 2008 | A1 |
20080170551 | Zaks | Jul 2008 | A1 |
20080177998 | Apsangi et al. | Jul 2008 | A1 |
20080178225 | Jost | Jul 2008 | A1 |
20080184344 | Hernacki et al. | Jul 2008 | A1 |
20080189617 | Covell et al. | Aug 2008 | A1 |
20080192820 | Brooks et al. | Aug 2008 | A1 |
20080198780 | Sato | Aug 2008 | A1 |
20080200154 | Maharajh et al. | Aug 2008 | A1 |
20080201386 | Maharajh et al. | Aug 2008 | A1 |
20080201736 | Gordon et al. | Aug 2008 | A1 |
20080201748 | Hasek et al. | Aug 2008 | A1 |
20080215755 | Farber et al. | Sep 2008 | A1 |
20080222684 | Mukraj et al. | Sep 2008 | A1 |
20080229379 | Akhter | Sep 2008 | A1 |
20080235746 | Peters et al. | Sep 2008 | A1 |
20080244667 | Osborne | Oct 2008 | A1 |
20080256615 | Schlacht et al. | Oct 2008 | A1 |
20080273591 | Brooks et al. | Nov 2008 | A1 |
20080279534 | Buttars | Nov 2008 | A1 |
20080281971 | Leppanen et al. | Nov 2008 | A1 |
20080282299 | Koat et al. | Nov 2008 | A1 |
20080297669 | Zalewski et al. | Dec 2008 | A1 |
20080306903 | Larson et al. | Dec 2008 | A1 |
20080320523 | Morris et al. | Dec 2008 | A1 |
20080320528 | Kim et al. | Dec 2008 | A1 |
20080320540 | Brooks et al. | Dec 2008 | A1 |
20090006211 | Perry et al. | Jan 2009 | A1 |
20090025027 | Craner | Jan 2009 | A1 |
20090030802 | Plotnick et al. | Jan 2009 | A1 |
20090031335 | Hendricks et al. | Jan 2009 | A1 |
20090064221 | Stevens | Mar 2009 | A1 |
20090070842 | Corson | Mar 2009 | A1 |
20090076898 | Wang et al. | Mar 2009 | A1 |
20090077583 | Sugiyama et al. | Mar 2009 | A1 |
20090083279 | Hasek | Mar 2009 | A1 |
20090083811 | Dolce et al. | Mar 2009 | A1 |
20090083813 | Dolce et al. | Mar 2009 | A1 |
20090086643 | Kotrla et al. | Apr 2009 | A1 |
20090094347 | Ting et al. | Apr 2009 | A1 |
20090098861 | Kalliola et al. | Apr 2009 | A1 |
20090100459 | Riedl et al. | Apr 2009 | A1 |
20090100493 | Jones et al. | Apr 2009 | A1 |
20090119703 | Piepenbrink et al. | May 2009 | A1 |
20090132347 | Anderson et al. | May 2009 | A1 |
20090133048 | Gibbs et al. | May 2009 | A1 |
20090133090 | Busse | May 2009 | A1 |
20090141696 | Chou et al. | Jun 2009 | A1 |
20090150210 | Athsani et al. | Jun 2009 | A1 |
20090150917 | Huffman et al. | Jun 2009 | A1 |
20090151006 | Saeki et al. | Jun 2009 | A1 |
20090158311 | Hon et al. | Jun 2009 | A1 |
20090172776 | Makagon et al. | Jul 2009 | A1 |
20090175218 | Song et al. | Jul 2009 | A1 |
20090178089 | Picco | Jul 2009 | A1 |
20090185576 | Kisel et al. | Jul 2009 | A1 |
20090187939 | Lajoie | Jul 2009 | A1 |
20090187944 | White et al. | Jul 2009 | A1 |
20090193097 | Gassewitz | Jul 2009 | A1 |
20090193486 | Patel et al. | Jul 2009 | A1 |
20090201917 | Maes et al. | Aug 2009 | A1 |
20090210899 | Lawrence-Apfelbaum et al. | Aug 2009 | A1 |
20090210912 | Cholas et al. | Aug 2009 | A1 |
20090225760 | Foti | Sep 2009 | A1 |
20090228941 | Russell et al. | Sep 2009 | A1 |
20090235308 | Ehlers et al. | Sep 2009 | A1 |
20090248794 | Helms et al. | Oct 2009 | A1 |
20090282241 | Prafullchandra et al. | Nov 2009 | A1 |
20090282449 | Lee | Nov 2009 | A1 |
20090293101 | Carter et al. | Nov 2009 | A1 |
20090296621 | Park et al. | Dec 2009 | A1 |
20090327057 | Redlich | Dec 2009 | A1 |
20100012568 | Fujisawa et al. | Jan 2010 | A1 |
20100027560 | Yang et al. | Feb 2010 | A1 |
20100027787 | Benkert et al. | Feb 2010 | A1 |
20100030578 | Siddique et al. | Feb 2010 | A1 |
20100031299 | Harrang et al. | Feb 2010 | A1 |
20100036720 | Jain et al. | Feb 2010 | A1 |
20100042478 | Reisman | Feb 2010 | A1 |
20100043030 | White | Feb 2010 | A1 |
20100083329 | Joyce et al. | Apr 2010 | A1 |
20100083362 | Francisco et al. | Apr 2010 | A1 |
20100086020 | Schlack | Apr 2010 | A1 |
20100106846 | Noldus et al. | Apr 2010 | A1 |
20100115091 | Park et al. | May 2010 | A1 |
20100115113 | Short et al. | May 2010 | A1 |
20100115540 | Fan et al. | May 2010 | A1 |
20100121936 | Liu et al. | May 2010 | A1 |
20100122274 | Gillies et al. | May 2010 | A1 |
20100122276 | Chen | May 2010 | A1 |
20100125658 | Strasters | May 2010 | A1 |
20100131973 | Dillon et al. | May 2010 | A1 |
20100132003 | Bennett et al. | May 2010 | A1 |
20100135646 | Bang et al. | Jun 2010 | A1 |
20100138900 | Peterka et al. | Jun 2010 | A1 |
20100145917 | Bone | Jun 2010 | A1 |
20100146541 | Velazquez | Jun 2010 | A1 |
20100162367 | Lajoie et al. | Jun 2010 | A1 |
20100169503 | Kollmansberger et al. | Jul 2010 | A1 |
20100169977 | Dasher et al. | Jul 2010 | A1 |
20100186029 | Kim et al. | Jul 2010 | A1 |
20100198655 | Ketchum et al. | Aug 2010 | A1 |
20100199299 | Chang et al. | Aug 2010 | A1 |
20100199312 | Chang et al. | Aug 2010 | A1 |
20100217613 | Kelly | Aug 2010 | A1 |
20100218231 | Frink et al. | Aug 2010 | A1 |
20100219613 | Zaloom et al. | Sep 2010 | A1 |
20100251304 | Donoghue et al. | Sep 2010 | A1 |
20100251305 | Kimble et al. | Sep 2010 | A1 |
20100262461 | Bohannon | Oct 2010 | A1 |
20100262999 | Curran | Oct 2010 | A1 |
20100269144 | Forsman et al. | Oct 2010 | A1 |
20100280641 | Harkness et al. | Nov 2010 | A1 |
20100287588 | Cox et al. | Nov 2010 | A1 |
20100287609 | Gonzalez et al. | Nov 2010 | A1 |
20100293494 | Schmidt | Nov 2010 | A1 |
20100312826 | Sarosi et al. | Dec 2010 | A1 |
20100313225 | Cholas et al. | Dec 2010 | A1 |
20100325547 | Keng et al. | Dec 2010 | A1 |
20100333137 | Hamano et al. | Dec 2010 | A1 |
20110015989 | Tidwell et al. | Jan 2011 | A1 |
20110016479 | Tidwell et al. | Jan 2011 | A1 |
20110016482 | Tidwell et al. | Jan 2011 | A1 |
20110055866 | Piepenbrink et al. | Mar 2011 | A1 |
20110058675 | Brueck et al. | Mar 2011 | A1 |
20110071841 | Fomenko et al. | Mar 2011 | A1 |
20110078001 | Archer et al. | Mar 2011 | A1 |
20110078005 | Klappert | Mar 2011 | A1 |
20110078731 | Nishimura | Mar 2011 | A1 |
20110083069 | Paul et al. | Apr 2011 | A1 |
20110083144 | Bocharov et al. | Apr 2011 | A1 |
20110090898 | Patel et al. | Apr 2011 | A1 |
20110093900 | Patel et al. | Apr 2011 | A1 |
20110099017 | Ure | Apr 2011 | A1 |
20110102600 | Todd | May 2011 | A1 |
20110103374 | Lajoie et al. | May 2011 | A1 |
20110107364 | Lajoie et al. | May 2011 | A1 |
20110107379 | Lajoie et al. | May 2011 | A1 |
20110110515 | Tidwell et al. | May 2011 | A1 |
20110126018 | Narsinh et al. | May 2011 | A1 |
20110126244 | Hasek | May 2011 | A1 |
20110126246 | Thomas et al. | May 2011 | A1 |
20110138064 | Rieger et al. | Jun 2011 | A1 |
20110145049 | Hertel et al. | Jun 2011 | A1 |
20110154383 | Hao et al. | Jun 2011 | A1 |
20110166932 | Smith et al. | Jul 2011 | A1 |
20110173053 | Aaltonen et al. | Jul 2011 | A1 |
20110173095 | Kassaei et al. | Jul 2011 | A1 |
20110178943 | Motahari et al. | Jul 2011 | A1 |
20110191801 | Vytheeswaran | Aug 2011 | A1 |
20110212756 | Packard et al. | Sep 2011 | A1 |
20110213688 | Santos et al. | Sep 2011 | A1 |
20110219229 | Cholas et al. | Sep 2011 | A1 |
20110219411 | Smith | Sep 2011 | A1 |
20110223944 | Gosselin | Sep 2011 | A1 |
20110231660 | Kanungo | Sep 2011 | A1 |
20110239253 | West et al. | Sep 2011 | A1 |
20110246616 | Ronca et al. | Oct 2011 | A1 |
20110258049 | Ramer et al. | Oct 2011 | A1 |
20110264530 | Santangelo et al. | Oct 2011 | A1 |
20110265116 | Stern et al. | Oct 2011 | A1 |
20110276881 | Keng et al. | Nov 2011 | A1 |
20110277008 | Smith | Nov 2011 | A1 |
20110302624 | Chen et al. | Dec 2011 | A1 |
20120005527 | Engel et al. | Jan 2012 | A1 |
20120008786 | Cronk et al. | Jan 2012 | A1 |
20120011567 | Cronk et al. | Jan 2012 | A1 |
20120023535 | Brooks et al. | Jan 2012 | A1 |
20120030363 | Conrad | Feb 2012 | A1 |
20120072526 | Kling et al. | Mar 2012 | A1 |
20120076015 | Pfeffer | Mar 2012 | A1 |
20120079523 | Trimper et al. | Mar 2012 | A1 |
20120089699 | Cholas | Apr 2012 | A1 |
20120096106 | Blumofe et al. | Apr 2012 | A1 |
20120124149 | Gross et al. | May 2012 | A1 |
20120124606 | Tidwell et al. | May 2012 | A1 |
20120124612 | Adimatyam et al. | May 2012 | A1 |
20120137332 | Kumar | May 2012 | A1 |
20120144195 | Nair et al. | Jun 2012 | A1 |
20120144416 | Wetzer et al. | Jun 2012 | A1 |
20120151077 | Finster | Jun 2012 | A1 |
20120166530 | Tseng | Jun 2012 | A1 |
20120167132 | Mathews et al. | Jun 2012 | A1 |
20120170544 | Cheng et al. | Jul 2012 | A1 |
20120170741 | Chen et al. | Jul 2012 | A1 |
20120173746 | Salinger et al. | Jul 2012 | A1 |
20120185693 | Chen et al. | Jul 2012 | A1 |
20120185899 | Riedl et al. | Jul 2012 | A1 |
20120191844 | Boyns et al. | Jul 2012 | A1 |
20120215878 | Kidron | Aug 2012 | A1 |
20120215903 | Fleischman et al. | Aug 2012 | A1 |
20120246462 | Moroney et al. | Sep 2012 | A1 |
20120278833 | Tam et al. | Nov 2012 | A1 |
20120284804 | Lindquist et al. | Nov 2012 | A1 |
20120308071 | Ramsdell et al. | Dec 2012 | A1 |
20120324552 | Padala et al. | Dec 2012 | A1 |
20130014140 | Ye et al. | Jan 2013 | A1 |
20130014171 | Sansom et al. | Jan 2013 | A1 |
20130024888 | Sivertsen | Jan 2013 | A1 |
20130024891 | Elend et al. | Jan 2013 | A1 |
20130031578 | Zhu et al. | Jan 2013 | A1 |
20130039338 | Suzuki et al. | Feb 2013 | A1 |
20130046849 | Wolf et al. | Feb 2013 | A1 |
20130073400 | Heath | Mar 2013 | A1 |
20130097647 | Brooks et al. | Apr 2013 | A1 |
20130117692 | Padmanabhan et al. | May 2013 | A1 |
20130132986 | Mack et al. | May 2013 | A1 |
20130133010 | Chen | May 2013 | A1 |
20130166906 | Swaminathan et al. | Jun 2013 | A1 |
20130174271 | Handal et al. | Jul 2013 | A1 |
20130179588 | McCarthy et al. | Jul 2013 | A1 |
20130219178 | Xiques et al. | Aug 2013 | A1 |
20130227283 | Williamson et al. | Aug 2013 | A1 |
20130227284 | Pfeffer et al. | Aug 2013 | A1 |
20130311464 | Nix et al. | Nov 2013 | A1 |
20140012843 | Soon-Shiong | Jan 2014 | A1 |
20140074855 | Zhao et al. | Mar 2014 | A1 |
20140201799 | Smith | Jul 2014 | A1 |
20140230003 | Ma et al. | Aug 2014 | A1 |
20140245341 | Mack et al. | Aug 2014 | A1 |
20140259182 | Mershon | Sep 2014 | A1 |
20150020126 | Kegel et al. | Jan 2015 | A1 |
20150040176 | Hybertson et al. | Feb 2015 | A1 |
20150095932 | Ren | Apr 2015 | A1 |
20150109122 | Stern et al. | Apr 2015 | A1 |
20150161386 | Gupta et al. | Jun 2015 | A1 |
20150163540 | Masterson | Jun 2015 | A1 |
20160241617 | Jelley et al. | Aug 2016 | A1 |
Number | Date | Country |
---|---|---|
1087619 | Mar 2001 | EP |
1821459 | Aug 2007 | EP |
2001275090 | Oct 2001 | JP |
2005519365 | Jun 2005 | JP |
2005519501 | Jun 2005 | JP |
2005339093 | Dec 2005 | JP |
2008015936 | Jan 2008 | JP |
2009211632 | Sep 2009 | JP |
2010502109 | Jan 2010 | JP |
2010079902 | Apr 2010 | JP |
2012505436 | Mar 2012 | JP |
2012523614 | Oct 2012 | JP |
WO-0011871 | Mar 2000 | WO |
WO-0052928 | Sep 2000 | WO |
WO-0110125 | Feb 2001 | WO |
WO-0139505 | May 2001 | WO |
WO-0156285 | Aug 2001 | WO |
WO-0195610 | Dec 2001 | WO |
WO-0195621 | Dec 2001 | WO |
WO-2005015422 | Feb 2005 | WO |
WO-2005031524 | Apr 2005 | WO |
WO-2007060451 | May 2007 | WO |
WO-2010008487 | Jan 2010 | WO |
WO-2011035443 | Mar 2011 | WO |
WO-2011053858 | May 2011 | WO |
WO-2012021245 | Feb 2012 | WO |
WO-2012114140 | Aug 2012 | WO |
Entry |
---|
US 6,940,674 B2, 09/2005, Sakamoto (withdrawn) |
Apple Inc., HTTP Live Streaming Overview, Apr. 1, 2011. |
Buss, “Ultra TV”, Brandmarketing, Sep. 1999, vol. VI, No. 9, p. 74, ISSN 1091-6962, 1999 Responsive Database Services, Inc. Business and Industry; 1999 Fairchild Publications. |
Cantor, et al., Bindings for the OASIS Security Assertion Markup Language (SAML) V2.0, OASIS Standard, Mar. 2005 (http://docs.oasis-open.org/security/saml/v2.0/). |
Extended European Search Report for Application No. EP13155950, dated Jun. 14, 2013, 6 pages. |
Flynn, et al., “Interactive TV, CNNFn”, transcript #00081407FN-111, interview with Josh Bernoff, Digital Jam, Aug. 14, 2000. |
Furchgott, “Don't want people to control their T.V.s?”, The New York Times, Aug. 24, 2000, Section G, p. 1, col. 2, Circuits, 2000 The New York Times Company. |
Future VOD role of studios vs. other companies debated, Video Week, Apr. 10, 2000, section: This Week's News, 2000 Warren Publishing, Inc. |
Gopikrishnan, et al., Scaling of the distribution of fluctuations of financial market indices, Physical Review E, vol. 60, No. 5, Nov. 1999. |
Gunther, et al.,“When technology attacks!; Your T.V. is looking weird. Network executives are getting flustered. Viewing choices are exploding. That's what happens . . . ”, Fortune, Mar. 6, 2000, section: Features/Television, p. 152, 2000 Time Inc. |
“Independent study shows TiVo service increases enjoyment and changes people's attitudes towards T.V.”, PR Newswire, May 2, 2000, 2000 FT Asia Intelligence Wire; 2000 PR Newswire. |
Kale, RFC 1180 “A TCP/IP tutorial”, Jan. 1991, Spider Systems Limited, Section 4 “ARP”. |
Larsen, Peter Thal, “Inside Track: TV viewers can box clever: Technology Video Recorders: personal video recorders will be a godsend for viewers. But what about the schedulers”, Financial Times London Ed., Jun. 23, 2000, p. 18, ISSN 0307-1766, 2000 Responsive Database Services, Inc. Business and Industry; 2000 Financial Times Ltd. |
Lowry, Television, as you like it; Today's gadgetry is smart enough to let viewers choose camera angles, or kick back and rewind as the action unfolds live. Watch it, and it watches back, Los Angeles Times, Feb. 13, 2000, section: Calendar, p. 8, Calendar Desk, 2000 Times Mirror Company. |
“More ‘convergence’ digital video recorders emerge”, Video Week, Jun. 19, 2000, section: This Week's News, 2000 Warren Publishing, Inc. |
OpenCable, Enhanced TV Binary Interchange Format 1.0, OC-SP-ETV-BIF1.0-I04-070921, Sep. 21, 2007, 420 pages. |
OpenCable Specifications, Alternate Content, Real-Time Event Signaling and Management API, OC-SP-ESAM-API-I01-120910 (2012). |
“PVR copyright concerns raised”, Audio Week, Aug. 23, 1999, section: This Week's News, 1999 Warren Publishing, Inc. |
Ramakrishnan, et al., Operating System Support for a Video-On-Demand File Service, Digital Equipment Corporation, 1995, p. 4 (“CMFAP”). |
Redux screenshot from http://www.redux.com, “Select a channel to start watching”, © 2014 Redux, Inc. All rights reserved; http://www.redux.com/; 2 pages. |
Sabga, et al., “TiVo—CEO, CNNfn”, transcript #00090110FN-107, interview with Michael Ramsay, The N.E.W. Show, Sep. 1, 2000, Fri. 5:18 p.m. EST, 2000 Cable News Network. |
Schonfeld, “Thuuz tells sports fans if a game is worth watching”, Oct. 7, 2010, TC News from http://techcrunch.com/2010/10/07/thuuz, 2 pages. |
SCTE American National Standard ANSI/SCTE 118-2 2007. |
SCTE American National Standard ANSI/SCTE 130-1 2008. |
SCTE, American National Standard, ANSI/SCTE 35 2012. |
Snoddy, “The TiVo—T.V.'s nemesis?”, Times Newspapers Ltd., Sep. 1, 2000, section: Features, 2000 Times Newspapers Limited (the Times London). |
“TiVo and replay sign cable deals to boost PVR distribution”, Warren's Cable Regulation Monitor, Aug. 21, 2000, section: This Week's News, 2000 Warren Publishing, Inc. |
UTF-32, IBM, retrieved from http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=%2Fnls%2Frbagsutf32.htm on Aug. 28, 2013. |
Zambelli, IIS Smooth Streaming Technical Overview, Mar. 2009. |
DLNA (Digital Living Network Alliance) protocols described in DLNA Networked Device Interoperability Guidelines Expanded, Mar. 2006 and subsequent expanded version dated Oct. 2006. |
DOCSIS 3.0 Management Features Differences Technical Report, CM-TR-MGMTv3.0-DIFF-V01-071228, pp. 1-62, (2007). |
DOCSIS 3.0 OSSI Configuration Management Technical Report, CM-TR-OSSIv3.0-CM-V01-080926, pp. 1-84, (2008). |
MPEG Headers Quick Reference, http://dvd.sourceforge.net/dvdinfo/mpeghdrs.html, Mar. 6, 2006. |
OpenCable Specifications, Tuning Resolver Interface Specification, OC-SP-TRIF-I01-080130, Jan. 30, 2008, pp. 1-50. |
Siebenlist F., et al., “Global Grid Forum Specification Roadmap towards a Secure OGSA,” Jul. 2002, pp. 1-22. |
Florin L., et al., “Content Delivery and Management in Networked MPEG-4 System,” 2000 10th European Signal Processing Conference, IEEE, Sep. 4, 2000 (Sep. 4, 2000), pp. 1-4, XP032755920, ISBN: 978-952-15-0443-3 [retrieved on Mar. 31, 2015]. |
Pantjiaros C.A. P., et al., “Broadband Service Delivery: CY.T.A. ADSL Field Trial Experience”, Electrotechnical Conference, 2000 MELECON, 2000 10th Mediterranean, May 29-31, 2000, Piscataway, NJ, USA, IEEE, vol. 1, May 29, 2000 (May 29, 2000), pp. 221-224, XP010518859, ISBN: 978-0-7803-6290-1. |
Number | Date | Country | |
---|---|---|---|
20170099512 A1 | Apr 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13439683 | Apr 2012 | US |
Child | 15289798 | US |