Field of the Invention
Embodiments described herein relate generally to a reception apparatus, an information providing apparatus, methods, and non-transitory computer-readable storage media for providing and/or processing non-closed caption data provided in a closed caption service. More particularly, embodiments of the present application relate generally to non-closed caption data transported in a standard caption service.
Background
Embodiments of the present disclosure arise out of the need to find a reliable transport method for adjunct data such as interactive television (iTV) triggers from a content creator, through the distribution chain, and finally to an iTV receiver. A number of “roadblocks” are well known, including the presence of the HDMI interface between a cable or satellite set-top box (STB) and the iTV receiver.
According to an embodiment of the present disclosure, there is provided a reception apparatus. The reception apparatus includes a receiver, a parser, and a processor. The receiver receives closed caption service data. The closed caption service data includes closed caption data within a first service block having a service number in the range of 1-6, and non-closed caption data within a second service block having a different service number in the range of 1-6. The closed caption data includes closed caption text. The parser parses the non-closed caption data within the second service block having the different service number in the range of 1-6. The processor performs a function based on the parsed non-closed caption data.
According to an embodiment of the present disclosure, there is provided a method of a reception apparatus for processing non-closed caption data. The method includes receiving by the reception apparatus closed caption service data. The closed caption service data includes closed caption data within a first service block having a service number in the range of 1-6, and non-closed caption data within a second service block having a different service number in the range of 1-6. The closed caption data includes closed caption text. A parser of the reception apparatus parses the non-closed caption data within the second service block having the different service number in the range of 1-6. A processor of the reception apparatus performs a function based on the parsed non-closed caption data.
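The receive/parse/perform sequence above can be sketched as follows. This is a minimal illustration, not the receiver implementation: the service numbers, the (service number, payload) representation of service blocks, and the handler names are assumptions for the example.

```python
# Minimal sketch of the reception-side method described above.
# The chosen service numbers and block representation are illustrative.

CAPTION_SERVICE = 1      # first service block: closed caption text
ADJUNCT_SERVICE = 6      # second service block: non-closed caption data

def process_caption_service_data(service_blocks):
    """service_blocks: list of (service_number, payload) tuples."""
    caption_text = []
    adjunct_payloads = []
    for service_number, payload in service_blocks:
        if service_number == CAPTION_SERVICE:
            caption_text.append(payload)       # render as captions
        elif service_number == ADJUNCT_SERVICE:
            adjunct_payloads.append(payload)   # parse, then perform a function
        # blocks for non-selected services are simply skipped
    return caption_text, adjunct_payloads

blocks = [(1, "Hello, world."), (6, "xbc.tv/7a1?mt=200909"), (2, "Hola.")]
text, adjunct = process_caption_service_data(blocks)
```

Here the block with service number 2 is ignored, mirroring how a decoder filters out services it is not set to decode.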
Further, in an embodiment of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions which, when executed by a computer, cause the computer to perform the above-described method of the reception apparatus.
According to an embodiment of the present disclosure, there is provided an information providing apparatus. The information providing apparatus includes a closed caption unit configured to generate or receive closed caption service data associated with audio/video (A/V) content. Further, the information providing apparatus includes a communication interface configured to provide, to a reception apparatus, the A/V content and the closed caption service data. The closed caption service data includes closed caption data within a first service block having a service number in the range of 1-6, and non-closed caption data within a second service block having a different service number in the range of 1-6. The closed caption data includes closed caption text.
According to an embodiment of the present disclosure, there is provided a method of an information providing apparatus for providing non-closed caption data. The method includes generating or receiving, by the information providing apparatus, closed caption service data associated with A/V content. The information providing apparatus provides to a reception apparatus the A/V content and the closed caption service data. The closed caption service data includes closed caption data within a first service block having a service number in the range of 1-6, and non-closed caption data within a second service block having a different service number in the range of 1-6. The closed caption data includes closed caption text.
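On the providing side, the closed caption data and non-closed caption data are carried in separate service blocks. The sketch below packs a payload into a CEA-708-style service block, whose header byte carries a 3-bit service number and a 5-bit block size; the surrounding framing (DTVCC transport channel, caption channel packets) is omitted, and the payloads are illustrative.

```python
# Hedged sketch of service-block packing on the information providing
# side: header byte = (3-bit service number << 5) | 5-bit block size.

def make_service_block(service_number, payload):
    if not 1 <= service_number <= 6:
        raise ValueError("standard service numbers are 1-6")
    if len(payload) > 31:
        raise ValueError("the 5-bit size field holds at most 31 bytes")
    header = (service_number << 5) | len(payload)
    return bytes([header]) + payload

caption_block = make_service_block(1, b"Hello")          # closed caption text
adjunct_block = make_service_block(6, b"trigger-bytes")  # non-closed caption data
stream = caption_block + adjunct_block
```

Both blocks then travel through the same caption data transport to the reception apparatus.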
Further, in an embodiment of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions which, when executed by a computer, cause the computer to perform the above-described method of the information providing apparatus.
A more complete appreciation of the invention and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
While this invention is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail specific embodiments, with the understanding that the present disclosure of such embodiments is to be considered as an example of the principles and not intended to limit the invention to the specific embodiments shown and described. In the description below, like reference numerals are used to describe the same, similar or corresponding parts in the several views of the drawings.
The terms “a” or “an”, as used herein, are defined as one or more than one. The term “plurality”, as used herein, is defined as two or more than two. The term “another”, as used herein, is defined as at least a second or more. The terms “including” and/or “having”, as used herein, are defined as comprising (i.e., open language). The term “coupled”, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. The term “program” or “computer program” or similar terms, as used herein, is defined as a sequence of instructions designed for execution on a computer system. A “program”, or “computer program”, may include a subroutine, a program module, a script, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
The term “program”, as used herein, may also be used in a second context (the above definition being for the first context). In the second context, the term is used in the sense of a “television program”. In this context, the term is used to mean any coherent sequence of audio/video content such as those which would be interpreted as and reported in an electronic program guide (EPG) as a single television program, without regard for whether the content is a movie, sporting event, segment of a multi-part series, news broadcast, etc. The term may also be interpreted to encompass commercial spots and other program-like content which may not be reported as a program in an electronic program guide.
Reference throughout this document to “one embodiment”, “certain embodiments”, “an embodiment”, “an implementation”, “an example,” or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation.
The term “or” as used herein is to be interpreted as an inclusive or meaning any one or any combination. Therefore, “A, B, or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B, and C”. An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
Referring now to the drawings,
In one embodiment, the content source 10 provides content to the reception apparatus 20. The content includes for example one or more television programs, which are broadcast in digital television broadcast signals for example. The content source 10 also provides non-closed caption data that is associated with the content. In one embodiment, the content source 10 broadcasts the content and non-closed caption data in an MPEG-2 Transport Stream (TS).
Embodiments of the present disclosure utilize a caption data transport to deliver one or a combination of non-closed caption data for those cases in which closed caption data is available to the reception apparatus 20. Examples of non-closed caption data include trigger data (e.g., a short trigger), a data stream (e.g., including one or more parameters) suitable for ingestion and processing by one or more TDOs, any other data related to one or more digital television services or interactive-television applications, etc.
A basic diagram of the content source 10 is depicted in
A basic diagram of a reception apparatus 20 is depicted in
In one embodiment, when the caption service blocks include an Adjunct Data service, the caption processor 316 processes the caption service blocks corresponding to the Main service of interest, while at the same time processing caption service blocks corresponding to the mapped Adjunct Data service. Further, in one embodiment, the caption processor 316 processes caption service blocks corresponding to the Adjunct Data service whenever non-closed caption data is available or continuously processes the caption service blocks to determine the availability of the non-closed caption data. The caption processor 316 outputs the non-closed caption data to an appropriate component such as the CPU 438, illustrated in
The compositor 320 combines, for example, closed caption text from the caption processor 316 and video from the video decoder 314 for display.
A/V content may also be received via the Internet 30 via the network interface 426 for IP television content decoding. Additionally, a storage 430 can be provided for non-real time (NRT) stored content. The NRT content can be played by demultiplexing at the demultiplexer 406 in a manner similar to that of other sources of content. The reception apparatus 20 generally operates under control of a processor such as a CPU 438 which is interconnected to a working memory 440 and a program memory 442, as well as a graphics subsystem 444 via one or more buses such as a bus 450.
The CPU 438 receives closed caption service data, including closed caption data and non-closed caption data, from the demultiplexer 406 via the mechanism described herein. When the non-closed caption data includes, for example, a short trigger or one or more parameters for a TDO, the CPU 438, in one embodiment, performs a function based on, or in response to, the parsed non-closed caption data. When the non-closed caption data includes display information, the information is passed to the graphics subsystem 444 and the images are combined at the compositor 460 to produce an output suitable for processing and display on a video display.
In one embodiment, when the content is received via the network interface 426, the CPU 438 also receives the closed caption service data from the network interface 426.
Referring back to
The TPT, in one embodiment, includes a primary key (e.g., a tag element, trigger event id, etc.) that associates each element (row) in the table with an associated trigger event. A trigger, in turn, will refer to a particular event in the TPT by means of this key.
Further, in one embodiment, the TPT is a correspondence table that associates a command for controlling a TDO with a valid period and a valid time of that command. The valid period and valid time of the command are determined in keeping with the progress of content. For example, when the time acquired from a trigger from the content source 10 as indicative of the progress of content either falls within the valid period of the command or has run past a valid start time thereof on the basis of the TPT acquired from the TPT server 40, the reception apparatus 20 specifies the command as being valid. In one embodiment, the reception apparatus 20 controls the operation of the TDO. Also in keeping with the specified command, the reception apparatus 20 accesses the TDO server 50 via the Internet 30 to acquire the TDO.
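The validity rule just described can be illustrated as follows. The function names and parameters are hypothetical; the logic mirrors the description: a command is valid when the media time carried by a trigger falls within the command's valid period, or has run past its valid start time.

```python
# Illustrative sketch of the TPT validity check described above.
# Times are opaque media-time values; field names are assumptions.

def command_is_valid(media_time, valid_start, valid_end=None):
    if valid_end is not None:
        return valid_start <= media_time <= valid_end   # within the valid period
    return media_time >= valid_start                    # past the valid start time

# A trigger reporting media time 200909 against a command valid
# from 200000 to 210000:
valid = command_is_valid(200909, valid_start=200000, valid_end=210000)
```

When the check succeeds, the reception apparatus 20 would treat the command as valid and, for example, access the TDO server 50 to acquire the TDO.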
The TDO server 50 stores TDOs for access by the reception apparatus 20. In one embodiment, the reception apparatus 20 retrieves a TDO from the TDO server 50 based on information included in a standard caption service, via for example a TPT.
A TDO is a downloadable software object created by a content provider, content creator, or service provider, which includes declarative content (e.g., text, graphics, descriptive markup, scripts, and/or audio) whose function is tied in some way to the content it accompanies. An embodiment of the TDO is described in U.S. application Ser. No. 12/959,529 filed Dec. 3, 2010 entitled “Announcement of Triggered Declarative Objects” to Blanchard, et al. which is hereby incorporated by reference in its entirety. However, the TDO is not limited to the structure described in Blanchard, et al. since many attributes defined therein as being a part of a TDO could be situated in a trigger or vice versa or not present at all depending upon the function and triggering of a particular TDO.
The TDO is generally considered as “declarative” content to distinguish it from “executable” content such as a Java applet or an application that runs on an operating system platform. Although the TDO is usually considered to be a declarative object, a TDO player supports a scripting language that is an object-oriented programming language. The TDOs are typically received from a content or service provider in advance of the time they are executed, so that the TDO is available when needed. Moreover, an explicit trigger signal may not be necessary and a TDO may be self-triggering or triggered by some action other than receipt of a trigger signal. Various standards bodies may define associated behaviors, appearances, trigger actions, and transport methods for content and metadata for a TDO. Additionally, requirements regarding timing accuracy of TDO behaviors relative to audio/video may be defined by standards bodies.
When the content source 10 broadcasts an MPEG-2 TS, the full broadcast multiplex may not be available to the reception apparatus 20. In some cases, due to reprocessing at a cable/satellite plant, some adjunct data may be stripped out. Examples include extra descriptors in the Program Map Table (PMT) and extra Elementary Stream (ES) components in the content.
When at least some portion of the content is delivered in compressed form, in one embodiment, MPEG or Advanced Video Coding (AVC) compressed video packets will be available. These packets contain the closed caption data stream. Some examples of cases where compressed video is available are when the reception apparatus 20 accesses the TS directly from an 8-VSB or Mobile DTV tuner, or when it has home network access to a cable/satellite/IPTV set-top box that supports DLNA protocols and offers a compressed video stream on the network interface.
The FCC has ruled that digital cable set top boxes in the United States must support network interfaces allowing devices on a network to access compressed audio/video for decoding and recording. Access to the compressed audio/video may be provided, for example, via DLNA protocols. This method affords a new path for delivery of compressed video including for example closed captioning. Thus, when the caption data stream does not make it across the current HDMI interface, in one embodiment, a partial TS can be accessed by means of DLNA methods and as required by FCC rules. In another embodiment, if the HDMI interface is modified to carry the caption data stream, the partial TS can be accessed from the HDMI interface instead of using the DLNA methods.
The CEA-708 advanced captioning standard supports multiple simultaneous caption services so that, for example, captioning in different languages can be offered for the same content, or program. CEA-708 defines a “minimum decoder” in Section 9. A minimum decoder is required to process the “Standard” service numbers 1 through 6. Processing “Extended” services 7 through 63 is optional. Quoting from CEA-708, “Decoders shall be capable of decoding all Caption Channel Block Headers consisting of Standard Service Headers, Extended Service Block Headers, and Null Block headers.” CEA-708 is incorporated by reference in its entirety.
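The block header decoding that CEA-708 requires of all decoders can be sketched as below. The layout shown (3-bit service number plus 5-bit block size, with service number 7 signalling an Extended Service Block Header whose following byte carries a 6-bit extended service number) follows the standard; the function itself is an illustrative simplification.

```python
# Sketch of Caption Channel Block Header decoding per the CEA-708
# layout discussed above. Standard headers are one byte; an extended
# header (service number field == 7) adds a second byte.

def parse_block_header(data):
    """Return (service_number, block_size, header_length_in_bytes)."""
    service_number = data[0] >> 5
    block_size = data[0] & 0x1F
    if service_number == 7:                   # Extended Service Block Header
        service_number = data[1] & 0x3F       # 6-bit extended service number
        return service_number, block_size, 2
    return service_number, block_size, 1      # Standard (1-6) or Null (0) header

svc, size, hdr = parse_block_header(bytes([(6 << 5) | 10]))   # Standard service 6
```

A minimum decoder must recognize all three header forms even though it is only required to process Standard services 1 through 6.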
Some embodiments of the non-closed caption data transport methods described herein involve placing one or a combination of non-closed caption data in an Adjunct Data service. In this approach, Standard Service Number 6 is recognized as the Adjunct Data service in the preferred implementation.
In one embodiment, the broadcast system 2 illustrated in
As described above, embodiments of the present disclosure place Adjunct Data in Standard service packets. All legacy decoders should be able to handle the presence of Standard service packets and are able to filter out packets corresponding to services they are not set to decode (non-selected services).
Some legacy receivers may not use the PSIP Caption Service Descriptor (CSD) to create the user interface for selection of caption services. In this case, it could be possible for the user to select caption Service #6 (the Adjunct Data channel) and attempt to decode it. The proposed method uses a “variable-length” command which would be unknown to the receiver. Receivers are expected to discard unsupported commands, thus they should be able to skip the proper number of bytes in order to discard the command. In this case, nothing would be displayed for Service #6.
Even if something were to be displayed (e.g., garbage characters), the user would conclude that this is not a useful caption service and would choose another one. Hence, no harm would be done.
In current practice, it is rare that even two simultaneous caption services are used. Content captioned in both English and Spanish is somewhat rare, but does occur. Content captioned in more than two simultaneous languages is seldom if ever produced. Therefore, placing a variable-length command in Service #6 is not disruptive to current and most contemplated caption service delivery.
Further, it is believed that all existing receivers are able to properly skip service blocks corresponding to service numbers they are not currently decoding. Moreover, proper handling in the receiver of Standard caption services 1-6 is required by FCC rules. If any legacy receiver attempts to decode non-closed caption data (which should not normally occur, as caption services containing non-closed caption data are not announced in the Caption Service Descriptor), if the receiver is built according to CEA-708-D it will simply disregard the contents of the command. CEA-708-D is incorporated by reference in its entirety.
To optimize compatibility with legacy decoders (while not being able to absolutely guarantee that all legacy decoders would be able to properly disregard the new command), the Variable Length Command as defined in CEA-708-D Sec. 7.1.11.2 can be used. Such commands use the “C3” command (“C3 Code Set—Extended Control Code Set 2”). If properly implemented, legacy decoders should skip variable length commands further assuring that they will not take an unpredictable action.
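The skipping behavior expected of a properly implemented legacy decoder can be sketched as follows. This assumes the CEA-708-D shape of a variable-length command: the EXT1 character (0x10), a command code in the 0x90-0x9F range, then a byte whose low five bits give the number of data bytes that follow; the exact use of the remaining bits of that byte is not modeled here.

```python
# How a properly built legacy decoder can skip an unknown
# variable-length command without interpreting its contents.

EXT1 = 0x10

def skip_unknown_command(data, pos):
    """Return the index of the first byte after the command at pos."""
    if data[pos] == EXT1 and 0x90 <= data[pos + 1] <= 0x9F:
        length = data[pos + 2] & 0x1F    # low 5 bits: payload length
        return pos + 3 + length          # EXT1 + code + length byte + payload
    raise ValueError("not a variable-length command")

# An unknown command carrying 3 payload bytes, followed by a space (0x20):
buf = bytes([EXT1, 0x98, 0x03, 0x41, 0x42, 0x43, 0x20])
next_pos = skip_unknown_command(buf, 0)
```

Because the length is carried explicitly, the decoder can discard the proper number of bytes even for commands it does not understand.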
Hence, in order to help assure that legacy decoders will not malfunction due to attempting to process non-closed caption data, Standard Service #6 (in the example preferred implementation) is used to transport the non-closed caption data. To further prevent legacy decoders from attempting to render the services, a variable-length command can be used to define the non-closed caption data in any suitable manner. While some legacy decoders may not properly implement the “skip variable length extensions” feature as defined in CEA-708, viewers may not be given an option to choose Standard Service #6 anyway since it is an “unannounced” service. Unless all six Standard Services actually carry caption services (a situation that is currently believed to be extremely rare if in existence at all), Service #6 will not be announced in the Caption Service Descriptor (CSD) defined in ATSC A/65 Program and System Information Protocol (PSIP), which is incorporated by reference in its entirety.
Although new metadata can also be added to the TS via other methods, those methods are more troublesome on the creation side and on the decoder side. The following are a number of exemplary methods.
Adaptation fields: Requires significant video encoder upgrades, and involves a new protocol between the metadata source and the encoder. Significant new standards work is required. Decoder must parse and extract adaptation fields from the TS to make them available to the decoder CPU.
Video user data: Again requires significant video encoder upgrades, and involves a new protocol between the metadata source and the encoder. Decoder must parse and extract video user data from the video stream to make it available to the decoder CPU.
Audio user data: Again requires significant audio encoder upgrades, and involves a new protocol between the metadata source and the encoder. Decoder must parse and extract audio user data from the audio stream to make it available to the decoder CPU.
Elementary Streams: Requires significant video encoder upgrades, and involves a new protocol between the metadata source and the encoder.
Further, with respect to the "brick wall" problem, if the whole TS reaches the reception apparatus 20 (e.g., an ATSC 2.0 receiver), all of these methods are about equal. If only a partial TS reaches the reception apparatus 20, everything survives except separate Elementary Streams, which may not be included in the partial TS.
An exemplary method 600 of processing non-closed caption data is illustrated in
At step S606, the reception apparatus 20 parses (e.g., in a parsing computer process module) the non-closed caption data from the second standard service block having the service number 6 (or n). The non-closed caption data is then processed at step S608 (e.g., in another processor operation) to perform a function based on, or in response to, the parsed non-closed caption data.
As described above, examples of non-closed caption data include trigger data (e.g., a short trigger), a data stream (e.g., including one or more parameters) suitable for ingestion and processing by one or more triggered declarative objects (TDOs), any other data related to one or more digital television services or interactive-television applications, etc. In the case of a short trigger, the size of the short trigger is for example less than 30 bytes. The short trigger, in one embodiment, functions to identify a location of a TPT server, indicate a current media time (i.e., where in play out we are), identify an event to execute now or later (e.g., in a TPT), and/or to smooth server peak load.
In one embodiment, the content of the short trigger includes a domain of the TPT server and one or more of a media time, trigger event ID, new time of specified TPT event, and diffusion timing information. An exemplary short trigger is “xbc.tv/7a1?mt=200909.” The portion “xbc.tv” corresponds to the domain name registered to an entity that will provide additional data (e.g., interactive elements such as a TPT). The portion “/7a1” corresponds to the name/directory space managed by the registered owner of the domain. The combination “xbc.tv/7a1” identifies the server/directory where the additional data will be found. Further, the portion “?mt=200909” corresponds to a parameter portion, which may include for example a media time, an event, an event timing update, etc.
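The decomposition of the exemplary short trigger above can be sketched as a small parser. The split into domain, directory, and parameter portions follows the description; the helper itself, and its field names, are illustrative.

```python
# Parsing the exemplary short trigger "xbc.tv/7a1?mt=200909" into the
# portions described above: domain, directory, and parameters.
from urllib.parse import parse_qs

def parse_short_trigger(trigger):
    locator, _, params = trigger.partition("?")
    domain, _, path = locator.partition("/")
    return {
        "domain": domain,            # registered domain of the data provider
        "path": "/" + path,          # name/directory space of the domain owner
        "params": parse_qs(params),  # e.g. media time, event, timing update
    }

t = parse_short_trigger("xbc.tv/7a1?mt=200909")
```

Since the trigger is plain human-readable text, such parsing is straightforward to author, test, and debug.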
In one embodiment, by utilizing the caption data transport to transport non-closed caption data, a short trigger and trigger parameters table (TPT) approach can effectively add interactivity to linear TV and with the following advantages:
1. Short triggers can fit into small spaces, while longer ones may not fit.
2. The short trigger is human-readable text (URI+parameters), for easier creation/authoring, testing, and debugging.
3. The distribution chain is already set up for carriage of closed caption data. The trigger fits into the Society of Motion Picture and Television Engineers (SMPTE) caption data packet (CDP) that is defined as the interface between caption authoring stations and encoder/multiplexers. Thus, the amount of new or upgraded equipment that must be added in the broadcast station and distribution chain is minimized. There is already a distribution path for CDPs; no upgrades or new interfaces need to be defined for the encoders.
4. Interactivity can be added to a broadcast program simply by adding a short trigger to the caption stream, and placing the interactive content on an Internet server.
Further, in the decoder (e.g., the reception apparatus 20), the text (and/or non-closed caption data such as the short trigger) from caption service #6 can be easily captured.
As described above, in one embodiment, the non-closed caption data can be utilized in a “short trigger” scheme. The “short trigger” scheme involves universal resource identifier (URI)-based references to an entry in a TPT. An exemplary short trigger includes, or consists of, a registered Internet domain name, a “program ID” part, and an event ID. The event ID indexes an entry in the TPT. Thus, a given short trigger identifies (through the TPT) an interactive event as well as all the information that is associated with that event.
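The indexing relationship just described can be illustrated with a toy lookup: the event ID carried by a short trigger selects one entry in the TPT, and that entry carries the full information associated with the interactive event. The table contents and event IDs below are invented for illustration.

```python
# Toy illustration of resolving a short trigger's event ID against a
# TPT. The event IDs, actions, and URL are hypothetical.

TPT = {
    "ev01": {"action": "launch_tdo", "tdo_url": "example.com/app1"},
    "ev02": {"action": "terminate_tdo"},
}

def resolve_trigger(event_id):
    """Return the TPT entry for event_id, or None if the TPT lacks it."""
    return TPT.get(event_id)

event = resolve_trigger("ev01")
```

Because the TPT is fetched separately (e.g., from the TPT server 40), the trigger itself can stay short while still identifying a fully described event.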
In one embodiment, the above-referenced non-closed caption data are carried in a CEA-708 compliant variable length command. In other embodiments, other multi-byte (i.e., not variable-length) commands can be used as well, for example in service number 6. It should be noted that any of the command codes that are not defined in CEA-708 (set aside for expansion) are usable in service number 6.
Embodiments described herein involve delivering non-closed caption data within a separate caption service that is known to be associated with one of the Standard caption services. However, in other embodiments, non-closed caption data is transported in a Standard service block having a service number in the range of 1-6 (e.g., service number 6), alongside actual closed caption data. The reception apparatus 20 distinguishes between the closed caption data and the non-closed caption data by means of command codes, as described below.
In accordance with this approach, Standard service #6 (or another Standard service number n=any of services 1 through 6) is defined as the Adjunct Data service. Characteristics of the Adjunct Data service include (1) Formatted as a Variable Length command (see CEA-708-D Section 7.1.11.2 Variable Length Codes from 0x90 to 0x9F) so that properly designed receivers will discard the contents of the packets; and (2) Not announced in the PSIP Caption Service Descriptor (thus properly designed receivers will not announce and offer the service containing Adjunct Data to the user).
The “trigger validity” portion is optional. It is used to smooth out server loads in certain applications.
In some embodiments, it is necessary to deliver short triggers to the reception apparatus 20 to indicate (1) the location of the TPT server; and (2) the timing of interactive events, especially when the timing is not known beforehand (e.g. for live events).
Accordingly, as described above, certain embodiments of the present disclosure involve a method for delivery of the short trigger that utilizes the closed caption transport mechanism, specifically, delivery of triggers inside standard caption service #6.
In one embodiment, non-closed caption data (e.g., short triggers) are delivered using one of the unused code points, e.g., 0x98, to deliver a variable-length short trigger. As specified in CEA-708-D, Section 7.1.11.2, variable-length commands are indicated by the EXT1 character followed by a number in the range 0x90 to 0x9F, where the “0x” notation denotes a number represented in hexadecimal format. In the command format depicted in
As noted above, in some embodiments, the EXT1+0x90-9F command sequence is used for the “variable-length” command. In other embodiments, other multi-byte (i.e., not variable-length) commands can be used as well, for example in service number 6. Any of the command codes that are not defined in CEA-708 (set aside for expansion) are usable in service number 6.
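An encoding sketch for the delivery just described follows: the EXT1 character, the unused code point 0x98, a byte carrying the payload length in its low five bits, then the trigger bytes. The exact bit layout of the length byte (e.g., whether any type bits occupy its upper bits) is an assumption here, as is the helper itself.

```python
# Hedged sketch of wrapping a short trigger in a variable-length
# command: EXT1 + 0x98 + length byte + trigger payload.

EXT1 = 0x10
TRIGGER_CODE = 0x98   # one of the unused code points in 0x90-0x9F

def encode_short_trigger(trigger_text):
    payload = trigger_text.encode("ascii")
    if len(payload) > 31:
        raise ValueError("a 5-bit length field holds at most 31 bytes")
    return bytes([EXT1, TRIGGER_CODE, len(payload)]) + payload

cmd = encode_short_trigger("xbc.tv/7a1?mt=200909")
```

The 31-byte ceiling of a 5-bit length field is consistent with the short trigger size of under 30 bytes noted above.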
In one embodiment, the ItvTrigger( ) data structure follows the byte containing the length field. The syntax of one example of the trigger data is illustrated in
In the exemplary syntax of
Use of a variable-length DTV closed caption command in Service #6 to transport non-closed caption data such as iTV triggers provides: (1) robust (explicit) signaling of the presence of an ITV trigger; (2) signaling of the type of trigger (for future expansion); (3) a transport format that is a natural extension to the existing CEA-708 DTVCC protocol; and (4) a transport method that is transparent to legacy receivers.
The present disclosure contains references to CEA-708 and CEA-708-D. Disclosures referring to CEA-708, without the revision letter, relate to the CEA-708 standard generally and not to details that are included, or not included, by a particular revision of the standard. Further, disclosures referring to a particular version of the CEA-708 standard (e.g., CEA-708-D) are expected to apply to other revisions (e.g., successor revisions) of the standard.
Those skilled in the art will recognize, upon consideration of the above teachings, that certain of the above exemplary embodiments are based upon use of a programmed processor. However, the invention is not limited to such exemplary embodiments, since other embodiments could be implemented using hardware component equivalents such as special purpose hardware and/or dedicated processors. Similarly, general purpose computers, microprocessor based computers, micro-controllers, optical computers, analog computers, dedicated processors, application specific circuits and/or dedicated hard wired logic may be used to construct alternative equivalent embodiments.
Those skilled in the art will appreciate, upon consideration of the above teachings, that the program operations and processes and associated data used to implement certain of the embodiments described above can be implemented using disc storage as well as other forms of storage such as non-transitory storage devices including as for example Read Only Memory (ROM) devices, Random Access Memory (RAM) devices, network memory devices, optical storage elements, magnetic storage elements, magneto-optical storage elements, flash memory, core memory and/or other equivalent volatile and non-volatile storage technologies without departing from certain embodiments of the present invention. The term non-transitory does not suggest that information cannot be lost by virtue of removal of power or other actions. Such alternative storage devices should be considered equivalents.
Certain embodiments described herein, are or may be implemented using a programmed processor executing programming instructions that are broadly described above in flow chart form that can be stored on any suitable electronic or computer readable storage medium. However, those skilled in the art will appreciate, upon consideration of the present teaching, that the processes described above can be implemented in any number of variations and in many suitable programming languages without departing from embodiments of the present invention. For example, the order of certain operations carried out can often be varied, additional operations can be added or operations can be deleted without departing from certain embodiments of the invention. Error trapping can be added and/or enhanced and variations can be made in operational flow, user interface and information presentation without departing from certain embodiments of the present invention. Such variations are contemplated and considered equivalent.
While certain illustrative embodiments have been described, it is evident that many alternatives, modifications, permutations and variations will become apparent to those skilled in the art in light of the foregoing description.
This application is a continuation of U.S. patent application Ser. No. 13/800,818, filed Mar. 13, 2013, which claims priority to U.S. provisional patent application No. 61/613,869, filed Mar. 21, 2012, each of which is incorporated by reference in its entirety. This application is related to U.S. provisional patent application Nos. 61/452,247, filed Mar. 14, 2011, to Mark Eyer; 61/415,924, filed Nov. 22, 2010, entitled “Service Linkage to Caption Disparity Data Transport,” to Mark Eyer, et al.; 61/415,457, filed Nov. 19, 2010, entitled “Disparity Data Signaling and Transport for 3D Captioning,” to Mark Eyer, et al.; 61/346,652, filed May 20, 2010, entitled “Disparity Data Transport,” to Mark Eyer, et al.; 61/313,612, filed Mar. 12, 2010, to Mark Eyer et al.; 61/316,733, filed Mar. 23, 2010, entitled “Extended Command Stream for CEA-708 Captions,” to Mark Eyer et al.; and 61/378,792, filed Aug. 31, 2010, entitled “Efficient Transport of Frame-by-Frame Change in Captioning Disparity Data,” to Mark Eyer. This application is also related to U.S. non-provisional patent application Ser. Nos. 13/022,828, 13/022,817, and 13/022,810, each filed on Feb. 8, 2011. Each of the above applications is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5519443 | Salomon | May 1996 | A |
5543852 | Yuen | Aug 1996 | A |
5617146 | Duffield | Apr 1997 | A |
6115074 | Ozkan | Sep 2000 | A |
6507369 | Kim | Jan 2003 | B1 |
6637032 | Feinleib | Oct 2003 | B1 |
6766524 | Matheny | Jul 2004 | B1 |
6824044 | Lapstun | Nov 2004 | B1 |
7019787 | Park | Mar 2006 | B2 |
7028327 | Dougherty et al. | Apr 2006 | B1 |
7631338 | Del Sesto | Dec 2009 | B2 |
7646431 | Lee | Jan 2010 | B2 |
7889964 | Barton | Feb 2011 | B1 |
8595783 | Dewa | Nov 2013 | B2 |
8619192 | Smith | Dec 2013 | B2 |
8705933 | Eyer | Apr 2014 | B2 |
8839338 | Eyer | Sep 2014 | B2 |
8842974 | Kitazato | Sep 2014 | B2 |
8863171 | Blanchard et al. | Oct 2014 | B2 |
8872888 | Kitazato | Oct 2014 | B2 |
8875169 | Yamagishi | Oct 2014 | B2 |
8875204 | Kitazato | Oct 2014 | B2 |
8884800 | Fay | Nov 2014 | B1 |
8886009 | Eyer | Nov 2014 | B2 |
20020162120 | Mitchell | Oct 2002 | A1 |
20040032486 | Shusman | Feb 2004 | A1 |
20050005303 | Barone, Jr. et al. | Jan 2005 | A1 |
20050071889 | Liang | Mar 2005 | A1 |
20050262539 | Barton et al. | Nov 2005 | A1 |
20070022437 | Gerken | Jan 2007 | A1 |
20070124796 | Wittkotter | May 2007 | A1 |
20070177466 | Ando et al. | Aug 2007 | A1 |
20090034556 | Song et al. | Feb 2009 | A1 |
20090244373 | Park | Oct 2009 | A1 |
20090296624 | Ryu et al. | Dec 2009 | A1 |
20090320064 | Soldan et al. | Dec 2009 | A1 |
20100050217 | Suh | Feb 2010 | A1 |
20100095337 | Dua | Apr 2010 | A1 |
20100134701 | Eyer | Jun 2010 | A1 |
20100146376 | Reams | Jun 2010 | A1 |
20100157025 | Suh | Jun 2010 | A1 |
20100162307 | Suh | Jun 2010 | A1 |
20100215340 | Pettit et al. | Aug 2010 | A1 |
20110088075 | Eyer | Apr 2011 | A1 |
20110128443 | Blanchard et al. | Jun 2011 | A1 |
20110221863 | Eyer | Sep 2011 | A1 |
20110243536 | Eyer | Oct 2011 | A1 |
20110246488 | Eyer | Oct 2011 | A1 |
20110247028 | Eyer | Oct 2011 | A1 |
20110298981 | Eyer | Dec 2011 | A1 |
20110299827 | Eyer | Dec 2011 | A1 |
20110302599 | Eyer | Dec 2011 | A1 |
20110302611 | Eyer | Dec 2011 | A1 |
20110307920 | Blanchard et al. | Dec 2011 | A1 |
20120044418 | Eyer | Feb 2012 | A1 |
20120047531 | Eyer | Feb 2012 | A1 |
20120050619 | Kitazato et al. | Mar 2012 | A1 |
20120050620 | Kitazato | Mar 2012 | A1 |
20120054214 | Yamagishi et al. | Mar 2012 | A1 |
20120054235 | Kitazato et al. | Mar 2012 | A1 |
20120054267 | Yamagishi et al. | Mar 2012 | A1 |
20120054268 | Yamagishi | Mar 2012 | A1 |
20120054784 | Kitazato et al. | Mar 2012 | A1 |
20120054816 | Dewa | Mar 2012 | A1 |
20120060197 | Kitahara et al. | Mar 2012 | A1 |
20120063508 | Hattori et al. | Mar 2012 | A1 |
20120072965 | Dewa | Mar 2012 | A1 |
20120081607 | Kitazato | Apr 2012 | A1 |
20120082266 | Kitazato et al. | Apr 2012 | A1 |
20120084802 | Kitazato | Apr 2012 | A1 |
20120084829 | Kitazato | Apr 2012 | A1 |
20120180109 | Chen | Jul 2012 | A1 |
20120185888 | Eyer et al. | Jul 2012 | A1 |
20120236113 | Eyer | Sep 2012 | A1 |
20120253826 | Kitazato et al. | Oct 2012 | A1 |
20120274848 | Kitahara et al. | Nov 2012 | A1 |
20130024894 | Eyer | Jan 2013 | A1 |
20130024897 | Eyer | Jan 2013 | A1 |
20130031569 | Eyer | Jan 2013 | A1 |
20130036440 | Eyer | Feb 2013 | A1 |
20130055313 | Eyer | Feb 2013 | A1 |
20130103716 | Yamagishi | Apr 2013 | A1 |
20130145414 | Yamagishi | Jun 2013 | A1 |
20130167171 | Kitazato et al. | Jun 2013 | A1 |
20130191860 | Kitazato et al. | Jul 2013 | A1 |
20130198768 | Kitazato | Aug 2013 | A1 |
20130201399 | Kitazato et al. | Aug 2013 | A1 |
20130205327 | Eyer | Aug 2013 | A1 |
20130212634 | Kitazato | Aug 2013 | A1 |
20130215327 | Kitazato et al. | Aug 2013 | A1 |
20130250173 | Eyer | Sep 2013 | A1 |
20130254824 | Eyer | Sep 2013 | A1 |
20130271653 | Kim et al. | Oct 2013 | A1 |
20130282870 | Dewa et al. | Oct 2013 | A1 |
20130283311 | Eyer | Oct 2013 | A1 |
20130283328 | Kitazato | Oct 2013 | A1 |
20130291022 | Eyer | Oct 2013 | A1 |
20130291049 | Kitazato | Oct 2013 | A1 |
20130340007 | Eyer | Dec 2013 | A1 |
20140013347 | Yamagishi | Jan 2014 | A1 |
20140013379 | Kitazato et al. | Jan 2014 | A1 |
20140020038 | Dewa | Jan 2014 | A1 |
20140020042 | Eyer | Jan 2014 | A1 |
20140040965 | Kitazato et al. | Feb 2014 | A1 |
20140040968 | Kitazato et al. | Feb 2014 | A1 |
20140043540 | Kitazato et al. | Feb 2014 | A1 |
20140047496 | Kim et al. | Feb 2014 | A1 |
20140053174 | Eyer et al. | Feb 2014 | A1 |
20140067922 | Yamagishi et al. | Mar 2014 | A1 |
20140099078 | Kitahara et al. | Apr 2014 | A1 |
20140122528 | Yamagishi | May 2014 | A1 |
20140137153 | Fay et al. | May 2014 | A1 |
20140137165 | Yamagishi | May 2014 | A1 |
20140143811 | Lee | May 2014 | A1 |
20140150040 | Kitahara et al. | May 2014 | A1 |
20140157304 | Fay et al. | Jun 2014 | A1 |
20140173661 | Yamagishi | Jun 2014 | A1 |
20140186008 | Eyer | Jul 2014 | A1 |
20140208375 | Fay et al. | Jul 2014 | A1 |
20140208380 | Fay et al. | Jul 2014 | A1 |
20140229580 | Yamagishi | Aug 2014 | A1 |
20140229979 | Kitazato et al. | Aug 2014 | A1 |
20140253683 | Eyer et al. | Sep 2014 | A1 |
Number | Date | Country |
---|---|---|
1 380 945 | Jan 2004 | EP |
WO 2005006758 | Jan 2005 | WO |
WO 2011066171 | Jun 2011 | WO |
WO 2011074218 | Jun 2011 | WO |
WO 2013012676 | Jan 2013 | WO |
Entry |
---|
Office Action dated Apr. 21, 2015 in Chinese Patent Application No. 201280026304.4 (with English translation). |
Extended European Search Report dated Mar. 12, 2015 in Patent Application No. 12829741.3. |
Extended European Search Report dated Nov. 3, 2015 in Patent Application No. 13777548.2. |
Extended European Search Report dated Oct. 12, 2015 in Patent Application No. 13765058.6. |
U.S. Appl. No. 13/934,549, filed Jul. 3, 2013, Fay et al. |
U.S. Appl. No. 13/934,615, filed Jul. 3, 2013, Eyer. |
U.S. Appl. No. 14/275,231, filed May 12, 2014, Eyer. |
U.S. Appl. No. 14/295,695, filed Jun. 4, 2014, Eyer. |
U.S. Appl. No. 14/457,290, filed Aug. 12, 2014, Eyer. |
U.S. Appl. No. 14/458,310, filed Aug. 13, 2014, Eyer. |
U.S. Appl. No. 14/490,263, filed Sep. 18, 2014, Blanchard et al. |
U.S. Appl. No. 14/493,661, filed Sep. 23, 2014, Yamagishi. |
U.S. Appl. No. 14/493,721, filed Sep. 23, 2014, Kitazato. |
U.S. Appl. No. 14/504,455, filed Oct. 2, 2014, Fay. |
U.S. Appl. No. 14/504,984, filed Oct. 2, 2014, Eyer. |
U.S. Appl. No. 14/509,200, filed Oct. 4, 2014, Eyer. |
U.S. Appl. No. 14/509,166, filed Oct. 8, 2014, Kitazato. |
U.S. Appl. No. 14/512,761, filed Oct. 13, 2014, Fay. |
U.S. Appl. No. 14/512,776, filed Oct. 13, 2014, Kitazato. |
U.S. Appl. No. 14/521,034, filed Oct. 22, 2014, Eyer. |
U.S. Appl. No. 14/529,440, filed Oct. 31, 2014, Kitazato et al. |
U.S. Appl. No. 14/529,490, filed Oct. 31, 2014, Yamagishi et al. |
U.S. Appl. No. 14/529,450, filed Oct. 31, 2014, Kitazato et al. |
U.S. Appl. No. 14/529,421, filed Oct. 31, 2014, Kitazato et al. |
U.S. Appl. No. 14/529,461, filed Oct. 31, 2014, Kitahara et al. |
International Search Report and Written Opinion dated May 17, 2013 in PCT/US2013/030646 filed Mar. 13, 2013. |
International Search Report and Written Opinion dated Jun. 17, 2013 in PCT/US2013/036075 filed Apr. 11, 2013. |
International Search Report and Written Opinion of the International Searching Authority dated May 31, 2013 in PCT/US2013/33133. |
Extended European Search Report issued Jun. 22, 2015 in European Application No. 12814551.3. |
Extended European Search Report issued Jun. 30, 2015 in European Application No. 12814180.1. |
Extended European Search Report issued Jul. 27, 2015 in European Application No. 13764907.5. |
Number | Date | Country | |
---|---|---|---|
20150062428 A1 | Mar 2015 | US |
Number | Date | Country | |
---|---|---|---|
61613869 | Mar 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13800818 | Mar 2013 | US |
Child | 14538311 | US |