The present invention relates to image processing and, more particularly, to methods and systems for mapping graphics and composite images onto image/video data.
Dynamic range (DR) relates to a span of intensity (e.g., luminance, luma) in an image. The DR in real-world scenes is usually large. Different image and video applications for the capture, representation, and presentation of image and video signals may have different DR. For example, photographic negatives can have a relatively large dynamic range, while photographic prints, some currently existing (e.g., conventional) television (TV) sets, and computer monitors may have a smaller DR.
DR also relates to a capability of the human psychovisual system (HVS) to perceive a range of intensity (e.g., luminance, luma) in an image, e.g., from darkest darks to brightest brights. In this sense, DR relates to a “scene-referred” intensity. DR may also relate to the ability of a display device to adequately or approximately render an intensity range of a particular breadth. In this sense, DR relates to a “display-referred” intensity. In another sense, DR may also refer to a “signal-referred” intensity—which may be to some extent theoretical. For example, a VDR signal may range up to 10,000 nits and HDR signals may range even higher. Most of the time, there are no grading displays for that range. Unless a particular sense is explicitly specified to have particular significance at any point in the description herein, the term may be used in either sense, e.g., interchangeably.
Rendering by conventional TV sets and computer monitors is often constrained to approximately three orders of magnitude of dynamic range—typifying a low dynamic range (LDR), also referred to as a standard dynamic range (SDR). In contrast to LDR images, high dynamic range (HDR) images contain essentially all of the dynamic range in an original scene. HDR can span some 14-15 orders of magnitude of dynamic range. HDR images can be represented by any bit depth, but typically 10-16 bits or more are used to reduce overly large step sizes.
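The orders-of-magnitude figures above follow directly from the base-10 logarithm of the ratio of brightest to darkest representable luminance. The sketch below illustrates that arithmetic; the specific luminance values chosen are assumptions for illustration, not values taken from this disclosure.

```python
import math

def orders_of_magnitude(l_max_nits, l_min_nits):
    """Dynamic range expressed as orders of magnitude: log10 of the
    ratio of the brightest to the darkest representable luminance."""
    return math.log10(l_max_nits / l_min_nits)

# An SDR-like display (illustrative values): 100-nit peak and 0.1-nit
# black level -> about 3 orders of magnitude, as noted above for LDR.
sdr_orders = orders_of_magnitude(100.0, 0.1)

# A scene spanning 1e-4 to 1e10 nits -> 14 orders, in the HDR regime.
hdr_orders = orders_of_magnitude(1e10, 1e-4)
```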
For a number of applications such as compression for distribution, encoding for HDR images may be unnecessary and may in fact be somewhat computationally expensive or bandwidth consumptive. On the other hand, LDR images may simply not suffice either. Instead, such applications may advantageously use, create, store, transmit or render images that may be characterized by a visual dynamic range or variable dynamic range, VDR. VDR images, truncated in relation to HDR, encompass essentially all of the luminance and color that a typical HVS can simultaneously perceive (e.g., visually perceive at any given time). VDR spans about 5-6 orders of magnitude of dynamic range. Thus, while narrower in relation to HDR, VDR nonetheless represents a wide DR breadth. Despite the DR differences between HDR and VDR images, the term EDR, as used herein, characterizes any image with an extended dynamic range compared to LDR.
Several embodiments of display systems and methods of their manufacture and use are herein disclosed.
Systems and methods for overlaying a second image/video data onto a first image/video data are described herein. The first image/video data may be intended to be rendered on a display with certain characteristics—e.g., HDR, EDR, VDR or ultra-high definition (UHD, e.g., 4K or 8K horizontal resolution) capabilities. The second image/video data may comprise graphics, closed captioning, text, advertisement—or any data that may be desired to be overlaid and/or composited onto the first image/video data. The second image/video data may be appearance mapped according to the image statistics and/or characteristics of the first image/video data. In addition, such appearance mapping may be made according to the characteristics of the display on which the composite data is to be rendered. Such appearance mapping is intended to render composite data that is visually pleasing to a viewer when rendered upon a desired display.
In one embodiment, a method for overlaying a second image data over a first image data is disclosed—comprising: receiving a first image and a second image, the first image differing in dynamic range and size from the second image; receiving first metadata regarding the first image; receiving second metadata regarding the second image; performing appearance mapping of the second image to determine an adjusted second image, said adjusted second image differing in dynamic range from the second image, according to the first metadata and the second metadata; and forming a composite image by overlaying the adjusted second image onto at least a portion of the first image.
In another embodiment, a system for compositing a second image data onto a first image data is disclosed—comprising: a display management module, the display management module capable of receiving a first image; a compositor module, said compositor module capable of receiving a second image, said compositor module further capable of receiving metadata regarding the first image and of performing appearance mapping of the second image to form an appearance mapped second image in accordance with said metadata regarding the first image; and a mixing module, said mixing module capable of mixing the appearance mapped second image onto the first image to form a composite image, the composite image intended to be rendered upon a display.
In another embodiment, systems and methods for dynamic advertising are disclosed in which an existing composite image—formed from a first image/video data with a second overlaid image/video data—may be mapped and/or converted into another composite image where all or a part of the second overlaid image/video data may be replaced by a third overlaid image/video data.
Other features and advantages of the present system are presented below in the Detailed Description when read in connection with the drawings presented within this application.
Exemplary embodiments are illustrated in referenced figures of the drawings. It is intended that the embodiments and figures disclosed herein are to be considered illustrative rather than restrictive.
Throughout the following description, specific details are set forth in order to provide a more thorough understanding to persons skilled in the art. However, well known elements may not have been shown or described in detail to avoid unnecessarily obscuring the disclosure. Accordingly, the description and drawings are to be regarded in an illustrative, rather than a restrictive, sense.
Introduction
In a typical visual experience with current legacy consumer TVs, there is very little control over the appearance and mapping of the incoming video stream using standards such as Rec. 709 and the DCI specifications. This is particularly true of graphic or video overlays that may be desired to be integrated onto a first video stream, mixed and then rendered for viewing by an individual. In one embodiment of the present system, it may be desirable to provide suitable image processing and/or mixing of such overlays onto a first image/video data stream in such a manner as to provide appearance matching and/or an improved visual experience for a viewer of the composite image and/or video.
Data stream 102 may be input into a Display Management (DM) module 108 for image processing suitable for rendering onto display 114. Compositor 110 may be another image processing/video processing module that inputs the composite data stream (106)—as well as data and/or metadata from DM 108. Compositor 110 may format the composite data stream as GUI, Closed Captioning (CC), Picture In Picture (PIP) or any other possible format for the composite data to be mixed and/or composited with the first data stream. Metadata may also include metadata regarding the rendering characteristics of the display upon which the composite image may be rendered. Such metadata may include average, minimum/mean/maximum luminance, reflective white, whitepoint, color gamut and any other known image rendering characteristic and/or specification.
A viewer may have optional control input to the compositor 110—e.g., via a remote control, laptop, tablet, smart phone or another suitable controller (116)—in order to input viewer desires or demands for such composite rendering.
The composite image/video data and the first video stream (after whatever processing has been applied by DM 108) may be input to a mixing module 112. Mixing module 112 may suitably overlay the composite image/video data onto the first data stream—e.g., using any advanced image/video processing algorithm described herein and as known—to provide a pleasant viewing experience. As described further herein, such a pleasant viewing experience may be enhanced by appearance matching the composite image/video data to the characteristics of the first image data and/or the characteristics of the display (114) on which the final composite image/video is to be rendered.
It should be appreciated that—although the DM, compositor and the mixing modules may be resident in the display itself (as functional modules)—it is possible that the DM, compositor and mixing modules may physically reside elsewhere and may be remote from each other. For example, it may be possible to place one or more of these modules in a set top box and have it be in communication with the display (e.g., by any known wired or wireless configuration). In another embodiment, the DM, compositor and/or mixer may be out of the physical room where the display resides. In other embodiments, it may be possible to place the functionality of any or all three of these modules into a single module. For example, a DM module may be constructed to include the functionality of typical DMs—as well as the functionality of the compositor and the mixer.
In one embodiment, it may be desirable to have an image statistics calculating module for the calculation of various statistics known in image processing (as also described further herein). Such statistics (e.g., of the first image/video data, second overlay image/video data and the like) may be used further by the present system to aid in the appearance mapping of the second overlay image/video data onto the first image/video data. Any of the modules mentioned herein may incorporate the image statistics module, as is known in the art.
In one embodiment, these modules may reside at an image/video front end supplier (e.g., cable operator, satellite operator and/or other media suppliers). Thus, it may be desirable to distinguish and/or note where the graphics overlay is injected into the content, e.g., generated at the content creation side (e.g., subtitles), the broadcasting company (e.g., logos), the Set Top Box (e.g., UI, TV guide, CC), the TV itself (e.g., UI), or the AV receiver (e.g., a volume bar graphics overlay), or by any signal switch/modifier/AV processor that may add graphics or otherwise modify the input video stream. At any such stage, the overlay and compositing may be dealt with differently. It may also be desirable to have the UI and overlay injection points be aware of each other (i.e., pipeline-awareness). In such a case, it may be possible to avoid re-analyzing and re-mapping UI graphics already embedded into the video stream (e.g., broadcast logos are commonly embedded early in the stream). Besides general UI rendering, all of this information may also be provided to the operating system of the playout device so that, for example, a web browser running on a smart TV can access it.
As mentioned above, in one embodiment, the video stream may be a HDR, EDR and/or VDR data/metadata stream and, as such, some portion of the video processing system may affect HDR, EDR and/or VDR image/video processing. Various systems, techniques and/or technologies involving HDR, EDR and VDR data and metadata processing may be found in the following co-owned patent applications:
In addition, Display Management subsystems may comprise a part of the system for providing a pleasing viewing experience for such composited image/video data on a first data stream. DM systems typically comprise a processor, computer readable storage and a set of computer readable instructions that are suitable to affect a wide array of image processing algorithms and techniques—e.g., luminance mapping, color gamut mapping, dynamic range mapping.
DM systems are further described in the following co-owned US patent applications:
In one embodiment, to provide a pleasing visual experience, it may be desirable to mix the composite signal with the first image/video signal in accordance with the characteristics of the first signal and/or the characteristics of the display. For example, as video and display technology improves, there is a trend toward displays that are capable of rendering VDR/EDR/HDR data. Such data—and displays that are capable of rendering such data—provide suitable means to faithfully reproduce a movie/video in the way the director intended—i.e., to within the capabilities of the display hardware used to show and/or render the data. In one instance, higher luminance levels, especially for highlights, may be reproduced—which was not typically possible with legacy approaches.
Apart from the higher quality image/video data distributed to the display/TV, other image elements (not necessarily comprising the actual movie/video content) include user interface elements—e.g., menus, cursors and other on-screen display elements such as closed captioning or Blu-ray disc menus. However, the appearance rendering of those elements is typically not defined for EDR/VDR—nor is it typically defined with respect to legacy video.
Thus, one embodiment of a present system affects a perceptually accurate rendering of user interface elements on a display device using the systems and methods disclosed herein—to affect a colorimetrically, perceptually and aesthetically correct rendering of those before-mentioned user interface (UI) elements and other image/video content.
There are many visual features to note in
Examining the legacy approach of
While the rendering of the white menu text with the maximum possible code value (e.g., slightly below EDR code value 4096 in 12-bit) may tend to cause text to be perceived as glowing, it might also create discomfort due to the intense dynamic range difference between the text and the movie/film background. Instead of rendering with a preset code value, it may be possible to apply an absolute luminance level (e.g., 300 nits, or based on results from VDR limits studies) as well as the whitepoint (e.g., averaged over scene, chapter or whole movie), as further defined by the EDR input and display device capabilities. In addition, the extent of the color gamut can be taken into account to adjust the chroma extent that is used by overlaid text and graphics (e.g., avoiding highly saturated green text on a black and white scene).
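A minimal sketch of this idea follows. It caps overlay text at an absolute luminance (300 nits in the example above) rather than the maximum code value, and limits overlay saturation to roughly the scene's chroma extent. The function name, the dictionary-free interface, and the 1.25 saturation-headroom factor are assumptions for illustration; this is not the claimed mapping algorithm.

```python
def map_overlay_appearance(display_max_nits, scene_max_saturation,
                           text_target_nits=300.0):
    """Illustrative sketch (not the claimed method): choose an absolute
    luminance for overlay text instead of the maximum code value, and
    limit overlay chroma to roughly the scene's gamut extent (e.g., to
    avoid saturated green text over a black-and-white scene)."""
    # Absolute luminance target, further limited by display capability.
    text_nits = min(text_target_nits, display_max_nits)
    # Allow slightly more saturation than the scene, but never more
    # than fully saturated; the 1.25 headroom factor is an assumption.
    text_saturation = min(1.0, scene_max_saturation * 1.25)
    return text_nits, text_saturation
```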
These effects are also possible for rendering subtitles in DVD and Blu-ray. The subtitles in many movies may be in color. In order to maintain consistency in brightness and color, it may be desirable to map the overlaid subtitle images to the characteristics of the content. This mapping may be affected with scene-based parameters, since the overlays may later be mapped down to the display capability along with the content, where the mapping may be scene-adaptive. The pair of up- and down-mapping processes may tend to make the subtitles appear perceptually correct and consistent.
By contrast,
In addition to those image statistics mentioned, a color palette may be computed for each image, scene and/or movie—e.g., by analyzing a histogram, spatial pixel correlations or other image and/or scene intrinsic properties. The combination of image statistics and/or color palette may comprise a set of metadata (e.g. 302′, 304′ and 306′, respectively). In addition, similar metadata may be gathered regarding the capabilities of the display—e.g., luminance (min, mean, max), color temperature, reflective white point, color gamut, primaries, etc. This metadata may then be used by some portion of the system—e.g. compositor, or DM (if that functionality has been incorporated into the DM itself).
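One way to picture such a metadata set is sketched below: minimum/mean/maximum luminance statistics plus a crude color palette derived from a coarse histogram. The statistic names and the 3-bit channel bucketing are assumptions made purely for illustration, not the computation specified by this disclosure.

```python
from collections import Counter

def compute_image_metadata(luma_values, rgb_pixels, palette_size=3):
    """Sketch of per-image metadata: min/mean/max luminance plus a
    dominant-color palette from a coarse histogram (scheme is assumed)."""
    stats = {
        "min_luminance": min(luma_values),
        "mean_luminance": sum(luma_values) / len(luma_values),
        "max_luminance": max(luma_values),
    }
    # Quantize each 8-bit channel to 3 bits and count buckets; the most
    # common buckets stand in for the image's dominant color palette.
    histogram = Counter((r >> 5, g >> 5, b >> 5) for r, g, b in rgb_pixels)
    palette = [bucket for bucket, _ in histogram.most_common(palette_size)]
    return stats, palette
```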
Embodiments Involving EDR TV
As previously mentioned, TV and/or displays are exhibiting more capability to render higher dynamic range image/video. EDR/VDR render-capable displays (e.g. Ultra High Definition (UHD) sets) are becoming more accepted by the consumer. As such
The first image data and the composite image data may be mixed or otherwise overlaid by a mixing module 408, the output of which may be input into an HDMI transmit module 410. Module 410 may accept as input the EDR metadata and the composited image data (i.e., the first image data mixed with the composite image data)—as well as information regarding the capabilities of the display to display EDR image data.
As seen in
The composited signal may be received by HDMI receiver module 412, which may pass that signal through (with or without additional processing) to DM module 414. DM module 414 may provide some additional processing to ensure that the image data is in line with the capabilities of display 416, for a pleasing visual experience for the viewer.
Embodiment Involving Legacy TV/Displays
The first image/video data may be mixed or otherwise composited with the composite image/video data at mixer module 612—and thereafter sent to HDMI transceiver 614. Module 614 may receive the display capabilities from HDMI receiver module 616 (via EDID interface) and appropriate image processing may take place to be in accord with display 618 capabilities.
In another embodiment, if a legacy device without EDR capabilities (e.g., PC, games console, VCR, etc.) displays picture-in-picture (PIP) content, it may be desirable that the dynamic range of that content be managed while being superimposed into the VDR image. This management/mapping information may, for example, be the min/max reflective white and white point from the DM process. In yet another embodiment, if the connected device is a personal computer, those values could also be communicated back to the PC (e.g., via an HDMI back channel) to adjust the rendering of the graphics card before sending it to the display.
As with the various embodiments disclosed herein that may encompass a number of possible configurations for overlaying and/or compositing a second image/video data onto a first image/video data,
Starting at 700, compositing module/routine may input metadata of overlay/composite image/video data (aka, a second image/video data) at 702. Such metadata may be computed and/or compiled as image statistics or color palette (e.g., as previously discussed) or any other known manner (e.g. streaming metadata with the second image/video data). At 704, module/routine may input metadata regarding the first image/video data (e.g., luminance, dynamic range, whitepoint, etc.). Such metadata may be computed and/or compiled as image statistics or color palette (e.g., as previously discussed) or any other known manner (e.g. streaming metadata with the first image/video data). At 706, module/routine may input metadata regarding the characteristics of the display upon which the composited image/video data (i.e. first image/video data together with the overlay/composite image/video data) is to be rendered. At 708, module/routine may perform an appearance mapping or otherwise compositing of the overlay/composite image/video data—that provides a pleasing visual appearance—to form or otherwise create a composite image/video data. This mapping may take into consideration many possible heuristic rules and/or goals that are embedded into the module/routine that provides such a pleasing appearance. Alternatively, the module/routine may perform appearance mapping upon the first image/video data, if desired or appropriate.
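The flow of steps 702 through 710 can be sketched as follows. The linear luminance remap and the dictionary-style metadata are assumptions made purely for illustration; an actual implementation would use a perceptual appearance model rather than a linear rescale.

```python
def appearance_map(overlay_nits, first_meta, display_meta):
    """Toy appearance mapping (steps 702-708): linearly remap overlay
    luminance into a range informed by the first image's metadata and
    the display's capabilities. A real mapping would be perceptual."""
    hi = min(first_meta["max"], display_meta["max"])
    lo = max(first_meta["min"], display_meta["min"])
    src_lo, src_hi = min(overlay_nits), max(overlay_nits)
    scale = (hi - lo) / (src_hi - src_lo) if src_hi > src_lo else 1.0
    return [lo + (v - src_lo) * scale for v in overlay_nits]

def form_composite(first_nits, overlay_nits, region,
                   first_meta, display_meta):
    """Step 710: paste the appearance-mapped overlay over part of the
    first image; region holds the target pixel indices."""
    mapped = appearance_map(overlay_nits, first_meta, display_meta)
    out = list(first_nits)
    for idx, value in zip(region, mapped):
        out[idx] = value
    return out
```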
Such rules and/or goals may affect approximating good fits to luminance, dynamic range, color gamut, color appearance and the like, using various techniques. Some such methods and/or techniques for modifying display settings (e.g., dynamic range) and implementing color appearance models are further disclosed in co-owned US patent applications:
Once such mappings are calculated and/or approximated, the composite image may be formed and the resulting image/video data may be sent forward to the display at 710. This processing may continue indefinitely while there are images/video to be overlaid.
Embodiment of HDR/EDR/VDR Processing
This mapping information may be sent to the compositor (or any module having processing similar to the compositor as previously mentioned). This compositor (or the like) may receive as input the overlay/composite content and map this content into DR ranges illustrated by 810a′, 810b′ and 810c′, respectively.
It should be noted that the dynamic range of the actual content in the overlay/composite input image data (e.g., UI, CC, text, etc.) may not use the full dynamic range of the display. Instead, the system selects a range that tends to perceptually match, as closely as possible, the dynamic range and lightness of the first image data, within the capabilities of the display.
Embodiment of Chroma Mapping
In addition to visually pleasing mapping with respect to dynamic range, it may also be desirable to map and/or composite the image/video data to a visually pleasing color gamut.
During DM processing (as depicted by
When the composite image (i.e., the first image/video data and the overlay data) is to be mapped onto the display, a further gamut mapping (as shown in
As one possible application of the composite image processing mentioned herein, it may be possible and/or desirable to consider dynamic content replacement with images/scenes and/or movies. One exemplary case would be to place advertisements in existing content—or replace advertisements that are in existing content, with other advertisements.
In
Another embodiment may be to provide a texture that may then be mapped into place on the billboard by using a geometric transform module, such as—e.g., a Graphics Processing Unit (GPU).
In addition, for real time compositing into an existing EDR video stream, the appearance/DM metadata in combination with alpha channels may be used to composite information into the video image. This may be used to aid in swapping advertisements on billboards appearing in movie footage. If the appearance parameters of the movie are known, the advertisement may be mapped in without the viewer noticing the composition. With increasing computing power in play-out devices, this is feasible in the near future.
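The alpha-channel compositing mentioned here is, in its simplest per-pixel form, the standard blend sketched below. In the pipeline described above, the foreground values would already have been appearance mapped; the function shown is only the final mixing step.

```python
def alpha_blend(background, foreground, alpha):
    """Per-pixel alpha compositing: alpha = 1.0 keeps the foreground
    (e.g., the swapped-in advertisement), alpha = 0.0 keeps the
    original frame, and intermediate values blend the two."""
    return [a * f + (1.0 - a) * b
            for b, f, a in zip(background, foreground, alpha)]
```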
Adding Additional Content to Existing Image/Video
When restoring and/or reformatting legacy image/video content, it may be desirable to identify areas within such legacy content to add new, additional content. For example, it may be possible to identify valuable advertising areas in legacy movies/TV shows. In one embodiment, a geometric mapping function for VDR-graded movies may be employed to help perform this compositing. A pixel/vertex shader program (e.g., as often used in computer graphics/games) may also be employed to aid in the compositing of new content into such VDR streams, as discussed below.
When restoring/regrading legacy content, system 1100 may identify valuable advertising areas in collaboration with the owner of the movie/TV show 1104. This may be performed by an off-line process 1102 and may be used to create a vertex and pixel shader program 1106. Such a shader program would describe how to map any texture (e.g., a rectangular image 1112) into the 2D shapes needed to composite it into a VDR stream. In one embodiment, creating such a shader program 1106 may come at a minor additional cost/effort, as it only has to be done for areas of (e.g., advertisement) interest and not the full movie. This may be an automatic process, a semi-automatic process (using computer vision approaches such as feature tracking, motion vector analysis, etc.), or a manual process (carried out by an artist).
Vertex and pixel shader program 1106 is then provided with the movie/TV show and creates a subset of VDR metadata, which is provided either as metadata attached to the VDR stream or via external means (e.g., the internet).
This mapping function (1106) can now be used to go from a rectangular texture (1112) to the appropriately mapped pixel data (1112′) in an appropriate bounding box (1118) by using the vertex and pixel shader program (1106) in conjunction with a geometry mapper and shader (1116).
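As a rough stand-in for what such a vertex and pixel shader program does, the sketch below maps a rectangular texture onto an arbitrary quad by bilinearly blending the quad's corner coordinates. A true shader would also handle perspective correction, filtering and occlusion; all names here are illustrative, not drawn from the disclosure.

```python
def map_texture_to_quad(texture, quad):
    """Map each texel of a rectangular texture to a position inside a
    quad given as four (x, y) corners in the order top-left, top-right,
    bottom-right, bottom-left. Returns (x, y, texel) triples."""
    rows, cols = len(texture), len(texture[0])
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    mapped = []
    for j in range(rows):
        v = j / (rows - 1) if rows > 1 else 0.0
        row = []
        for i in range(cols):
            u = i / (cols - 1) if cols > 1 else 0.0
            # Bilinear blend of the four corner positions.
            x = (1-u)*(1-v)*x0 + u*(1-v)*x1 + u*v*x2 + (1-u)*v*x3
            y = (1-u)*(1-v)*y0 + u*(1-v)*y1 + u*v*y2 + (1-u)*v*y3
            row.append((x, y, texture[j][i]))
        mapped.append(row)
    return mapped
```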
The final bounding box (1118) may now be mapped into the dynamic range and/or color gamut (or other image characteristic) using a DM module (1120) to match or substantially match the mapped output of the first image/video data (1104′, after 1130). It should be noted that both DM modules (1120 and 1130) may use the same or similar set of VDR metadata 1108 in order to create a matching mapping result. Now it may be composited into the VDR movie (1132) in conjunction with any other desired composites (e.g. GUI, CC, etc. using inputs such as 1122 and 1126) in the same way as described in
In one embodiment, program 1106 may be reused (once created) indefinitely with any new content 1108. Advertisers only have to provide a texture and, in some embodiments, appearance mapping data (e.g., an accurate colorimetric description of elements of their advertisements, such as logos or products). It would be possible to appropriately map that texture into the VDR movie using the vertex and pixel shader programs created while restoring/regrading the movie to VDR. It should be noted that this embodiment is not limited to single-frame textures; it is also valid for advertisement clips (e.g., short films).
A detailed description of one or more embodiments of the invention, read along with accompanying figures, that illustrate the principles of the invention has now been given. It is to be appreciated that the invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details have been set forth in this description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
This application is a continuation of U.S. patent application Ser. No. 16/037,576, filed Jul. 17, 2018, which is a continuation of U.S. patent application Ser. No. 14/768,345, filed Aug. 17, 2015 (now issued U.S. Pat. No. 10,055,866), which in turn is the 371 national stage of PCT/US2014/013218, filed Jan. 27, 2014. PCT/US2014/013218 claims priority to U.S. Provisional Patent Application No. 61/767,553, filed on Feb. 21, 2013, each of which is hereby incorporated by reference in its entirety. This application is also related to U.S. Provisional Patent Application No. 61/767,522, filed on Feb. 21, 2013, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6571255 | Gonsalves | May 2003 | B1 |
6980224 | Wiant, Jr. | Dec 2005 | B2 |
7050109 | Safadi | May 2006 | B2 |
7064759 | Feierbach | Jun 2006 | B1 |
7184063 | Shum | Feb 2007 | B2 |
7394565 | Stokes | Jul 2008 | B2 |
7894524 | Demos | Feb 2011 | B2 |
7961784 | Demos | Jun 2011 | B2 |
8050323 | Demos | Nov 2011 | B2 |
8091038 | Johnson | Jan 2012 | B1 |
8422795 | Pahalawatta | Apr 2013 | B2 |
8477851 | Demos | Jul 2013 | B2 |
8525933 | Atkins | Sep 2013 | B2 |
8594188 | Demos | Nov 2013 | B2 |
8660352 | Gish | Feb 2014 | B2 |
8767004 | Longhurst | Jul 2014 | B2 |
8836796 | Dickins | Sep 2014 | B2 |
8982963 | Gish | Mar 2015 | B2 |
10097822 | Newton | Oct 2018 | B2 |
20030046401 | Abbott | Mar 2003 | A1 |
20040218097 | Huang | Nov 2004 | A1 |
20070121005 | Gutta | May 2007 | A1 |
20070268967 | Demos | Nov 2007 | A1 |
20070288844 | Zingher et al. | Dec 2007 | A1 |
20080129877 | Ohno | Jun 2008 | A1 |
20080273809 | Demos | Nov 2008 | A1 |
20080307342 | Furches | Dec 2008 | A1 |
20090086816 | Leontaris | Apr 2009 | A1 |
20090322800 | Atkins | Dec 2009 | A1 |
20100014587 | Demos | Jan 2010 | A1 |
20100053222 | Kerofsky | Mar 2010 | A1 |
20100110000 | De Greef | May 2010 | A1 |
20100118957 | Demos | May 2010 | A1 |
20100150526 | Rose | Jun 2010 | A1 |
20100158099 | Kalva | Jun 2010 | A1 |
20100231603 | Kang | Sep 2010 | A1 |
20110103470 | Demos | May 2011 | A1 |
20110194618 | Gish | Aug 2011 | A1 |
20110305391 | Kunkel | Dec 2011 | A1 |
20110311147 | Pahalawatta | Dec 2011 | A1 |
20120026405 | Atkins | Feb 2012 | A1 |
20120038782 | Messmer | Feb 2012 | A1 |
20120051635 | Kunkel | Mar 2012 | A1 |
20120074851 | Erinjippurath | Mar 2012 | A1 |
20120075435 | Hovanky | Mar 2012 | A1 |
20120127324 | Dickins | May 2012 | A1 |
20120200593 | Todd | Aug 2012 | A1 |
20120218290 | Waschbuesch | Aug 2012 | A1 |
20120229495 | Longhurst | Sep 2012 | A1 |
20120299817 | Atkins | Nov 2012 | A1 |
20120314773 | Gish | Dec 2012 | A1 |
20120314944 | Ninan | Dec 2012 | A1 |
20120315011 | Messmer | Dec 2012 | A1 |
20120320014 | Longhurst | Dec 2012 | A1 |
20120321273 | Messmer | Dec 2012 | A1 |
20130004074 | Gish | Jan 2013 | A1 |
20130027615 | Li | Jan 2013 | A1 |
20140023614 | Barawkar | Jan 2014 | A1 |
20140125696 | Newton | May 2014 | A1 |
20140168277 | Ashley | Jun 2014 | A1 |
20140210847 | Knibbeler | Jul 2014 | A1 |
20140225941 | Van Der Vleuten | Aug 2014 | A1 |
20140232614 | Kunkel | Aug 2014 | A1 |
20190156471 | Knibbeler | May 2019 | A1 |
Number | Date | Country |
---|---|---|
101438579 | May 2009 | CN |
103597812 | Feb 2014 | CN |
2230839 | Sep 2010 | EP |
2000050158 | Feb 2000 | JP |
2000286880 | Oct 2000 | JP |
2009081542 | Apr 2009 | JP |
2012501099 | Jan 2012 | JP |
2012521133 | Sep 2012 | JP |
2012172460 | Dec 2012 | WO |
Entry |
---|
Banterle, F. et al “Expanding Low Dynamic Range Videos for High Dynamic Range Applications” ACM Spring Conference on Computer Graphics, Apr. 21, 2008, pp. 1-8. |
Nishina, Y. et al “Lighting Environment Estimation by Adaptive High Dynamic Range Image Generation for Augmented Reality” pp. 185-190, Jan. 2008, No. 3. |
Number | Date | Country | |
---|---|---|---|
20200074710 A1 | Mar 2020 | US |
Number | Date | Country | |
---|---|---|---|
61767553 | Feb 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16037576 | Jul 2018 | US |
Child | 16674373 | US | |
Parent | 14768345 | US | |
Child | 16037576 | US |