This disclosure relates to a display system and method, and particularly to a display system and method capable of forming multiple images at multiple viewing zones using one or more multi-view (MV) pixels.
With advancement in display technology, display devices have become smaller, thinner and cheaper, with crisper images. The fundamental functionality of a display device, however, has remained substantially the same—a display device forms an image that simultaneously appears the same to viewers at all locations from which the display device can be seen.
According to an exemplary embodiment, a display system is provided which includes one or more multi-view (MV) pixels, wherein each MV pixel is configured to emit beamlets (individually controllable beams) in different directions in a beamlet coordinate system. The display system includes an input node which, in operation, receives a specification of multiple viewing zones located relative to the MV pixels in a viewing zone coordinate system. The display system includes a processor which is coupled to the input node. The processor associates multiple contents with the multiple viewing zones, respectively. The processor, in operation, determines (e.g., identifies, accesses) a mapping that translates between the viewing zone coordinate system (where the multiple viewing zones are specified) and the beamlet coordinate system (where the MV-pixel beamlets are emitted in different directions). For each of multiple images generated from the multiple contents, the processor, using the mapping between the two coordinate systems, identifies a bundle of beamlets from each of the MV pixels directed to one viewing zone to form the image. The bundle of beamlets directed to one viewing zone to form one image is different from the bundle of beamlets directed to another viewing zone to form another image. The processor outputs control signaling for the MV pixels, wherein the control signaling defines color and brightness of each of the beamlets in each bundle to project the corresponding image to the corresponding viewing zone. The MV pixel(s), in response to the control signaling from the processor, project the multiple images to the multiple viewing zones, respectively.
The display system constructed as described above uses a mapping that translates between the beamlet coordinate system, in which beamlets are emitted in different directions from each of the MV pixels, and the viewing zone coordinate system, in which multiple viewing zones are specified. Multiple contents are associated with the multiple viewing zones, respectively. The display system uses the mapping to identify a bundle of beamlets from each of the MV pixels directed to one viewing zone to form an image generated from the content associated with the viewing zone. The display system is capable of performing the same operation for each of the multiple viewing zones to project multiple (e.g., different) images generated from the multiple contents respectively associated with the multiple viewing zones.
The “image” as used herein may comprise one or more of a static image, a stream of images (e.g., video), a text pattern (e.g., messages, signage), a lighting pattern, and any other expression of content that is visible to human eyes.
In various embodiments, the processor associates the multiple contents with the multiple viewing zones by associating the multiple contents themselves with the multiple viewing zones, or by associating multiple content descriptors (e.g., content providers, content types) of the multiple contents with the multiple viewing zones.
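By way of illustration only, the following minimal Python sketch shows one way such an association might be represented in software; the zone identifiers, content values, and descriptor fields are hypothetical and are not taken from any particular embodiment.

```python
# Minimal sketch (hypothetical identifiers): each viewing zone is associated
# either with content itself or with a content descriptor used to obtain it.

contents_by_zone = {
    "zone_a": {"content": "WELCOME"},                       # the content itself
    "zone_b": {"descriptor": {"provider": "news_feed",      # a content descriptor
                              "type": "text"}},
}

def resolve_content(zone_id, fetch_by_descriptor):
    """Return displayable content for a zone, fetching by descriptor if needed."""
    entry = contents_by_zone[zone_id]
    if "content" in entry:
        return entry["content"]
    return fetch_by_descriptor(entry["descriptor"])

# Example: zone_b's content is obtained through its descriptor.
print(resolve_content("zone_b", lambda d: f"latest item from {d['provider']}"))
```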
In various embodiments, the display system includes a user-interface device which, in operation, receives an operator specification of the multiple viewing zones and sends the specification of the multiple viewing zones to the input node. The user-interface device may include a screen (e.g., touchscreen) capable of displaying a viewing area and specifying the multiple viewing zones in the viewing area in response to one or both of graphical input and textual input. For example, an operator may graphically specify perimeters of the multiple viewing zones (e.g., by “drawing” enclosure boxes), or textually specify coordinates of the multiple viewing zones in the viewing zone coordinate system.
In various embodiments, the display system may include a sensor configured to identify the multiple viewing zones and to send the specification of the multiple viewing zones to the input node. For example, the sensor may be configured to detect locations of multiple targets and specify the detected locations of the multiple targets as the multiple viewing zones. The multiple targets may be multiple viewers themselves (who may gesture to the sensor, for example) or multiple viewer surrogates, i.e., elements used to locate and/or track multiple viewers, such as tags the viewers may wear, trackable mobile devices (e.g., smartphones, wands) the viewers may carry, conveyances that may transport the viewers such as vehicles, or any other types of markers that may represent the viewers. When the sensor is used to identify locations of multiple targets that are moving, the input node of the display system may receive a new specification of new multiple viewing zones based on the identified locations of the multiple targets that have moved. The processor associates multiple contents with the new multiple viewing zones, respectively, and, for each of the multiple images generated from the multiple contents, uses the mapping that translates between the viewing zone coordinate system and the beamlet coordinate system to identify a bundle of beamlets from each of the MV pixels directed to each new viewing zone to form the image. The display system is capable of projecting the multiple images to the new multiple viewing zones, respectively. The multiple contents associated with the new multiple viewing zones may be updated from the multiple contents previously associated with the (old) multiple viewing zones.
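The update cycle described above might be organized, as a rough outline only, along the following lines; the sensor and processor interfaces invoked here are assumptions introduced for illustration, not elements of the display system itself.

```python
# Hypothetical update cycle: as tracked targets move, new viewing zones are
# specified around them, contents are (re)associated, and beamlet bundles are
# re-identified using the coordinate mapping. All interfaces below are assumed.

def tracking_update(sensor, processor, mv_pixels, mapping):
    targets = sensor.detect_targets()                       # viewers or surrogates
    zones = [processor.specify_zone_around(t) for t in targets]
    contents = processor.associate_contents(zones)          # may update old associations
    for zone, content in zip(zones, contents):
        image = processor.generate_image(content)
        bundles = {p.id: processor.identify_bundle(p, zone, mapping)
                   for p in mv_pixels}                       # beamlets hitting this zone
        processor.send_control_signaling(bundles, image)     # color/brightness per beamlet
```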
In a further aspect, a display method is provided, which generally corresponds to an operation of the display system described above. The method generally includes the following six steps (an illustrative sketch of the overall flow follows the list):
1) receiving a specification of multiple viewing zones located in a viewing zone coordinate system, wherein the multiple viewing zones are positioned relative to one or more multi-view (MV) pixels, and each MV pixel is configured to emit beamlets in different directions in a beamlet coordinate system;
2) associating multiple contents with the multiple viewing zones, respectively;
3) determining a mapping that translates between the viewing zone coordinate system and the beamlet coordinate system;
4) for each of multiple images generated from the multiple contents, using the mapping, identifying a bundle of beamlets from each of the MV pixels directed to one viewing zone to form the image, wherein the bundle of beamlets directed to one viewing zone to form one image is different from the bundle of beamlets directed to another viewing zone to form another image;
5) generating control signaling for the MV pixels, the control signaling defining color and brightness of each of the beamlets in each bundle to project the corresponding image to the corresponding viewing zone; and
6) in response to the control signaling, projecting, from the MV pixels, the multiple images to the multiple viewing zones, respectively.
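By way of a simplified, self-contained illustration of these six steps (all names, the 4x4 beamlet grid, the identity-like mapping, and the single-color "images" are hypothetical), the flow might look like the following in Python:

```python
# Toy illustration of the six steps: one MV pixel emits a 4x4 grid of beamlets;
# the "mapping" gives each beamlet's landing point in a 2D viewing-zone
# coordinate system; a zone is an axis-aligned rectangle; each zone's image is
# reduced to a solid color for brevity.

from itertools import product

BEAMLET_GRID = list(product(range(4), range(4)))         # beamlet indices (u, v)

def mapping(u, v):
    """Translate a beamlet index to its landing point in zone coordinates."""
    return (u * 1.0, v * 1.0)                             # assumed calibration result

zones = {                                                 # 1) specification of zones
    "left":  {"rect": (0.0, 0.0, 1.5, 3.5)},              #    (xmin, ymin, xmax, ymax)
    "right": {"rect": (2.5, 0.0, 3.5, 3.5)},
}
contents = {"left": (255, 0, 0), "right": (0, 0, 255)}    # 2) contents per zone

def in_rect(point, rect):
    x, y = point
    xmin, ymin, xmax, ymax = rect
    return xmin <= x <= xmax and ymin <= y <= ymax

bundles = {}                                              # 4) per-zone beamlet bundles
for name, zone in zones.items():
    bundles[name] = [(u, v) for (u, v) in BEAMLET_GRID    # 3) mapping applied here
                     if in_rect(mapping(u, v), zone["rect"])]

signaling = {}                                            # 5) color/brightness per beamlet
for name, bundle in bundles.items():
    for beamlet in bundle:
        signaling[beamlet] = contents[name]

print(signaling)                                          # 6) drive the MV pixel with this
```

In practice the mapping would come from calibration or registration rather than being assumed, and the per-beamlet colors would be sampled from full images rather than reduced to a single color.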
In the drawings, identical reference numbers identify similar elements. The sizes and relative positions of elements in the drawings are not necessarily drawn to scale.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques are not shown in detail, but rather are shown in block diagram form, in order to avoid unnecessarily obscuring an understanding of this description. Thus, the specific details set forth are merely exemplary. Particular implementations may vary from these exemplary details and still be contemplated to be within the scope of the present invention. Reference in the description to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The phrase “in one embodiment” appearing in various places in this description does not necessarily refer to the same embodiment.
Referring back to
The viewing zone coordinate system 40 may be any suitable coordinate system, such as a Cartesian coordinate system, or a polar coordinate system in which multiple viewing zones are positioned to surround the one or more MV pixels, for example. Any suitable 3D space modeling method may be used to define the viewing zone coordinate system 40, such as a map, point cloud, wire polygon mesh, and textured polygon mesh. In some embodiments, the viewing zone coordinate system 40 may be based on the physical dimensions of a viewing area in which the multiple viewing zones 18 are defined.
In some embodiments, the viewing zone coordinate system 40 may be within sight of a 3D sensor attached to the MV pixels (e.g., a depth sensor, a stereoscopic camera) and the viewing zone coordinate system 40 can be the 3D coordinate system of the 3D sensor. For example, a real-life 3D environment is scanned by a 3D sensor (e.g., stereoscopic camera) to derive the 3D viewing zone coordinate system 40, in which multiple viewing zones may be specified.
In other embodiments, the viewing area may be within sight of a 2D camera attached to the MV pixels, wherein the 2D camera is used as a sensor to identify the multiple viewing zones. In this case the viewing zone coordinate system 40 is based on the 2D pixel coordinate system of the 2D camera. For example,
Multiple viewing zones 18 may be specified in various ways. According to some embodiments, the display system 10 may include a user-interface (UI) device 20 which, in operation, receives an operator specification of the multiple viewing zones 18 and sends the specification of the multiple viewing zones to the input node 16, as shown in
The operator may specify each viewing zone graphically, for example, by “drawing” a point, a 2D shape (e.g., a polygon, circle, oval, freeform shape) and/or a 3D shape (e.g., a box, sphere) that represents an observation point or represents (e.g., encloses) a collection of observation points. In the illustrated example of
In some embodiments, the UI device 20 need not include a screen capable of displaying a viewing area, for example, when the operator may not require a visualization of the viewing area in order to specify multiple viewing zones. In these embodiments, the UI device 20 need only include a component configured to receive the operator specification of multiple viewing zones. The component may be, without limitation, a keyboard or keypad on which the operator may type indications (e.g., seat numbers, section numbers) corresponding to viewing zones; a microphone into which the operator may speak indications of viewing zones; a touch/gesture-sensitive pad on which the operator may tap/gesture indications of viewing zones; an optical pointer the operator may use to point into the viewing area to specify each viewing zone, etc.
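As a purely hypothetical sketch of how such textual input might be resolved into viewing zones, seat numbers could be looked up in a venue-specific table of pre-surveyed zone coordinates (the seat identifiers and coordinates below are invented for illustration):

```python
# Hypothetical sketch: textual operator input (here, seat numbers) is resolved
# to viewing zones through a venue-specific lookup table of zone coordinates.

SEAT_ZONES = {                       # assumed pre-surveyed, in zone coordinates
    "A1": (0.0, 0.0, 0.6, 0.6),      # (xmin, ymin, xmax, ymax) around seat A1
    "A2": (0.6, 0.0, 1.2, 0.6),
}

def zones_from_seat_list(text):
    """Parse 'A1, A2' style operator input into viewing-zone rectangles."""
    seats = [s.strip().upper() for s in text.split(",") if s.strip()]
    return {seat: SEAT_ZONES[seat] for seat in seats}

print(zones_from_seat_list("a1, a2"))
```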
According to other embodiments, the display system 10 may include a sensor 26 configured to identify the multiple viewing zones 18 and to send the specification of the multiple viewing zones to the input node 16, as shown in
For example, one or more cameras having suitable lenses and lighting may be used as a sensor that can recognize and locate multiple targets 28 to correspondingly specify the multiple viewing zones 18. In some embodiments, the camera(s) may be depth-aware cameras, such as structured light or time-of-flight cameras, which can generate a depth map of what is being seen through the camera at a short range. The depth map may then be processed to approximate a 3D representation of what is being seen. In other embodiments, the camera(s) may be stereoscopic cameras and/or LIDAR sensors.
In the illustrated example of
In further embodiments, the sensor may be configured to identify (e.g., pick up) attributes of the viewing zone, such as audio (e.g., speech or other sound made by a viewer or viewer surrogate), temperature (e.g., heat emanating from a viewer or viewer surrogate), etc. The identified attributes may be used, for example, by a zones-and-contents association module 36 of the processor 50, to be described below, to select or generate appropriate content for the viewing zone (e.g., a cold drink advertisement selected/generated for a viewer in a high-temperature viewing zone).
In some embodiments, the propagation path of each beamlet may be found based on a geometric model of the one or more MV pixels. For example, the geometric definitions of and relationships among the beamlets of an MV pixel may be found in a factory via calibration measurements, or may be inferred from the opto-mechanical design of the MV pixel, such as a known radial distortion of a lens included in the MV pixel. In various embodiments, the beamlets (e.g., the sources of the beamlets) in each MV pixel are arranged in a geometric array (e.g., 2D array, circular array). Propagation paths of the beamlets arranged in a geometric array can be geometrically defined using any suitable mathematical techniques including, without limitation, linear interpolation; linear extrapolation; non-linear interpolation; non-linear extrapolation; Taylor-series approximation; linear change of reference frame; non-linear change of reference frame; polynomial, spherical and/or exponential models; and trigonometric manipulation. As a particular example, once the propagation paths of selected beamlets are geometrically defined, suitable interpolation techniques may be used to find the propagation paths of the beamlets between those geometrically-defined beamlets. In other embodiments, the propagation path of each beamlet may be found by flashing patterns on the MV pixels (e.g., by selectively turning on and off the beamlets on each MV pixel) to uniquely encode every beamlet, and capturing the images of the flashing patterns using a camera placed in a viewing area of the MV pixels. The captured images can then be plotted onto the beamlet coordinate system 42 to geometrically define respective propagation paths of the beamlets. Various encoding patterns may be used as the flashing patterns, including, without limitation, Gray-code patterns, non-return-to-zero (NRZ) digital sequences, amplitude-shift-keyed (ASK) bits, maximum-length sequences, and shift-register sequences.
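As one illustration of the flashing-pattern approach (the frame count, beamlet count, and single-location decoding below are simplifying assumptions), each beamlet index can be Gray-coded and flashed bit by bit, so that the on/off sequence observed at any one location uniquely identifies the beamlet seen there:

```python
# Sketch of Gray-code flashing patterns that uniquely encode every beamlet
# (indices, frame count, and decoding are simplified for illustration).

def gray_encode(index):
    return index ^ (index >> 1)

def gray_decode(code):
    index = 0
    while code:
        index ^= code
        code >>= 1
    return index

def flash_frames(num_beamlets, num_frames):
    """frame[b][i] is True if beamlet i is ON during frame b."""
    return [[bool(gray_encode(i) >> b & 1) for i in range(num_beamlets)]
            for b in range(num_frames)]

def observed_index(on_off_sequence):
    """Recover a beamlet index from the on/off sequence seen at one location."""
    code = sum(int(on) << b for b, on in enumerate(on_off_sequence))
    return gray_decode(code)

frames = flash_frames(num_beamlets=16, num_frames=4)
seen = [frames[b][9] for b in range(4)]       # what a camera sees for beamlet 9
assert observed_index(seen) == 9
```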
Although beamlets 14 are depicted in the accompanying figures as simple lines with arrowheads indicating their directions of emission, each beamlet can have an angular extent and can take any shape. Characterizing a beamlet as a simple line is thus an approximation, which is a valid model in some embodiments; in other embodiments the beamlet may be modeled as having a shape similar to the beam from a searchlight, for example. In various exemplary embodiments, each beamlet 14 is wide/large enough that both eyes of a viewer are expected to be within the beamlet 14, so that the beamlet 14 falls upon both eyes of the viewer. Thus, the viewer sees the same beamlet 14 (e.g., the same color and brightness) with both eyes. In other embodiments, each beamlet 14 is narrow/small enough that two different beamlets 14 can be individually controlled to fall upon the viewer's two eyes, respectively. In this case the viewer sees two beamlets 14, of possibly different colors and/or brightness, one with each eye.
Returning to
The processor 50 is capable of populating, updating, using and managing data in a processor-accessible memory 35, which is illustrated as part of the processor 50 in
The processor 50 receives, via the input node 16, the specification of the multiple viewing zones 18a and 18b, for example, from the UI device 20 (see
The processor 50 associates multiple contents with the multiple viewing zones 18a and 18b. This may be done by associating the multiple contents themselves with the multiple viewing zones 18a and 18b, or by associating multiple content descriptors, such as multiple content providers (e.g., cable channels, movie channels, live stream sources, news websites, social websites) or multiple content types, with the multiple viewing zones 18a and 18b.
The processor 50 determines (e.g., identifies, accesses) a mapping that translates between the viewing zone coordinate system 40 and the beamlet coordinate system 42 (
The mapping may take any of various forms, such as a table or a mathematical relationship expressed in one or more translational functions. In some embodiments, the mapping may be based on registration of reference indicia (e.g., points, lines, shapes) defined in the viewing zone coordinate system 40 and in the beamlet coordinate system 42. For example, a first camera attached to the one or more MV pixels 12 is used to capture images of a viewing area 23 of the MV pixels 12. A registration device (not shown) including a second camera and a light source (e.g., an LED) is placed in the viewing area, and the light source is flashed, which is captured by the first camera of the MV pixels 12. The location of the flashing light in the viewing area as imaged by the first camera may serve as a reference in the viewing zone coordinate system 40 (which may be based on the coordinate system of the first camera). Encoding patterns (e.g., Gray-code patterns, non-return-to-zero (NRZ) digital sequences, amplitude-shift-keyed (ASK) bits, maximum-length sequences, shift-register sequences) are flashed on the one or more MV pixels (by selectively turning on and off the beamlets on each MV pixel) to uniquely encode every beamlet emitted from each MV pixel. The beamlet from each MV pixel that is captured by the second camera of the registration device placed in the viewing area may be identified (because each beamlet is uniquely encoded) and used as a reference in the beamlet coordinate system 42. The same process may be repeated with the registration device moved to different positions in the viewing area, to thereby obtain a set of references in the viewing zone coordinate system 40 and a set of references in the beamlet coordinate system 42. The mapping that translates between the two coordinate systems 40 and 42 may be found so as to register, align or otherwise correlate these two sets of references in the two coordinate systems. Any other registration techniques in image processing, such as automatic 3D point cloud registration, may also be used to perform the registration.
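As a simplified sketch of the registration step (assuming, for illustration only, that an affine relationship between the two coordinate systems is adequate and using invented reference values), the mapping could be fitted to the collected reference pairs by least squares:

```python
# Sketch: fit an affine mapping from viewing-zone coordinates to beamlet
# coordinates from reference pairs gathered with a registration device.

import numpy as np

# Each pair: (x, y) of the flashing light in the viewing-zone coordinate system,
# and (u, v) of the uniquely encoded beamlet seen by the registration camera.
zone_refs    = np.array([[10.0, 12.0], [40.0, 11.0], [39.0, 45.0], [12.0, 44.0]])
beamlet_refs = np.array([[ 2.0,  3.0], [ 9.0,  3.0], [ 9.0, 10.0], [ 2.0, 10.0]])

A = np.hstack([zone_refs, np.ones((len(zone_refs), 1))])   # [x, y, 1] rows
coeffs, *_ = np.linalg.lstsq(A, beamlet_refs, rcond=None)  # 3x2 affine parameters

def zone_to_beamlet(x, y):
    """Translate a viewing-zone coordinate to a (u, v) beamlet coordinate."""
    return np.array([x, y, 1.0]) @ coeffs

print(zone_to_beamlet(25.0, 28.0))   # roughly the center of the example patch
```

A real system might instead fit a nonlinear model, a homography, or a lookup table, depending on the optics and the sensors involved.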
As illustrated in
In
In
In each of these examples, a bundle of beamlets 14 that will “hit” one viewing zone is identified, and the color and brightness of each of the beamlets in the bundle are set, by the control signaling 54, to correspond to the content associated with the viewing zone so as to form an image based on the content at the viewing zone.
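One hypothetical way to set those colors so that the bundle reproduces the content image at the viewing zone is to sample the image according to where each beamlet lands within the zone; the geometry and the tiny example image below are assumptions for illustration:

```python
# Sketch (hypothetical geometry): each beamlet in a zone's bundle takes its
# color from the content-image pixel corresponding to where that beamlet lands
# within the zone, so the bundle reproduces the image at the viewing zone.

def beamlet_colors(bundle, landing_points, zone_rect, image):
    """bundle: beamlet ids; landing_points: id -> (x, y) in zone coordinates;
    zone_rect: (xmin, ymin, xmax, ymax); image: 2D list of (r, g, b) rows."""
    xmin, ymin, xmax, ymax = zone_rect
    rows, cols = len(image), len(image[0])
    colors = {}
    for b in bundle:
        x, y = landing_points[b]
        col = min(int((x - xmin) / (xmax - xmin) * cols), cols - 1)
        row = min(int((y - ymin) / (ymax - ymin) * rows), rows - 1)
        colors[b] = image[row][col]          # color/brightness for this beamlet
    return colors

# Tiny example: a 2x2 "image" projected by four beamlets landing in the zone.
image = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
points = {0: (0.1, 0.1), 1: (0.9, 0.1), 2: (0.1, 0.9), 3: (0.9, 0.9)}
print(beamlet_colors([0, 1, 2, 3], points, (0.0, 0.0, 1.0, 1.0), image))
```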
As used herein, “image” means anything that results from a pattern of illumination from the one or more MV pixels 12. The pattern of illumination is generated by turning “on” or “off” each of the beamlets emitted from each MV pixel 12 and/or controlling color and brightness (intensity) of each of the beamlets. Non-limiting examples of an image include any one or a combination of a static image, a stream of images (e.g., video), a text pattern (e.g., messages, signage), a lighting pattern (e.g., beamlets individually or collectively blinked, flashed e.g., at different or varying speeds, at different brightness/dimness levels, at different brightness/dimness increase or decrease rates, etc., or otherwise turned “on” and “off”), and any other expression of content that is visible to human eyes.
In some embodiments, the control signaling 54 may define, in addition to color and brightness, other parameters of each of the beamlets 14, such as spectral composition, polarization, beamlet shape, beamlet profile, focus, spatial coherence, temporal coherence, and overlap with other beamlets. Specifically, beamlets generally do not have a sharp edge and thus adjacent beamlets may somewhat overlap. The degree of overlap may be controlled by one of the beamlet parameters.
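Purely as an illustrative data-structure sketch (the field names and defaults are assumptions), the control signaling for a single beamlet might carry such parameters as follows:

```python
# Hypothetical per-beamlet control record: beyond color and brightness, the
# control signaling could also carry the additional parameters noted above.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BeamletControl:
    color: Tuple[int, int, int] = (0, 0, 0)   # (r, g, b)
    brightness: float = 0.0                   # 0.0 (off) .. 1.0 (full)
    polarization: Optional[str] = None
    focus: Optional[float] = None
    overlap: Optional[float] = None           # degree of overlap with neighbors

# e.g., beamlet 42 of MV pixel "12a" set to a bright orange:
signaling = {("12a", 42): BeamletControl(color=(255, 128, 0), brightness=0.8)}
```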
The control signaling 54 for the MV pixels 12 may be output from the processor 50 via any suitable medium including wireline and/or wireless medium, and via any suitable protocol (e.g., Bluetooth, Wi-Fi, cellular, optical, ultrasound).
In block 81 of
In the processor 50, a viewing zones processor 32 is responsible for processing the specification of the multiple viewing zones 18 as received via the input node 16. In some embodiments, the multiple viewing zones 18 as received via the input node 16 may be explicitly defined in the viewing zone coordinate system 40, for example, when the multiple viewing zones 18 are specified on the UI device 20 by an operator. In other embodiments, the multiple viewing zones 18 as received via the input node 16 may be implicitly defined, for example, in the form of the locations of multiple targets as identified by the sensor 26. In these embodiments, the viewing zones processor 32 receives the identified locations of multiple targets, and performs any necessary processing to explicitly specify the multiple viewing zones 18 based on the identified locations, such as by defining a point, a 2D shape, or a 3D shape that corresponds to each of the identified locations. The viewing zones processor 32 may use any of a number of image-processing techniques to process (e.g., recognize) the locations of multiple targets as identified by the sensor 26, such as stitching/registration, morphological filtering, thresholding, pixel counting, image segmentation, face detection, edge detection, and blob discovery and manipulation. The viewing zones processor 32 specifies multiple viewing zones based on the processed (e.g., recognized) locations of the multiple targets. In various embodiments, the multiple viewing zones may be stored in the memory 35 to be accessible by various components of the processor 50.
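As a minimal sketch of this step (the fixed zone size and 2D geometry are assumptions for illustration), detected target locations could be converted into explicit rectangular viewing zones as follows:

```python
# Sketch (assumed geometry): detected target locations are turned into explicit
# viewing zones by placing a fixed-size box around each location.

def zones_from_targets(target_points, half_width=0.5, half_height=0.5):
    """target_points: list of (x, y) locations in the viewing zone coordinate
    system; returns one rectangular zone (xmin, ymin, xmax, ymax) per target."""
    return [(x - half_width, y - half_height, x + half_width, y + half_height)
            for (x, y) in target_points]

print(zones_from_targets([(2.0, 1.0), (5.5, 1.2)]))
```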
In block 82 of
The multiple contents themselves (based on which images may be generated) may be stored, or the content descriptors (e.g., content providers, content types) that can be used to access the multiple contents, for example, via a network connection, may be stored. In these embodiments, the zones-and-contents association module 36 may select a particular content or content descriptor for each viewing zone. In other embodiments, the zones-and-contents association module 36 may create (generate) a particular content for each viewing zone.
The association program running on the zones-and-contents association module 36 is responsible for fetching or creating multiple contents for multiple viewing zones, respectively. The association program may refer to defined association rules to associate the multiple viewing zones 18 with multiple contents. For example, the rules may be used to select or create a particular content for each viewing zone based on the characteristics of the viewing zone or, if the sensor 26 is used to detect a location of a target (e.g., a viewer or a viewer surrogate) to specify a viewing zone, based on the characteristics of the target. As a specific example, multiple contents may be associated with the locations of the viewing zones relative to the one or more MV pixels 12, such that those contents can be used as bases to generate images that are particularly selected as appropriate for display at the locations. As another example, multiple contents are associated with the targets (e.g., viewers) at the viewing zones, such that those contents can be used as bases to generate images that are particularly selected as appropriate for the targets.
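A rule set of this kind might be sketched, with entirely hypothetical attributes and thresholds, as follows:

```python
# Sketch of hypothetical association rules: content is selected per zone from
# characteristics of the zone or of the target detected in it.

def associate_content(zone):
    """zone: dict with optional 'distance' (from the MV pixels) and
    'temperature' attributes; returns a content identifier (all rules assumed)."""
    if zone.get("temperature", 20.0) > 30.0:
        return "cold_drink_ad"               # e.g., for a high-temperature zone
    if zone.get("distance", 0.0) > 50.0:
        return "large_text_signage"          # easier to read from far away
    return "default_content"

print([associate_content(z) for z in
       ({"temperature": 35.0}, {"distance": 80.0}, {})])
```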
In further embodiments, the specification of the multiple viewing zones 18 as received via the input node 16 may be associated with multiple contents, respectively. For example, when the UI device 20 is used to specify the multiple viewing zones 18, the UI device 20 may additionally be used to associate the specified viewing zones 18 with multiple contents, respectively, based on an operator input into the UI device 20 for example. In these embodiments, the zones-and-contents association module 36 of the processor 50 receives and/or verifies the association between the viewing zones 18 and the multiple contents as received via the input node 16.
In some embodiments, multiple contents to be associated with the multiple viewing zones 18 may be generated in real time by the zones-and-contents association module 36. For example, the association application running on the zones-and-contents association module 36 may generate content (e.g., signage, a lighting pattern) in real time for each viewing zone, for example, as a function of the characteristics of the viewing zone.
In block 83 of
Multiple mappings (e.g., one that translates from the viewing zone coordinate system 40 to the beamlet coordinate system 42, and another that translates from the beamlet coordinate system 42 to the viewing zone coordinate system 40) may be stored in the memory 35, and the mapping engine 34 may selectively access one or more suitable mapping(s) therefrom. In various embodiments, the mapping engine 34 determines (e.g., accesses) the mapping(s), and a beamlet-bundles identification module 38, to be described below, applies the mapping(s) to identify the bundle of beamlets that hit each viewing zone.
As described above, the mapping between the viewing zone coordinate system 40 and the beamlet coordinate system 42 may be pre-stored in the memory 35, or may be received into the memory 35 via the input node 16 at appropriate timings. For example, when the UI device 20 is used to specify the multiple viewing zones 18, the viewing zone coordinate system 40 used by the viewing zone specification application running on the UI device 20 may be used to generate a mapping, which may be received together with the specification of the multiple viewing zones 18, via the input node 16, from the UI device 20.
In block 84 of
In block 85 of
In block 86 of
In some embodiments, the bundle of beamlets that form one image and the bundle of beamlets that form another image are mutually exclusive of each other. For example, in reference to
The one or more MV pixels 12a-12l may be formed in any of various configurations.
VGA: 640×480=307,200 projector pixels
XGA: 1024×768=786,432 projector pixels
720p: 1280×720=921,600 projector pixels
1080p: 1920×1080=2,073,600 projector pixels
UHD 4K: 3840×2160=8,294,400 projector pixels.
Various pico-projectors suitable for use in forming the MV pixels are commercially available. Briefly, a pico-projector includes a light source (e.g., LED, laser, incandescent); collection optics, which direct the light to an imager; the imager, typically a DMD (digital micromirror device) or an LCoS (liquid-crystal-on-silicon) device, which accepts digital-display signals to shutter the light and direct the light to the projection optics; the projection (or output) optics, which project a display image on a screen and also permit additional functions such as focusing of the display image; and control electronics, including the light source drivers, interfacing circuits, and a video and graphics processor. In some embodiments, off-the-shelf pico-projectors may be modified for use as MV pixels, for example, to reduce brightness compared with conventional projection applications (as the beamlets 14 are intended to be received by viewers' eyes). The control signaling 54 from the processor 50 activates one or more of the MV pixels 12x to generate beamlets 14 from each of the MV pixels propagating in different directions, with color and brightness of each beamlet controlled.
In other embodiments, as shown in
A lens array combined with a display panel to form an MV pixel may be implemented in a manner conceptually similar to how a projector is constructed. For example, an LCD or OLED panel may be used, wherein the pixels of the LCD/OLED panel are functionally analogous to the projector pixels on the DLP/LCoS projector. With an LCD/OLED panel, it may be possible to place more than one lens in front of it to create multiple “projectors” out of a single display panel. The display panel pixels underneath each lens would form the beamlets that exit out of that lens. The number of display panel pixels underneath each lens determines the number of controllable beamlet directions for each MV pixel “projector”.
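As a sketch of the arithmetic this implies (the panel resolution and lens counts below are hypothetical), the number of beamlet directions per lens follows from how many display-panel pixels sit underneath that lens:

```python
# The display-panel pixels underneath each lens set the number of individually
# controllable beamlet directions for that MV pixel "projector".

def beamlets_per_mv_pixel(panel_width, panel_height, lenses_x, lenses_y):
    """Assumes the panel is evenly partitioned among the lenses in the array."""
    return (panel_width // lenses_x) * (panel_height // lenses_y)

# e.g., a 1920x1080 panel behind a 16x9 lens array:
print(beamlets_per_mv_pixel(1920, 1080, 16, 9))   # 120 * 120 = 14,400 beamlets per lens
```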
In still further embodiments, a collection of individual lights (e.g., LEDs, spotlights), each pointing in a different direction and each being individually addressable, may be grouped together to form an MV pixel, which emits multiple beamlets originating from different lights in different directions.
Referring back to
In
In some embodiments, the multiple contents associated with the new multiple viewing zones may be updated from the multiple contents previously associated with the (old) multiple viewing zones. For example, in
The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.