This relates generally to electronic devices and, more particularly, to electronic devices with displays.
Electronic devices such as cellular telephones, tablet computers, and other electronic equipment may include displays for presenting images to a user. In a typical configuration, a rectangular array of display pixels is located in a central active region of the display. An inactive border region surrounds the central active region. Components such as driver circuits can be formed in the inactive border region. Because these components are used in controlling the operation of the display, the inactive border generally must be large enough to accommodate them. Nevertheless, excessively large inactive border regions may make a device overly large and may detract from device aesthetics. It would therefore be desirable to be able to provide improved displays for an electronic device.
An electronic device may have a display panel for displaying images. The display panel may include an array of organic light-emitting diode pixels. A display cover layer may overlap the display panel. Portions of the surface of the display cover layer may have curved profiles.
An optical coupling layer may be included in the electronic device. The optical coupling layer may have an input surface that receives light from the array of pixels. The light from the array of pixels may be conveyed from the input surface to an output surface. The output surface may be adjacent to an inner surface of the display cover layer. The output surface may have different dimensions than the display panel and may have any desired shape. The optical coupling layer may be formed from a coherent fiber bundle or a layer of Anderson localization material.
Using the optical coupling layer may present challenges in ensuring an image of a desired appearance is displayed for the viewer of the display. The location where light is emitted from the pixel on the active area may be different than the location where light is visible on the output surface of the optical coupling layer. Additionally, it may be desirable for the display to appear as though it is a planar display without edge curvature (even if the output surface has edge curvature).
To account for the displacement of light between the active area and the outer surface of the optical coupling layer and to ensure that the output image is perceived with the desired distortion, image data may be rendered for the output surface and then modified to account for the distortion and displacement that will occur when the image is transported by the optical coupling layer from the display active area to the output surface. Image distortion control circuitry may modify the rendered image data based on a distortion map that includes a displacement vector for each pixel in the active area of the display.
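By way of a rough sketch (not the circuitry described here; the array shapes, names, and sign convention are illustrative assumptions), the distortion map can be pictured as holding one displacement vector per display pixel, and the value driven on a display pixel is simply the rendered output-surface value at the displaced location:

```python
import numpy as np

# Illustrative distortion map: one (dx, dy) vector per display pixel, giving
# where that pixel's light becomes visible on the output surface, measured in
# pixels within the XY-plane (assumed sign convention, assumed panel size).
H, W = 1920, 1080
distortion_map = np.zeros((H, W, 2))

def display_value_for(rendered, distortion_map, x, y):
    """Return the brightness to drive on display pixel (x, y) so that the
    image rendered for the output surface appears correctly to the viewer.

    rendered: (H, W) image rendered for the output surface of the optical
              coupling layer (what the viewer should ultimately see).
    """
    dx, dy = distortion_map[y, x]
    # Light from display pixel (x, y) emerges at (x + dx, y + dy), so that is
    # the rendered value this pixel must reproduce (nearest-neighbor lookup).
    sx = int(round(np.clip(x + dx, 0, rendered.shape[1] - 1)))
    sy = int(round(np.clip(y + dy, 0, rendered.shape[0] - 1)))
    return rendered[sy, sx]
```

A map of all-zero vectors corresponds to a pixel whose light emerges directly above it; nonzero vectors describe how far, within the XY-plane, the light is carried by the optical coupling layer before it becomes visible.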
Electronic devices may be provided with displays. The displays may have planar surfaces and curved surfaces. For example, a display may have a planar central portion surrounded by bent edges. The bent edges may have curved surface profiles. Arrangements in which displays exhibit compound curvature may also be used. Electronic devices having displays with curved surfaces may have an attractive appearance, may allow the displays to be viewed from a variety of different angles, and may include displays with a borderless or nearly borderless configuration.
A schematic diagram of an illustrative electronic device having a display is shown in
Device 10 may include control circuitry 20. Control circuitry 20 may include storage and processing circuitry for supporting the operation of device 10. The storage and processing circuitry may include storage such as nonvolatile memory (e.g., flash memory or other electrically-programmable-read-only memory configured to form a solid state drive), volatile memory (e.g., static or dynamic random-access-memory), etc. Processing circuitry in control circuitry 20 may be used to gather input from sensors and other input devices and may be used to control output devices. The processing circuitry may be based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors and other wireless communications circuits, power management units, audio chips, application specific integrated circuits, etc.
To support communications between device 10 and external equipment, control circuitry 20 may communicate using communications circuitry 22. Circuitry 22 may include antennas, radio-frequency transceiver circuitry, and other wireless communications circuitry and/or wired communications circuitry. Circuitry 22, which may sometimes be referred to as control circuitry and/or control and communications circuitry, may support bidirectional wireless communications between device 10 and external equipment over a wireless link (e.g., circuitry 22 may include radio-frequency transceiver circuitry such as wireless local area network transceiver circuitry configured to support communications over a wireless local area network link, near-field communications transceiver circuitry configured to support communications over a near-field communications link, cellular telephone transceiver circuitry configured to support communications over a cellular telephone link, or transceiver circuitry configured to support communications over any other suitable wired or wireless communications link). Wireless communications may, for example, be supported over a Bluetooth® link, a WiFi® link, a 60 GHz link or other millimeter wave link, a cellular telephone link, or other wireless communications link. Device 10 may, if desired, include power circuits for transmitting and/or receiving wired and/or wireless power and may include batteries or other energy storage devices. For example, device 10 may include a coil and rectifier to receive wireless power that is provided to circuitry in device 10.
Device 10 may include input-output devices such as devices 24. Input-output devices 24 may be used in gathering user input, in gathering information on the environment surrounding the user, and/or in providing a user with output. Devices 24 may include one or more displays such as display(s) 14. Display 14 may be an organic light-emitting diode display, a liquid crystal display, an electrophoretic display, an electrowetting display, a plasma display, a microelectromechanical systems display, a display having a pixel array formed from crystalline semiconductor light-emitting diode dies (sometimes referred to as microLEDs), and/or other display. Display 14 may have an array of pixels configured to display images for a user. The display pixels may be formed on a substrate such as a flexible substrate (e.g., display 14 may be formed from a flexible display panel). Conductive electrodes for a capacitive touch sensor in display 14 and/or an array of indium tin oxide electrodes or other transparent conductive electrodes overlapping display 14 may be used to form a two-dimensional capacitive touch sensor for display 14 (e.g., display 14 may be a touch sensitive display). Alternatively, a touch sensor for display 14 may be formed from opaque metal deposited between the pixels.
Sensors 16 in input-output devices 24 may include force sensors (e.g., strain gauges, capacitive force sensors, resistive force sensors, etc.), audio sensors such as microphones, touch and/or proximity sensors such as capacitive sensors (e.g., a two-dimensional capacitive touch sensor integrated into display 14, a two-dimensional capacitive touch sensor overlapping display 14, and/or a touch sensor that forms a button, trackpad, or other input device not associated with a display), and other sensors. If desired, sensors 16 may include optical sensors such as optical sensors that emit and detect light, ultrasonic sensors, optical touch sensors, optical proximity sensors, and/or other touch sensors and/or proximity sensors, monochromatic and color ambient light sensors, image sensors, fingerprint sensors, temperature sensors, sensors for measuring three-dimensional non-contact gestures (“air gestures”), pressure sensors, sensors for detecting position, orientation, and/or motion (e.g., accelerometers, magnetic sensors such as compass sensors, gyroscopes, and/or inertial measurement units that contain some or all of these sensors), health sensors, radio-frequency sensors, depth sensors (e.g., structured light sensors and/or depth sensors based on stereo imaging devices), optical sensors such as self-mixing sensors and light detection and ranging (lidar) sensors that gather time-of-flight measurements, humidity sensors, moisture sensors, gaze tracking sensors, and/or other sensors. In some arrangements, device 10 may use sensors 16 and/or other input-output devices to gather user input (e.g., buttons may be used to gather button press input, touch sensors overlapping displays can be used for gathering user touch screen input, touch pads may be used in gathering touch input, microphones may be used for gathering audio input, accelerometers may be used in monitoring when a finger contacts an input surface and may therefore be used to gather finger press input, etc.).
If desired, electronic device 10 may include additional components (see, e.g., other devices 18 in input-output devices 24). The additional components may include haptic output devices, audio output devices such as speakers, light-emitting diodes for status indicators, light sources such as light-emitting diodes that illuminate portions of a housing and/or display structure, other optical output devices, and/or other circuitry for gathering input and/or providing output. Device 10 may also include a battery or other energy storage device, connector ports for supporting wired communication with ancillary equipment and for receiving wired power, and other circuitry.
The output surface of display layer 32 is a two-dimensional surface (e.g., parallel to the XY-plane) corresponding to the active area AA. The output surface of display layer 32 is surrounded by the inactive border region IA that does not emit light. It may be desirable for the output surface of the display to instead be a three-dimensional surface without an inactive border region.
To achieve the desired three-dimensional output surface without an inactive border region, display 14 may include one or more structures that transport image light from the two-dimensional surface of the array of pixels 44 to another surface while preventing the light from spreading laterally and thereby preserving the integrity of the image. This allows the image produced by an array of pixels in a flat display to be transferred from an input surface of a first shape at a first location to an output surface with compound curvature or other desired second shape at a second location. The optical coupling layer may therefore move the location of an image while changing the shape of the surface on which the image is presented. Examples of layers of material that can transfer image light in this way include coherent fiber bundles and Anderson localization material. These layers of material may sometimes be referred to herein as optical coupling layers or optical coupling structures.
An illustrative optical coupling layer 74 is shown in
As shown in
In the example of
Optical coupling layer 74 of
A viewer 48 of the display may typically view the display from the front. In this scenario, the viewer looks in a direction 50 that is orthogonal to the active area of the display. It is desirable that light from each fiber is directed towards viewer 48. Therefore, light from each fiber may be emitted from output surface 82 in a direction that is orthogonal to active area AA.
Each fiber may extend along a respective longitudinal axis between input surface 72 and output surface 82. For example, consider a fiber in the center of the active area. Such a fiber may have a longitudinal axis that extends between input surface 72 and output surface 82, and light may be emitted from the fiber in the direction of that axis. The longitudinal axis may be orthogonal to active area AA at input face 72 and may also be orthogonal to active area AA at output face 82, ensuring that light is emitted from the fiber toward viewer 48.
Next, consider a fiber in the periphery of the active area that extends over inactive area IA. A fiber in the periphery of the active area may also extend along a longitudinal axis between input surface 72 and output surface 82. The longitudinal axis 88-2 may be orthogonal to active area AA at input face 72. At output surface 82, the curvature of the optical coupling layer may result in the surface normal of the output face of the fiber being at a non-orthogonal angle relative to active area AA. If light were emitted from the fiber in a direction along that surface normal, the light would be directed away from viewer 48. The fiber may therefore be bent to ensure that the longitudinal axis is approximately orthogonal to the active area at the output face.
The longitudinal axis of each fiber may be approximately (e.g., within 20° of, within 15° of, within 10° of, within 5° of, etc.) orthogonal to the active area at the output face of the optical coupling layer. This arrangement ensures that light from each fiber is directed towards viewer 48 at the front of the display.
The longitudinal axis of each fiber may be at approximately (e.g., within 20° of, within 15° of, within 10° of, within 5° of, etc.) the same angle relative to the active area at both the input face and the output face of the optical coupling layer. This may remain true even when the fibers are bent between the input surface and output surface (as with fibers in the edge of the optical coupling layer, for example).
Optical coupling layer 74 and display layer 32 may be separated by a distance of less than 500 microns, less than 100 microns, less than 50 microns, between 50 and 150 microns, between 50 and 500 microns, or any other desired distance.
The fibers may be bent at any desired bend angle and may have any desired maximum bend angle (e.g., 110°, 90°, 75°, etc.). The bend radius of the fibers may be selected to prevent excessive loss. In particular, the minimum bend radius of each fiber may be equal to ten times the radius of that fiber. This example is merely illustrative; the appropriate minimum bend radius may depend upon the tolerance for loss in a particular display. The minimum bend radius of each fiber may be equal to eight times the radius of the fiber, twelve times the radius of the fiber, five times the radius of the fiber, fifteen times the radius of the fiber, etc. The bend radius of each fiber may be greater than or equal to 50 microns, greater than or equal to 100 microns, greater than or equal to 25 microns, greater than or equal to 150 microns, greater than or equal to 200 microns, greater than or equal to 300 microns, greater than or equal to 400 microns, greater than or equal to 500 microns, etc.
Some of the fibers may have a uniform cross-sectional area whereas some of the fibers may have a varying cross-sectional area (e.g., some of the fibers may have a cross-sectional area at the input face that is different than the cross-sectional area at the output face). For example, fibers in the center of the optical coupling layer may have a cross-sectional area at the input face that is the same as the cross-sectional area at the output face. The cross-sectional areas at the input face and output face may be within 5% of each other, within 10% of each other, within 1% of each other, etc. Fibers in the edge of the optical coupling layer may have a cross-sectional area at the input face that is less than the cross-sectional area at the output face. For example, the cross-sectional areas at the input face and output face may differ by more than 10%, more than 20%, more than 50%, more than 100%, more than 200%, etc. The shape of the cross-section of the fiber may also change along the length of the fiber. For example, at the input face the fiber may have a circular or hexagonal cross-section. At the output face the fiber may have a different cross-sectional shape (e.g., an oval, distorted hexagon, etc.).
The curved portion of output surface 82 may be considered an arc with a central angle of greater than 10°, greater than 25°, greater than 45°, greater than 60°, greater than 70°, greater than 80°, greater than 90°, between 45° and 90°, etc.
The example of optical coupling layer 74 being formed from a coherent fiber bundle is merely illustrative. In an alternate embodiment, optical coupling layer 74 may be formed from Anderson localization material. Anderson localization material is characterized by transversely random refractive index features (higher index regions and lower index regions) of about two wavelengths in lateral size that are configured to exhibit two-dimensional transverse Anderson localization of light (e.g., the light output from the display of device 10). These refractive index variations are longitudinally invariant (e.g., along the direction of light propagation, perpendicular to the surface normal of a layer of Anderson localization material). The transversely random refractive index features may have widths of between 1 and 2 microns, between 0.5 microns and 2 microns, or other desired widths.
Including optical coupling layers (e.g., of the type shown in
Each pixel 44 emits light into optical coupling layer 74. The optical coupling layer 74 guides light from a given pixel 44 to a respective output location 96. However, output location 96 is at a different position within the XY-plane than the original pixel 44.
The illustrative shapes shown in
In order to display a desired image from output surface 82 of optical coupling layer 74, the relationship between the location of light emitted from the active area and where that light is emitted from the output surface 82 must be accounted for. For example, each pixel in active area AA may have an associated vector 102 indicating how the light is shifted within the XY-plane when emitted from output surface 82. To determine these displacement vectors for each pixel, a system as shown in
During distortion characterization, the pixels 44 of display layer 32 may be used to display a target image. In one example, the target image may include a checkerboard pattern of black and white rectangles. This type of pattern may provide intersection points that can be easily compared to the original pixel data to determine displacement. However, this example is merely illustrative and the target image may be any desired image with any desired characteristics.
While display layer 32 displays the known target image (which is output from optical coupling layer 74), image sensor 108 (sometimes referred to as camera 108) captures an image of the electronic device through lens 106. Lens 106 may be a telecentric lens. Using a telecentric lens results in an orthographic view of electronic device 10 (e.g., the chief rays are orthogonal to image sensor 108). This example is merely illustrative and other types of lenses may be used if desired. Lens 106 may include more than one lens element and may sometimes be referred to as a lens module. Lens 106 and image sensor 108 may sometimes be collectively referred to as a camera module.
Image sensor 108, lens module 106, and/or electronic device 10 (and display layer 32) may be controlled by host 110. Host 110 may include computing equipment such as a personal computer, laptop computer, tablet computer, or handheld computing device. Host 110 may include one or more networked computers. Host 110 may maintain a database of results, may be used in sending commands to image sensor 108, lens module 106, and/or electronic device 10, may receive data from image sensor 108, lens module 106, and/or electronic device 10, etc.
By capturing an image of the target displayed on electronic device 10 using image sensor 108, the correlation between a pixel's location in display layer 32 and where that pixel's light is visible on optical coupling layer 74 may be determined.
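One minimal way to express that correlation in software, assuming the target's feature points (e.g., checkerboard corner intersections) have already been located both in the displayed image and in the camera capture and expressed in a common coordinate frame, is simply to subtract corresponding positions. This is a hypothetical helper for illustration, not the measurement procedure itself:

```python
import numpy as np

def displacement_vectors(displayed_pts, detected_pts):
    """Compute per-feature displacement vectors from a captured target image.

    displayed_pts: (N, 2) array of (x, y) positions, in display-pixel
                   coordinates, of known features (e.g., checkerboard corners)
                   in the image shown on the display active area.
    detected_pts:  (N, 2) array of the same features located in the camera
                   capture, already converted to the display coordinate frame
                   (a telecentric/orthographic capture makes this conversion a
                   simple scale; assumed done elsewhere).
    Returns an (N, 2) array of (dx, dy) vectors describing where each
    feature's light appears on the output surface relative to where it was
    emitted from the active area.
    """
    displayed_pts = np.asarray(displayed_pts, dtype=float)
    detected_pts = np.asarray(detected_pts, dtype=float)
    return detected_pts - displayed_pts
```

For instance, a corner displayed at pixel (100, 40) that appears at (112, 46) on the output surface would yield a displacement vector of (12, 6).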
As shown in
The number and location of pixels directly measured (which, accordingly, determines the number of pixels characterized using interpolation) may be selected based on the specific design requirements for the electronic device. In general, obtaining a displacement vector for every pixel may be the most accurate technique, but it may be time consuming and require excessive computing power. If a displacement vector is obtained for too few pixels, however, the accuracy of the interpolated values may decrease. The optimum amount of interpolation may depend on the exact specifications of the display layer and optical coupling layer used and on whether accuracy or speed is prioritized. In general, the number of pixels between selected pixels for which a displacement vector is obtained may be zero (i.e., a displacement vector is obtained for every pixel), one, two, three, four, five, more than five, more than eight, more than ten, more than twenty, more than thirty, more than fifty, more than sixty, more than eighty, more than one hundred, more than two hundred, more than three hundred, more than five hundred, less than one hundred, less than fifty, less than ten, between three and one hundred, between fifty and seventy-five, between three and ten, etc.
The pixels selected for direct displacement vector measurement may be arranged in a uniform grid (as shown in
In some cases, extrapolation may be used (instead of interpolation) to determine a displacement vector for a pixel that does not have a measured displacement vector.
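A sketch of how the sparse measurements might be expanded into a full per-pixel map is shown below, assuming measured displacement vectors at scattered pixel locations and using SciPy's griddata for linear interpolation, with a nearest-neighbor fallback standing in for extrapolation near the edges. The function name and these particular choices are assumptions, not a method mandated by the text:

```python
import numpy as np
from scipy.interpolate import griddata

def fill_distortion_map(sample_xy, sample_vectors, width, height):
    """Fill in a displacement vector for every display pixel from sparse
    measurements.

    sample_xy:      (N, 2) (x, y) pixel locations where displacement vectors
                    were measured directly.
    sample_vectors: (N, 2) measured (dx, dy) displacement vectors.
    Returns a (height, width, 2) distortion map covering the full active area.
    """
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    grid = np.column_stack([xs.ravel(), ys.ravel()])

    def fill(component):
        # Linear interpolation inside the measured region; nearest-neighbor
        # values stand in for extrapolation outside it (one simple choice).
        linear = griddata(sample_xy, component, grid, method="linear")
        nearest = griddata(sample_xy, component, grid, method="nearest")
        return np.where(np.isnan(linear), nearest, linear)

    vectors = np.asarray(sample_vectors, dtype=float)
    dx = fill(vectors[:, 0])
    dy = fill(vectors[:, 1])
    return np.stack([dx, dy], axis=1).reshape(height, width, 2)
```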
After determining the vector map using measurement, interpolation, and extrapolation, it is possible to map an intended output image (from the optical coupling layer) to pixel values to be displayed using display layer 32. When operating electronic device 10, for example, a desired image to be displayed may be rendered with desired brightness values for each optical coupling layer pixel 96. The vector map is then used to determine which display pixel 44 in layer 32 should be used to provide the desired brightness to each optical coupling layer pixel 96. The vector map may be associated with a perceived distortion of the output surface. For example, when a telecentric lens (as described in connection with
Next, at step 204, the remaining pixels in the display active area that were not characterized in step 202 may be characterized using interpolation or extrapolation. Any desired type of interpolation and/or extrapolation may be used for each pixel. By characterizing the remaining pixels in step 204, a displacement vector may be determined for every pixel in the active area of the display. In other words, after step 204 a complete distortion map is available to determine how the optical coupling layer modifies the location of light from the display active area. If desired, multiple distortion maps may be obtained at step 204, with each distortion map having an associated perceived optical coupling layer output surface shape (e.g., planar, having edges that curve away from the viewer, etc.).
Steps 206 and 208 may take place during operation of the electronic device. This is in contrast to step 202 (and optionally step 204), which may take place during manufacturing of the electronic device before the electronic device is ready for normal operation. In step 206, the determined distortion map may be used to determine display pixel data that corresponds to desired optical coupling layer pixel data. In other words, because the surface of the optical coupling layer is ultimately what is observed by the viewer, images to be displayed to the viewer may be rendered based on the optical coupling layer output surface (e.g., to fit output surface 82). The rendered optical coupling layer pixel values are then converted to display pixel values at step 206 using the distortion map (e.g., the vector values determined in steps 202 and 204). At step 208, the display pixel values may be provided to the display pixels in the active area. The display pixels in the active area will emit light that is then distorted by the optical coupling layer back into the intended image. If multiple distortion maps are available, a distortion map may optionally be selected (e.g., based on sensor data) at step 206.
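Steps 206 and 208 might look roughly like the following in software (illustrative only; the real conversion is performed by image distortion control circuitry in the pixel pipeline, and drive_panel below is a hypothetical stand-in for the display driver path):

```python
import numpy as np

def display_frame(rendered, disp_map, drive_panel):
    """Convert an image rendered for the output surface into display pixel
    data using the distortion map (akin to step 206) and hand it off (akin to
    step 208).

    rendered:    (H, W) brightness values rendered to fit output surface 82.
    disp_map:    (H, W, 2) per-pixel (dx, dy) displacement vectors from the
                 characterization steps.
    drive_panel: callable taking the (H, W) array of display pixel values;
                 stands in for the pixel pipeline and display driver circuitry.
    """
    h, w = rendered.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each display pixel copies the rendered value from the output-surface
    # location its light will reach (nearest-neighbor sampling).
    sx = np.clip(np.rint(xs + disp_map[..., 0]), 0, w - 1).astype(int)
    sy = np.clip(np.rint(ys + disp_map[..., 1]), 0, h - 1).astype(int)
    drive_panel(rendered[sy, sx])
```

Nearest-neighbor sampling is the simplest choice here; where one optical coupling layer pixel covers several rendered pixels, the averaging described in the resampling discussion below can be substituted.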
The method of
In some cases, optical coupling layer 74 may be formed from fibers that have a uniform cross-sectional area. In these cases, the size of each optical coupling layer pixel 96 may match the size of each display pixel 44. In other words, if a given display pixel 44 is used to emit light, the corresponding light-emitting area on the output surface of the optical coupling layer will have the same surface area as the given display pixel. However, in some cases, optical coupling layer 74 may be formed from fibers with varying cross-sectional areas. For example, the cross-sectional area of the fibers may be larger at the output surface of the optical coupling layer than at the input surface.
An example of this is shown in the diagram of
The varying areas of each optical coupling layer pixel shown in
As shown in
When multiple rendered pixels 116 overlap one optical coupling layer pixel 96, it may not be possible to display each rendered pixel exactly as desired. For example, consider two rendered pixels 116 that overlap one optical coupling layer pixel 96. One of the rendered pixels 116 may be intended to be black whereas the other rendered pixel may be intended to be white. However, a single optical coupling layer pixel 96 (which can only output one uniform type of light from one corresponding display pixel 44) is used to display both of the rendered pixels. To account for situations such as this, a resampling process may be performed.
Rendered pixels 116 may be initially rendered (e.g., by a graphics processing unit) with given brightness levels. For simplicity, the brightness level may be considered on a scale of 0 to 1, with 1 being the brightest (e.g., white) and 0 being the darkest (e.g., black). The resampling process may involve using the rendered pixels to produce a representative brightness that is used by the corresponding display pixel 44.
Consider the example of
In column C1 of optical coupling layer pixels 96, each optical coupling layer pixel includes both black and white rendered pixels. To obtain a single brightness value for each optical coupling layer pixel, resampling may be performed. The resampling may involve taking the average brightness level of the rendered pixels in each optical coupling layer pixel, or any other desired technique may be used. For example, the average brightness level of the rendered pixels 116 in optical coupling layer pixels 96 in column C1 is 0.6. Therefore, a brightness level of 0.6 may be provided to the corresponding display pixels 44 associated with the optical coupling layer pixels in column C1.
Similarly, in column C3 of optical coupling layer pixels 96, each optical coupling layer pixel includes both black and white rendered pixels. To obtain a single brightness value for each optical coupling layer pixel, resampling may be performed. For example, the average brightness level of the rendered pixels 116 in optical coupling layer pixels 96 in column C3 is 0.44. Therefore, a brightness level of 0.44 may be provided to the corresponding display pixels 44 associated with the optical coupling layer pixels in column C3.
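A compact sketch of that averaging step is given below, assuming each rendered pixel has already been assigned to the optical coupling layer pixel it falls within (how that assignment is built depends on the fiber geometry and is not shown; names and shapes are illustrative):

```python
import numpy as np

def resample_to_coupling_pixels(rendered, assignment, n_coupling_pixels):
    """Collapse rendered pixels onto optical coupling layer pixels by averaging.

    rendered:   1-D array of rendered-pixel brightness values (0 = black,
                1 = white), flattened from the rendered output-surface image.
    assignment: 1-D integer array of the same length giving, for each rendered
                pixel, the index of the optical coupling layer pixel it falls
                within.
    Returns one representative brightness value per optical coupling layer
    pixel, which may then be driven on the corresponding display pixel 44.
    """
    sums = np.bincount(assignment, weights=rendered, minlength=n_coupling_pixels)
    counts = np.bincount(assignment, minlength=n_coupling_pixels)
    counts = np.maximum(counts, 1)  # avoid dividing by zero for unused pixels
    return sums / counts
```

With five rendered pixels of brightness 1, 1, 1, 0, and 0 assigned to one optical coupling layer pixel, the representative value is 0.6, matching the column C1 example above.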
The image data may be provided from GPU 122 to pixel pipeline 124. Pixel pipeline 124 may include image distortion control circuitry 126. Image distortion control circuitry 126 may use distortion map(s) 128 to modify the image data. The modified image data is provided from the pixel pipeline to display driver circuitry 130, which may supply the image data to data lines of the display. Display driver circuitry 130 may also include gate driver circuitry which is used to assert gate line signals on gate lines of display 14. Using display driver circuitry 130, the modified image data is provided to and displayed on the active area AA of display layer 32.
Each frame of image data provided by GPU 122 may include a representative brightness value for each rendered pixel 116. Image distortion control circuitry 126 may modify the brightness value for each pixel based on the distortion map. For example, there may be a distortion map associated with a perceived planar output surface. More than one distortion map may optionally be stored in image distortion control circuitry 126. A distortion map may be selected from the one or more distortion maps based on sensor data, a perceived distortion setting, etc.
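Such a selection might be modeled as a simple lookup among stored maps. This is a hedged sketch; the mode names and the way sensor data enters the decision are assumptions, since the text only states that sensor data or a distortion setting may be used:

```python
def select_distortion_map(distortion_maps, distortion_setting, sensor_mode=None):
    """Choose which stored distortion map to apply for the current frame.

    distortion_maps:    dict mapping a mode name (e.g., "planar" or
                        "curved_edges") to a (H, W, 2) displacement-vector map.
    distortion_setting: the currently selected distortion setting.
    sensor_mode:        optional sensor-derived mode (for example, derived
                        from device-orientation data) that overrides the
                        setting when present; purely illustrative.
    """
    if sensor_mode is not None and sensor_mode in distortion_maps:
        return distortion_maps[sensor_mode]
    return distortion_maps[distortion_setting]
```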
Ultimately, image distortion control circuitry 126 may determine a brightness value for each pixel in the active area of display layer 32 (e.g., pixels 44). The brightness value of each pixel 44 may be a function of the brightness value(s) of one or more rendered pixels 116 from GPU 122. The rendered pixels 116 that contribute to the brightness of a given pixel 44 may be determined based on the distortion map 128.
The display may optionally have different distortion modes (sometimes referred to as distortion settings), and the image data may be modified based on the present distortion mode. Image distortion control circuitry 126 may modify the image data based on both the desired perceived distortion (e.g., the distortion setting) and sensor data from one or more sensors within the electronic device (e.g., sensors 16 in
GPU 122, pixel pipeline 124, display driver circuitry 130, and display layer 32 as shown in
In the example of
The foregoing is merely illustrative and various modifications can be made by those skilled in the art without departing from the scope and spirit of the described embodiments. The foregoing embodiments may be implemented individually or in any combination.
This application claims the benefit of provisional patent application No. 62/731,468, filed Sep. 14, 2018, which is hereby incorporated by reference herein in its entirety.