This disclosure relates to non-projection display devices that are also capable of capturing scanned image data of objects placed in front of the display devices. This disclosure further relates to techniques and devices that may be used with interferometric modulator (IMOD) electromechanical systems.
Electromechanical systems include devices having electrical and mechanical elements, actuators, transducers, sensors, optical components (such as mirrors) and electronics. Electromechanical systems can be manufactured at a variety of scales including, but not limited to, microscales and nanoscales. For example, microelectromechanical systems (MEMS) devices can include structures having sizes ranging from about a micron to hundreds of microns or more. Nanoelectromechanical systems (NEMS) devices can include structures having sizes smaller than a micron including, for example, sizes smaller than several hundred nanometers. Electromechanical elements may be created using deposition, etching, lithography, and/or other micromachining processes that etch away parts of substrates and/or deposited material layers, or that add layers to form electrical and electromechanical devices.
One type of electromechanical systems device is called an interferometric modulator (IMOD). As used herein, the term interferometric modulator or interferometric light modulator refers to a device that selectively absorbs and/or reflects light using the principles of optical interference. In some implementations, an interferometric modulator may include a pair of conductive plates, one or both of which may be transparent and/or reflective, wholly or in part, and capable of relative motion upon application of an appropriate electrical signal. In an implementation, one plate may include a stationary layer deposited on a substrate and the other plate may include a metallic membrane separated from the stationary layer by an air gap. The position of one plate in relation to another can change the optical interference of light incident on the interferometric modulator. Interferometric modulator devices have a wide range of applications, and are anticipated to be used in improving existing products and creating new products, especially those with display capabilities.
The systems, methods and devices of the disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.
One innovative aspect of the subject matter described in this disclosure can be implemented in various ways.
In some implementations, an apparatus is provided that includes a non-projection display screen, a collection light guide, and one or more light detectors positioned about the periphery of the collection light guide. The collection light guide may be overlaid on the non-projection display screen and may have a first surface facing the non-projection display screen and a second surface facing away from the non-projection display screen. The second surface may be substantially parallel to, and coextensive with, the first surface. The collection light guide may be configured to redirect light entering the collection light guide via the second surface towards the periphery of the collection light guide.
In some implementations of the apparatus, the collection light guide may be a planar light guide containing light-turning structures, the light-turning structures configured to redirect the light entering the collection light guide via the second surface towards the periphery of the collection light guide. In some further implementations, each of the one or more light detectors positioned about the periphery of the collection light guide may be positioned so as to detect light emitted from a face of the collection light guide, the face having an edge generally defining a portion of the second surface.
In some implementations of the apparatus, the collection light guide may be substantially coextensive with the non-projection display screen.
In some implementations of the apparatus, the periphery of the collection light guide may include four sides substantially forming a rectangle, each of the sides having at least one of the one or more light detectors positioned so as to detect light emitted from the collection light guide via the side. The collection light guide may also include four quadrants, and light entering the collection light guide via the second surface may be substantially redirected towards a side correlated to the quadrant of the collection light guide where the light entered the collection light guide.
In some implementations of the apparatus, each of the one or more light detectors may have a primary axis of light detection, and each of the one or more light detectors may be oriented such that the primary axis of light detection is substantially normal to the first surface.
In some implementations of the apparatus, each of the one or more light detectors may have a primary axis of light detection and may be oriented such that the primary axis of light detection is substantially parallel to the first surface.
In some implementations of the apparatus, the apparatus may further include a front light guide with a third surface and a fourth surface substantially parallel to and coextensive with the third surface, as well as one or more light sources positioned along the periphery of the front light guide. In such implementations, the front light guide may be interposed between the collection light guide and the non-projection display screen, the third surface may face the non-projection display screen, the fourth surface may face the collection light guide, and the front light guide may be configured to redirect light from the one or more light sources entering the front light guide via the periphery of the front light guide towards the non-projection display screen.
In some implementations of the apparatus, the non-projection display screen may be a reflective display screen. In some other implementations, the non-projection display screen may be a transmissive, backlit display screen.
In some implementations of the apparatus, the collection light guide may be configured to permit substantially more light to pass through from the first surface to the second surface than from the second surface to the first surface.
In some implementations of the apparatus, the apparatus may further include a control system. The control system may include at least one processor configured to process image data and at least one memory device communicatively connected with the at least one processor. The at least one memory device may store instructions executable by the at least one processor. The instructions may include instructions to control the at least one processor to cause the non-projection display screen to display a plurality of image patterns, each image pattern including bright pixels and dark pixels; collect light intensity data from the one or more light detectors while each image pattern is displayed; correlate the collected light intensity data with each image pattern; and construct an image of an object, wherein the object is positioned proximate to the second surface while the image patterns are displayed.
In some further implementations of the apparatus, the apparatus may also include a driver circuit configured to send at least one signal to the display screen. In some further implementations of the apparatus, the apparatus may also include a controller configured to send at least a portion of the image data to the driver circuit. In yet some further implementations of the apparatus, the apparatus may also include an image source module configured to send the image data to the at least one processor. In some implementations of the apparatus, the image source module may include at least one of a receiver, transceiver, and transmitter.
In some implementations of the apparatus, the apparatus may further include an input device configured to receive input data and to communicate the input data to the processor.
In some implementations of the apparatus, each image pattern may be a pseudorandom image pattern of bright pixels and dark pixels, and the image of the object may be constructed using compressive sampling techniques.
In some implementations of the apparatus, the second surface may be subdivided into a plurality of parallel light-receiving zones in a first direction. Each light-receiving zone may correspond to at least one of the one or more light detectors, and light passing into the collection light guide from each light-receiving zone may be redirected and channeled along a mean path substantially perpendicular to the first direction and parallel to the second surface. The light from each light-receiving zone may be kept substantially isolated from the light from the other light-receiving zones during redirection and channeling, and the at least one light detector corresponding to each light-receiving zone may be positioned so as to detect the light channeled from that light-receiving zone.
In some further implementations of the apparatus, each image pattern may have dark pixels and an array of bright pixels extending across the non-projection display screen in a direction parallel to the first direction. In some further implementations of the apparatus, each image pattern may be monochromatic and the instructions may also include instructions to control the at least one processor to correlate the collected light intensity data for each image pattern with the color of the image pattern.
In some implementations, a machine-readable, non-transitory storage medium is provided. The machine-readable, non-transitory storage medium may have computer-executable instructions stored thereon for controlling one or more processors to cause a non-projection display screen to display a plurality of image patterns, each image pattern including bright pixels and dark pixels and to collect light intensity data from one or more light detectors while each image pattern is displayed. The one or more light detectors may be positioned about the periphery of a collection light guide overlaid on the non-projection display screen, and the collection light guide may be configured to take light entering the collection light guide and travelling towards the non-projection display screen and redirect the light towards the periphery of the collection light guide. The machine-readable, non-transitory storage medium may have further computer-executable instructions stored thereon for controlling one or more processors to correlate the collected light intensity data with each image pattern and to construct an image of an object, wherein the object is positioned proximate to the collection light guide while the image patterns are displayed.
In some implementations of the machine-readable, non-transitory storage medium, each image pattern may be monochromatic and the computer-executable instructions may further include instructions to control the one or more processors to correlate the collected light intensity data for each image pattern with the color of the image pattern.
In some implementations of the machine-readable, non-transitory storage medium, each image pattern may be monochromatic and the machine-readable, non-transitory storage medium may have further computer-executable instructions stored thereon for further controlling the one or more processors to determine the light intensity data correlated with each image pattern by summing together individual light intensity data from each of the light detectors in the one or more light detectors.
In some implementations of the machine-readable, non-transitory storage medium, the machine-readable, non-transitory storage medium may have further computer-executable instructions stored thereon for further controlling the one or more processors to display each image pattern multiple times. The image patterns may be monochromatic and each display of a given image pattern in the plurality of image patterns may be in a different color.
In some implementations of the machine-readable, non-transitory storage medium, the machine-readable, non-transitory storage medium may have further computer-executable instructions stored thereon for further controlling the one or more processors to construct the image of the object using compressive sampling techniques, and each of the image patterns may be a pseudorandom pattern of bright pixels and dark pixels.
In some implementations, an apparatus is provided that includes a non-projection display means, the non-projection display means configured to display digital images, and a means for redirecting light traveling towards the non-projection display means, the means for redirecting light overlaid on, and substantially coextensive with, the non-projection display means. The means for redirecting light may be configured to redirect the light towards the periphery of the means for redirecting light and may also be planar. The apparatus may also include a light detection means positioned about the periphery of the means for redirecting light and configured to detect light redirected towards the periphery of the means for redirecting light.
In some implementations of the apparatus, the apparatus may further include a controller means. The controller means may be configured to cause the non-projection display means to display a plurality of image patterns, each image pattern including bright pixels and dark pixels, to collect light intensity data from the light detection means while each image pattern is displayed, to correlate the collected light intensity data with each image pattern, and to construct an image of an object positioned proximate to the means for redirecting light while the image patterns are displayed.
In some such implementations of the apparatus, the controller means may construct the image of the object using compressive sampling techniques, and each of the image patterns may be a pseudorandom pattern of bright pixels and dark pixels.
Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale.
Like reference numbers and designations in the various drawings indicate like elements.
The following detailed description is directed to certain implementations for the purposes of describing the innovative aspects. However, the teachings herein can be applied in a multitude of different ways. The described implementations may be implemented in any device that is configured to display an image, whether in motion (for example, video) or stationary (for example, still image), and whether textual, graphical or pictorial. More particularly, it is contemplated that the implementations may be implemented in or associated with a variety of electronic devices such as, but not limited to, mobile telephones, multimedia Internet enabled cellular telephones, mobile television receivers, wireless devices, smartphones, Bluetooth devices, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebooks, smartbooks, printers, copiers, scanners, facsimile devices, GPS receivers/navigators, cameras, MP3 players, camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, electronic reading devices (for example, e-readers), computer monitors, auto displays (for example, odometer display, etc.), cockpit controls and/or displays, camera view displays (for example, display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwaves, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washers, dryers, washer/dryers, packaging (for example, MEMS and non-MEMS), aesthetic structures (for example, display of images on a piece of jewelry) and a variety of electromechanical systems devices. The teachings herein also can be used in non-display applications such as, but not limited to, electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion-sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, varactors, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes, and electronic test equipment. Thus, the teachings are not intended to be limited to the implementations depicted solely in the Figures, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art.
A non-projection display screen may be coupled to a collection light guide that allows light emitted or reflected from the non-projection display screen to pass through the light guide without substantial loss. A non-projection display screen, as used herein, refers to video display screens such as liquid crystal displays (LCDs), plasma display screens, e-ink displays, interferometric modulator displays, and other display technologies that do not require optical lensing systems to produce coherent optical output. Non-projection display screens typically produce a source image that is the same scale as the image that is viewed by a user. By contrast, projection display screens, such as rear- or front-projection displays, may include one or more optical elements such as lenses or mirrors that magnify and project a source image that is considerably smaller than the image that is viewed by a user. Projection display screens are typically bulky in comparison to non-projection display screens due to the need to accommodate the focal distances and fields of view of the optical elements used.
The collection light guide may also be configured such that light entering the collection light guide and travelling towards the non-projection display screen is redirected towards the periphery of the collection light guide. One or more light detectors positioned about the periphery of the collection light guide may be used to detect such redirected light. Such implementations allow for image capture of objects placed on, or in close proximity to, the non-projection display screen/collection light guide. In some implementations, light may be emitted from the non-projection display screen, or reflected off of the non-projection display screen, towards such an object. In turn, the object may then reflect some of the light back towards the collection light guide and non-projection display screen. Such light may then be redirected towards the periphery of the collection light guide.
In some implementations, a series of structured image patterns may be displayed on the non-projection display screen in association with an image capture operation; each pattern may have a different arrangement of light and dark pixels, and may cause different amounts of light to be reflected off of the object that is the subject of the image capture operation and into the collection light guide. Accordingly, the light detector(s) positioned about the periphery of the collection light guide may measure light intensity values that are then each associated with the corresponding structured image pattern that produced the measured light intensity. The measured light intensities, in conjunction with the structured image patterns, may, using compressive sampling techniques, be used to construct an image of the object.
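This measurement loop can be sketched in code. The sketch below is illustrative only and assumes hypothetical display_pattern, read_detector, and construct_image callables standing in for the display driver, the peripheral light detector(s), and the image construction step (for example, the compressive sampling construction discussed later in this disclosure); none of these names come from this disclosure.

```python
import numpy as np

def scan_object(display_pattern, read_detector, construct_image,
                num_patterns, num_pixels, seed=0):
    """Display a series of pseudorandom bright/dark image patterns and
    pair each with the light intensity measured while it is shown."""
    rng = np.random.default_rng(seed)
    patterns = np.empty((num_patterns, num_pixels), dtype=int)
    intensities = np.empty(num_patterns)
    for i in range(num_patterns):
        patterns[i] = rng.integers(0, 2, size=num_pixels)  # 1 = bright, 0 = dark
        display_pattern(patterns[i])      # illuminate the object with the pattern
        intensities[i] = read_detector()  # light reflected into the light guide
    # Construct the image of the object from the pattern/intensity pairings.
    return construct_image(patterns, intensities)
```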
Particular implementations of the subject matter described in this disclosure may be implemented to realize one or more of the following potential advantages. For example, a non-projection display screen may be provided that is capable of both displaying video image data as a standard video display would, and of capturing digital image data of objects placed on or in close proximity to the non-projection display screen. Moreover, since many of the features that enable this scanning capability are separate from the pixel elements of the non-projection display screen, implementations of such a non-projection display screen may not require modification of pixel features across the face of the non-projection display screen. Various other advantages may be apparent from the discussions below.
One example of a suitable MEMS device, to which the described implementations may apply, is a reflective display device. Reflective display devices can incorporate interferometric modulators (IMODs) to selectively absorb and/or reflect light incident thereon using principles of optical interference. IMODs can include an absorber, a reflector that is movable with respect to the absorber, and an optical resonant cavity defined between the absorber and the reflector. The reflector can be moved to two or more different positions, which can change the size of the optical resonant cavity and thereby affect the reflectivity of the interferometric modulator. The reflectivity spectrums of IMODs can create fairly broad spectral bands which can be shifted across the visible wavelengths to generate different colors. The position of the spectral band can be adjusted by changing the thickness of the optical resonant cavity, for example, by changing the position of the reflector.
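Although this disclosure does not state the governing relation explicitly, the spectral behavior described above follows standard thin-film interference. Neglecting phase shifts at the interfaces, a cavity of gap height $d$ reflects strongly at normal incidence near wavelengths $\lambda$ satisfying

\[ 2d = m\lambda, \qquad m = 1, 2, 3, \ldots \]

so moving the reflector (changing $d$) shifts the reflected spectral band across the visible range.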
The IMOD display device can include a row/column array of IMODs. Each IMOD can include a pair of reflective layers, for example, a movable reflective layer and a fixed partially reflective layer, positioned at a variable and controllable distance from each other to form an air gap (also referred to as an optical gap or cavity). The movable reflective layer may be moved between at least two positions. In a first position, for example, a relaxed position, the movable reflective layer can be positioned at a relatively large distance from the fixed partially reflective layer. In a second position, for example, an actuated position, the movable reflective layer can be positioned more closely to the partially reflective layer. Incident light that reflects from the two layers can interfere constructively or destructively depending on the position of the movable reflective layer, producing either an overall reflective or non-reflective state for each pixel. In some implementations, the IMOD may be in a reflective state when unactuated, reflecting light within the visible spectrum, and may be in a dark state when actuated, reflecting light outside of the visible range (for example, infrared light). In some other implementations, however, an IMOD may be in a dark state when unactuated, and in a reflective state when actuated. In some implementations, the introduction of an applied voltage can drive the pixels to change states. In some other implementations, an applied charge can drive the pixels to change states.
The optical stack 16 can include a single layer or several layers. The layer(s) can include one or more of an electrode layer, a partially reflective and partially transmissive layer and a transparent dielectric layer. In some implementations, the optical stack 16 is electrically conductive, partially transparent and partially reflective, and may be fabricated, for example, by depositing one or more of the above layers onto a transparent substrate 20. The electrode layer can be formed from a variety of materials, such as various metals, for example indium tin oxide (ITO). The partially reflective layer can be formed from a variety of materials that are partially reflective, such as various metals, for example, chromium (Cr), semiconductors, and dielectrics. The partially reflective layer can be formed of one or more layers of materials, and each of the layers can be formed of a single material or a combination of materials. In some implementations, the optical stack 16 can include a single semi-transparent thickness of metal or semiconductor which serves as both an optical absorber and conductor, while different, more conductive layers or portions (for example, of the optical stack 16 or of other structures of the IMOD) can serve to bus signals between IMOD pixels. The optical stack 16 also can include one or more insulating or dielectric layers covering one or more conductive layers or a conductive/absorptive layer.
In some implementations, the layer(s) of the optical stack 16 can be patterned into parallel strips, and may form row electrodes in a display device as described further below. As will be understood by one having skill in the art, the term “patterned” is used herein to refer to masking as well as etching processes. In some implementations, a highly conductive and reflective material, such as aluminum (Al), may be used for the movable reflective layer 14, and these strips may form column electrodes in a display device. The movable reflective layer 14 may be formed as a series of parallel strips of a deposited metal layer or layers (orthogonal to the row electrodes of the optical stack 16) to form columns deposited on top of posts 18 and an intervening sacrificial material deposited between the posts 18. When the sacrificial material is etched away, a defined gap 19, or optical cavity, can be formed between the movable reflective layer 14 and the optical stack 16. In some implementations, the spacing between posts 18 may be on the order of 1-1000 μm, while the gap 19 may be on the order of <10,000 Angstroms (Å).
In some implementations, each pixel of the IMOD, whether in the actuated or relaxed state, is essentially a capacitor formed by the fixed and moving reflective layers. When no voltage is applied, the movable reflective layer 14 remains in a mechanically relaxed state, as illustrated by the IMOD 12 on the left.
The processor 21 can be configured to communicate with an array driver 22. The array driver 22 can include a row driver circuit 24 and a column driver circuit 26 that provide signals to, for example, a display array or panel 30.
In some implementations, a frame of an image may be created by applying data signals in the form of “segment” voltages along the set of column electrodes, in accordance with the desired change (if any) to the state of the pixels in a given row. Each row of the array can be addressed in turn, such that the frame is written one row at a time. To write the desired data to the pixels in a first row, segment voltages corresponding to the desired state of the pixels in the first row can be applied on the column electrodes, and a first row pulse in the form of a specific “common” voltage or signal can be applied to the first row electrode. The set of segment voltages can then be changed to correspond to the desired change (if any) to the state of the pixels in the second row, and a second common voltage can be applied to the second row electrode. In some implementations, the pixels in the first row are unaffected by the change in the segment voltages applied along the column electrodes, and remain in the state they were set to during the first common voltage row pulse. This process may be repeated for the entire series of rows, or alternatively, columns, in a sequential fashion to produce the image frame. The frames can be refreshed and/or updated with new image data by continually repeating this process at some desired number of frames per second.
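The line-at-a-time write procedure described above can be summarized in code. The following is a minimal sketch assuming hypothetical apply_segment_voltages and pulse_common_line driver functions; neither name comes from this disclosure.

```python
def write_frame(frame, apply_segment_voltages, pulse_common_line):
    """Write a frame one row at a time: set the segment (column) voltages
    for the desired pixel states in a row, then pulse that row's common line."""
    for row_index, row_data in enumerate(frame):
        # Segment voltages encode the desired state of each pixel in the row.
        apply_segment_voltages(row_data)
        # The common-voltage pulse latches the row; rows written earlier are
        # unaffected by subsequent segment voltage changes.
        pulse_common_line(row_index)
```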
The combination of segment and common signals applied across each pixel (that is, the potential difference across each pixel) determines the resulting state of each pixel.
When a hold voltage is applied on a common line, such as a high hold voltage VCHOLD, the state of the interferometric modulators along that line remains constant; for example, a relaxed modulator remains relaxed and an actuated modulator remains actuated.
When an addressing, or actuation, voltage is applied on a common line, such as a high addressing voltage VCADD, modulators along that line can be selectively actuated or left unchanged depending on the segment voltages applied along their respective segment lines.
In some implementations, hold voltages, address voltages, and segment voltages may be used which always produce the same polarity potential difference across the modulators. In some other implementations, signals can be used which alternate the polarity of the potential difference of the modulators. Alternation of the polarity across the modulators (that is, alternation of the polarity of write procedures) may reduce or inhibit charge accumulation which could occur after repeated write operations of a single polarity.
During the first line time 60a, a release voltage 70 is applied on common line 1; the voltage applied on common line 2 begins at a high hold voltage 72 and moves to a release voltage 70; and a low hold voltage 76 is applied along common line 3. Thus, the modulators (common 1, segment 1), (1,2) and (1,3) along common line 1 remain in a relaxed, or unactuated, state for the duration of the first line time 60a, the modulators (2,1), (2,2) and (2,3) along common line 2 will move to a relaxed state, and the modulators (3,1), (3,2) and (3,3) along common line 3 will remain in their previous state.
During the second line time 60b, the voltage on common line 1 moves to a high hold voltage 72, and all modulators along common line 1 remain in a relaxed state regardless of the segment voltage applied because no addressing, or actuation, voltage was applied on the common line 1. The modulators along common line 2 remain in a relaxed state due to the application of the release voltage 70, and the modulators (3,1), (3,2) and (3,3) along common line 3 will relax when the voltage along common line 3 moves to a release voltage 70.
During the third line time 60c, common line 1 is addressed by applying a high address voltage 74 on common line 1. Because a low segment voltage 64 is applied along segment lines 1 and 2 during the application of this address voltage, the pixel voltage across modulators (1,1) and (1,2) is greater than the high end of the positive stability window (in other words, the voltage differential exceeds a predefined threshold) of the modulators, and the modulators (1,1) and (1,2) are actuated. Conversely, because a high segment voltage 62 is applied along segment line 3, the pixel voltage across modulator (1,3) is less than that of modulators (1,1) and (1,2), and remains within the positive stability window of the modulator; modulator (1,3) thus remains relaxed. Also during line time 60c, the voltage along common line 2 decreases to a low hold voltage 76, and the voltage along common line 3 remains at a release voltage 70, leaving the modulators along common lines 2 and 3 in a relaxed position.
During the fourth line time 60d, the voltage on common line 1 returns to a high hold voltage 72, leaving the modulators along common line 1 in their respective addressed states. The voltage on common line 2 is decreased to a low address voltage 78. Because a high segment voltage 62 is applied along segment line 2, the pixel voltage across modulator (2,2) is below the lower end of the negative stability window of the modulator, causing the modulator (2,2) to actuate. Conversely, because a low segment voltage 64 is applied along segment lines 1 and 3, the modulators (2,1) and (2,3) remain in a relaxed position. The voltage on common line 3 increases to a high hold voltage 72, leaving the modulators along common line 3 in a relaxed state.
Finally, during the fifth line time 60e, the voltage on common line 1 remains at high hold voltage 72, and the voltage on common line 2 remains at a low hold voltage 76, leaving the modulators along common lines 1 and 2 in their respective addressed states. The voltage on common line 3 increases to a high address voltage 74 to address the modulators along common line 3. As a low segment voltage 64 is applied on segment lines 2 and 3, the modulators (3,2) and (3,3) actuate, while the high segment voltage 62 applied along segment line 1 causes modulator (3,1) to remain in a relaxed position. Thus, at the end of the fifth line time 60e, the 3×3 pixel array is in its final addressed state.
The details of the structure of interferometric modulators that operate in accordance with the principles set forth above may vary widely.
The process 80 continues at block 84 with the formation of a sacrificial layer 25 over the optical stack 16. The sacrificial layer 25 is later removed (for example, at block 90) to form the cavity 19, and thus the sacrificial layer 25 is not shown in the resulting interferometric modulators 12.
The process 80 continues at block 86 with the formation of a support structure, for example, a post 18.
The process 80 continues at block 88 with the formation of a movable reflective layer or membrane, such as the movable reflective layer 14.
The process 80 continues at block 90 with the formation of a cavity, for example, the cavity 19.
The IMODs described herein may be components of reflective displays, in which ambient light reflects off of the “bright” IMODs to form an image. In ambient environments with little or no illumination, for example, nighttime, a front light may be used to provide a source of light that is directed towards the IMODs and reflected off of the bright IMODs. In contrast, traditional transmissive LCDs may rely on a backlight that emits light that passes through the LCD pixels.
Both reflective displays, such as IMOD displays, and transmissive displays, such as LCDs, can thus be made to emit light. The amount of light that is emitted by either type of display may be varied by varying the graphical content displayed on the display. Thus, reflective and transmissive displays may be used as a light source to illuminate an object for scanning purposes. In other words, light that is emitted from such a display and then reflected off of the object may be detected and used to construct an image of the object. This may be particularly useful in the context of non-projection display screens, e.g., cell phone display screens, e-reader display screens, tablet display screens, laptop display screens, etc., allowing such devices to provide scanning functionality in addition to display functionality using the non-projection display screen.
One technique that may be used to combine image acquisition capability with a non-projection display screen is to use a “wedge display.”
Another variant of a wedge display seeks to address some of the packaging issues present in the wedge display discussed above by “folding” the wedge waveguide back on itself.
While such an arrangement does not result in the wedge waveguide 901′ having as large a footprint in the XY plane (in the context of this disclosure, the XY plane is generally defined as being parallel to a display screen, for example, LCD 902, and the Z direction is generally defined as being normal to the display screen), the wedge waveguide 901′ is still subject to the constraints regarding the image propagation portion 908 discussed above. While the folded portion 909 allows the image propagation portion 908 to be located “underneath” the LCD 902 and the backlight 903, this configuration causes the overall scanning display assembly to increase substantially in thickness; in other words, the wedge waveguide 901′ essentially doubles in thickness (it is to be understood that the term “underneath” in this disclosure, with reference to a display screen, refers to items located behind the display screen when the display surface of the display screen is viewed from a direction substantially normal to the display screen). In many modern electronic devices, such packaging volume sacrifices may be unacceptable.
The collection light guide 1006, as well as other collection light guides discussed herein, may be a substantially planar light guide having two major surfaces, for example, a first surface that faces the non-projection display screen 1004, and a second surface that faces away from the non-projection display screen 1004. The first surface and the second surface may be substantially parallel to, and coextensive with, each other. The collection light guide 1006 also may include a number of side faces spanning between the first surface and the second surface and located about the periphery of the collection light guide.
The collection light guide 1006 may include a number of light-turning structures 1010 inside. The light-turning structures may be configured to generally allow light emitted from the non-projection display screen 1004, for example, light following a first light path 1022, to pass through the collection light guide 1006 and towards the object 1002, but to redirect the light that is reflected off of the object 1002 and back into the collection light guide 1006 towards the periphery of the collection light guide 1006 and into, for example, a light detector 1016. Further details of such low-packaging-volume scanning displays are discussed below. As mentioned above, it is to be noted that the collection light guide 1006, as well as other collection light guides discussed in this disclosure, may be a planar light guide; in other words, the major opposing faces of the collection light guide may be generally parallel to each other, as opposed to the tapered orientation of the major faces in the wedge waveguides discussed above.
The light emitted by the middle bright pixel 1142, for example, may be reflected off of the object 1102 and may follow a second light path 1124 into the left light detector 1116.
In contrast, the light emitted by the rightmost bright pixel 1142 may follow a third light path 1126 and exit the collection light guide 1106 without encountering the object 1102, since the object 1102 is not located above the rightmost bright pixel 1142. Thus, the light emitted from the rightmost bright pixel 1142 in this example may follow the third light path 1126 and not be reflected back into either of the light detectors 1116.
In this example, a non-projection display screen 1204 is shown with a front light guide 1208 interposed between a collection light guide 1206 and the display surface of the non-projection display screen 1204. In this particular implementation, the non-projection display screen 1204 is a reflective display screen, for example, an IMOD display screen, which is illuminated by a light source 1218 via the front light guide 1208. A light detector 1216 is located along one side of the collection light guide 1206. An object 1202 that is to be scanned may be placed atop the collection light guide 1206.
The front light guide 1208 may, in many ways, be similar in construction to the collection light guide 1206. For example, the front light guide also may include two substantially parallel and coextensive major surfaces similar to the first surface and the second surface of the collection light guide 1206. To avoid confusion, these surfaces may, with respect to the front light guide 1208, be referred to herein as the “third surface” and the “fourth surface.” The third surface may face the non-projection display screen, and the fourth surface may face away from the non-projection display screen.
Thus, for example, light may be emitted from the light source 1218 and into the front light guide 1208, where it may travel along the length of the front light guide 1208, either directly or via total internal reflection. At some point, the light may strike a front light light-turning structure 1214 that may cause the light to exit the front light guide 1208 and strike the pixels 1240. If the light strikes a dark pixel 1244, little, if any, of the light may be reflected back out of the scanning display 1200. If the light strikes a bright pixel 1242, the light may be reflected back out through the front light guide 1208 and the collection light guide 1206 and may strike, if present, the object 1202. The light may then be reflected off of the object 1202 and travel back through the collection light guide 1206, where it may be redirected by a light-turning structure 1210 included within the collection light guide 1206 so that the light travels towards the periphery of the collection light guide. The redirected light may then be detected by the light detector 1216 at the periphery of the collection light guide 1206.
As can be seen in this example, light following a first light path 1222 may be emitted from the light source 1218, travel through the front light guide 1208, and strike one of the front light light-turning structures 1214. The light may then be reflected off of the front light light-turning structure 1214, and directed towards the non-projection display screen 1204. Upon striking the left bright pixel 1242, the light may continue to follow the first light path 1222 back through the front light guide 1208 and the collection light guide 1206, and may then exit the collection light guide 1206 and strike the object 1202. The light may reflect off of the object 1202, and then travel back into the collection light guide 1206, where it may strike one of the light-turning structures 1210 and be redirected towards the periphery of the collection light guide 1206. Upon exiting the periphery of the collection light guide 1206, the light may be detected by the light detector 1216 positioned at the periphery of the collection light guide 1206.
Similarly, light following a second light path 1224 may be emitted from the light source 1218, travel through the front light guide 1208, and strike one of the front light light-turning structures 1214. The light may then be reflected off of the front light light-turning structure 1214, and directed towards the non-projection display screen 1204. Upon striking the right bright pixel 1242, the light may continue to follow the second light path 1224 back through the front light guide 1208 and the collection light guide 1206, and may then exit the collection light guide 1206. At this point, the behavior of the light following the second light path 1224 may diverge from the behavior of the light following the first light path 1222. For example, the light following the second light path 1224 may exit the collection light guide 1206 and not encounter the object 1202 since the object 1202 is not located above the right bright pixel 1242. Thus, the light emitted from the right bright pixel 1242 may simply follow the second light path 1224 and not be reflected back into the light detector 1216.
Generally speaking, the scanning displays disclosed in this disclosure may be implemented using either backlit or front-lit displays as needed using the requisite parts (for example, for backlit displays, a transmissive display and backlight, and for front-lit displays, a reflective display, a front light guide and light source(s)). The light guides used, for example, collection light guides and, when appropriate, front light guides, may be made from clear plastics or glass, and may be composite structures featuring materials with different refractive indexes in various locations in order to produce light-turning structures.
As can be seen, the light detectors in the pictured implementations may be positioned in close proximity to the non-projection display screen, for example, abutting the non-projection display screen. This is possible because the light detectors used in various implementations of the scanning displays discussed herein may be single-pixel, or low-resolution, detectors. For example, in some implementations, the resolution of the light detectors may be substantially lower than the resolution of the image that will be captured of the object, in some cases one or even several orders of magnitude lower. In contrast, more complex light detectors, such as the CCD pixel arrays used in digital cameras, require complex optical/lensing systems in order to project a real image onto the image sensor. Such complex optical/lensing systems may, for example, be used with the wedge displays discussed above.
As noted, the light detectors of the scanning displays discussed herein may be simple, single element light detectors. Such a light detector may be configured to measure the average amount of illumination or luminance striking the light detector. Thus, in a scanning display with a single light detector, only one luminance value may be detected at any given moment. In a scanning display with, for example, four light detectors, for example, one along each side of a rectangular collection light guide, four luminance values may be detected at any given moment. Various types of light detectors may be used with such scanning display implementations, including CMOS, CCD, photodiodes, photoresistors, etc. While lensing is not required to be used with such light detectors in the implementations discussed herein, some lensing may be desired in some implementations to increase light collection efficiency. In some implementations, it may be unnecessary for such lensing to project a real image of an object being imaged onto the light detector(s) since the light detector(s) used would generally be of too low a resolution to directly capture any detail in the real image.
Various techniques for using such scanning displays and light detectors are discussed later in this document.
Because lensing is not required for the scanning displays described herein, such displays may be implemented in a very thin, compact form factor.
Implementations that produce a single measurement of detected light intensity may, for example, be used to obtain scanned images of an object illuminated by the scanning display through a single-pixel (or cluster of pixels) raster scan.
After illumination of the object is complete, in block 1714, an image of the object may be constructed by mapping the light intensity/luminance values measured by the light detector(s) to the XY locations associated with each measurement. Such a technique generally requires that at least one light intensity/luminance measurement be taken and used for each pixel in the constructed image.
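A minimal sketch of this raster-scan construction, assuming hypothetical illuminate_pixel and read_detector callables standing in for the display driver and the light detector interface:

```python
import numpy as np

def raster_scan(width, height, illuminate_pixel, read_detector):
    """Construct an image by illuminating one pixel (or pixel cluster) at a
    time and mapping each measured intensity back to its XY location."""
    image = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            illuminate_pixel(x, y)         # display a single bright pixel
            image[y, x] = read_detector()  # light reflected off of the object
    return image
```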
Another technique that may be used to construct an image of a scanned object using scanning displays such as those described above is compressive sampling.
This technique allows for an image of an object illuminated by a series of different compressive image patterns to be constructed from single-pixel measurements of light reflected from the object in response to each such compressive image pattern illumination. The compressive image patterns may, for example, be generated by producing a pseudorandom pattern of bright and dark pixels. The summation of the light detected from across the collection light guide by each light detector acts as a mathematical projection of the image of the object onto the compressive image pattern. To construct, for example, a 100-pixel image of an object, approximately 10 different compressive image patterns may be created and used to illuminate the object, one at a time. In conjunction with each such illumination, the amount of light from the compressive image pattern that is reflected from the object and into the light detector or light detectors about the periphery of the collection light guide of a scanning display may be measured and associated with the corresponding compressive image pattern. After the required number of different compressive image patterns (for example, 10 patterns in this scenario) have been displayed, and the intensity/luminance data for each has been associated with the corresponding pattern, the resulting pattern/intensity pairings may be processed to construct, for example, a 100-pixel image of the object using compressive sampling techniques.
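The construction step itself is left open by this disclosure. As one illustration of the principle, the sketch below applies iterative soft-thresholding (ISTA), a basic l1-regularized solver, to the measurement model q ≈ Rx, where each row of R is a displayed pseudorandom pattern and x is the vectorized image. It recovers sparsity directly in the pixel basis rather than in a transform basis such as that used by JPEG-2000, so it is a stand-in for, not a statement of, the disclosed method.

```python
import numpy as np

def ista_reconstruct(R, q, lam=0.1, iters=500):
    """Recover a sparse image x from measurements q = R @ x using
    iterative soft-thresholding (a simple l1-regularized solver)."""
    R = np.asarray(R, dtype=float)
    x = np.zeros(R.shape[1])
    step = 1.0 / np.linalg.norm(R, 2) ** 2   # safe gradient step size
    for _ in range(iters):
        grad = R.T @ (R @ x - q)             # gradient of 0.5 * ||Rx - q||^2
        x = x - step * grad
        # Soft-threshold: shrink small coefficients toward zero.
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)
    return x

# For example, a 100-pixel image measured with ~10 pseudorandom patterns
# might be reconstructed as ista_reconstruct(R, q).reshape(10, 10).
```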
The scanning displays discussed herein utilize compressive sampling techniques in conjunction with planar light guides with light-turning features and one or more light detectors to realize a display unit with image scanning capabilities that may be presented in a very compact packaging volume and without requiring the precise optical element alignment required of image capture solutions utilizing a conventional optical camera system. Since optical lenses are not required in this technique, unlike conventional camera and scanning systems, the disclosed implementations may be achieved using a very thin form factor. By contrast, conventional camera-based systems using lenses require appreciable projection distances to form images of the object to be scanned, due to the constraints placed upon such systems by the focal lengths of the lenses involved.
In block 1806, a compressive image pattern may be displayed on the non-projection display screen of the scanning display, and the light emitted from the non-projection display screen as a result of the display of the compressive image pattern may be used to illuminate the object to be scanned. In block 1808, a measurement or measurements may be made of the amount of light from the compressive image pattern that is displayed on the non-projection display screen, reflected off of the object and into a collection light guide associated with the scanning display, and then detected by one or more light detectors placed about the periphery of the collection light guide. In block 1810, each such measurement may be associated with the particular compressive image pattern that produced the intensity/luminance measurement.
In block 1812, a determination may be made as to whether further compressive image pattern illumination data is required. If additional compressive image patterns in the plurality of compressive image patterns generated in block 1804 have not yet been displayed, then the technique may return to block 1806, and a different compressive image pattern may be displayed.
Block 1814 may occur after all of the desired compressive image patterns have been used to illuminate the object to be scanned, and after the resulting light intensity/luminance associated with each such compressive image pattern display/illumination and detected by the light detector(s) has been measured. In block 1814, the intensity/luminance measurement/compressive image pattern pairings may be processed using compressive sampling techniques.
In the discussions above, the collection light guides used have been largely omni-directional. In other implementations, however, the collection light guide may be configured to segregate redirected light based on, for example, the XY location of where light reflected from a scanned object enters the collection light guide. This may allow for situations in which multiple light detectors may be used, and each such light detector may be used to detect light entering a particular area of the collection light guide. This may allow for the raster scanning techniques and the compressive sampling techniques described above to be implemented, for example, in parallel over subsections of the scanning display.
For example, a collection light guide may be divided into four quadrants, each associated with a different light detector located along the periphery of the collection light guide. Such an implementation may require the use of directional light-turning structures such that light falling into each quadrant is directed in a different direction and towards a different light detector or detectors.
The segments of a collection light guide such as the collection light guide 2006 may, for example, be viewed as light-receiving zones, each of which corresponds with one or more light detectors. Light entering each light-receiving zone may, due to the presence of the directional light-turning structures 2010, be kept substantially isolated from the other light-receiving zones while being redirected towards the periphery of the collection light guide.
Further implementations of a scanning display may allow for line-scanning operations using a collimated collection light guide.
In some implementations, rather than scanning a line of illuminated pixels across the scanning display 2100 row-by-row (for example, in rows that are perpendicular to light guide segments 2106), each collimated segment of the collection light guide 2106 may be used in a manner similar to the segments discussed above.
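As a sketch of this parallel, per-segment use, the following assumes one light detector per collimated segment and hypothetical illuminate and read_segment_detectors callables; each displayed column of bright pixels yields one reading per segment detector, since light in each segment stays isolated from the other segments:

```python
import numpy as np

def segmented_line_scan(num_segments, pixels_per_segment,
                        illuminate, read_segment_detectors):
    """Scan all collimated segments in parallel: each illuminated column
    produces one intensity reading per segment detector."""
    image = np.zeros((num_segments, pixels_per_segment))
    for col in range(pixels_per_segment):
        illuminate(col)                            # bright pixels at this column,
                                                   # spanning all segments at once
        image[:, col] = read_segment_detectors()   # one value per segment
    return image
```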
The scanning displays discussed above may be controlled using a display controller, light detector controller, processor, and/or similar hardware.
Also included in device 2368 are a display controller 2360, a light detector controller 2362, a processor 2364, and a memory 2366, which may be operatively connected with one another. The display controller 2360 may be configured to receive data describing graphical output for the non-projection display screen 2304, and may be configured to transform the data into a signal that instructs the non-projection display screen 2304 to produce a desired pattern of bright and dark pixels, in other words, display the graphical content. The display controller 2360 may be used to control the non-projection display screen 2304 to, for example, display the single pixels/pixel clusters, compressive image patterns, or scan lines or bands discussed above.
The light detector controller 2362 may be configured to receive detected light intensity or luminance data from the light detector(s) used in the scanning display 2300. The light detector controller 2362 also may be configured to receive data from the processor 2364 or the display controller 2360 regarding the content that is displayed by the non-projection display screen 2304 at the time that the light intensity or luminance data is measured and to associate the content data with the measurement data.
The processor 2364 may be configured to receive the light intensity or luminance data from the light detector controller and, using data describing the graphical content associated with each piece of light intensity or luminance data, construct an image of the scanned object 2302 using such data. The processor 2364 may, for example, be programmed to perform any of the techniques outlined above.
It is to be understood that the functionality of the display controller 2360, the light detector controller 2362, and the processor 2364 may, of course, be combined into a lesser number of components or into a single component, or apportioned between a greater number of components, without deviating from the scope of this disclosure.
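The division of labor among the display controller, light detector controller, and processor might be composed as follows; the class and method names are illustrative only and do not appear in this disclosure.

```python
class ScanningDisplayController:
    """Illustrative composition of the display controller, light detector
    controller, and image-construction processor roles."""

    def __init__(self, display, detectors, reconstructor):
        self.display = display              # drives bright/dark pixel patterns
        self.detectors = detectors          # reads light intensity/luminance
        self.reconstructor = reconstructor  # builds an image from the pairings

    def scan(self, patterns):
        pairings = []
        for pattern in patterns:
            self.display.show(pattern)
            measurement = self.detectors.read()
            # Associate the measurement with the displayed content.
            pairings.append((pattern, measurement))
        return self.reconstructor.construct(pairings)
```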
In the examples provided herein, the illumination used to illuminate the scanned object has generally been described in “bright” and “dark” terms. In some implementations, the scanning display may be configured to obtain greyscale images, and the bright pixels may emit, for example, white light while the dark pixels emit no light. In some other implementations, however, color images may be constructed by repeating the detected light measurements multiple times using different wavelengths of light. For example, if compressive sampling is used, each compressive image pattern could be displayed three times: once with green bright pixels, once with blue bright pixels, and once with red bright pixels. The resulting green, blue, and red light intensity or luminance measurements may be paired with the corresponding compressive image patterns that resulted in the measurements and used to construct three monochromatic images of a scanned object that may then be combined to provide a single, broad-spectrum color image. An alternative to this process may be to use white light (or other broad, fixed-spectrum light) from, for example, a monochrome non-projection display screen to illuminate a scanned object. The light that is reflected off of the object and back into the collection light guide may then be directed towards a plurality of light detectors, each selectively sensitive to one of several colors that make up the white light spectrum; for example, red-, green-, and blue-sensitive detectors may be used. The resulting spectrum-specific illumination datasets may then be used to produce monochromatic images of the scanned object that may be combined to produce a broad-spectrum image of the scanned object. Alternatively, the light may be passed through an active, tunable filter interposed between the collection light guide and the light detector(s). The tunable filter's wavelength transmissivity may then be altered to allow only light of certain wavelengths to reach the light detectors. For example, for any given display of a compressive image pattern, the tunable filter could be cycled through successive modes where only green light, only red light, and only blue light are allowed to reach the light detector(s). The resulting compressive image pattern/light intensity or luminance measurement pairings may then be used as described above to produce constructed images in each of several wavelengths. The constructed images may then be combined to provide a broad-spectrum image. In this manner, a scanning display with a monochromatic non-projection display screen may be used to produce broad-spectrum scanned images of an object placed on or above the scanning display.
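The three-pass color approach can be sketched as follows, assuming a hypothetical scan routine that runs a complete pattern/measurement/construction sequence with bright pixels of the given color:

```python
import numpy as np

def scan_color_image(scan):
    """Run the monochromatic scan once per color channel and stack the
    three constructed images into a single broad-spectrum color image."""
    red = scan("red")      # bright pixels emit red light
    green = scan("green")  # bright pixels emit green light
    blue = scan("blue")    # bright pixels emit blue light
    return np.stack([red, green, blue], axis=-1)
```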
In block 2404, a D×N propagation matrix may be determined for a given patterned illumination scanning display with N pixels and D light detectors. Each element M(d,n) of the propagation matrix may represent the fraction of light emitted by a single pixel n of the scanning display that is reflected off of a calibration object with uniform reflectivity, for example, a matte white surface, across the entire scanning area of the scanning display and redirected by a collection light guide into a light detector d for measurement.
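One way to determine such a propagation matrix is to drive one pixel at a time against the uniform calibration object and record each detector's response. The following is a rough sketch, not a prescribed implementation; the helper functions display_pattern(brightness) and read_detectors() are hypothetical stand-ins for the display controller and light detector controller interfaces:

import numpy as np

def measure_propagation_matrix(N, D, display_pattern, read_detectors):
    """Estimate the D x N propagation matrix M against a uniform,
    matte-white calibration object by lighting one pixel at a time."""
    M = np.zeros((D, N))
    display_pattern(np.zeros(N))                 # all pixels dark
    dark = np.asarray(read_detectors())          # ambient/dark baseline
    for n in range(N):
        pattern = np.zeros(N)
        pattern[n] = 1.0                         # pixel n at full brightness
        display_pattern(pattern)
        M[:, n] = np.asarray(read_detectors()) - dark
    return M

A per-pixel sweep may be slow for large N; it is shown here only to make the meaning of each element M(d,n) concrete.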
In block 2406, a brightness pre-scale matrix S may be defined by dividing the minimum element of the matrix M by each element of the matrix M. In this example, the number of detectors in the scanning display is 1, so the M matrix and the resulting S matrix simplify to one-dimensional matrices, in other words, vectors; they will be referred to as vectors for the purposes of this discussion. It is to be understood, however, that the principles outlined herein may also be used with multi-detector systems with suitable modifications to the underlying technique. For example, in a multi-detector system, the pre-scaling factors may be chosen to minimize the range of detected light values at each detector resulting from illumination of the calibration object by the set of all pixels. Any remaining variation in the light contribution of each pixel may be compensated for using post-scaling techniques (discussed later below).
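For the single-detector case, the pre-scaling step reduces to an elementwise division. A minimal sketch following the definition above:

import numpy as np

def brightness_prescale(M):
    """Pre-scale factors S(n) = min(M) / M(n) for the single-detector case."""
    M = np.asarray(M).ravel()    # D = 1, so M is effectively a vector
    return M.min() / M

The effect is that pixels with efficient optical paths to the detector are dimmed so that, against the calibration object, every pixel contributes the same amount, min(M), to the detected signal.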
In block 2408, a maximum number of compressive sampling patterns to display is established. Generally speaking, this may be set to a default such as approximately one tenth of the total number of pixels in the scanning display, or may be user- or machine-selectable or adjustable depending on the desired image quality. For example, when a lower quality image is desired, fewer compressive imaging samples may be used. Conversely, when a higher quality image is desired, an increased number of compressive imaging samples may be used.
The number of compressive imaging samples required may be approximately four times the number of pixels, N, times a compressibility factor. The compressibility factor indicates the approximate sparseness of the image in the linear-algebra basis used in the construction, and may be selected to correlate with the human perception of the image quality. For example, if an N pixel image can be compressed using the JPEG-2000 compression algorithm at a ratio of 40:1 while maintaining suitable image quality, then using the same linear-algebra basis as JPEG-2000 in the compressive sampling construction and choosing 4 × N × (1/40) = N/10 as the number of compressive imaging samples would provide similar image quality.
In block 2410, all pixels in the scanning display are turned on simultaneously with individual brightnesses defined by the brightness pre-scale vector S. The light from the pixels that is reflected off of a subject object placed in front of, or on top of, the scanning display and that is then redirected towards the light detector by the collection light guide may then be measured by the light detector.
In block 2412, a counter z is initialized at zero, and in block 2414, the amount of light emitted from the scanning display that is then measured by the light detector is saved as q(z). After this initial measurement, the counter z is incremented by 1 in block 2416. In block 2418, a pseudo-random bit generator (PRBG) may be set to a known seed and used to generate N pseudorandom bits of 0 or 1, which may be represented in a matrix R(z,N), in block 2420. For each iteration of z, a different vector of pseudorandom bits may be generated. The values of these bits determine whether or not each corresponding pixel n is “light” or “dark.”
In block 2422, pixel brightnesses B(z,n) for each individual pixel in the scanning display are calculated by multiplying the bit value R(z,n) associated with each pixel by the brightness pre-scale value S(n) associated with that pixel. Each display pixel in the scanning display associated with a non-zero R(z,n) value may then be illuminated simultaneously in block 2424 according to the associated pixel brightness B(z,n) values. For pixels with associated R(z,n) values of 0, the resulting brightness B(z,n) will be 0. For each pixel with an associated R(z,n) value of 1, the resulting brightness B(z,n) will be a value greater than 0 and less than or equal to the maximum brightness supported by the display pixels. The resulting image, which may be a random pattern of black, full-brightness, and various intermediate-brightness pixels, may be referred to as a compressive sampling pattern.
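Blocks 2418 through 2424 amount to generating a reproducible pseudorandom bit matrix and scaling it by S. A hedged numpy sketch, with np.random.default_rng standing in for the PRBG (rows here are indexed 0 to Z−1 rather than 1 to Z):

import numpy as np

def compressive_patterns(seed, Z, N, S):
    """Generate the Z x N pseudorandom bit matrix R from a known seed and
    the pre-scaled brightness patterns B(z, n) = R(z, n) * S(n)."""
    rng = np.random.default_rng(seed)      # PRBG set to a known seed
    R = rng.integers(0, 2, size=(Z, N))    # 0 = "dark" pixel, 1 = "light" pixel
    B = R * S                              # dark where R = 0, pre-scaled where R = 1
    return R, B

Because the seed is known, the same R can be regenerated later, or reused across scans as noted below, without being stored.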
The amount of light emitted from the display pixels that is reflected off of the subject object and that is then redirected by the collection light guide into the light detector may be measured in block 2426 and saved as q(z). In block 2428, a determination may be made whether z is equal to the maximum number of compressive sampling patterns Z determined in block 2408. If z is not equal to Z, then the technique may return to block 2416, and further compressive sampling patterns may be used. If z equals Z, then the technique may proceed to block 2430, where a vector y(Z) may be defined, each element y(z) defined by multiplying q(z) by 2 and subtracting q(0) from this product, in other words, y(z) = 2q(z) − q(0). It is to be understood that vector y(Z) is defined for z=1 to Z, in other words, there is no z=0 element for vector y(Z).
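Putting blocks 2410 through 2430 together, a rough end-to-end sketch of the acquisition loop (single detector; display_pattern and read_detector are the same hypothetical helpers as above):

import numpy as np

def acquire_measurements(S, R, display_pattern, read_detector):
    """Measure q(0) under the all-on pre-scaled pattern, then q(z) under
    each compressive sampling pattern, and form y(z) = 2*q(z) - q(0)."""
    Z = R.shape[0]
    display_pattern(S)              # block 2410: all pixels on, pre-scaled
    q0 = read_detector()            # block 2414: q(0)
    q = np.empty(Z)
    for z in range(Z):
        display_pattern(R[z] * S)   # blocks 2422-2424: pattern z
        q[z] = read_detector()      # block 2426: q(z)
    return 2.0 * q - q0             # block 2430: y, defined for z = 1..Z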
In block 2432, a Z by N matrix A may be created where each element A(z,n) is defined by the minimum element value of matrix M multiplied by the quantity of two times R(z,n) minus 1, in other words, A(z,n) = min(M) × (2R(z,n) − 1). It is to be understood that matrix A is defined for z=1 to Z, in other words, there is no z=0 element for matrix A. In block 2434, a two-dimensional discrete wavelet transform dwt may be selected, for example, one of the Cohen-Daubechies-Feauveau wavelet transforms used with the JPEG-2000 algorithm, where dwt has the dimensions P by Q, where P and Q correspond to the number of rows and columns of pixels, respectively, in the scanning display, in other words, N=P×Q. In other implementations, a different class of transforms may be used, for example, the discrete cosine transform, depending on the choice of a linear-algebra basis to optimize for better compression, processing requirements, etc.
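Constructing A is a one-line operation once R and M are in hand. A minimal sketch (the wavelet name bior4.4 is the PyWavelets identifier believed to correspond to the Cohen-Daubechies-Feauveau 9/7 transform of JPEG-2000, and is an assumption here):

import numpy as np

def sensing_matrix(M, R):
    """Block 2432: A(z, n) = min(M) * (2 R(z, n) - 1), a Z x N matrix
    whose entries are +/- min(M)."""
    return np.asarray(M).min() * (2.0 * R - 1.0)

wavelet = "bior4.4"   # block 2434: CDF 9/7 family, as used by JPEG-2000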
In block 2436, a standard quadratic programming technique, for example, an interior point method algorithm, may be used to solve for a vector x that minimizes an expression of the form:

‖Ax − y‖₂² + λ‖dwt(x)‖₁

where λ is a regularization weight that balances fidelity to the measurement vector y against the sparsity of the wavelet coefficients dwt(x).
The resulting vector x may then be used in block 2438 to produce a vector I, where each element I(n) is determined by adding one half to the quantity x(n) divided by two, in other words, I(n) = x(n)/2 + 1/2. The vector I represents the N-pixel image of the subject object. There is no direct mapping from the detector values, q, to the image of the subject, I, since q has dimension Z and I has dimension N, and Z<<N. However, the quadratic programming algorithm will choose the I that best fits the detector values q. Because real-life images are compressible, the sparsity assumption of compressive sampling (given by minimizing the L1 norm of dwt(x) in the quadratic programming problem) provides nearly exact recovery of the compressible image with fewer than N samples.
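The disclosure leaves the solver open; an interior point method is one option. As a rough alternative sketch only, the same objective (up to a rescaling of λ) can be minimized with an iterative shrinkage-thresholding (ISTA) loop. The sketch below uses the orthogonal db4 wavelet so that the transform's adjoint equals its inverse; the CDF 9/7 transform named above is biorthogonal and would need its dual transform instead, so this is a simplifying assumption:

import numpy as np
import pywt

def reconstruct_image(A, y, P, Q, wavelet="db4", lam=1e-3, iters=300):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||dwt(x)||_1, followed by
    the block 2438 mapping I(n) = x(n)/2 + 1/2."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)    # 1/L, L = ||A||^2
    x = np.zeros(P * Q)
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y))      # gradient step on the data term
        coeffs = pywt.wavedec2(x.reshape(P, Q), wavelet)
        w, slices = pywt.coeffs_to_array(coeffs)
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
        coeffs = pywt.array_to_coeffs(w, slices, output_format="wavedec2")
        x = pywt.waverec2(coeffs, wavelet)[:P, :Q].ravel()
    return (x / 2.0 + 0.5).reshape(P, Q)        # block 2438: the image I

The slice [:P, :Q] trims the one-row or one-column padding that waverec2 may add when P or Q is odd.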
The above example technique 2400 presumes that all of the pixels in the scanning display are used during the scanning process. However, similar techniques may be used for scanning display image capture when fewer than all of the pixels in the scanning display are used. In such implementations, blocks where actions involve all of the pixels in the scanning display may be modified to only use the pixels that are actively involved in the scanning process.
In some implementations, multiple scanning display image acquisition techniques may be performed in parallel using different light detectors and directional collection light guides. For example, a directional collection light guide with four sides may be configured to direct light to each of the four sides depending on which quadrant of the collection light guide received the light. The scanning display pixels and light detector associated with each quadrant may be treated as separate scanning display sub-systems for the purposes of the example technique 2400. When such an implementation is used to capture an image of a scanned object, the constructed images output by the four scanning display sub-systems may be stitched together to form a single, larger image of the entire scanning field.
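The stitching step itself may be simple if the quadrant images share a common resolution and orientation; a minimal sketch (the quadrant naming is hypothetical):

import numpy as np

def stitch_quadrants(top_left, top_right, bottom_left, bottom_right):
    """Assemble four per-quadrant constructed images into one image of
    the full scanning field."""
    return np.block([[top_left, top_right],
                     [bottom_left, bottom_right]])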
It is also to be understood that certain steps of the example technique 2400 may be performed once or may be recycled from one implementation of the technique to the next. For example, blocks 2404 and 2406 may occur during production of an electronic device with scanning display functionality as part of a calibration process. It may not be necessary for blocks 2404 and 2406 to be performed again, unless recalibration of the scanning display is desired. For example, if dust or dirt contaminates the scanning display or a protective cover glass over the scanning display, recalibration may be desired.
Similarly, data from blocks 2418 through 2422 and blocks 2432 and 2434 may be recycled from one implementation of the example technique 2400 to the next. This may save considerable computational overhead, as it is not necessary to always calculate, for example, a Z by N matrix of pseudorandom bits—the same Z by N matrix of pseudorandom bits may be used from implementation to implementation of the technique 2400.
In block 2504, a D×N propagation matrix may be determined for a given patterned illumination scanning display with N pixels and D light detectors. Each element M(d,n) of the propagation matrix may represent the fraction of light emitted by a single pixel n of the scanning display that is reflected off of a calibration object with uniform reflectivity, for example, a matte white surface, across the entire scanning area of the scanning display and redirected by a collection light guide into a light detector d for measurement.
In block 2506, a maximum number of compressive sampling patterns to display is determined. This maximum number Z may be determined in the same manner that the maximum number of compressive sampling patterns from block 2408 is determined. In block 2508, a counter z may be initialized at zero.
In block 2510, all of the pixels that will potentially be used to illuminate an object to be scanned by the scanning display may be turned on to full illumination. The light from the pixels reflected off of a subject object placed in front of, or on top of, the scanning display and then redirected towards the light detector by the collection light guide may then be measured by the light detector and saved as value q(z) in block 2512. After this initial measurement, the counter z is incremented by 1 in block 2514. In block 2516, a PRBG may be set to a known seed and used to generate N pseudorandom bits of 0 or 1, which may be represented in a matrix R(z,N), in block 2518. For each iteration of z, a different vector of pseudorandom bits may be generated. The values of these bits determine whether or not each corresponding pixel n is “light” or “dark.”
In block 2520, each display pixel in the scanning display associated with a non-zero R(z,n) value may be illuminated simultaneously. Pixels with associated R(z,n) values of 0 may be left unilluminated, while each pixel with an associated R(z,n) value of 1 may be illuminated at full brightness. The resulting image, which may be a random pattern of black and bright pixels, may be referred to as a compressive sampling pattern.
The amount of light emitted from the display pixels that is reflected off of the subject object and that is then redirected by the collection light guide into the light detector may be measured in block 2522 and saved as q(z). In block 2524, a determination may be made whether z is equal to the maximum number of compressive sampling patterns Z determined in block 2506. If z is not equal to Z, then the technique may return to block 2514, and further compressive sampling patterns may be used. If z equals Z, then the technique may proceed to block 2526, where a vector y(Z) may be defined, each element y(z) defined by multiplying q(z) by 2 and subtracting q(0) from this product, in other words, y(z) = 2q(z) − q(0). It is to be understood that vector y(Z) is defined for z=1 to Z, in other words, there is no z=0 element for vector y(Z).
In block 2528, a Z by N matrix A may be created where each element A(z,n) is defined by the corresponding element M(n) of the matrix M multiplied by the quantity of two times R(z,n) minus 1, in other words, A(z,n) = M(n) × (2R(z,n) − 1). It is to be understood that matrix A is defined for z=1 to Z, in other words, there is no z=0 element for matrix A. In block 2530, a two-dimensional discrete wavelet transform dwt may be selected, for example, one of the Cohen-Daubechies-Feauveau wavelet transforms used with the JPEG-2000 algorithm, where dwt has the dimensions P by Q, where P and Q correspond to the number of rows and columns of pixels, respectively, in the scanning display, in other words, N=P×Q. In other implementations, a different class of transforms may be used, for example, the discrete cosine transform, depending on the choice of a linear-algebra basis to optimize for better compression, processing requirements, etc.
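The only difference from the block 2432 sketch above is the per-pixel weighting: because technique 2500 omits brightness pre-scaling, each column of A keeps its pixel's own propagation value rather than the common minimum. A minimal sketch:

import numpy as np

def sensing_matrix_unscaled(M, R):
    """Block 2528: A(z, n) = M(n) * (2 R(z, n) - 1)."""
    M = np.asarray(M).ravel()     # single-detector case: M is a vector
    return M * (2.0 * R - 1.0)    # broadcasts M(n) across the Z rows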
In block 2532, a standard quadratic programming technique, for example, an interior point method algorithm, may be used to solve for a vector x that minimizes an expression of the form:

‖Ax − y‖₂² + λ‖dwt(x)‖₁

where, as with block 2436, λ is a regularization weight that balances fidelity to the measurement vector y against the sparsity of the wavelet coefficients dwt(x).
The resulting vector x may then be used in block 2534 to produce a vector I, where each element I(n) is determined by adding one half to the quantity x(n) divided by two, in other words, I(n) = x(n)/2 + 1/2. The vector I represents the N-pixel image of the subject object. There is no direct mapping from the detector values, q, to the image of the subject, I, since q has dimension Z and I has dimension N, and Z<<N. However, the quadratic programming algorithm will choose the I that best fits the detector values q. Because real-life images are compressible, the sparsity assumption of compressive sampling (given by minimizing the L1 norm of dwt(x) in the quadratic programming problem) provides good recovery of the compressible image with fewer than N samples.
The above example technique 2500 presumes that all of the pixels in the scanning display are used during the scanning process. However, similar techniques may be used for scanning display image capture when fewer than all of the pixels in the scanning display are used. In such implementations, blocks where actions involve all of the pixels in the scanning display may be modified to only use the pixels that are actively involved in the scanning process.
In some implementations, multiple scanning display image acquisition techniques may be performed in parallel using different light detectors and directional collection light guides. For example, a directional collection light guide with four sides may be configured to direct light to each of the four sides depending on which quadrant of the collection light guide received the light. The scanning display pixels and light detector associated with each quadrant may be treated as separate scanning display sub-systems for the purposes of the example technique 2500. When such an implementation is used to capture an image of a scanned object, the constructed images output by the four scanning display sub-systems may be stitched together to form a single, larger image of the entire scanning field.
It is also to be understood that certain steps of the example technique 2500 may be performed once or may be recycled from one implementation of the technique to the next. For example, blocks 2504 and 2506 may occur during production of an electronic device with scanning display functionality as part of a calibration process. It may not be necessary for blocks 2504 and 2506 to be performed again, unless recalibration of the scanning display is desired. For example, if dust or dirt contaminates the scanning display or a protective cover glass over the scanning display, recalibration may be desired.
Similarly, data from blocks 2516 and 2518 and blocks 2528 and 2530 may be recycled from one implementation of the example technique 2500 to the next. This may save considerable computational overhead, as it is not necessary to always calculate, for example, a Z by N matrix of pseudorandom bits—the same Z by N matrix of pseudorandom bits may be used from implementation to implementation of the technique 2500.
The display device 40 includes a housing 41, a display 30, an antenna 43, a speaker 45, an input device 48, and a microphone 46. The housing 41 can be formed from any of a variety of manufacturing processes, including injection molding and vacuum forming. In addition, the housing 41 may be made from any of a variety of materials, including, but not limited to: plastic, metal, glass, rubber, and ceramic, or a combination thereof. The housing 41 can include removable portions (not shown) that may be interchanged with other removable portions of different color, or containing different logos, pictures, or symbols.
The display 30 may be any of a variety of displays, including a bi-stable or analog display, as described herein. The display 30 also can be configured to include a flat-panel display, such as plasma, EL, OLED, STN LCD, or TFT LCD, or a non-flat-panel display, such as a CRT or other tube device. In addition, the display 30 can include an interferometric modulator display, as described herein.
A collection light guide 6 may be included as well and be overlaid on the display 30. The collection light guide 6 may be, for example, one of the collection light guides discussed earlier, and may allow, for example, the cellular or mobile phone to scan objects placed on or near the display 30.
The components of the display device 40 are schematically illustrated in the accompanying figures and described below.
The network interface 27 includes the antenna 43 and the transceiver 47 so that the display device 40 can communicate with one or more devices over a network. The network interface 27 also may have some processing capabilities to relieve, for example, data processing requirements of the processor 21. The antenna 43 can transmit and receive signals. In some implementations, the antenna 43 transmits and receives RF signals according to the IEEE 16.11 standard, including IEEE 16.11(a), (b), or (g), or the IEEE 802.11 standard, including IEEE 802.11a, b, g or n. In some other implementations, the antenna 43 transmits and receives RF signals according to the BLUETOOTH standard. In the case of a cellular telephone, the antenna 43 is designed to receive code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), Global System for Mobile communications (GSM), GSM/General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Terrestrial Trunked Radio (TETRA), Wideband-CDMA (W-CDMA), Evolution Data Optimized (EV-DO), 1xEV-DO, EV-DO Rev A, EV-DO Rev B, High Speed Packet Access (HSPA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), AMPS, or other known signals that are used to communicate within a wireless network, such as a system utilizing 3G or 4G technology. The transceiver 47 can pre-process the signals received from the antenna 43 so that they may be received by and further manipulated by the processor 21. The transceiver 47 also can process signals received from the processor 21 so that they may be transmitted from the display device 40 via the antenna 43.
In some implementations, the transceiver 47 can be replaced by a receiver. In addition, the network interface 27 can be replaced by an image source, which can store or generate image data to be sent to the processor 21. The processor 21 can control the overall operation of the display device 40. The processor 21 receives data, such as compressed image data from the network interface 27 or an image source, and processes the data into raw image data or into a format that is readily processed into raw image data. The processor 21 can send the processed data to the driver controller 29 or to the frame buffer 28 for storage. Raw data typically refers to the information that identifies the image characteristics at each location within an image. For example, such image characteristics can include color, saturation, and gray-scale level.
The processor 21 can include a microcontroller, CPU, or logic unit to control operation of the display device 40. The conditioning hardware 52 may include amplifiers and filters for transmitting signals to the speaker 45, and for receiving signals from the microphone 46. The conditioning hardware 52 may be discrete components within the display device 40, or may be incorporated within the processor 21 or other components.
The driver controller 29 can take the raw image data generated by the processor 21 either directly from the processor 21 or from the frame buffer 28 and can re-format the raw image data appropriately for high speed transmission to the array driver 22. In some implementations, the driver controller 29 can re-format the raw image data into a data flow having a raster-like format, such that it has a time order suitable for scanning across the display array 30. Then the driver controller 29 sends the formatted information to the array driver 22. Although a driver controller 29, such as an LCD controller, is often associated with the system processor 21 as a stand-alone Integrated Circuit (IC), such controllers may be implemented in many ways. For example, controllers may be embedded in the processor 21 as hardware, embedded in the processor 21 as software, or fully integrated in hardware with the array driver 22.
The array driver 22 can receive the formatted information from the driver controller 29 and can re-format the video data into a parallel set of waveforms that are applied many times per second to the hundreds, and sometimes thousands (or more), of leads coming from the display's x-y matrix of pixels.
In some implementations, the driver controller 29, the array driver 22, and the display array 30 are appropriate for any of the types of displays described herein. For example, the driver controller 29 can be a conventional display controller or a bi-stable display controller (for example, an IMOD controller). Additionally, the array driver 22 can be a conventional driver or a bi-stable display driver (for example, an IMOD display driver). Moreover, the display array 30 can be a conventional display array or a bi-stable display array (for example, a display including an array of IMODs). In some implementations, the driver controller 29 can be integrated with the array driver 22. Such an implementation is common in highly integrated systems such as cellular phones, watches and other small-area displays.
In some implementations, the input device 48 can be configured to allow, for example, a user to control the operation of the display device 40. The input device 48 can include a keypad, such as a QWERTY keyboard or a telephone keypad, a button, a switch, a rocker, a touch-sensitive screen, or a pressure- or heat-sensitive membrane. The microphone 46 can be configured as an input device for the display device 40. In some implementations, voice commands through the microphone 46 can be used for controlling operations of the display device 40.
The power supply 50 can include a variety of energy storage devices as are well known in the art. For example, the power supply 50 can be a rechargeable battery, such as a nickel-cadmium battery or a lithium-ion battery. The power supply 50 also can be a renewable energy source, a capacitor, or a solar cell, including a plastic solar cell or solar-cell paint. The power supply 50 also can be configured to receive power from a wall outlet.
In some implementations, control programmability resides in the driver controller 29 which can be located in several places in the electronic display system. In some other implementations, control programmability resides in the array driver 22. The above-described optimization may be implemented in any number of hardware and/or software components and in various configurations.
The various illustrative logics, logical blocks, modules, circuits and algorithm steps described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and steps described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system.
The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular steps and methods may be performed by circuitry that is specific to a given function.
In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents, or in any combination thereof. Implementations of the subject matter described in this specification also can be implemented as one or more computer programs, in other words, one or more modules of computer program instructions, encoded on computer storage media for execution by, or to control the operation of, data processing apparatus.
If implemented in software, the functions may be stored on, or transmitted as, one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that can be enabled to transfer a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection can be properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine-readable medium and computer-readable medium, which may be incorporated into a computer program product.
Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the disclosure is not intended to be limited to the implementations shown herein, but is to be accorded the widest scope consistent with the claims, the principles and the novel features disclosed herein. The word “exemplary” is used exclusively herein to mean “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations. Additionally, as a person having ordinary skill in the art will readily appreciate, the terms “upper” and “lower” are sometimes used for ease of describing the figures, and indicate relative positions corresponding to the orientation of the figure on a properly oriented page, and may not reflect the proper orientation of the IMOD as implemented.
Certain features that are described in this specification in the context of separate implementations also can be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also can be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results.