The invention pertains to computational imaging and light-field sensors, and more specifically to lensless camera arrangements leveraging a wide range of curved, polygonal, rigid, flexible, elastic, and plastic attributes.
A family of technologies relating to lensless imaging wherein an (even primitive) light sensing array is configured by (simple or more complex) optical structures to create a light-field sensor and focused images are obtained via numerical computation employing algorithms executed on one or more instances of a computational environment (for example comprising a computer, microprocessor, Graphics Processing Unit (GPU) chip, Digital Signal Processing (DSP) chip, etc.) has been described in earlier patent filings by the present inventor. These include for example:
U.S. Pat. No. 8,885,035 “Electronic imaging flow-microscope for environmental remote sensing, bioreactor process monitoring, and optical microscopic tomography”
As indicated in the above patent filings, an immense number of capabilities and features result from this approach. For example, although the light sensing array can comprise a CMOS imaging chip, it can alternatively comprise an array of printed organic-semiconductor photodiodes, including printed Organic Light Emitting Diodes (OLEDs) that are electrically interfaced and/or physically structured to operate at least as light-sensing (square) “pixel” or (rectangular) “pel” elements. Because such a sensing array is fabricated by layered printing of inks (comprising conductive, semiconducting, and insulating materials, the types of inks including transparent types) on arbitrary surfaces (such as glass, rigid plastics, or flexible plastics) that may be flat or curved, the optical sensing array can be rendered in a wide variety of ways, shapes, sizes, curvatures, etc. on a flat, curved, bendable, or deformable surface that may also be used for performing other functions. Further, the optical structures required to invoke light-field sensing capabilities can be as simple as a crudely formed array of vignetting passages, and these optical structures can, for example, be printed using light-path-impeding inks; accordingly, an applicable light-field sensor array can be entirely fabricated by printing layers of electrical, structural, and optical inks on a flat, curved, bendable, and/or deformable surface that can also be used for performing other structural, electrical, sensing, physically-supporting, physical-boundary, and/or physical-surface functions.
It is noted that in addition to printed organic-semiconductor photodiodes and/or OLEDs, the light sensing array can alternatively or additionally comprise one or more of printed organic-semiconductor phototransistors, silicon or other crystal-lattice photodiodes, silicon or other crystal-lattice phototransistors, silicon or other crystal-lattice LEDs, silicon or other crystal-lattice CMOS light sensors, charge-coupled light sensors, printed non-organic semiconductor photodiodes, printed non-organic semiconductor LEDs, printed non-organic semiconductor phototransistors, or other types of electrically-responsive light-sensing elements. As described and implied in the above patent materials, and as further described and implied throughout the present patent application, these advanced lensless light-field imaging systems and methods enable a wide range of entirely new applications.
These earlier patent families of the inventor and the inventor's present patent application individually and collectively (1) employ many independent advancements in material science, organic electronics, and manufacturing processes, together with (2) novel adaptations of and structures for optoelectronic devices and novel physical, optical, electronic, and optoelectronic device configurations, and (3) corresponding novel mathematical and signal-flow structures arranged to be implemented by signal processing algorithms, as well as other novel system elements and method steps. The aforementioned independent advancements in material science, organic electronics, and manufacturing processes include:
Novel adaptations of and structures for optoelectronic devices, novel physical, optical, electronic and optoelectronic device configurations comprised by the inventor's earlier patent families and used in the inventor's present patent application include for example but are not limited to:
Corresponding novel mathematical and signal-flow structures arranged to be implemented by signal processing algorithms, and other novel system elements and method steps, are also comprised by the inventor's earlier patent families and used in the inventor's present patent application; these include for example but are not limited to:
The inventor's early inventions (as taught in the afore-cited patents) and present invention compete favorably with other lensless computational imaging approaches on their own terms in a number of ways, for example:
The inventor's early inventions (as taught in the afore-cited patents) and present invention provide a number of features not available from other lensless computational imaging approaches, including but not limited to:
A 2008-2009 patent family of the present inventor contributed additional implementations, features, and applications that include use as a display and as a touch and touch-gesture user interface. A 2010 patent family and a 2011 patent family of the present inventor contributed additional implementations, features, and applications that include use not only as a display, touch user interface, and touch-gesture user interface, but also as a lensless imaging camera, free-space hand-gesture interface, document scanner, fingerprint sensor, and secure information exchange. Although not all of the applications, arrangements, and configurations of those 2008-2009, 2010, and 2011 patent families are explicitly considered in the present patent application, the technologies, systems, and methods described in the present patent application are in various ways directly applicable to the applications, arrangements, and configurations described in those patent families.
A 2009-2010 patent family and a 2013 patent family of the present inventor contributed the addition of optical tomography capabilities employing controlled light sources.
Relations to and Developments in Related Technologies
A brief review of the following related concepts and technologies is next provided:
Coded apertures are planar, binary-valued (partially-opaque/partially-transmitting) optical masks (gratings, grids, etc.) positioned in front of an image sensor array and designed to cast structured shadows that permit mathematical calculations characterizing, and thereby permitting the imaging of, incoming radiation fields. Originally developed for high-energy photon (x-rays, gamma rays, and other classes of high-energy non-visible-wavelength photons) radiation imaging that cannot be focused by lenses or curved mirrors, the beginnings of coded aperture imaging date back to at least 1968 [P62]. A number of coded aperture telescopes use this high-energy imaging approach for imaging astronomical X-ray and gamma-ray sources.
Partially predating and later developing in parallel with the inventor's comprehensive lensless light-field imaging program (beginning with the inventor's 1999 patent family) is the related and now recently-popularized (2011-2016) coded aperture lensless imaging (recently termed a “Lensless Computational Renaissance” [P6]). Coded aperture imaging appears to have continued to develop in the exclusive context of high-energy photon non-visible (x-rays, gamma rays, etc.) radiation imaging for decades, developing a rich mathematical theory (see for example [P31], [P32], [P63], [P64]) relating to the theory of information codes and the relatively “flat” spectral properties of the optical Modulation Transfer Function (discussed in part in various earlier papers, but see [P45]) when the coded aperture employs various types of “Uniformly Redundant Array” codes (“URA,” “MURA,” etc.), variations or alternatives to these, or indeed randomly-generated patterns. At this writing it is not yet clear when these coded aperture imaging approaches were first adapted for use in visible-light imaging, but it does not appear that this was explored or discussed in the literature before 2000. There was some work circa 1973 involving shadow casting and coded masks relating to holography [B8]. Further brief historical treatment and reviews of technology developments in coded aperture lensless imaging are provided in [B1], [P5], and [P65].
By 2006 various forms of computer-controlled spatial light modulators were being used to implement the optical coded aperture function [P34], [P35], [P47], notably using the LCD of an LCD display screen as part of the MIT “BiDi Screen” [P34]. The MIT “BiDi Screen” also featured distance ranging obtained from the coded aperture, as described earlier in [P68]. Although the MIT “BiDi Screen” included coded aperture imaging, the images produced were not focused at any non-proximate distance from the screen.
As will be discussed later, most of these systems formulate the image recovery transformation as (1) an ill-posed, usually regularized, inverse problem and/or (2) a spectral-method transform problem.
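As a hedged, minimal numerical sketch (not drawn from the cited works), formulation (1), the regularized inverse problem, can be illustrated as follows; the forward matrix, dimensions, noise level, and regularization weight are all assumptions chosen purely for illustration:

```python
import numpy as np

# Assumed toy dimensions: a 64-sample scene observed by 96 sensor elements.
rng = np.random.default_rng(0)
n_scene, n_sensor = 64, 96

# A: assumed forward model mapping scene radiance to sensor readings
# (in a coded-aperture system, rows of A would encode the mask's shadows).
A = rng.random((n_sensor, n_scene))

x_true = rng.random(n_scene)                           # unknown scene
y = A @ x_true + 0.01 * rng.standard_normal(n_sensor)  # noisy readings

# Tikhonov-regularized recovery: minimize ||A x - y||^2 + lam ||x||^2,
# whose closed form is x_hat = (A^T A + lam I)^{-1} A^T y.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)
```

Spectral-method variants of approach (2) instead diagonalize the forward operator (for example by FFT when the mask's action is a convolution), replacing the linear solve with a per-frequency division.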
Broader views of computational imaging have subsequently appeared in the literature (see for example [P66]) which migrate the “optical coding” paradigm/abstraction to other contexts (for example wavefront coding including a lens, sensor-plane coding, etc.). In addition, there have also been several relatively recent systems replacing coded apertures with other types of lensless optical elements:
Although the coded aperture imaging area and the three alternatives described above each have their own “world,” it is possible to create an overarching framework that includes all of these. As will be discussed, it is the inventor's view that the inventor's comprehensive lensless light-field imaging program (beginning with the inventor's 1999 patent family) arguably if not straightforwardly includes and provides a framework admitting most of these in at least some sense, as well as the many other original innovations from the inventor's comprehensive lensless light-field imaging program. For other selective perspectives and far more historical and contextual information see for example [P6], [P66]. From an even broader view, all such approaches can be abstracted into the notion of “computational imaging,” from which various fundamental principles can be explored; for example see [P66], [P73].
As an additional note, since the fate of many captured images is to be compressed by image compression algorithms comprising at least some linear transformation operations, the coded aperture could conceptually be modified to impose additional coding functions, in particular those useful in compressing an image suitable for image decompression on the viewing or applications side. This has been shown to be possible and considerable work has been done in this area; see for example.
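A hedged sketch of that idea follows, using an orthonormal DCT (a linear transform common in image compression) as a stand-in for the aperture's additional coding function; the signal, transform size, and coefficient-retention rule are illustrative assumptions, not drawn from the cited work:

```python
import numpy as np

# Orthonormal DCT-II matrix: a linear transform widely used in image compression.
N = 32
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (k[None, :] + 0.5) * k[:, None] / N)
C[0, :] /= np.sqrt(2.0)

x = np.cos(2 * np.pi * k / N) + 0.5   # assumed smooth 1-D "scene" line
y = C @ x                             # coded measurements = transform coefficients
keep = np.abs(y) >= np.sort(np.abs(y))[-8]
y_kept = np.where(keep, y, 0.0)       # compression: retain the 8 largest coefficients
x_rec = C.T @ y_kept                  # decompression on the viewing side
```

Because the assumed coding matrix is orthonormal, its transpose serves directly as the decoder on the viewing or applications side.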
B. Relations to Lens-Based Light-Field Imaging
A light-field is most generally a 5-dimensional vector function representing the physical arrangement of directional light paths and intensities at each point in a space of optical propagation. The ideas date back to Faraday, but the light field was named and formalized in 1936 by Andrey Gershun. That said, the concept of a light-field camera dates back to the 1908 “Integral Photograph” work and proposals by Gabriel Lippmann, winner that same year of the Nobel Prize for the invention of color photography (also known for predicting the now widely employed converse piezoelectric effect and for inventing the telescope-position-compensating coelostat).
A Stanford team including Ren Ng formalized a light-field camera using a (“plenoptic”) microlens-array technique [P40], leading not long thereafter to the well-respected Lytro refocusable light-field camera [P41]. Toshiba subsequently announced a 2013 refocusable light-field OEM camera module product [P42] using similar miniaturized technology. Recent (2016) developments in this area implement light-field imaging without the use of microlenses by employing layers of “optical” sensors instead [P43].
The inventor's 1999 patent family taught lensless computational imaging with light-field capabilities and (albeit for color imaging sensing) layered optical sensors. Also, as indicated in the previous subsection, coded aperture image sensors are typically capable of performing as light-field cameras (see for example [P67]).
It is noted that the notion, description, mathematical treatment, and measurement of light-fields, and image rendering from them, have other historical and contemporary threads. A 2006 survey of light-field imaging from the simplified 4-dimensional representation computational-imaging viewpoint employed in Image Based Rendering (IBR), computer-graphics fly-bys, and related applications is presented in [P46]. Light-fields (and their analogs in acoustics, seismology, and energy fields) are also, in various forms, inherently and thus fundamentally relevant to at least wave-field inversion and wave-field Inverse Source Problems (ISPs), tomography, holography, broader Image Based Rendering (IBR) applications, and 3D graphics rendering; for a “unified” treatment regarding imaging, wavefield inversion, and tomography see for example the book by Devaney [B3]. Additionally, there are various other methods for measuring and sampling empirical light-fields; a few examples are described in the book by Zhang and Chen [B4].
C. Use of LEDs as Light Sensors and Light Sensing Arrays
A number of earlier U.S. Patents and U.S. Patent Applications discuss various aspects of using LED and OLED arrays in various combinations or sequences of light-sensing and light-emitting modes, and in one manner or another the integration of light-sensing and light-emitting semiconductors in a common display panel, control panel, or image reader. Some of these employ time-multiplexed operating modes, and some spatially interleave light-sensing and light-emitting semiconductor elements, but none of these teach use as a visual imaging camera.
U.S. Pat. No. 4,424,524 by Daniele (filed 1982) teaches a linear array of LEDs that selectively function as light emitters and light sensors for line-scanning a document on a rotating drum. No image display is involved.
These U.S. Patents discuss possible applications as cameras but do not teach image formation:
D. Flexible Cameras and Transparent Image Sensors
The limited work that has been done regarding flexible cameras has been largely in the enabling optics area. Many points regarding the value, radical enablement, and some applications of flexible cameras have been provided in papers and press releases stemming from projects at Columbia University [P37], [P38] involving work on bendable and deformable mini-lens arrays, elastic optics, and associated internal optical compensational adaptation for those. These efforts point to the need for the development of flexible image sensors; the prototyping work employs traditional (color) camera sensors and camera lenses.
Another approach to flexible cameras, as well as flexible image sensors and transparent image sensors, involves a grid of flexible light-sensing fibers that direct light to a remotely-located conventional camera element [P53].
A third approach to flexible cameras, as well as flexible image sensors and transparent image sensors, underway in Austria [P28], is also directed to enabling optics; this effort uses novel optics and a thin transparent luminescent-concentrator film to gather light from an area of a transparent bendable surface and direct it to the edges of that surface, where it is provided to photodiode arrays. The images are monochrome. As to flexible image sensors, a flexible large-area photodetector array arranged as an image sensor employing organic photodiodes (discussed below) was reported in 2008 [P54], where image-sensing capabilities were demonstrated by projecting an image using external image projection equipment. More recently, IMEC has made several pre-commercialization developments, announced in 2013 [P55], that overlap with the inventor's patents filed many years earlier.
E. Transparent Electronics
Developments in transparent electronics arguably began in earnest with the discoveries, adaptations, and refinements of transparent conductor materials (see for example [P56]). Various transparent passive electronic components were subsequently developed and are important, but a key development was the invention of the first transparent thin-film transistor (TTFT), announced in 2003 ([B16] p. 1). TTFTs are presently widely used in display technology and can be fabricated by various means, including spin-deposition and printing using ink-jet or other printing methods. Information on transparent electronic materials can be found in the book by Wagner, Keszler, and Presley [B16] (see Chapter 4), as well as information on transparent resistors ([B16] section 5.2.1), transparent capacitors ([B16] sections 5.2.2 and 5.3.4), transparent inductors ([B16] section 5.2.3), transparent PN diodes ([B16] section 5.3.1), transparent MIS (Metal-Insulator-Semiconductor) diodes ([B16] section 5.3.1), and transparent thin-film transistors (TTFTs) ([B16] section 5.4). Additional information and applications are provided, for example, in the book by Facchetti and Marks [B17].
F. Organic Semiconductors and Organic Optoelectronics
Closely related to the areas of transparent electronics, printed electronics, and flexible electronics is the area of organic semiconductors and organic optoelectronics. Organic semiconductor materials facilitate many aspects of transparent electronics, printed electronics, and flexible electronics, for example (a) replacing the band gap employed in traditional crystalline semiconductors with the energy band transition between highest-occupied molecular orbitals and lowest-unoccupied molecular orbitals, and (b) replacing the crystal lattice structure of traditional crystalline semiconductors with the structures of polymers. There are many other aspects of organic semiconductors besides these. An introductory discussion of organic semiconductor materials is provided for example in Chapter 2 of the 2004 book by Gamota, Brazis, Kalyanasundaram, and Zhang [B22], and other perspectives on organic semiconductors are provided in the 2013 book edited by Cantatore [B23], although dozens of suitable and more contemporary books and journal publications abound. One of many important aspects is that organic semiconductor materials can facilitate the use of solution-based (“ink”) printing fabrication (see for example [B24]) and other techniques applicable to deposition on curved surfaces and large-area surfaces, and which facilitate flexible/bendable active electronics. Other important aspects of organic semiconductor materials include transparent capabilities and incorporation of a wide range of new optoelectronic capabilities.
A major commercial and technology aspect of organic electronics resulted from the development of an organic optoelectronic element known as the Organic Light Emitting Diode (OLED). It is commonly attributed that the OLED was discovered at Kodak during solar-cell research (see for example [B20] section 3.3), and Kodak was a leader in this area for many subsequent years. Combined with thin-film transistors, active-matrix OLED-array displays became commercially available and were used in mobile phones. Early active-matrix OLED-array displays suffered from various problems and limitations, but the underlying materials, devices, system designs, and manufacturing techniques are yielding constant improvement, and every year or so major new products appear or are announced. One example is the curved-screen OLED television set, and the use of another generation of OLED displays has been announced for forthcoming mass-market mobile phone products. Flexible OLED displays have been demonstrated at the annual Consumer Electronics Show for many consecutive years, and, as will be considered again later, a bendable OLED display was included in a product-concept panoramic camera [P36]. Also as will be discussed later, transparent active-matrix OLED-array displays and transparent OLED-array pixel-addressing circuits using transparent thin-film transistors (TTFTs), transparent capacitors, and transparent conductors have been developed (see for example [B16] section 6.3.5; [B17] Chapter 12; [B18]; [B19] section 8.2; [B23] Chapter 3).
OLED-array displays can be fabricated by printing (for example using semiconducting, conducting, insulative, and resistive inks) or by non-printed methods (a discussion of non-printed fabrication methods can be found in [B19] chapter 3, sections 6.1 and 6.3). Developments in materials and fabrication techniques also create extensions in size (including use of OLED-array tiling; see for example [B19] section 8.3) and degrees of curvature (for example a dome-shaped OLED display [P10]). Flat-panel imager addressing circuits employing thin-film transistors and PIN or MIS light sensors are also known; see for example [B18] sections 1.2, 2.2.1, 3.1, 3.2, 5.2, 6.1, and 6.2.
Attention is now directed toward organic photodiodes. Organic photodiodes are widely viewed as providing an enabling route to new devices and new applications that were previously impossible and likely to remain outside the reach of conventional semiconductors. Although the image sensor community retains strong favoritism for crystalline semiconductor photodiodes monolithically integrated with CMOS electronics (drawing on established belief structures and early performance metrics), organic photodiodes are rapidly gaining immense validation and radically expanding interest. As stated in the opening of [P22]: “Powerful, inexpensive and even flexible when they need to be, organic photodiodes are a promising alternative to silicon-based photodetectors.” More detail as to this is provided in the summarizing remarks in the opening of [P2]: “Organic photodiodes (OPDs) are now being investigated for existing imaging technologies, as their properties make them interesting candidates for these applications. OPDs offer cheaper processing methods, devices that are light, flexible and compatible with large (or small) areas, and the ability to tune the photophysical and optoelectronic properties—both at a material and device level . . . with their performance now reaching a point that they are beginning to rival their inorganic counterparts in a number of performance criteria including the linear dynamic range, detectivity, and color selectivity.” Additional important points are made in the opening remarks of [P17]: “There are growing opportunities and demands for image sensors that produce higher-resolution images, even in low-light conditions. Increasing the light input areas through 3D architecture within the same pixel size can be an effective solution to address this issue. Organic photodiodes (OPDs) that possess wavelength selectivity can allow for advancements in this regard.”
Further important remarks are provided in the opening words of [P30]: “Organic semiconductors hold the promise for large-area, low-cost image sensors and monolithically integrated photonic microsystems . . . published structures of organic photodiodes offer high external quantum efficiencies (EQE) of up to 76% . . . we report on organic photodiodes with state-of-the-art EQE of 70% at 0 V bias, an on/off current ratio of 106 . . . , dark current densities below 10 nA/cm2 . . . , and a lifetime of at least 3000 h . . . .” Other general discussion of organic photodiodes can be found in many references, for example [B21], [P21], [P27].
Like OLEDs, and often using essentially the same materials, organic photodiodes can readily be transparent [P20] or semitransparent [P19], fabricated via printing (see for example [P15], [P74]), and flexible (see for example [P2], [P54]). They can deliver high sensitivity (see for example [P30], [P74]) and overall high performance (see for example [P2], [P17], [P19], [P22], [P30]). They can also include avalanche and photomultiplication capabilities (see for example [P18]). Additionally, like crystalline semiconductor LEDs when used as light sensors, they have been affirmed to provide wavelength-selective light-sensing properties (see for example [P2], [P17], [P18], [P19], [P22], [P55]) that forgo the need for optical filters (as pointed out many years prior in several of the inventor's patents) in, for example, color imaging.
Many efforts have been directed towards combining OLED arrays with various types of photosensors to create sensors for biomedical applications. Many of these combine OLEDs with silicon-based photodetectors (see for example [P16], [P23]), but there have been recent efforts combining OLEDs with organic photodiodes (see for example [P7], [P8], [P9]). As to the enabling value of such integrations, [P23] states: “Point-of-care molecular diagnostics can provide efficient and cost-effective medical care, and they have the potential to fundamentally change our approach to global health. However, most existing approaches are not scalable to include multiple biomarkers. As a solution, we have combined commercial flat panel OLED display technology with protein microarray technology to enable high-density fluorescent, programmable, multiplexed biorecognition in a compact and disposable configuration with clinical-level sensitivity.”
As with crystalline semiconductor photosensors, increased performance for some applications can often be obtained by creating photo-sensitive transistors which, in effect, replace an electrically-responsive controlling input of a transistor with a light-responsive controlling input. Accordingly, there is active work in the area of organic phototransistors and organic phototransistor arrays (see for example [P25], [P27]), including for envisioned use as image detectors [P29]. Like organic photodiodes, organic phototransistors can also be wavelength-selective (see for example [P24], [P26]), high-performance (see for example [P24], [P25], [P26], [P29]), and flexible (see for example [P25], [P57]).
In addition to the aforedescribed transparent organic photodiodes, transparent organic phototransistors, and active-matrix interface circuits for them, transparent charge-coupled devices are also known (see for example [B16] section 6.3.6).
G. Printed Electronics and Optoelectronics
Printed electronics and optoelectronics have already been mentioned throughout many of the above sections. Information regarding general materials for printed electronics can be found in a number of publications, for example [B17] p. 54, [B24], [B25], and [B26]. Organic semiconductor materials suitable for printing are described in a number of publications, for example [B22] Chapter 2. Printed electronics manufacturing processes can be found in a number of publications, for example [B22] Chapter 3. OLED-array display printed fabrication is discussed in, for example, [B19] section 6.2. Printed organic photodiodes are discussed in, for example, [P74] (featuring high performance) and [P15] (commercial availability). The important prospects for printable CMOS circuitry are discussed for example in [B16] p. 44, [B23] p. 124, and [P58].
H. Flexible and Bendable Electronics
Flexible and bendable electronics have already been mentioned throughout many of the above sections. Information regarding general materials can be found in a number of publications (see for example [B27], [B28], [B29], [B31], [B32], [P51]). General applications are discussed in [B27], [B28], [B29], and large-area applications are discussed in [B30]. Fabrication by printing methods is discussed in many references (see for example [P51], [P52]), and performance improvements are frequently announced (see for example [P50]). The expected wide acceptance of flexible electronics has been discussed in [P49]. The IOPscience multidisciplinary journal Flexible and Printed Electronics™ publishes cutting-edge research across all aspects of printed plastic, flexible, stretchable, and conformable electronics.
I. Flexible and Bendable Optoelectronics
Flexible and bendable optoelectronics have already been mentioned throughout many of the above sections. Information regarding general materials can be found in a number of publications (see for example [B26]). Flexible TFTs for flexible OLED displays are discussed in, for example, [B19] section 8.1 and [P75]. High-performance flexible organic photodiode arrays are discussed in, for example, [P54] and [P75]. High-performance flexible organic phototransistor arrays are discussed in [P25]. Large-area flexible organic photodiode sheet image scanners are discussed in [P75]. A prototype for a commercial (panoramic camera) product employing a flexible OLED display and conformation/deformation sensing is described in [P36]. Although not using flexible optoelectronics, the related work on transparent, flexible, scalable, and disposable image sensors using thin-film luminescent concentrators is presented in [P28].
J. Summarizing Functional- And Timeline-Comparison Table
For purposes of summarizing, certain aspects, advantages, and novel features are described herein. Not all such advantages can be achieved in accordance with any one particular embodiment. Thus, the disclosed subject matter can be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages without achieving all advantages as taught or suggested herein.
The invention provides for a rigid or flexible surface to be configured to implement a lensless light-field sensor, producing electrical signals that can be used in real time, or stored and later retrieved, and provided to a computational inverse model algorithm executing on computational hardware comprising one or more computing elements so as to implement a lensless light-field camera.
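The arrangement above can be sketched numerically; this is a hedged toy illustration under assumed geometry (a 1-D sensor whose elements each view the scene through a vignetting passage with a strong on-axis response and weak leakage from neighbors), with the window width and weights chosen only for illustration and not taken from the invention:

```python
import numpy as np

N = 40                   # assumed number of sensor elements / scene samples
A = np.zeros((N, N))
for i in range(N):
    lo, hi = max(0, i - 3), min(N, i + 4)   # assumed 7-wide vignetting window
    A[i, lo:hi] = 0.5 / (hi - lo)           # weak leakage from neighboring directions
    A[i, i] += 1.0                          # strong on-axis response

rng = np.random.default_rng(1)
scene = rng.random(N)
readings = A @ scene     # electrical signals the lensless light-field sensor produces

# Computational inverse model: recover a focused image from the raw readings,
# executed on computational hardware in real time or after storage and retrieval.
image = np.linalg.solve(A, readings)
```

The diagonally dominant forward matrix guarantees the toy inverse model is well posed; a physical sensor would instead calibrate or derive its forward model from the actual vignetting geometry.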
In another aspect of the invention, a rigid surface is configured to additionally function as a housing and thus operate as a “seeing housing”.
In another aspect of the invention, a rigid surface is configured to additionally function as a protective plate and thus operate as a “seeing armor”.
In another aspect of the invention, a rigid surface is configured to additionally function as an attachable tile and thus operate as a “seeing tile”.
In another aspect of the invention, a rigid surface is configured to additionally function as an attachable film and thus operate as a “seeing film”.
In another aspect of the invention, a flexible surface is configured to additionally function as an attachable film and thus operate as a “seeing film”.
In another aspect of the invention, a flexible surface is configured to additionally function as a garment and thus operate as a “seeing garment”.
In another aspect of the invention, a flexible surface is configured to additionally function as a shroud and thus operate as a “seeing shroud”.
In another aspect of the invention, a flexible surface is configured to additionally function as an enveloping skin and thus operate as a “seeing skin”.
In another aspect of the invention, the rigid or flexible surface is small in size.
In another aspect of the invention, the rigid or flexible surface is large in size.
In another aspect of the invention, the rigid or flexible surface is flat.
In another aspect of the invention, the rigid or flexible surface is curved.
In another aspect of the invention, the rigid or flexible surface is rendered as a polytope.
In another aspect of the invention, the rigid or flexible surface is rendered as a dome.
In another aspect of the invention, the rigid or flexible surface is rendered as a part of a sphere.
In another aspect of the invention, the rigid or flexible surface is rendered as a part of a spheroid.
In another aspect of the invention, the rigid or flexible surface is rendered as a sphere.
In another aspect of the invention, the rigid or flexible surface is rendered as a spheroid.
In another aspect of the invention, the rigid or flexible surface is transparent.
In another aspect of the invention, the rigid or flexible surface is translucent.
In another aspect of the invention, the rigid or flexible surface is opaque.
In another aspect of the invention, the rigid or flexible surface performs contact sensing.
In another aspect of the invention, the rigid or flexible surface is configured to perform contact image sensing with near-zero separation distance.
In another aspect of the invention, the rigid or flexible surface is configured to perform contact image sensing with zero separation distance.
In another aspect of the invention, the rigid or flexible surface performs distributed optical imaging.
In another aspect of the invention, the rigid or flexible surface performs distributed optical sensing.
In another aspect of the invention, the rigid or flexible surface performs image sensing of ultraviolet light.
In another aspect of the invention, the rigid or flexible surface performs image sensing of infrared light.
In another aspect of the invention, the rigid or flexible surface performs image sensing of selected ranges of visible color light.
In another aspect of the invention, the rigid or flexible surface performs imaging.
In another aspect of the invention, the rigid or flexible surface performs distributed chemical sensing employing optical chemical sensing properties of at least one material.
In another aspect of the invention, the rigid or flexible surface performs distributed radiation sensing employing optical radiation sensing properties of at least one material.
In another aspect of the invention, the rigid or flexible surface performs distributed magnetic field sensing employing optical magnetic field sensing properties of at least one material.
In another aspect of the invention, the rigid or flexible surface is configured to emit light.
In another aspect of the invention, the rigid or flexible surface is configured to operate as a light-emitting display.
In another aspect of the invention, the rigid or flexible surface is configured to operate as a selectively self-illuminating contact imaging sensor.
In another aspect of the invention, the computational inverse model algorithm is configured to provide variable focusing.
In another aspect of the invention, the computational inverse model algorithm is configured to provide mixed depth-of-field focusing.
In another aspect of the invention, the computational inverse model algorithm is configured to implement a viewpoint with a controllable location.
In another aspect of the invention, the computational inverse model algorithm is configured to implement a plurality of viewpoints, each viewpoint having a separately controllable location.
In another aspect of the invention, the computational inverse model algorithm is configured to provide pairs of outputs so as to function as a stereoscopic camera.
In another aspect of the invention, the computational inverse model algorithm is configured to capture a panoramic view.
In another aspect of the invention, the computational inverse model algorithm is configured to capture a 360-degree view.
In another aspect of the invention, the computational inverse model algorithm is configured to capture a partial spherical view.
In another aspect of the invention, the computational inverse model algorithm is configured to capture a full spherical view.
In another aspect of the invention, the rigid or flexible surface is configured to perform enveloping image sensing with near-zero separation distance.
In another aspect of the invention, the rigid or flexible surface is configured to perform contact enveloping sensing with zero separation distance.
In another aspect of the invention, the rigid or flexible surface is configured to operate as a selectively self-illuminating enveloping imaging sensor.
In another aspect of the invention, the computational inverse model algorithm is configured to operate at slow-frame video rates.
In another aspect of the invention, the computational inverse model algorithm is configured to operate at conventional video rates.
In another aspect of the invention, the computational inverse model algorithm and computational hardware are configured to operate at high-speed video rates.
In another aspect of the invention, a lensless light-field imaging system comprising
In another aspect of the invention, the light sensing elements of the array of light sensing elements are oriented in space to form a curved surface.
In another aspect of the invention, spatial positions of the plurality of focused image portions form a planar surface.
In another aspect of the invention, the light sensing elements of the array of light sensing elements are oriented in space to form a planar surface.
In another aspect of the invention, the spatial positions of the plurality of focused image portions form a curved surface.
In another aspect of the invention, the light sensing elements of the array of light sensing elements are oriented in space to form a curved surface and the spatial positions of the plurality of focused image portions form a curved surface.
In another aspect of the invention, the algorithm is controlled by at least one separation distance parameter.
In another aspect of the invention, the algorithm is controlled by a plurality of localized separation distance parameters.
In another aspect of the invention, the first electronics comprises multiplexing electronics.
In another aspect of the invention, the first electronics comprises at least one transimpedance amplifier circuit.
In another aspect of the invention, the light sensing elements comprise organic photodiodes.
In another aspect of the invention, the light sensing elements comprise organic light emitting diodes.
In another aspect of the invention, the light sensing elements comprise organic diodes that are co-optimized for both light emission and light sensing.
In another aspect of the invention, the light sensing elements are arranged to emit light for some interval of time.
In another aspect of the invention, the light sensing elements are arranged to emit light for some interval of time under the control of the first electronics.
In another aspect of the invention, the angularly-varying sensitivity of the light sensing elements results at least in part from the structure of the light sensing elements.
In another aspect of the invention, the angularly-varying sensitivity of the light sensing elements results at least in part from a structure attached to the array of light sensing elements.
In another aspect of the invention, the array of light sensing elements is fabricated by a printing process.
In another aspect of the invention, the structure attached to the array of light sensing elements is fabricated by a printing process.
In another aspect of the invention, the structure attached to the array of light sensing elements comprises segregated optical paths.
In another aspect of the invention, the segregated optical paths are created by separating surfaces.
In another aspect of the invention, the separating surfaces are at least partially-reflective.
In another aspect of the invention, the separating surfaces are configured to facilitate surface plasmon propagation.
In another aspect of the invention, at least one of the light sensing elements is color selective.
In another aspect of the invention, the color-selective property results from a band gap property of a semiconductor device element comprised by the at least one light sensor.
In another aspect of the invention, the algorithm comprises array multiplication of numerical values responsive to the plurality of electronically-represented digital numbers.
In another aspect of the invention, the algorithm comprises array multiplication of numerical values obtained from the calculation of a generalized inverse matrix.
In another aspect of the invention, the algorithm comprises array multiplication of numerical values obtained from an interpolation.
In another aspect of the invention, the algorithm comprises array multiplication of numerical values obtained from a predictive analytical model.
In another aspect of the invention, the algorithm comprises array multiplication of numerical values derived from a predictive analytical model.
In another aspect of the invention, the algorithm comprises array multiplication of numerical values derived from empirical measurements.
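Several of the aspects above describe the algorithm as array multiplication by numerical values obtained from the calculation of a generalized inverse matrix. The following is a minimal numerical sketch of that pipeline; the array sizes are hypothetical and a random transport matrix stands in for a measured or analytically modeled light-field response:

```python
import numpy as np

# Hypothetical sizes: a 16x16 source image observed by a 20x20 sensor array.
n_src, n_meas = 16 * 16, 20 * 20

rng = np.random.default_rng(0)

# T models light transport from each source pixel to each sensor pixel
# (vignetting, angular sensitivity, geometry); random here for illustration.
T = rng.random((n_meas, n_src))

# Forward model: each measurement is a linear mix of source-pixel intensities.
image = rng.random(n_src)
measurements = T @ image

# Inverse model: the Moore-Penrose generalized inverse can be precomputed
# once, then applied to each frame by a single array multiplication.
T_pinv = np.linalg.pinv(T)
recovered = T_pinv @ measurements

assert np.allclose(recovered, image, atol=1e-6)
```

Because the system is overspecified (400 measurements of 256 unknowns), the precomputed pseudo-inverse recovers the source array exactly in this noiseless illustration.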
The above and other aspects, features and advantages of the present invention will become more apparent upon consideration of the following description of preferred embodiments taken in conjunction with the accompanying drawing figures, wherein:
a, adapted from
b, adapted from
a, adapted from the figure available on the internet at https://en.wikibooks.org/wiki/Introduction_to_Inorganic_Chemistry/Electronic_Properties_of_Materials:_Superconductors_and_Semiconductors#/media/File:PnJunction-E.PNG as retrieved Jul. 3, 2017 (top portion), depicts a representation of the active carrier flow of a forward biased switching diode wherein, by design, current-flow directional switching functions are optimized and light-emission and light-detection capabilities of PN junctions are suppressed.
b, adapted from the figure available on the internet at https://en.wikibooks.org/wiki/Introduction_to_Inorganic_Chemistry/Electronic_Properties_of_Materials:_Superconductors_and_Semiconductors#/media/File:PnJunction-E.PNG as retrieved Jul. 3, 2017 (middle portion), depicts the blocked carrier flow of a reversed biased situation for the switching diode depicted in
c, adapted from the figure available on the internet at https://en.wikibooks.org/wiki/Introduction_to_Inorganic_Chemistry/Electronic_Properties_of_Materials:_Superconductors_and_Semiconductors#/media/File:PnJunction-E.PNG as retrieved Jul. 3, 2017 (bottom portion), depicts an energy-band representation of a switching diode wherein, by design, current-flow directional switching functions are optimized and light-emission and light-detection capabilities of PN junctions are suppressed.
d, adapted from the image available at http://www.learnabout-electronics.org/Semiconductors/diodes_23.php as visited on Jun. 20, 2017, depicts a representation of the physical construction of a switching diode wherein, by design, current-flow directional switching functions are optimized and light-emission and light-detection capabilities of PN junctions are suppressed.
a, adapted from the top portion of a figure available on the internet at https://en.wikipedia.org/wiki/Light-emitting_diode#/media/File:PnJunction-LED-E.svg as retrieved Jul. 3, 2017, depicts a carrier-process representation of an operating (inorganic or organic) semiconducting PN junction light-emitting diode (LED).
b, adapted from the bottom portion of a figure available on the internet at https://en.wikipedia.org/wiki/Light-emitting_diode#/media/File:PnJunction-LED-E.svg as retrieved Jul. 3, 2017, depicts an energy-transition representation of an operating (inorganic or organic) semiconducting PN junction light-emitting diode (LED).
a, adapted from Figure 4.7.1 of the on-line notes “Principles of Semiconductor Devices” by B. Van Zeghbroeck, 2011, available at https://ecee.colorado.edu/˜bart/book/book/chapter4/ch4_7.htm as retrieved Jul. 3, 2017, depicts an abstracted structural representation of an example (inorganic or organic) simple (“single-heterostructure”) semiconducting PN junction light-emitting diode (LED).
b, adapted from Figure 7.1 of the on-line table of figures available on the internet at https://www.ecse.rpi.edu/˜schubert/Light-Emitting-Diodes-dot-org/chap07/chap07.htm as retrieved Jul. 3, 2017, depicts an abstracted structural representation of an example (inorganic or organic) more complex double-heterostructure semiconducting PN junction light-emitting diode (LED), here effectively configured as a two-PN junction sandwich.
a, depicts an example energy-transition representation of an operating (inorganic or organic) simple semiconducting PN junction photodiode.
b, simplified and adapted from the first two figures in “Comparison of waveguide avalanche photodiodes with InP and InAlAs multiplication layer for 25 Gb/s operation” by J. Xiang and Y. Zhao, Optical Engineering, 53(4), published Apr. 28, 2014, available at http://opticalengineering.spiedigitallibrary.org/article.aspx?articleid=1867195 as retrieved Jul. 3, 2017, depicts an example structural representation of an example simple layered-structure PIN (inorganic or organic) semiconducting PN junction photodiode.
a, adapted from
b, adapted from the first two figures in “Comparison of waveguide avalanche photodiodes with InP and InAlAs multiplication layer for 25 Gb/s operation” by J. Xiang and Y. Zhao, Optical Engineering, 53(4), published Apr. 28, 2014, available at http://opticalengineering.spiedigitallibrary.org/article.aspx?articleid=1867195 as retrieved Jul. 3, 2017, depicts an example structural representation of an example layered-structure avalanche semiconducting PN junction photodiode.
a, adapted from [P5], depicts a schematic representation of the arrangements and intended operational light paths for a pinhole camera.
b, adapted from [P5], depicts a schematic representation of the arrangements and intended operational light paths for a (simplified or single-lens) lens-based camera.
c, adapted from [P5], depicts a schematic representation of the arrangements and intended operational light paths for a mask-based camera, such as those discussed in [P62]-[P67].
a, adapted from [B7] Figure C2.5.16(a), depicts a first example pixel-cell multiplexed-addressing circuit for an individual OLED within an OLED array that includes a dedicated light-coupled monitoring photodiode for use in regulating the light output of the individual OLED so as to prevent user-observed fading or other brightness-variation processes.
b, adapted from [B7] Figure C2.5.16(b), depicts a second example pixel-cell multiplexed-addressing circuit for an individual OLED within an OLED array that includes a dedicated light-coupled monitoring photodiode for use in regulating the light output of the individual OLED so as to prevent user-observed fading or other brightness-variation processes.
a, adapted from [B18], depicts an example pixel-cell multiplexed-addressing circuit for an individual OLED within an OLED array with a monitoring feature.
b, also adapted from [B18], depicts an example pixel-cell multiplexed-addressing circuit for an isolated high-performance photodiode or phototransistor within a high-performance photodiode or phototransistor array with a forced-measurement provision.
In the following description, reference is made to the accompanying drawing figures which form a part hereof, and which show by way of illustration specific embodiments of the invention. It is to be understood by those of ordinary skill in this technological field that other embodiments may be utilized, and structural, electrical, as well as procedural changes may be made without departing from the scope of the present invention.
In the following description, numerous specific details are set forth to provide a thorough description of various embodiments. Certain embodiments may be practiced without these specific details or with some variations in detail. In some instances, certain features are described in less detail so as not to obscure other aspects. The level of detail associated with each of the elements or features should not be construed to qualify the novelty or importance of one feature over the others.
a, adapted from
The present invention provides for co-optimization of doping, electrode configurations, structure, and other attributes for both light-sensing and light-emission, giving rise to entirely new kinds of semiconductor optoelectronic elements and devices. Rapidly evolving organic semiconductor material science methods, including polymer properties and meta-material properties, can be used to improve quantum efficiency, noise performance, transparency, size requirements, electrical characteristics, etc., as well as facilitate useful manufacturing techniques such as high-resolution printing. Additional features, such as angular selectivity and wavelength selectivity, can also be included. Additional structures, such as vignetting or aperturing arrays, reflective optical path walls to reduce incident light-loss, angular diversity, curvature, flexibility, etc., can be co-integrated, and can for example be designed to produce predictable, reproducible optical sensing behaviors. Exotic features, such as predictable and/or reproducible surface plasmon propagation to selected light sensors to further reduce incoming light loss, and the use of quantum dots, can be included.
b, adapted from
a, adapted from the figure available on the internet at https://en.wikibooks.org/wiki/Introduction_to_Inorganic_Chemistry/Electronic_Properties_of_Materials:_Superconductors_and_Semiconductors#/media/File:PnJunction-E.PNG as retrieved Jul. 3, 2017 (top portion), depicts a representation of the active carrier flow of a forward biased switching diode wherein, by design, current-flow directional switching functions are optimized and light-emission and light-detection capabilities of PN junctions are suppressed.
b, adapted from the figure available on the internet at https://en.wikibooks.org/wiki/Introduction_to_Inorganic_Chemistry/Electronic_Properties_of_Materials:_Superconductors_and_Semiconductors#/media/File:PnJunction-E.PNG as retrieved Jul. 3, 2017 (middle portion), depicts the blocked carrier flow of a reversed biased situation for the switching diode depicted in
c, adapted from the figure available on the internet at https://en.wikibooks.org/wiki/Introduction_to_Inorganic_Chemistry/Electronic_Properties_of_Materials:_Superconductors_and_Semiconductors#/media/File:PnJunction-E.PNG as retrieved Jul. 3, 2017 (bottom portion), depicts an energy-band representation of a switching diode wherein, by design, current-flow directional switching functions are optimized and light-emission and light-detection capabilities of PN junctions are suppressed.
d, adapted from the image available at http://www.learnabout-electronics.org/Semiconductors/diodes_23.php as visited on Jun. 20, 2017, depicts a representation of the physical construction of a switching diode wherein, by design, current-flow directional switching functions are optimized and light-emission and light-detection capabilities of PN junctions are suppressed.
a, adapted from the top portion of a figure available on the internet at https://en.wikipedia.org/wiki/Light-emitting_diode#/media/File:PnJunction-LED-E.svg as retrieved Jul. 3, 2017, depicts a carrier-process representation of an operating (inorganic or organic) semiconducting PN junction light-emitting diode (LED).
b, adapted from the bottom portion of a figure available on the internet at https://en.wikipedia.org/wiki/Light-emitting_diode#/media/File:PnJunction-LED-E.svg as retrieved Jul. 3, 2017, depicts an energy-transition representation of an operating (inorganic or organic) semiconducting PN junction light-emitting diode (LED).
a, adapted from Figure 4.7.1 of the on-line notes “Principles of Semiconductor Devices” by B. Van Zeghbroeck, 2011, available at https://ecee.colorado.edu/˜bart/book/book/chapter4/ch4_7.htm as retrieved Jul. 3, 2017, depicts an abstracted structural representation of an example (inorganic or organic) simple (“single-heterostructure”) semiconducting PN junction light-emitting diode (LED).
b, adapted from Figure 7.1 of the on-line table of figures available on the Internet at https://www.ecse.rpi.edu/˜schubert/Light-Emitting-Diodes-dot-org/chap07/chap07.htm as retrieved Jul. 3, 2017, depicts an abstracted structural representation of an example (inorganic or organic) more complex double-heterostructure semiconducting PN junction light-emitting diode (LED), here effectively configured as a two-PN junction sandwich. When component layers are properly doped, a P-I-N (“P-type”/“Intrinsic”/“N-type”) structure is formed, confining charge carriers into a “small” energy gap surrounded by abrupt energy discontinuities that can be used to create a quantum well; the charge carriers recombine in the “Intrinsic” region and emit photons with wavelengths defined by corresponding discrete permissible energy transitions.
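As a numerical aside to the quantum-well description above: the wavelength of the emitted photons follows from the permissible energy transition via λ = hc/E. A small illustrative helper (the 2.0 eV example value is hypothetical, not taken from the cited figures):

```python
# Photon wavelength implied by a band-gap (or quantum-well transition) energy:
# lambda = h*c / E, with energy in eV and wavelength in nanometers.
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def emission_wavelength_nm(band_gap_ev: float) -> float:
    """Wavelength (nm) of a photon emitted by a transition of the given energy (eV)."""
    return H_C_EV_NM / band_gap_ev

# Example: a ~2.0 eV transition corresponds to red light (~620 nm).
print(round(emission_wavelength_nm(2.0)))  # -> 620
```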
a, depicts an example energy-transition representation of an operating (inorganic or organic) simple semiconducting PN junction photodiode.
b, simplified and adapted from the first two figures in “Comparison of waveguide avalanche photodiodes with InP and InAlAs multiplication layer for 25 Gb/s operation” by J. Xiang and Y. Zhao, Optical Engineering, 53(4), published Apr. 28, 2014, available at http://opticalengineering.spiedigitallibrary.org/article.aspx?articleid=1867195 as retrieved Jul. 3, 2017, depicts an example structural representation of an example simple layered-structure PIN (inorganic or organic) semiconducting PN junction photodiode.
a, adapted from
b, adapted from the first two figures in “Comparison of waveguide avalanche photodiodes with InP and InAlAs multiplication layer for 25 Gb/s operation” by J. Xiang and Y. Zhao, Optical Engineering, 53(4), published Apr. 28, 2014, available at http://opticalengineering.spiedigitallibrary.org/article.aspx?articleid=1867195 as retrieved Jul. 3, 2017, depicts an example structural representation of an example layered-structure avalanche semiconducting PN junction photodiode.
a, adapted from [P5], depicts a schematic representation of the arrangements and intended operational light paths for a pinhole camera. The box represents a light-tight enclosure with a pinhole opening on the left side that blocks much of the incoming light field (depicted as approaching from the right) but permits transmission of narrow-diameter incoming light rays to enter the enclosure and travel through a region of free-space so as to widen the light area to match that of a (typically rectangular) image sensor, film emulsion, display surface, etc.
b, adapted from [P5], depicts a schematic representation of the arrangements and intended operational light paths for a (simplified or single-lens) lens-based camera. The box represents a light-tight enclosure with a lens, and supporting opening for the lens, on the left side that bends most rays of the incoming light field (depicted as approaching from the right) for transmission and travel through a region of free-space defined by the lens focal length and the lens law equation so as to create a focused image of a selected depth-of-field onto a (typically rectangular) image sensor, film emulsion, display surface, etc.
c, adapted from [P5], depicts a schematic representation of the arrangements and intended operational light paths for a mask-based camera, such as those discussed in [P62]-[P67]. The relatively flatter box represents a light-tight enclosure with a masked opening on the left side that blocks some of the incoming light field (depicted as approaching from the right) and permits transmission of the remaining incoming light rays to enter the enclosure and travel through a shorter region of free-space so as to widen the light area to match that of a (typically rectangular) image sensor.
The invention further provides for vignetting arrays, aperturing arrays, or other optical structures attached to, co-fabricated on, or co-fabricated with an array of light sensors to include, for example, reflective optical path walls to reduce incident light-loss, angular diversity, curvature, flexibility, etc.
The invention further provides for vignetting arrays, aperturing arrays, or other optical structures attached to, co-fabricated on, or co-fabricated with an array of light sensors to be designed to produce predictable, reproducible optical sensing behaviors. The invention further provides for vignetting arrays, aperturing arrays, or other optical structures attached to, co-fabricated on, or co-fabricated with an array of light sensors to include or facilitate advanced light-processing features such as predictable and/or reproducible surface plasmon propagation to selected light sensors to further reduce incoming light loss, use of quantum dots, etc.
Additionally, the invention provides for each light-sensing pixel element in a light-sensing array to comprise one or more separate wavelength-selective light-sensing sub-elements, for example as taught in the inventor's 1999 and 2008 patent families. In some implementations these sub-elements can be spatially adjacent and share the same vignetting or other light-structuring pathway. In other implementations it is advantageous to stack two or more wavelength-selective light-sensing sub-elements in layers, analogous to Stacked Organic Light Emitting Diodes (SOLEDs) as discussed in the inventor's 1999 and 2008 patent families. It is further noted that structures stacking layers of two or more wavelength-selective light-sensing sub-elements can be designed to limit, or advantageously structure, the different vignetting effects each wavelength-selective light-sensing sub-element will experience at its particular depth in the layered stack. It is noted that recent (2016) developments in this area implement light-field imaging (without the use of microlenses) employing layers of “optical” sensors [P43].
Light-Field Origins, Propagation, and Lensless-Light-Field Sensing
Returning to the depiction illustrated in
In terms of the mathematical development above, objects or situations producing reflected, refracted, and/or light-emitted contributions to the Light-Field can be represented in a spatially-quantized manner as a light-field source array.
Case A: Fixed Separation Distance:
Although it will be shown that the constraints on this arrangement can be extremely relaxed, it can be initially convenient to regard the objects or situations producing contributions to the light-field as lying in a plane parallel to an image sensor plane, and the contributions to the light-field as comprising a planar (for example rectangular, other shapes explicitly admissible) array of light-providing “light-source” spatially-quantized pixels, each “light-source” pixel emitting light that in various manners makes its way to a parallel spatially-separated image sensor plane. The image sensor plane comprises a planar (for example rectangular, other shapes explicitly admissible) array of spatially-quantized “light-sensing” pixels, these “light-sensing” pixels producing an electrical signal that can be further processed. The discussion and capabilities of this development explicitly include cases with zero separation distance between at least a planar array of light-providing “light-source” pixels and at least a planar array of “light-sensing” pixels.
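One way to make Case A concrete is to build the sensor-response matrix from each light-sensing pixel's angularly-varying sensitivity. The following sketch uses a hypothetical 1-D slice with an assumed cosine-weighted vignetting cone; the geometry and all parameter values are illustrative only, not a prescribed design:

```python
import numpy as np

# Hypothetical 1-D slice of Case A: "light-source" pixels and "light-sensing"
# pixels on two parallel lines separated by a fixed distance d; each sensor
# has an angularly-varying sensitivity (a vignetting cone of half-angle
# theta_max, assumed cosine-weighted inside the cone).
n_src, n_sen, d = 32, 32, 5.0
theta_max = np.deg2rad(30.0)

src_x = np.arange(n_src, dtype=float)
sen_x = np.arange(n_sen, dtype=float)

# Angle from each sensor pixel to each source pixel (rows: sensors).
dx = src_x[None, :] - sen_x[:, None]
theta = np.arctan2(np.abs(dx), d)

# Response matrix: cosine-weighted inside the vignetting cone, zero outside.
T = np.where(theta <= theta_max, np.cos(theta), 0.0)

# Each row of T is one sensor's weighting of the light-source array;
# a measurement vector is then T @ source_intensities.
print(T.shape)  # -> (32, 32)
```

Because each sensor sees only a narrow cone of source pixels, T is a banded matrix, which is what makes the subsequent inverse computation well-posed.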
As an illustration of Case A,
Case B: Continuous Spatially-Varying Separation Distances —Non-Parallel Planes:
Relaxing the constraints in Case A, the above-described planes of (a) the objects or situations producing contributions to the light-field and (b) the image sensor are not parallel but rather oriented at some non-parallel and non-perpendicular dihedral angle. The resulting light-field has mixed separation distances. The discussion and capabilities of this development explicitly include cases with zero separation distance between at least a subset (even a 1-dimensional edge or even a single point) of a planar array of light-providing “light-source” pixels and a subset (even a 1-dimensional edge or even a single point) of at least a planar array of “light-sensing” pixels.
As an illustration of Case B,
As another illustration of Case B,
As yet another illustration of Case B,
Case C: Continuous Spatially-Varying Separation Distances—Curved Surfaces:
Relaxing the constraints in Case A in yet another way, one or both of (a) the objects or situations producing contributions to the light-field and/or (b) the image sensor reside on a smoothly-curved non-planar surface. The resulting light-field has mixed separation distances and, in some cases, possible occultation depending on variations in curvature. The discussion and capabilities of this development explicitly include cases with zero separation distance between at least a subset (even a 1-dimensional edge or even a single point) of a planar array of light-providing “light-source” pixels and a subset (even a 1-dimensional edge or even a single point) of at least a planar array of “light-sensing” pixels.
As an illustration of Case C,
As another illustration of Case C,
As a variation of Case C,
Case D: Multiple Parallel Planes of Mixed Discrete Separation Distances:
Relaxing the constraints in Case A in still another way, either one or both of (a) the objects or situations producing contributions to the light-field and/or (b) the image sensor reside on more than one planar surface, with various separation distances, typically a mix of separation distances. A resulting light-field comprises mixed separation distances with abrupt changes in the separation distance.
As an illustration of Case D,
Case E: Combinations of at Least One Parallel Plane and at Least One Non-Parallel Plane:
Generalizing Case D further, one or more instances of the situations of Case A and Case B are combined, resulting in a more complex light-field. In many situations occultation of portions of light fields can occur, for example in cases where one non-transparent source array blocks at least a portion of the light emitted by another source array as viewed by (one or more of the) sensor array(s).
As an illustration of Case E,
Case F: More Complex Combinations of Mixed Discrete and Continuous Spatially-Varying Separation Distances:
Generalizing Case E further, one or more instances of the situations of at least one of Case A and Case B are combined with at least one instance of the situation of Case C, resulting in a yet more complex light-field. In many situations occultation of portions of light fields can occur, for example in cases where one non-transparent source array blocks at least a portion of the light emitted by another source array as viewed by (one or more of the) sensor array(s).
As an illustration of Case F,
Generalized Inverse/Pseudo-Inverse Remarks
There are a number of types of generalized inverses that have been developed and surveyed in the literature; for example see [B9] Section 3.3, [B10] pp. 110-111, and the tables in [B11] pp. 14-17. Some types of generalized inverses are uniquely-defined while others are non-uniquely defined in terms of infinite families. The notion of a generalized inverse applies not only to finite-dimensional matrices but more broadly to (infinite-dimensional) linear operators; see for example [B12].
The Moore-Penrose generalized inverse, a special case of the Bjerhammar “intrinsic inverses” (see [B10] p. 105 and [P1]), is uniquely-defined ([B13] p. 180), exists for any rectangular (or square) matrix regardless of matrix rank ([B13] p. 179, [B14] p. 196, [B15] p. 19), and provides many properties found in the matrix inverse ([B14] p. 196) and beyond.
In particular, the Moore-Penrose generalized inverse inherently provides a unique solution providing a “Least-Squares” statistical fit in cases where solvable subsets of the larger number of equations give different inconsistent solutions; see for example [B15] pp. 17-19.
There are other types of generalized inverses that also provide least-squares properties; for example see entries annotated (3.1) and (3.2) in the table pp. 14 as well as sections 3.1-3.2 of [B11] as well as section 4.4.1 of [B13].
Further, the Moore-Penrose generalized inverse can be used to determine whether a solution to a set of linear equations exists ([B13] pp. 190-191).
Various extended definitions and generalized forms of the Moore-Penrose generalized inverse exist; see for example section 4.4.3 of [B13].
Some of the other types of generalized inverses are not useful for solving over-specified systems of linear equations (more equations than variables), for example the Drazin inverse which is restricted to square matrices and has more abstract applications; see for example [B13] Section 5.5.
Use of a generalized-inverse or pseudo-inverse (and use of the Moore-Penrose pseudo-inverse in particular) in solving for a “best-fit” image from overspecified (and likely inconsistent) measurement data was introduced in a 2008 inventor's patent family. It is noted that slightly-related work in the area of improving digital image resolution by use of “oversampling” can be found in the far earlier publication by Wiman [P3], but that is a different idea and goal. Rather, the inventor's use of a generalized-inverse or pseudo-inverse (and Moore-Penrose pseudo-inverse in particular) in solving for a “best-fit” image from overspecified (and likely inconsistent) measurement data provides robustness of image recovery with respect to damage or occultation of portions of the image sensor, etc.
The inventor's comprehensive lensless light-field imaging program (beginning with the inventor's 1999 patent family) includes a framework covering the approaches depicted in
Lensless Light-Field Imaging as an Associated Inverse Problem
The Inverse Model depiction illustrated in
The Inverse Model can, for example, be implemented as a matrix, a 4-tensor, or other mathematical and/or data and/or logical operation.
The Inverse Model can be fixed or adjustable, can be implemented in a distributed manner, and can be unique or variationally-replicated in various manners. The optical structure can be fixed or reconfigurable, and can be arranged to be in a fixed position with respect to the Optical Sensor or can be configured to be movable in some manner with respect to the Optical Sensor. Additionally, at this level of abstraction, one or both of the Optical Sensor and Lensless Optical Structure(s) themselves can be variable in their electrical, physical, optical, mechanical, and other characteristics. For example, one or both of the Optical Sensor and Lensless Optical Structure(s) themselves can be any one or more of flat, curved, bendable, elastic, elastically-deformable, plastically-deformable, etc.
The Inverse Model can be derived from analytical optical models, empirical measurements, or combinations of these. In some embodiments the Inverse Model can be parametrized using interpolation.
Interpolation-based parameterization can be particularly useful if the Inverse Model is based on a collection of selected empirical measurements, or if the analytical optical model involves complex numerical computations.
Using an empirically-trained numerical model for representing the linear transformation invoked by the optical arrangement, it is clearly possible to train the system to focus on an arbitrarily-shaped surface, including one that is curved, bent, or irregularly-shaped; the inversion math “does not care” as long as the resulting numerical model matrix is non-singular, and the recovered image will be obtained in the same manner as if the focus-surface was a parallel plane. Accordingly, in principle a predictive analytical model can be used to generate the numerical model matrix, and by either means (empirically-trained or predictively-modeled) the system and methods can be arranged to focus on an arbitrarily-shaped surface, including one that is curved, bent, or irregularly-shaped.
There are many noise processes inherent to light sensing and associated electronics and various resulting performance limitations and tradeoffs; see for example [P44], [B2]. A very general performance perspective is provided in the book by Janesick [B2]. In the limit, highest performance will be obtained by single-electron sensors and amplifiers; as to steps towards array sensors of this type see the paper by Richardson [P33]. The invention provides for inclusion of these considerations.
a, adapted from [B7] Figure C2.5.16(a), depicts a first example pixel-cell multiplexed-addressing circuit for an individual OLED within an OLED array that includes a dedicated light-coupled monitoring photodiode for use in regulating the light output of the individual OLED so as to prevent user-observed fading or other brightness-variation processes. The adjacent photodiodes are used for pixel-by-pixel closed-loop feedback of OLED brightness.
b, adapted from [B7] Figure C2.5.16(b), depicts a second example pixel-cell multiplexed-addressing circuit for an individual OLED within an OLED array that includes a dedicated light-coupled monitoring photodiode for use in regulating the light output of the individual OLED so as to prevent user-observed fading or other brightness-variation processes. There have been many other subsequent developments since the publishing of this book's tentatively-toned remarks; for example recently-announced OLED phones are said to be using this technique.
a, adapted from [B18], depicts an example pixel-cell multiplexed-addressing circuit for an individual OLED within an OLED array with a monitoring feature.
b, also adapted from [B18], depicts an example pixel-cell multiplexed-addressing circuit for an isolated high-performance photodiode or phototransistor within a high-performance photodiode or phototransistor array with a forced-measurement provision.
Additional Functional Architectures
As described earlier,
Imaging Algorithms
The development to follow is broad enough to cover a wide variety of sensor types and imaging frameworks, and can be expanded further. Although the development to follow readily supports the advanced features made possible by curved, flexible/bendable, transparent, light-emitting, and other types of advanced sensors taught in the present patent application and earlier inventor patent families, many of the techniques can be readily applied to optical arrangements, devices, and situations employing traditional image sensors such as CMOS, CCD, vidicon, etc. Accordingly, the present invention provides for the use of a wide range of image sensor types including CMOS, CCD, vidicon, flat, curved, flexible/bendable, transparent, light-emitting, and other types of advanced sensors taught in the present patent application and earlier inventor patent families, as well as other known and future types of image sensors.
Traditional Notation Conventions for Vectors, Matrices, and Matrix-Vector (Left) Multiplication
Let x be an M-dimensional column vector (i.e., an array of dimension M×1) comprising elements {xm}, 1≤m≤M,
and y be a J-dimensional column vector (i.e., an array of dimension J×1) comprising elements {yj}, 1≤j≤J.
Let A be a J×M matrix (an array of dimension J×M) comprising elements {ajm}, 1≤j≤J, 1≤m≤M:
The “matrix product” of a J×M matrix A with an M-dimensional column vector x can produce a J-dimensional column vector y; by “left multiplication” convention this is denoted as
y=Ax
where each element yj of the resulting J-dimensional column vector y is calculated as
yj=aj1x1+aj2x2+ . . . +ajMxM
for each j an integer such that 1≤j≤J. Then the entire J-dimensional column vector y is given by
Use to Represent Spatial Line-Array (1-Dimensional) Imaging (as used in Fax and Spectrometry Sensors)
In the above, one is using the matrix A as a transformational mapping from column vector x to column vector y
Analogous 4-Tensor Representation of Spatial Grid-Array (2-Dimensional) Imaging
For example, if column vector x represents a line-array of data (such as light source values directed to a line-array light-measurement sensor used in a fax scanner or optical spectrometer), the matrix A can represent a composite chain of linear optical processes, electro-optical processes, and interface electronics (transconductance, transimpedance, amplification, etc.) processes that result in measured data represented by a column vector y. Here the indices of the vectors and matrix signify unique well-defined discrete (step-wise) spatial positions in an underlying 1-dimensional spatial structure.
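By way of a non-limiting illustrative sketch (assuming a Python/NumPy computational environment; all names and sizes here are hypothetical), the line-array measurement relationship y=Ax can be rendered as:

```python
import numpy as np

# Hypothetical sizes: J measurements taken from an M-element line array.
M, J = 4, 4
rng = np.random.default_rng(0)

# A models the composite chain of linear optical, electro-optical,
# and interface-electronics processes.
A = rng.uniform(size=(J, M))
x = rng.uniform(size=M)  # line-array source light values

# "Left multiplication" convention: y = A x, with y_j = sum_m a_jm x_m
y = A @ x
assert np.isclose(y[0], sum(A[0, m] * x[m] for m in range(M)))
```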
A visual image as humans experience it through vision and conventional photography, as well as other analogous phenomena, has an underlying 2-dimensional spatial structure. In digital imaging, unique well-defined discrete (step-wise) spatial positions within an underlying 2-dimensional spatial structure are identified and/or employed—in the context of this discussion these may be called “pixels.”
Mathematically, a 2-dimensional array of mathematically-valued elements, such as a matrix, can provide a mathematical representation of an image wherein the indices of an element identify an individual pixel's spatial location and the value of that mathematical element represents the “brightness” of that pixel. For example, a J×K array of measured 2-dimensional image data arranged in rows and columns can be represented as a matrix of “brightness” values:
A convenient shorthand for this can be denoted as
Q={qjk}
where it is understood that each of the two indices spans a range of consecutive non-zero integer values
1≤j≤J, 1≤k≤K.
Similarly, a source image (2-dimensional array) can be represented as a matrix:
A similar convenient shorthand for this is denoted
S={Smn}
where it is understood that each of the two indices spans a range of consecutive non-zero integer values
1≤m≤M, 1≤n≤N.
A source image S can be transformed by linear optical processes, linear sensor processes, and linear electronics processes into a measured image Q. This can be represented mathematically as a linear matrix-to-matrix transformation mapping the matrix S to the matrix Q
akin to employing the earlier matrix A as a linear vector-to-vector transformation (for example, mapping column vector x to column vector y):
Most generally this linear transformation can be represented by a 4-dimensional array 4-tensor:
𝒯={tjkmn}
1≤j≤J
1≤k≤K
1≤m≤M
1≤n≤N
with the understanding that the multiplication rule represented by the tensor 𝒯 “multiplying” the matrix S is that each element qjk of the resulting matrix Q is given by
qjk=Σ(m=1 to M)Σ(n=1 to N)tjkmnsmn
Note this convention corresponds in form to the matrix case presented earlier:
The corresponding “multiplicative” product of the 4-tensor 𝒯 with matrix S to give matrix Q can be represented as
Q=𝒯S
which compares analogously to the “multiplicative” product of the matrix A with the vector x to give the vector y
y=Ax
With its four tensor-element indices and tensor-matrix product organized in this way, the 4-tensor with elements tjkmn can be readily and conveniently represented as a matrix-of-matrices where interior matrix blocks are indexed by row j and column k and the elements within each of the interior matrices are indexed by row m and column n. More specifically, the 4-tensor element tjkmn resides at row m and column n of the inner matrix that resides at row j and column k of the outer matrix. The resulting matrix-of-matrices representation for the 4-tensor 𝒯={tjkmn} with
1≤j≤J, 1≤k≤K, 1≤m≤M, 1≤n≤N
would be:
This matrix-of-matrices structure, where the mapping
is defined by
provides several important opportune outcomes, among these being:
Remark 1: It is noted that when the 4-tensor as defined thus far is represented in this “matrix of matrices” structure, the interior matrices are organized so that each interior matrix having block-position index {j, k} is associated with the corresponding outcome quantity qjk. An attractive property of this representation, as called out above, is that the value of the output quantity qjk is the sum of all the pointwise products of values of source image pixels smn scaled by the corresponding elements in the interior matrix that has block-position index {j, k}. Thus, in an optical imaging context, the values of elements in the interior matrix that has block-position index {j, k} graphically show the multiplicative “gain” (or “sensitivity”) attributed to each of the same-positioned source image pixels smn in the image source matrix S. In more pedestrian but intuitively useful terms, the values of elements in the interior matrix that has block-position index {j, k} display the “heat map” of responsiveness of an observed or measured sensor pixel qjk in the observed or measured image matrix Q to source image pixels smn in the image source matrix S.
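As a non-limiting numerical sketch of this multiplication rule and its “heat map” interpretation (assuming a Python/NumPy environment; all sizes hypothetical):

```python
import numpy as np

J, K, M, N = 3, 4, 3, 4
rng = np.random.default_rng(1)
T4 = rng.uniform(size=(J, K, M, N))  # 4-tensor elements t_jkmn
S = rng.uniform(size=(M, N))         # source image pixels s_mn

# Multiplication rule: q_jk = sum over m,n of t_jkmn * s_mn
Q = np.einsum('jkmn,mn->jk', T4, S)

# The interior matrix T4[j, k] is the "heat map" of responsiveness of
# measured pixel q_jk to each source pixel s_mn:
j, k = 1, 2
assert np.isclose(Q[j, k], np.sum(T4[j, k] * S))
```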
Remark 2: From this it is further noted that other kinds of 4-tensors could be reorganized in other ways that have other attractive merits. For example, a 4-tensor Ψ comprising elements Ψmnjk can be defined by the simple index reordering
Ψmnjk=tjkmn;
each interior matrix in the “matrix of matrices” structure for the 4-tensor Ψ having block-position index {m, n} represents a discrete “point-spread function” for a source pixel at position {m, n} into individual outcome pixels at position {j, k} as can be seen from the resulting relation
Although the “point-spread function” representation imposed by the “matrix of matrices” structure for 4-tensor Ψ has obvious customary attraction, the discourse will continue in terms of the 4-tensor 𝒯 comprising elements tjkmn as defined by
because of its organizational similarity with the conventional matrix definition
and with the understanding that all the subsequent development can be transformed from the definitions used for 4-tensor 𝒯 to the “point-spread”-oriented definitions for 4-tensor Ψ by the index re-mapping
Ψmnjk=tjkmn.
Remark 3: It is noted that for a (variables-separable) “separable” two-dimensional transform, such as the two-dimensional DFT, DCT, DST, etc., commonly used in traditional spectral image processing, affairs of the j and m indices are handled entirely separately from affairs of the k and n indices, so qjk takes the restricted “variables-separable” form, for example when J=M and K=N
in which case
tjkmn=djmdkn
For example, for the normalized DFT matrices operating on an image of M rows and N columns, these djm and dkn elements are:
where i=√(−1).
As another example of other kinds of 4-tensors being reorganized in other ways with attractive merits, a variables-separable 4-tensor Φ comprising elements ϕjmkn can be defined by the simple index reordering
ϕjmkn=tjkmn
which separately associates rows (indexed by j) in matrix Q with rows (indexed by m) in matrix S and separately associates columns (indexed by k) in matrix Q with columns (indexed by n) in matrix S.
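As a non-limiting sketch of the variables-separable case (assuming a Python/NumPy environment and one standard normalized-DFT convention; sign and scaling conventions vary):

```python
import numpy as np

M, N = 4, 5
rng = np.random.default_rng(2)
S = rng.uniform(size=(M, N))

# Normalized DFT matrices (one assumed convention)
DM = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M) / np.sqrt(M)
DN = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)

# Variables-separable 4-tensor: t_jkmn = d_jm * d_kn
T4 = np.einsum('jm,kn->jkmn', DM, DN)

# The general tensor product then reduces to the separable form D_M S D_N^T
Q_tensor = np.einsum('jkmn,mn->jk', T4, S)
Q_separable = DM @ S @ DN.T
assert np.allclose(Q_tensor, Q_separable)
```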
Remark 4: To further illustrate details and develop intuition of the “Matrix of matrices” structure, or at least the aforedescribed organization of indices and multiplication rule, one could employ the dimension signifier (J×K)×(M×N). As some examples:
Examples of Transpose Operations for 4-Tensors
Various types of “Transpose” operations involving self-inverting index-exchange operations for one or two pairs of indices can be defined from the (4·3·2·1)−1=23 possible non-identity index re-organizations overall.
Perhaps some of the most useful of these would include:
Incidentally it is noted, for example that:
As with an N×N “Identity” matrix employing the mapping
to map an N-dimensional vector to a copy of itself, using
αmn=δmn
where δpq is the “Kronecker delta”
an “Identity” 4-tensor (for example J=M and K=N) mapping a M×N matrix to an M×N copy of itself results from employing the mapping:
with
tjkmn=δjmδkn.
Note that this (variables-separable) structure gives qjk=sjk for each 1≤j≤M, 1≤k≤N.
Using the “matrix-of-matrices” representation, a (3×3)×(3×3) Identity 4-tensor would have the form:
Such a (3×3)×(3×3) Identity 4-tensor would map a 3×3 pixel source image S to a 3×3 pixel result image Q with Q=S.
More generally, Identity 4-Tensors map an M×N matrix to an M×N matrix, but the matrices need not individually be (row-column) “symmetric”—that is, one does not require M=N.
For example, using the “matrix-of-matrices” representation, a (3×2)×(3×2) Identity 4-Tensor that maps a 3×2 matrix to a 3×2 matrix would have the form:
Such a (3×2)×(3×2) Identity 4-tensor would map a 3×2 pixel source image S to a 3×2 pixel result image Q with Q=S.
For each of these Identity 4-tensor examples, regarding Remark 1 above (as to interpreting the values of elements in the interior matrix with block-position index {j, k} in an (M×N)×(M×N) “matrix of matrices” as representing a “heat map” of responsiveness of an observed or measured sensor pixel qjk in the observed or measured image matrix Q to source image pixels smn in the image source matrix S), the structure of an (M×N)×(M×N) Identity 4-tensor is crystal clear as to its rendering qjk=sjk for each 1≤j≤M, 1≤k≤N.
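A non-limiting sketch of the Identity 4-tensor tjkmn=δjmδkn (assuming a Python/NumPy environment), confirming Q=S for a non-square 3×2 case:

```python
import numpy as np

M, N = 3, 2
# Identity 4-tensor: t_jkmn = delta_jm * delta_kn
I4 = np.einsum('jm,kn->jkmn', np.eye(M), np.eye(N))

rng = np.random.default_rng(3)
S = rng.uniform(size=(M, N))
Q = np.einsum('jkmn,mn->jk', I4, S)
assert np.allclose(Q, S)  # the Identity 4-tensor maps S to a copy of itself
```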
Re-Indexing and Reorganization of a 4-Tensor-Operator Matrix-to-Matrix (Image-to-Image) Equation as a Matrix-Operator Vector-to-Vector Equation
Although in an image the row and column ordering, two-dimensional neighboring arrangement of pixels, and other such two-dimensional indexing details are essential, some linear transformations act entirely independently of the two-dimensional index structure. Examples are situations where one can regard the relationships defined by a tensor mapping between matrices such as
Q=𝒯S
as simply representing a set of simultaneous equations
In such circumstances one could without consequence uniquely re-index the variables with an indexing scheme that serializes the index sequence in an invertible way. For example, one can define two serializing indices p and r to serialize a (J×K)×(M×N)-dimensional 4-tensor comprising elements tjkmn into a JK×MN-dimensional matrix T comprising elements tpr using the index-mapping relations
p=(j−1)K+k
r=(m−1)N+n
those relations can be inverted via
j=Floor((p−1)/K)+1=Ceiling[p/K]
k=Mod(p−1,K)+1
m=Floor((r−1)/N)+1=Ceiling[r/N]
n=Mod(r−1,N)+1
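A non-limiting sketch (plain Python; the function names are hypothetical) verifying that these serializing index-mapping relations and their inversions are mutually consistent:

```python
from math import ceil

def serialize(j, k, K):
    # p = (j-1)K + k   (1-indexed row-major scan)
    return (j - 1) * K + k

def deserialize(p, K):
    # inverse relations: j = Ceiling(p/K), k = Mod(p-1, K) + 1
    return ceil(p / K), (p - 1) % K + 1

J, K = 3, 4
seen = set()
for j in range(1, J + 1):
    for k in range(1, K + 1):
        p = serialize(j, k, K)
        assert 1 <= p <= J * K and p not in seen  # one-to-one and onto
        seen.add(p)
        assert deserialize(p, K) == (j, k)        # invertible
```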
Using these, one can define the serialized-index vectors q, comprising elements
qp, 1≤p≤JK,
and s, comprising elements
sr, 1≤r≤MN,
which are simply “scanned” or “flattened” versions of matrix Q, comprising elements
qjk, 1≤j≤J, 1≤k≤K,
and matrix S, comprising elements
smn, 1≤m≤M, 1≤n≤N.
An example “scanning” or “flattening” index correspondence is
q(j−1)K+k↔qjk, 1≤j≤J, 1≤k≤K
s(m−1)N+n↔smn, 1≤m≤M, 1≤n≤N
and its corresponding inverse correspondence is
qp↔qCeiling(p/K),Mod(p−1,K)+1, 1≤p≤JK
sr↔sCeiling(r/N),Mod(r−1,N)+1, 1≤r≤MN.
The last pair of these index correspondences can be used to formally define index-serializing mappings
qp=qCeiling(p/K),Mod(p−1,K)+1, 1≤p≤JK
sr=sCeiling(r/N),Mod(r−1,N)+1, 1≤r≤MN
that provide a flattening reorganization of the elements qjk comprised by the J×K-dimensional matrix Q into a vector q comprising elements qp, and a flattening reorganization of the elements smn comprised by the M×N-dimensional matrix S into a vector s comprising elements sr. These result in flattening transformations Q→q and S→s.
The first pair of the index correspondences can be used to formally define index-vectorizing mappings
qjk=q(j−1)K+k, 1≤j≤J, 1≤k≤K
smn=s(m−1)N+n, 1≤m≤M, 1≤n≤N
that provide a partitioning reorganization of the elements qp of vector q into the elements qjk comprised by the J×K-dimensional matrix Q, and a partitioning reorganization of the elements sr of vector s into the elements smn comprised by the M×N-dimensional matrix S. These result in partitioning transformations q→Q and s→S which reconstruct the matrices Q and S from the serialized vectors q and s.
In a corresponding way, one can use these same serialized indices to correspondingly re-label and reorganize the values of the (J×K)×(M×N)-dimensional tensor 𝒯 into the JK×MN-dimensional matrix T. The mapping 𝒯→T is given by
t(j−1)K+k, (m−1)N+n=tjkmn
1≤j≤J, 1≤k≤K, 1≤m≤M, 1≤n≤N
and the reverse mapping T→𝒯 is given by
tCeiling(p/K),Mod(p−1,K)+1,Ceiling(r/N),Mod(r−1,N)+1=tpr
1≤p≤JK, 1≤r≤MN
Thus, because of the transformational equivalence between
for the same (but re-indexed) variables, this allows one to exactly represent the matrix-tensor equation
Q=𝒯S
as an equivalent vector-matrix equation
q=Ts
More generally, the index serialization functions can be arbitrary as long as they are one-to-one and onto over the full range and domain of the respective indices, and invertibly map pairs of integers to single integers. For example they could be organized as a scan in other ways, or even follow a fixed randomly-assigned mapping. In general one can write:
or more compactly
or more abstractly
This is extremely valuable, as it allows matrix methods to solve inverse problems or implement transformations on images. Of course matrix methods have been used in variables-separable image processing for decades employing various ad hoc constructions. Those ad hoc constructions could be formalized with the aforedescribed 4-tensor representation should one be interested in the exercise, but more importantly the aforedescribed 4-tensor representation and the formal isomorphic equivalence between 4-tensor linear transformations mapping matrices (representing images) to matrices (representing images) and matrix transformations mapping vectors to vectors bring clarity and methodology to complicated non-variables-separable linear imaging transformations, inverses, pseudo-inverses, etc. Also importantly, the aforedescribed 4-tensor representation readily extends to mappings among tensors, as may be useful in color, multiple-wavelength, tomographic, spatial data, and other settings and applications.
Additionally, as an aside: the aforedescribed 4-tensor representation naturally defines eigenvalue/eigenmatrix and eigenvalue/eigentensor problems; for example the eigenvalue/eigenmatrix problem
𝒯Zi=λiZi, 1≤i≤JK
for a (J×K)×(J×K) 4-tensor, with the collection of indexed scalars {λi}, 1≤i≤JK, being the scalar eigenvalues and the collection of indexed matrices {Zi}, 1≤i≤JK, being the eigenmatrices, is equivalent to the eigenvalue/eigenvector problem
Tzi=λizi, 1≤i≤JK
via
for calculation and analysis and transformed back via
These general processes can be order-extended and further generalized to similarly transform eigenvalue/eigentensor problems into equivalent eigenvalue/eigenvector problems, and extended further in various ways to replace the eigenvalue scalars with an “eigenvalue array.”
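A non-limiting sketch (assuming a Python/NumPy environment) of transforming the eigenvalue/eigenmatrix problem into an equivalent eigenvalue/eigenvector problem by flattening, then partitioning each eigenvector back into an eigenmatrix:

```python
import numpy as np

J, K = 2, 3
rng = np.random.default_rng(5)
T4 = rng.uniform(size=(J, K, J, K))  # "square" 4-tensor (M=J, N=K)

T = T4.reshape(J * K, J * K)         # equivalent JK x JK matrix
eigvals, eigvecs = np.linalg.eig(T)

# Each flattened eigenvector, partitioned back into a J x K matrix,
# is an eigenmatrix Z_i of the 4-tensor with the same eigenvalue:
for i in range(J * K):
    Z = eigvecs[:, i].reshape(J, K)
    assert np.allclose(np.einsum('jkmn,mn->jk', T4, Z), eigvals[i] * Z)
```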
As an additional aside, these same and similar approaches employing
and other combined or more generalized reorganization methods
can be order-extended and further generalized to similarly transform the vast understanding, rules, bases, transformations, vector spaces, spaces of matrices, properties of matrices, and matrix-vector equations into a wide range of tensor understandings, tensor rules, tensor bases, tensor transformations, and properties of tensors, spaces of tensors, and tensor-matrix and tensor-tensor equations.
Attention is next directed to inversion and then to image formation, and then (after first developing and using extensions of the aforedescribed 4-tensor representation to mappings among tensors) expanding these to color/multiple-wavelength imaging applications.
Inverse of a 4-Tensor
Accordingly, for
Q=𝒯S with M=J, N=K,
if all the represented individual equations are linearly independent and of full rank, then the matrix T defined by
is invertible and the pixel values of the source image S can be obtained from the pixel values of the measurement Q by simply inverting the matrix T:
s=T−1q
where the corresponding “flattening” and “partitioning” index transformations are employed among the matrices and vectors
Further, the pixel values of the source image S can be obtained from the pixel values of the measurement Q by simply inverting the matrix T to obtain T−1, multiplying the flattened measurement data q with T−1 to obtain the vector s, and partitioning the result into the source (image) matrix S:
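A non-limiting sketch (assuming a Python/NumPy environment; sizes hypothetical) of recovering the source image by flattening the measurement, inverting T, and partitioning the result:

```python
import numpy as np

M, N = 3, 3  # square case: J=M, K=N
rng = np.random.default_rng(6)
S_true = rng.uniform(size=(M, N))

# A full-rank composite measurement operator (a random T is invertible
# with overwhelming probability)
T = rng.uniform(size=(M * N, M * N))
q = T @ S_true.reshape(M * N)   # flattened measurement q = T s

s = np.linalg.solve(T, q)       # s = T^-1 q
S_recovered = s.reshape(M, N)   # partition back into the source image
assert np.allclose(S_recovered, S_true)
```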
It is noted that effectively the column vectors of the matrix T serve as the natural linearly-independent spanning basis of the composite sensor and optical arrangement corresponding to a particular positioning situation. The natural linearly-independent spanning basis is not necessarily orthogonal, although it can of course be orthogonalized if useful using Gram-Schmidt or other methods. Additionally, the natural linearly-independent spanning basis can be transformed into other coordinate systems defined by other basis functions should that be useful. Such transformations can include the effects of discrete Fourier transforms, wavelet transforms, Walsh/Hadamard transforms, geometric rotations and scaling transforms, etc.
The simple approach employing T−1 reconstructs the image by simply reproducing individual columns of an identity matrix, more precisely a diagonal matrix whose non-zero diagonal elements represent the light amplitude at a particular pixel. The invention provides for the replacement of this simple approach with other methods fitting into the same structure or delivering the same effect; for example projection techniques, matched filters, generalized inverses, SVD operations, sparse matrix operations, etc. These can be formatted in tensor or matrix paradigms in view of the formal transformational tensor/matrix isomorphism established above. An example of this, namely the pseudo-inverse case of a generalized inverse operation, is developed below.
It is noted that the matrix T can become quite large, making inversion and subsequent operations described above numerically and computationally challenging. The invention provides for separating matrix T operations into smaller blocks (for example JPEG and MPEG regularly employ 8×8 and 16×16 blocks). The invention provides for these blocks to be non-overlapping, to overlap, and to be interleaved. The invention further provides for blocked inversion results involving overlapping blocks or interleaving blocks to be combined by linear or other operations to suppress block-boundary artifacts.
Pseudo-Inverse of a 4-Tensor
Further, because in image capture a system usually spatially quantizes a natural source image that has no inherent pixel structure, it is additionally possible to measure a larger number of pixels than will be used in the final delivered image, that is M<J and N<K.
In traditional image processing such an excess-measurement scheme can be used in various “oversampling” methods, or could be decimated via resampling. Instead of these, the excess measurements can be used to create an over-specified system of equations that provides other opportunities. For example the resulting over-specified matrix T can be used to generate a generalized inverse T+.
For example, if the 4-tensor represents a transformation of a 2-dimensional (monochromatic) “source image” represented as an M×N matrix of “brightness” values:
to a J×K array of measured 2-dimensional (monochromatic) image data represented as a J×K matrix of “brightness” values:
with M<J, N<K, via
represented as
Q=𝒯S
then a pseudo-inverse tensor can be defined via:
and represented as
S=𝒯+Q
Further, the pixel values of the source image S can be obtained from the pixel values of the measurement Q by forming the pseudo-inverse of the matrix T, multiplying the flattened measurement data q with T+ to obtain the vector s, and partitioning the result into the source (image) matrix S:
There are a number of pseudo-inverses and related singular-value decomposition operators, but of these, for the optical imaging methods to be described, it can be advantageous for the generalized inverse T+ to be specifically the “Moore-Penrose” generalized (left) inverse, defined (when a matrix T has all linearly-independent columns) using the matrix transpose TT or conjugate transpose T† of T and matrix inverse operations as:
T+=(TTT)−1TT for real-valued T
T+=(T†T)−1T† for complex-valued T
(There is also Moore-Penrose generalized “right” inverse defined when a matrix T has all linearly-independent rows.) The Moore-Penrose generalized inverse inherently provides a “Least-Squares” statistical fit where solvable subsets of the larger number of equations give different inconsistent solutions. This “Least-Squares” statistical fit can provide robustness to the imaging system, for example in the case where one or more sensor elements degrade, are damaged, are occulted by dirt, are occulted by objects, are altered by transparent or translucent droplets or deposits, etc.
Using the Moore-Penrose generalized inverse for real-valued pixel quantities, the pixel values of the source image S can be obtained from the pixel values of the measurement Q by forming the pseudo-inverse of the matrix T, multiplying the flattened measurement data q with T+ to obtain the vector s, and partitioning the result into the source (image) matrix S:
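A non-limiting sketch (assuming a Python/NumPy environment; sizes and noise level hypothetical) of least-squares recovery of a best-fit image from overspecified, inconsistent measurements via the Moore-Penrose generalized inverse:

```python
import numpy as np

M, N = 3, 3   # source image pixels
J, K = 5, 5   # more measurements than unknowns: JK > MN
rng = np.random.default_rng(7)
S_true = rng.uniform(size=(M, N))

T = rng.uniform(size=(J * K, M * N))  # overspecified, full column rank
q = T @ S_true.reshape(M * N)
q_noisy = q + rng.normal(scale=1e-4, size=J * K)  # inconsistent measurements

# Moore-Penrose (left) inverse: T+ = (T^T T)^-1 T^T; np.linalg.pinv agrees
# when T has all linearly-independent columns
T_plus = np.linalg.pinv(T)
assert np.allclose(T_plus, np.linalg.inv(T.T @ T) @ T.T)

# Least-squares "best fit" image from the flattened, partitioned data
S_fit = (T_plus @ q_noisy).reshape(M, N)
assert np.allclose(S_fit, S_true, atol=1e-2)
```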
Configurations for Applications
Drawing on the functionality described above and taught in the Inventor's related lensless imaging patent filings listed at the beginning of this application, a wide range of additional provisions and configurations can be provided so as to support a vast number of valuable and perhaps slightly revolutionary imaging applications.
In an example generalizing assessment, the invention provides for a rigid or flexible surface to be configured to implement a lensless light-field sensor, producing electrical signals that can be used in real time, or stored and later retrieved, and provided to a computational inverse model algorithm executing on computational hardware comprising one or more computing elements so as to implement a lensless, light-field camera.
In another aspect of the invention, a rigid surface is configured to additionally function as a housing and thus operate as a “seeing housing”.
In another aspect of the invention, a rigid surface is configured to additionally function as a protective plate and thus operate as a “seeing armor”.
In another aspect of the invention, a rigid surface is configured to additionally function as an attachable tile and thus operate as a “seeing tile”.
In another aspect of the invention, a rigid surface is configured to additionally function as an attachable film and thus operate as a “seeing film”.
In another aspect of the invention, a flexible surface is configured to additionally function as an attachable film and thus operate as a “seeing film”.
In another aspect of the invention, a flexible surface is configured to additionally function as a garment and thus operate as a “seeing garment”.
In another aspect of the invention, a flexible surface is configured to additionally function as a shroud and thus operate as a “seeing shroud”.
In another aspect of the invention, a flexible surface is configured to additionally function as an enveloping skin and thus operate as a “seeing skin”.
In another aspect of the invention, the rigid or flexible surface is small in size.
In another aspect of the invention, the rigid or flexible surface is large in size.
In another aspect of the invention, the rigid or flexible surface is flat.
In another aspect of the invention, the rigid or flexible surface is curved.
In another aspect of the invention, the rigid or flexible surface is rendered as a polytope.
In another aspect of the invention, the rigid or flexible surface is rendered as a dome.
In another aspect of the invention, the rigid or flexible surface is rendered as a part of a sphere.
In another aspect of the invention, the rigid or flexible surface is rendered as a part of a spheroid.
In another aspect of the invention, the rigid or flexible surface is rendered as a sphere.
In another aspect of the invention, the rigid or flexible surface is rendered as a spheroid.
In another aspect of the invention, the rigid or flexible surface is transparent.
In another aspect of the invention, the rigid or flexible surface is translucent.
In another aspect of the invention, the rigid or flexible surface is opaque.
In another aspect of the invention, the rigid or flexible surface performs contact sensing.
In another aspect of the invention, the rigid or flexible surface is configured to perform contact sensing with near-zero separation distance.
In another aspect of the invention, the rigid or flexible surface is configured to perform contact image sensing with zero separation distance.
In another aspect of the invention, the rigid or flexible surface performs distributed optical imaging.
In another aspect of the invention, the rigid or flexible surface performs distributed optical sensing.
In another aspect of the invention, the rigid or flexible surface performs image sensing of ultraviolet light.
In another aspect of the invention, the rigid or flexible surface performs image sensing of infrared light.
In another aspect of the invention, the rigid or flexible surface performs image sensing of selected ranges of visible color light.
In another aspect of the invention, the rigid or flexible surface performs imaging.
In another aspect of the invention, the rigid or flexible surface performs distributed chemical sensing employing optical chemical sensing properties of at least one material.
In another aspect of the invention, the rigid or flexible surface performs distributed radiation sensing employing optical radiation sensing properties of at least one material.
In another aspect of the invention, the rigid or flexible surface performs distributed magnetic field sensing employing optical magnetic field sensing properties of at least one material.
In another aspect of the invention, the rigid or flexible surface is configured to emit light.
In another aspect of the invention, the rigid or flexible surface is configured to operate as a light-emitting display.
In another aspect of the invention, the rigid or flexible surface is configured to operate as a selectively self-illuminating contact imaging sensor.
In another aspect of the invention, the computational inverse model algorithm is configured to provide variable focusing.
In another aspect of the invention, the computational inverse model algorithm is configured to provide mixed depth-of-field focusing.
In another aspect of the invention, the computational inverse model algorithm is configured to implement a viewpoint with a controllable location.
In another aspect of the invention, the computational inverse model algorithm is configured to implement a plurality of viewpoints, each viewpoint having a separately controllable location.
In another aspect of the invention, the computational inverse model algorithm is configured to provide pairs of outputs so as to function as a stereoscopic camera.
In another aspect of the invention, the computational inverse model algorithm is configured to capture a panoramic view.
In another aspect of the invention, the computational inverse model algorithm is configured to capture a 360-degree view.
In another aspect of the invention, the computational inverse model algorithm is configured to capture a partial spherical view.
In another aspect of the invention, the computational inverse model algorithm is configured to capture a full spherical view.
In another aspect of the invention, the rigid or flexible surface is configured to perform enveloping image sensing with near-zero separation distance.
In another aspect of the invention, the rigid or flexible surface is configured to perform contact enveloping sensing with zero separation distance.
In another aspect of the invention, the rigid or flexible surface is configured to operate as a selectively self-illuminating enveloping imaging sensor.
In another aspect of the invention, the computational inverse model algorithm is configured to operate at slow-frame video rates.
In another aspect of the invention, the computational inverse model algorithm is configured to operate at conventional video rates.
In another aspect of the invention, the computational inverse model algorithm and computational hardware are configured to operate at high-speed video rates.
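The computational inverse model referenced in the above aspects can be illustrated with a minimal numerical sketch: the sensor array's response is modeled as a linear forward operator A mapping scene pixels to sensor readings, and a focused image is recovered by a Tikhonov-regularized inverse. The array dimensions, the random forward model, and the regularization weight below are illustrative assumptions for exposition, not taken from the disclosure.

```python
import numpy as np

def reconstruct(y, A, lam=1e-6):
    """Tikhonov-regularized inverse: argmin_x ||A x - y||^2 + lam ||x||^2.

    y : measured sensor vector; A : forward model (sensor readings x scene pixels).
    """
    n = A.shape[1]
    # Normal equations with a small diagonal regularizer for stability.
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy forward model (assumed): each sensor element sees a different
# weighted mix of scene pixels, as a vignetting structure would produce.
rng = np.random.default_rng(0)
scene = rng.random(16)            # 4x4 scene, flattened
A = rng.random((64, 16))          # 64 sensor readings of 16 scene pixels
y = A @ scene                     # simulated noise-free measurements

recovered = reconstruct(y, A)
print(np.allclose(recovered, scene, atol=1e-3))  # close to the true scene
```

Variable focusing and multiple viewpoints correspond, in this sketch, to applying different forward matrices A (one per focal depth or viewpoint) to the same measurement vector.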
Closing
The terms “certain embodiments”, “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean one or more (but not all) embodiments unless expressly specified otherwise. The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise. The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
While the invention has been described in detail with reference to the disclosed embodiments, various modifications within the scope of the invention will be apparent to those of ordinary skill in this technological field. It is to be appreciated that features described with respect to one embodiment typically can be applied to other embodiments.
The invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Although exemplary embodiments have been provided in detail, various changes, substitutions and alterations could be made thereto without departing from the spirit and scope of the disclosed subject matter as defined by the appended claims. Variations described for the embodiments may be realized in any combination desirable for each particular application. Thus, particular limitations and embodiment enhancements described herein, which may have particular advantages for a particular application, need not be used for all applications. Also, not all limitations need be implemented in methods, systems, and apparatuses including one or more concepts described with relation to the provided embodiments. Therefore, the invention is properly to be construed with reference to the claims.
References Cited
This application claims the benefit of U.S. Provisional Application No. 62/360,472, filed Jul. 11, 2016, and U.S. Provisional Application No. 62/528,384, filed Jul. 3, 2017, the disclosures of which are incorporated herein in their entireties by reference. A portion of the disclosure of this patent document may contain material that is subject to copyright protection. Certain marks referenced herein may be common law or registered trademarks of the applicant, the assignee or third parties affiliated or unaffiliated with the applicant or the assignee. Use of these marks is for providing an enabling disclosure by way of example and shall not be construed to exclusively limit the scope of the disclosed subject matter to material associated with such marks.
Number | Name | Date | Kind |
---|---|---|---|
4424524 | Daniele | Jan 1984 | A |
4692739 | Dorn | Sep 1987 | A |
5340978 | Rostoker | Aug 1994 | A |
5424855 | Nakamura | Jun 1995 | A |
5929845 | Wei | Jul 1999 | A |
6057538 | Clarke | May 2000 | A |
6787810 | Choi | Sep 2004 | B2 |
6867821 | De Schipper | Mar 2005 | B2 |
7034866 | Colmenarez | Apr 2006 | B1 |
7535468 | Uy | May 2009 | B2 |
7598949 | Han | Oct 2009 | B2 |
7859526 | Konicek | Dec 2010 | B2 |
8026879 | Booth | Sep 2011 | B2 |
8125559 | Ludwig | Feb 2012 | B2 |
8284290 | Ludwig | Oct 2012 | B2 |
8305480 | Ludwig | Nov 2012 | B2 |
8754842 | Ludwig | Jun 2014 | B2 |
8816263 | Ludwig | Aug 2014 | B2 |
8830375 | Ludwig | Sep 2014 | B2 |
8885035 | Ludwig | Nov 2014 | B2 |
8890850 | Chung | Nov 2014 | B2 |
9019237 | Ludwig | Apr 2015 | B2 |
9160894 | Ludwig | Oct 2015 | B2 |
9172850 | Ludwig | Oct 2015 | B2 |
9594019 | Ludwig | Mar 2017 | B1 |
9594239 | Ludwig | Mar 2017 | B1 |
9632344 | Ludwig | Apr 2017 | B2 |
9709483 | Ludwig | Jul 2017 | B2 |
9735303 | Ludwig | Aug 2017 | B2 |
10024791 | Ludwig | Jul 2018 | B2 |
20090256810 | Pasquariello | Oct 2009 | A1 |
20120274596 | Ludwig | Nov 2012 | A1 |
20170112376 | Gill | Apr 2017 | A1 |
Number | Date | Country |
---|---|---|
2318395 | Jul 1999 | CA |
Entry |
---|
R. H. Dicke, “Scatter-Hole Cameras for X-Rays and Gamma Rays,” Astrophys. J. 153, L101 (1968). |
E. Fenimore and T. Cannon, “Coded Aperture Imaging with Uniformly Redundant Arrays,” Appl. Opt. 17, 337-347 (1978). |
Fenimore, E, “Coded aperture imaging: predicted performance of uniformly redundant arrays.” Applied Optics 17.22 (1978): 3562-3570. |
A. Busboom, H. Elders-Boll, H. Schotten, “Uniformly Redundant Arrays,” Experimental Astronomy, Jun. 1998, vol. 8, Issue 2, pp. 97-123. |
R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, P. Hanrahan, “Light Field Photography with a Hand-Held Plenoptic Camera,” Stanford University Computer Science Tech Report CSTR 2005-02, 2005, pp. 1-11. |
A. Veeraraghavan, et al. “Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing.” ACM Trans. Graph. 26.3 (2007): 69, pp. 1-12. |
Gottesman, S. R., “Coded Apertures: Past, Present, and Future Application and Design.” Optical Engineering Applications. International Society for Optics and Photonics, 2007, pp. 1-12. |
R. Marcia, Z. Harmany R. Willett, “Compressive Coded Aperture Imaging,” Computational Imaging VII, SPIE Proceedings vol. 7246 (72460G), Feb. 3, 2009, pp. 1-13. |
M. Hirsch, BiDi Screen: Depth and Lighting Aware Interaction and Display, MS Thesis, MIT, Aug. 13, 2009, pp. 1-4. |
C. Zhou, S. Nayar, “Computational cameras: Convergence of Optics and Processing.” IEEE Transactions on Image Processing 20. 12 (2011): 3322-3340. |
R. Butler, “Lytro Light Field Camera first look with Ren Ng,” Digital Photography Review, Oct. 19, 2011, pp. 1-9. |
V. Koifman, “Toshiba Announces Light Field Camera Module,” Image Sensors World, Dec. 27, 2012, pp. 1-7. |
H. Jiang, G. Huang, P. Wilford, “Multi-view lensless compressive imaging,” APSIPA Transactions on Signal and Information Processing, 3, (2014) doi:10.1017/ATSIP.2014.16, pp. 1-4. |
D. G. Stork, P. R. Gill, “Lensless Ultra-Miniature CMOS Computational Imagers and Sensors,” Proc. Sensorcomm (2013): 186-190. |
T. Barribeau, “Shooting Full Panoramas Is Easy with Bendable Flexcam Camera,” Imaging Resource, Aug. 19, 2013, pp. 1-4. |
H. Everts, “A Flexible Camera: A Radically Different Approach to Imaging” Columbia Engineering, Apr. 13, 2016, pp. 1-4. |
D. Sims, Y. Yue, S. Nayar, “Towards Flexible Sheet Cameras: Deformable Lens Arrays with Intrinsic Optical Adaptation,” IEEE International Conference on Computational Photography (ICCP), May 2016, pp. 1-11. |
LightField Forum, “New Light Field Tech to use Transparent Sensor Layers instead of Microlenses,” LightField Forum, May 15, 2016, pp. 1-2. |
M. J. Cieslak, “Coded-Aperture Imaging Systems: Past, Present, and Future Development—A Review,” Radiation Measurements, vol. 92, Sep. 2016, pp. 59-71. |
M. S. Asif, et al, “Flatcam: Thin, Lensless Cameras Using Coded Aperture and Computation.” IEEE Transactions on Computational Imaging (2017), pp. 1-12. |
Hitachi, “Lensless-Camera Technology for Easily Adjusting Focus on Video Images after Image Capture,” Nov. 15, 2016, pp. 1-3. |
D. L. Cade, “Hitachi's Lensless Camera Uses Moire and Math, Not Glass, to Take Photos”, Hitachi Press Release, Nov. 16, 2016, pp. 1-11. |
G. Kim, et al., “Lensless Photography with only an image sensor,” arXiv preprint arXiv:1702.06619 (2017), pp. 1-16. |
R. Perkins, “Ultra-Thin Camera Creates Images Without Lenses,” CalTech, Jun. 21, 2017, pp. 1-2. |
Bhattacharya, R., et al. “Organic LED Pixel Array on a Dome”, Proceedings of the IEEE, vol. 93, Issue 7, Jul. 5, 2005, pp. 1273-1280. |
Levoy, “Light Fields and Computational Imaging,” IEEE Computer, vol. 39, Issue 8, Aug. 2006, pp. 46-55. |
Vuuren, et al., “Organic Photodiodes: The Future of Full Color Detection and Image Sensing,” Advanced Materials, vol. 28, Issue 24, Jun. 22, 2016, pp. 4766-4802. |
Han, et al., “Narrow-Band Organic Photodiodes for High-Resolution Imaging,” ACS Appl. Mater. Interfaces 2016, vol. 8, pp. 26143-26151. |
Pierre, et al., “Charge-Integrating Organic Heterojunction Phototransistors for Wide-Dynamic-Range Image Sensors,” Nature Photonics, vol. 11, No. 3, Feb. 2017, pp. 1-8. |
Lui, et al., “Flexible Organic Phototransistor Array with Enhanced Responsivity via Metal-Ligand Charge Transfer,” ACS Appl. Mater. Interfaces 2016, vol. 8, pp. 7291-7299. |
Yokota, et al., “Ultraflexible Organic Photonic Skin,” Science Advances, vol. 2, No. 4, pp. 1-9, Apr. 1, 2016. |
Sirringhaus, “Materials and Applications for Solution-Processed Organic Field-Effect Transistors,” Proceedings of the IEEE, vol. 97, No. 9, Sep. 2009, pp. 1570-1579. |
Number | Date | Country | |
---|---|---|---|
20180165823 A1 | Jun 2018 | US |
Number | Date | Country | |
---|---|---|---|
62528384 | Jul 2017 | US | |
62360472 | Jul 2016 | US |