Opto-electronic devices that make use of organic materials are becoming increasingly desirable for a number of reasons. Many of the materials used to make such devices are relatively inexpensive, so organic opto-electronic devices have the potential for cost advantages over inorganic devices. In addition, the inherent properties of organic materials, such as their flexibility, may make them well suited for particular applications such as fabrication on a flexible substrate. Examples of organic opto-electronic devices include organic light emitting devices (OLEDs), organic phototransistors, organic photovoltaic cells, and organic photodetectors. For OLEDs, the organic materials may have performance advantages over conventional materials. For example, the wavelength at which an organic emissive layer emits light may generally be readily tuned with appropriate dopants.
OLEDs make use of thin organic films that emit light when voltage is applied across the device. OLEDs are becoming an increasingly interesting technology for use in applications such as flat panel displays, illumination, and backlighting. Several OLED materials and configurations are described in U.S. Pat. Nos. 5,844,363, 6,303,238, and 5,707,745, which are incorporated herein by reference in their entirety. One application for phosphorescent emissive molecules is a full color display. Industry standards for such a display call for pixels adapted to emit particular colors, referred to as “saturated” colors. In particular, these standards call for saturated red, green, and blue pixels. Color may be measured using CIE coordinates, which are well known to the art.
One example of a green emissive molecule is tris(2-phenylpyridine) iridium, denoted Ir(ppy)3, which has the following structure:
In this, and later figures herein, the dative bond from nitrogen to metal (here, Ir) is depicted as a straight line.
As used herein, the term “organic” includes polymeric materials as well as small molecule organic materials that may be used to fabricate organic opto-electronic devices. “Small molecule” refers to any organic material that is not a polymer, and “small molecules” may actually be quite large. Small molecules may include repeat units in some circumstances. For example, using a long chain alkyl group as a substituent does not remove a molecule from the “small molecule” class. Small molecules may also be incorporated into polymers, for example as a pendant group on a polymer backbone or as a part of the backbone. Small molecules may also serve as the core moiety of a dendrimer, which consists of a series of chemical shells built on the core moiety. The core moiety of a dendrimer may be a fluorescent or phosphorescent small molecule emitter. A dendrimer may be a “small molecule,” and it is believed that all dendrimers currently used in the field of OLEDs are small molecules.
As used herein, “top” means furthest away from the substrate, while “bottom” means closest to the substrate. Where a first layer is described as “disposed over” a second layer, the first layer is disposed further away from the substrate. There may be other layers between the first and second layer, unless it is specified that the first layer is “in contact with” the second layer. For example, a cathode may be described as “disposed over” an anode, even though there are various organic layers in between.
As used herein, “solution processible” means capable of being dissolved, dispersed, or transported in and/or deposited from a liquid medium, either in solution or suspension form.
A ligand may be referred to as “photoactive” when it is believed that the ligand directly contributes to the photoactive properties of an emissive material. A ligand may be referred to as “ancillary” when it is believed that the ligand does not contribute to the photoactive properties of an emissive material, although an ancillary ligand may alter the properties of a photoactive ligand.
As used herein, and as would be generally understood by one skilled in the art, a first “Highest Occupied Molecular Orbital” (HOMO) or “Lowest Unoccupied Molecular Orbital” (LUMO) energy level is “greater than” or “higher than” a second HOMO or LUMO energy level if the first energy level is closer to the vacuum energy level. Since ionization potentials (IP) are measured as a negative energy relative to a vacuum level, a higher HOMO energy level corresponds to an IP having a smaller absolute value (an IP that is less negative). Similarly, a higher LUMO energy level corresponds to an electron affinity (EA) having a smaller absolute value (an EA that is less negative). On a conventional energy level diagram, with the vacuum level at the top, the LUMO energy level of a material is higher than the HOMO energy level of the same material. A “higher” HOMO or LUMO energy level appears closer to the top of such a diagram than a “lower” HOMO or LUMO energy level.
As used herein, and as would be generally understood by one skilled in the art, on a conventional energy level diagram, with the vacuum level at the top, a “shallower” energy level appears higher, or closer to the top, of such a diagram than a “deeper” energy level, which appears lower, or closer to the bottom.
As used herein, and as would be generally understood by one skilled in the art, a first work function is “greater than” or “higher than” a second work function if the first work function has a higher absolute value. Because work functions are generally measured as negative numbers relative to vacuum level, this means that a “higher” work function is more negative. On a conventional energy level diagram, with the vacuum level at the top, a “higher” work function is illustrated as further away from the vacuum level in the downward direction. Thus, the definitions of HOMO and LUMO energy levels follow a different convention than work functions.
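The opposing sign conventions for HOMO/LUMO energy levels and work functions described above can be sketched with a short numerical illustration (the energy values below are hypothetical examples chosen for illustration, not values from this disclosure):

```python
def is_higher_homo_lumo(first_ev: float, second_ev: float) -> bool:
    """A first HOMO or LUMO level is "higher" than a second if it is
    closer to the vacuum level (0 eV), i.e., less negative."""
    return first_ev > second_ev

def is_higher_work_function(first_ev: float, second_ev: float) -> bool:
    """A first work function is "higher" than a second if it has the
    larger absolute value, i.e., it is more negative relative to vacuum."""
    return abs(first_ev) > abs(second_ev)

# A HOMO at -5.1 eV is "higher" than one at -5.6 eV (smaller |IP|) ...
print(is_higher_homo_lumo(-5.1, -5.6))      # True
# ... yet a work function of -5.1 eV is "lower" than one of -5.6 eV.
print(is_higher_work_function(-5.1, -5.6))  # False
```

The two functions return opposite answers for the same pair of values, which is precisely the difference in convention noted above.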
More details on OLEDs, and the definitions described above, can be found in U.S. Pat. No. 7,279,704, which is incorporated herein by reference in its entirety.
Over the last decade, artificial intelligence (AI) has reached expert level in image recognition, game playing, and natural language processing, among other endeavors. This is allowing the automation of increasingly cognitive tasks, spurring a new industrial revolution. Augmented reality (AR), where contextual graphics provided by AI are rendered on top of the real world, is poised to allow new spheres of human activity to benefit from the information age. A few examples include fixing mechanical systems, medicine, fieldwork, piloting aircraft, etc. However, the “motion-to-photon” (sensor-to-display) processing speeds of state-of-the-art AR systems are currently at least an order of magnitude too slow for a real-time overlay to appear seamless to a mobile human operator, which can in some instances cause “cybersickness.” This slow response is often referred to as latency: a delay in a system output with respect to a given input. Furthermore, the form factor and energy consumption of these systems should be kept small. A lack of real-time processing also limits other purely AI applications such as autonomous vehicles and live recognition. Despite the inadequate technology, the global AR market is expected to grow from $5.19 billion in 2016 to $63.95 billion in 2021, and even more beyond.
One way to overcome this issue is to enable the AR sensors and displays themselves to perform AI processing. By way of example, the eye and brain process information differently from typical computer systems. There, networks of independent units (neurons) exchange spike signals to distill information in a highly parallel fashion. Analogous non-spiking platforms in the form of interwoven differential equations have been considered. Dubbed “neural networks,” these systems can be configured to perform computing tasks extremely quickly, simply through their dynamical evolution. The network structure determines the task that can be completed, for example classification with feedforward networks, model-predictive control with recurrent neural networks, image processing with convolutional neural networks or cellular neural networks, associative memory with Hopfield networks, etc.
How strongly neurons are connected to each other in a neural network is determined by synaptic weights. In regular neural networks, the only connections are between the output of a neuron and the input of another. Before being fed as an input, the output of the first neuron is multiplied by some value—the weight. These weights between all neuron pairs can be tabulated, and in this setup are unambiguous.
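The weighting operation described above can be sketched numerically (the neuron outputs and weight values below are hypothetical, chosen only to illustrate the arithmetic):

```python
import numpy as np

# Outputs of three hypothetical upstream neurons.
outputs = np.array([0.2, 0.9, 0.5])

# One tabulated weight per neuron pair (upstream neuron -> downstream neuron).
weights = np.array([0.6, -1.1, 0.3])

# Each output is multiplied by its weight before being fed as an input;
# the downstream neuron receives the weighted sum (approximately -0.72 here).
net_input = float(np.dot(weights, outputs))
```

In a full network, the weights between all neuron pairs form a matrix, and the same dot product is applied row by row.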
Cellular neural networks (or retina-like networks, retinomorphic networks) are a restricted set of neural network topologies, whereby (a) each neuron is only allowed to communicate with other neurons that are its physical or virtual neighbors, and (b) in addition to communicating its (nonlinearly activated) internal state as for regular artificial neural networks (via an “A” template), a neuron can also communicate other properties such as its input (through a “B” template), its non-activated state (through a “C” template), and even nonlinear combinations of the above (through a “D” template). For example, if a cellular network is visualized as a two-dimensional grid containing pixels, a neighborhood of size N consists of a square containing N×N pixels. While this conceptual organization of neurons is convenient for physical artificial neurons that would be organized in a 2D matrix, it is no less general than the definition of a general neural network in any dimension, since N can be arbitrarily large, allowing for an arbitrary number of connections and dimensions.
In some specific neural networks, for instance cellular neural networks, it is possible for neurons to receive as inputs more than the outputs of other neurons. In that case, it becomes important to differentiate between different types of weights. Because cellular neural networks typically exhibit only nearest-neighbor connections and are translationally invariant, these different sets of weights are termed “templates”. Because traditional cellular neural networks are mainly 2D, the templates are furthermore written as matrices. For example, for first nearest-neighbor connections only, the matrix is 3×3, with the connection of any given neuron to itself being the central entry, and connections to neighbors in the corresponding entries (north on top, east on right, etc.). There is one template per type of connection. For example, the output of one neuron into the input of another (the regular weight) is often called the “A” (or feedback) template. Another popular type of connection is when some external input of one neuron is communicated to another. The corresponding weight is written in a “B” (or feedforward) template. Other, more niche, templates exist: the “C” template is the set of weights that handles communication of the internal (not nonlinearly transformed) state of a neuron; the “D” template is the weight that characterizes communication of some other nonlinear mixed function of input, output, and internal state. In the most general case, A, B, C, and D can also be nonlinear operators acting on their relevant variable instead of a simple scalar multiplication.
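A compact numerical sketch of these template operations is given below. It is illustrative only: the grid and template values are hypothetical, and only the linear “A” (feedback) and “B” (feedforward) templates are shown, using the classic Chua-Yang-style state equation with a piecewise-linear activation.

```python
import numpy as np

def activation(x):
    # Standard cellular-network nonlinearity: piecewise-linear saturation.
    return np.clip(x, -1.0, 1.0)

def template_correlate(template, grid):
    """Apply a 3x3 nearest-neighbor template: the central entry weights the
    cell itself, the other entries weight its eight neighbors (north on top,
    east on right), with zero padding at the boundary."""
    h, w = grid.shape
    padded = np.pad(grid, 1)
    out = np.zeros_like(grid, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += template[di, dj] * padded[di:di + h, dj:dj + w]
    return out

def cnn_step(state, inputs, A, B, bias=0.0, dt=0.1):
    """One Euler step of Chua-Yang-style cellular network dynamics:
       dx/dt = -x + A * f(x) + B * u + bias,
    where A is the feedback template and B the feedforward template."""
    dx = (-state + template_correlate(A, activation(state))
          + template_correlate(B, inputs) + bias)
    return state + dt * dx
```

Here the “A” template weights the activated outputs of neighboring cells and the “B” template weights their external inputs, matching the terminology above; “C” and “D” templates would enter the same equation as additional correlation terms.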
Although similar processes have been replicated in computing by way of computational neural networks, application of such networks to AR contexts still suffers from the same motion-to-photon delay phenomenon as conventional computing, albeit to a slightly lesser degree because of the increased computing efficiency of neural networks.
There have been attempts to build neural network elements using only electronics. These require a large number of electronic elements to implement the required dynamics and, when networked, suffer from interconnectivity problems.
An integrated neuromorphic architecture composed of LEDs and detectors was described in Y. Nitta, J. Ohta, S. Tai & K. Kyuma, “Optical neurochip for image processing,” Electronics and Communications in Japan, Part 2 (Electronics), Vol. 78, No. 1, pp. 10-20, January 1995, and related articles. At the time, all-electrical analog neural networks were very popular. Optical neural networks also existed, but these were based on free-space interconnections. In such schemes, light from an array of sources illuminates a 2D (often reconfigurable) plate that modifies its transmission in space, and the output rays are focused onto an array of photodetectors to perform summing. The referenced system, termed an “optical neurochip,” was essentially a 1-to-1 integration of such spatial optical neural networks. A gallium arsenide platform was used, and reconfigurable weights were implemented via photodetectors with tunable responsivity. However, both the inputs and the outputs were effectively one-dimensional. The LED in the referenced system exists only to encode (one-dimensional) inputs, each detector performs the function of multiplying an input with a weight, and summing occurs by routing all the photocurrents in each column together. The nonlinearity in the referenced system is performed in the peripheral circuitry on the summed signals in each column.
The architecture disclosed below contrasts with, and includes several advantages over, the referenced architecture. First, the disclosed architecture is implemented in a thin-film platform. Second, the disclosed architecture does not explicitly multiplex neurons row- or column-wise, allowing the disclosed architecture to implement more modern neuromorphic photonic systems where neurons are integrated and localized. Embodiments of the disclosed system integrate the nonlinearity within each pixel, and in some embodiments exploit LED physics to supply streamlined nonlinearity, which was not contemplated in the referenced system. Interconnections of the disclosed system can be optical and may use various degrees of freedom of light to distinguish signals, whereas in the referenced approach, different neurons needed to be connected electrically via peripheral electronics.
The physics of a neuron imposes some changes on typical neural network theory when the neuron is implemented in a thin film architecture. Additionally, a lone neuron cannot do much by itself: useful processing tasks are unlocked when networking large numbers of neurons together with tunable connections. Such tunable interconnections may be electrical or optical or a combination of both, but in some embodiments, independent and/or dynamic weighting of individual inputs to neurons in a network is essential to the proper functioning of a neural network. For neurons that communicate optically, optical weights are desirable since they result in high bandwidth/low latency, increased density, and reduced complexity.
Thus, there is a need in the art for improved neuromorphic opto-electronic devices.
Some embodiments of the invention disclosed herein are set forth below, and any combination of these embodiments (or portions thereof) may be made to define another embodiment.
In one aspect, a thin film neuromorphic opto-electronic device comprises at least one thin film photoresponsive element, and at least one deposited mirror or optical filtering device, comprising at least two reflective thin film stacks with an interstitial medium therebetween forming at least one optical cavity.
In one embodiment, the at least one mirror or optical filtering device selectively reflects a range of frequencies and is more translucent to frequencies outside the range. In one embodiment, the device further comprises a waveguide positioned above the at least one mirror or optical filtering device. In one embodiment, the waveguide is planar or out-of-plane. In one embodiment, the waveguide comprises 3D printed microoptics. In one embodiment, the device further comprises at least one OLED. In one embodiment, the at least one mirror or optical filtering device is positioned above and/or in optical communication with the photoresponsive element. In one embodiment, the size of the cavity can be controlled via a microelectromechanical system (MEMS) device. In one embodiment, the at least one photoresponsive element is positioned in the cavity. In one embodiment, the device further comprises at least one tunable complex index of refraction thin film positioned in the cavity. In one embodiment, the cavity is configured to modify the signal being received by the photoresponsive element. In one embodiment, the device further comprises a plurality of spacers positioned above and below the at least one tunable complex index of refraction thin film positioned in the cavity. In one embodiment, the spacers are index matched. In one embodiment, the at least one tunable complex index of refraction thin film comprises an electrochromic material, a thermochromic material, a photochromic material, a phase-change material, a pn junction, an epsilon zero-change system, a liquid crystal, or an electro-optic film. In one embodiment, the index of refraction of the at least one tunable complex index of refraction thin film is used to selectively tune at least one of spectral reflectance or transmission. In one embodiment, the device is configured as an optical memristor. In one embodiment, the at least one mirror or optical filtering device comprises a Bragg mirror. 
In one embodiment, the cavity comprises a Fabry-Perot cavity. In one embodiment, the at least one mirror or optical filtering device is configured as a bandstop filter. In one embodiment, the device is configured to provide multiple weighting regions for a source. In one embodiment, the at least one mirror or optical filtering device comprises an asymmetric mirror. In one embodiment, the at least one cavity comprises a multi-cavity. In one embodiment, a plurality of the at least one deposited mirrors or optical filtering devices are positioned over a common photoresponsive element. In one embodiment, the common photoresponsive element is configured to sum different same color signals.
In another aspect, a neuromorphic opto-electronic device comprises a backplane, a thin film photoresponsive element positioned above the backplane, at least one deposited mirror or optical filtering device positioned above the thin film photoresponsive element, comprising at least two reflective thin film stacks with an interstitial medium therebetween forming at least one optical cavity, wherein the at least one mirror or optical filtering device selectively reflects a range of frequencies and is translucent to frequencies outside the range, an OLED positioned above the backplane and adjacent to the photoresponsive element, and a waveguide positioned above the at least one mirror or optical filtering device and the OLED, optically connecting the at least one mirror or optical filtering device and the OLED. In one embodiment, the waveguide comprises a plurality of 3D printed microoptics.
In another aspect, a product comprising the thin film neuromorphic opto-electronic device as described above, the product selected from the group consisting of a flat panel display, a curved display, a computer monitor, a computer, a medical monitor, a television, a billboard, a light for interior or exterior illumination and/or signaling, a heads-up display, a fully or partially transparent display, a flexible display, a rollable display, a foldable display, a stretchable display, a laser printer, a telephone, a mobile phone, a tablet, a phablet, a personal digital assistant (PDA), a wearable device, a laptop computer, a digital camera, a camcorder, a viewfinder, a micro-display, a 3-D display, a virtual reality or augmented reality display or device, a vehicle, a video wall comprising multiple displays tiled together, a theater or stadium screen, a light therapy device, and a sign.
In another aspect, a neuromorphic opto-electronic device comprises a first metal or dielectric mirror, a sub-cavity tunable complex refractive index film positioned above the first mirror, a sub-cavity minimum size OLED stack positioned above the sub-cavity tunable complex refractive index film, and a second metal or dielectric mirror positioned above the sub-cavity minimum size OLED stack.
In one embodiment, the sub-cavity tunable complex refractive index film comprises a spacer layer, a first electrode layer, and an absorber layer. In one embodiment, the sub-cavity minimum size OLED stack comprises a second electrode layer, a hole transport layer, an emission layer, an electron transport layer, and a cathode layer.
In another aspect, a method of manufacturing a neuromorphic opto-electronic system comprises providing a first die or a substrate, depositing a thin film photoresponsive element on the first die or substrate, depositing a weight stack in optical communication with the photoresponsive element, depositing an anode adjacent to the weight stack on the first die or substrate, and depositing an OLED on the anode.
In one embodiment, the method further comprises forming a waveguide connecting the weight stack and the OLED. In one embodiment, the first die or substrate comprises thin film transistors or silicon based CMOS devices. In one embodiment, the waveguide is printed onto the weight stack and the OLED. In one embodiment, the waveguide is printed on a second die or a second substrate. In one embodiment, the waveguide printed on the second die or second substrate is heterogeneously integrated onto the weight stack and the OLED.
In another aspect, a neuromorphic opto-electronic system comprises a plurality of interconnected artificial optical neurons, each including at least one thin film neuromorphic opto-electronic device comprising at least one thin film photoresponsive element, and at least one deposited mirror or optical filtering device, comprising at least two reflective thin film stacks with an interstitial medium therebetween forming at least one optical cavity.
In one embodiment, the plurality of interconnected artificial optical neurons are arranged in an array or in a plurality of interconnected arrays. In one embodiment, the array has a width of greater than or equal to one neuron and a height of greater than or equal to one neuron, wherein each neuron defines a pixel. In one embodiment, a plurality of deposited mirrors or optical filtering devices, each comprising at least two reflective thin film stacks with an interstitial medium therebetween forming at least one optical cavity, are tuned to the at least one photoresponsive element. In one embodiment, the system is configured to perform at least one of time-division multiplexing or wavelength-division multiplexing. In one embodiment, the system is configured to perform time-division multiplexing and wavelength-division multiplexing simultaneously. In one embodiment, the at least one mirror or optical filtering device selectively reflects a range of frequencies and is translucent to frequencies outside the range. In one embodiment, the system further comprises a waveguide positioned above the at least one mirror or optical filtering device, wherein the waveguide is planar or out-of-plane, and wherein the waveguide comprises 3D printed microoptics. In one embodiment, the system further comprises at least one OLED. In one embodiment, the at least one mirror or optical filtering device is positioned above and/or in optical communication with the photoresponsive element. In one embodiment, the size of the cavity can be controlled via a microelectromechanical system (MEMS) device. In one embodiment, the at least one photoresponsive element is positioned in the cavity. In one embodiment, the system further comprises at least one tunable complex index of refraction thin film positioned in the cavity. In one embodiment, the cavity is configured to modify the signal being received by the photoresponsive element.
In one embodiment, the system further comprises a plurality of spacers positioned above and below the at least one tunable complex index of refraction thin film positioned in the cavity. In one embodiment, the spacers are index matched. In one embodiment, the at least one tunable complex index of refraction thin film comprises an electrochromic material, a thermochromic material, a photochromic material, a phase-change material, a pn junction, an epsilon zero-change system, or a liquid crystal. In one embodiment, the index of refraction of the at least one tunable complex index of refraction thin film is used to selectively tune at least one of spectral reflectance or transmission. In one embodiment, the device is configured as an optical memristor. In one embodiment, the at least one mirror or optical filtering device comprises a Bragg mirror. In one embodiment, the cavity comprises a Fabry-Perot cavity. In one embodiment, the at least one mirror or optical filtering device is configured as a bandstop filter. In one embodiment, the neuron is configured to provide multiple weighting regions for a source. In one embodiment, the at least one mirror or optical filtering device comprises an asymmetric mirror. In one embodiment, the at least one cavity comprises a multi-cavity.
In another aspect, a neuromorphic opto-electronic system comprises a plurality of interconnected artificial optical neurons, each including at least one thin film neuromorphic opto-electronic device comprising a backplane, a thin film photoresponsive element positioned above the backplane, at least one deposited mirror or optical filtering device positioned above the thin film photoresponsive element, comprising at least two reflective thin film stacks with an interstitial medium therebetween forming at least one optical cavity, wherein the at least one mirror or optical filtering device selectively reflects a range of frequencies and is translucent to frequencies outside the range, an OLED positioned above the backplane and adjacent to the photoresponsive element, and a waveguide positioned above the at least one mirror or optical filtering device and the OLED, optically connecting the at least one mirror or optical filtering device and the OLED. In one embodiment, the waveguide comprises a plurality of 3D printed microoptics.
In another aspect, a product comprises the thin film neuromorphic opto-electronic system as described above, wherein the product is selected from the group consisting of a flat panel display, a curved display, a computer monitor, a computer, a medical monitor, a television, a billboard, a light for interior or exterior illumination and/or signaling, a heads-up display, a fully or partially transparent display, a flexible display, a rollable display, a foldable display, a stretchable display, a laser printer, a telephone, a mobile phone, a tablet, a phablet, a personal digital assistant (PDA), a wearable device, a laptop computer, a digital camera, a camcorder, a viewfinder, a micro-display, a 3-D display, a virtual reality or augmented reality display or device, a vehicle, a video wall comprising multiple displays tiled together, a theater or stadium screen, a light therapy device, and a sign.
The foregoing purposes and features, as well as other purposes and features, will become apparent with reference to the description and accompanying figures below, which are included to provide an understanding of the invention and constitute a part of the specification, in which like numerals represent like elements, and in which:
It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for the purpose of clarity, many other elements found in related systems and methods. Those of ordinary skill in the art may recognize that other elements and/or steps are desirable and/or required in implementing the present invention. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements and steps is not provided herein. The disclosure herein is directed to all such variations and modifications to such elements and methods known to those skilled in the art.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although any methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, exemplary methods and materials are described.
As used herein, each of the following terms has the meaning associated with it in this section.
The articles “a” and “an” are used herein to refer to one or to more than one (i.e., to at least one) of the grammatical object of the article. By way of example, “an element” means one element or more than one element.
“About” as used herein when referring to a measurable value such as an amount, a temporal duration, and the like, is meant to encompass variations of ±20%, ±10%, ±5%, ±1%, and ±0.1% from the specified value, as such variations are appropriate.
Throughout this disclosure, various aspects of the invention can be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 2.7, 3, 4, 5, 5.3, 6 and any whole and partial increments therebetween. This applies regardless of the breadth of the range.
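The subranges implicitly disclosed by a range such as “from 1 to 6” can be enumerated mechanically (a sketch for integer endpoints only; partial increments between them are likewise disclosed but are not enumerated here):

```python
from itertools import combinations

# Every pair of distinct integer endpoints a < b within [1, 6] defines a
# disclosed subrange "from a to b", e.g. (1, 3), (2, 4), (3, 6).
subranges = list(combinations(range(1, 7), 2))
print(len(subranges))  # 15
```

For six integer endpoints this yields 15 subranges, in agreement with the examples listed above.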
The initial OLEDs used emissive molecules that emitted light from their singlet states (“fluorescence”) as disclosed, for example, in U.S. Pat. No. 4,769,292, which is incorporated by reference in its entirety. Fluorescent emission generally occurs in a time frame of less than 10 nanoseconds.
More recently, OLEDs having emissive materials that emit light from triplet states (“phosphorescence”) have been demonstrated. Baldo et al., “Highly Efficient Phosphorescent Emission from Organic Electroluminescent Devices,” Nature, vol. 395, 151-154, 1998 (“Baldo-I”), and Baldo et al., “Very high-efficiency green organic light-emitting devices based on electrophosphorescence,” Appl. Phys. Lett., vol. 75, No. 3, 4-6 (1999) (“Baldo-II”), which are incorporated by reference in their entireties. Phosphorescence is described in more detail in U.S. Pat. No. 7,279,704 at cols. 5-6, which are incorporated by reference.
More examples for each of these layers are available. For example, a flexible and transparent substrate-anode combination is disclosed in U.S. Pat. No. 5,844,363, which is incorporated by reference in its entirety. An example of a p-doped hole transport layer is m-MTDATA doped with F4-TCNQ at a molar ratio of 50:1, as disclosed in U.S. Patent Application Publication No. 2003/0230980, which is incorporated by reference in its entirety. Examples of emissive and host materials are disclosed in U.S. Pat. No. 6,303,238 to Thompson et al., which is incorporated by reference in its entirety. An example of an n-doped electron transport layer is BPhen doped with Li at a molar ratio of 1:1, as disclosed in U.S. Patent Application Publication No. 2003/0230980, which is incorporated by reference in its entirety. U.S. Pat. Nos. 5,703,436 and 5,707,745, which are incorporated by reference in their entireties, disclose examples of cathodes including compound cathodes having a thin layer of metal such as Mg:Ag with an overlying transparent, electrically conductive, sputter-deposited ITO layer. The theory and use of blocking layers is described in more detail in U.S. Pat. No. 6,097,147 and U.S. Patent Application Publication No. 2003/0230980, which are incorporated by reference in their entireties. Examples of injection layers are provided in U.S. Patent Application Publication No. 2004/0174116, which is incorporated by reference in its entirety. A description of protective layers may be found in U.S. Patent Application Publication No. 2004/0174116, which is incorporated by reference in its entirety.
The simple layered structure illustrated in
Structures and materials not specifically described may also be used, such as OLEDs comprised of polymeric materials (PLEDs) such as disclosed in U.S. Pat. No. 5,247,190 to Friend et al., which is incorporated by reference in its entirety. By way of further example, OLEDs having a single organic layer may be used. OLEDs may be stacked, for example as described in U.S. Pat. No. 5,707,745 to Forrest et al., which is incorporated by reference in its entirety. The OLED structure may deviate from the simple layered structure illustrated in
Unless otherwise specified, any of the layers of the various embodiments may be deposited by any suitable method. For the organic layers, preferred methods include thermal evaporation, ink-jet, such as described in U.S. Pat. Nos. 6,013,982 and 6,087,196, which are incorporated by reference in their entireties, organic vapor phase deposition (OVPD), such as described in U.S. Pat. No. 6,337,102 to Forrest et al., which is incorporated by reference in its entirety, and deposition by organic vapor jet printing (OVJP), such as described in U.S. Pat. No. 7,431,968, which is incorporated by reference in its entirety. Other suitable deposition methods include spin coating and other solution-based processes. Solution-based processes are preferably carried out in nitrogen or an inert atmosphere. For the other layers, preferred methods include thermal evaporation. Preferred patterning methods include deposition through a mask, cold welding such as described in U.S. Pat. Nos. 6,294,398 and 6,468,819, which are incorporated by reference in their entireties, and patterning associated with some of the deposition methods such as ink-jet and OVJP. Other methods may also be used. The materials to be deposited may be modified to make them compatible with a particular deposition method. For example, substituents such as alkyl and aryl groups, branched or unbranched, and preferably containing at least 3 carbons, may be used in small molecules to enhance their ability to undergo solution processing. Substituents having 20 carbons or more may be used, and 3-20 carbons is a preferred range. Materials with asymmetric structures may have better solution processibility than those having symmetric structures, because asymmetric materials may have a lower tendency to recrystallize. Dendrimer substituents may be used to enhance the ability of small molecules to undergo solution processing.
Devices fabricated in accordance with embodiments of the present disclosure may further optionally comprise a barrier layer. One purpose of the barrier layer is to protect the electrodes and organic layers from damaging exposure to harmful species in the environment including moisture, vapor and/or gases, etc. The barrier layer may be deposited over, under or next to a substrate, an electrode, or over any other parts of a device including an edge. The barrier layer may comprise a single layer, or multiple layers. The barrier layer may be formed by various known chemical vapor deposition techniques and may include compositions having a single phase as well as compositions having multiple phases. Any suitable material or combination of materials may be used for the barrier layer. The barrier layer may incorporate an inorganic or an organic compound or both. The preferred barrier layer comprises a mixture of a polymeric material and a non-polymeric material as described in U.S. Pat. No. 7,968,146, PCT Pat. Application Nos. PCT/US2007/023098 and PCT/US2009/042829, which are herein incorporated by reference in their entireties. To be considered a “mixture”, the aforesaid polymeric and non-polymeric materials comprising the barrier layer should be deposited under the same reaction conditions and/or at the same time. The weight ratio of polymeric to non-polymeric material may be in the range of 95:5 to 5:95. The polymeric material and the non-polymeric material may be created from the same precursor material. In one example, the mixture of a polymeric material and a non-polymeric material consists essentially of polymeric silicon and inorganic silicon.
Devices fabricated in accordance with embodiments of the disclosure can be incorporated into a wide variety of electronic component modules (or units) that can be incorporated into a variety of electronic products or intermediate components. Examples of such electronic products or intermediate components include display screens, lighting devices such as discrete light source devices or lighting panels, etc. that can be utilized by the end-user product manufacturers, and cameras or other devices including optical or other sensors. Such electronic component modules can optionally include the driving electronics and/or power source(s). Devices fabricated in accordance with embodiments of the disclosure can be incorporated into a wide variety of consumer products that have one or more of the electronic component modules (or units) incorporated therein. A consumer product comprising an OLED that includes the compound of the present disclosure in the organic layer in the OLED is disclosed. Such consumer products would include any kind of products that include one or more light source(s) and/or one or more visual displays of some type. Some examples of such consumer products include flat panel displays, curved displays, computer monitors, medical monitors, televisions, billboards, lights for interior or exterior illumination and/or signaling, heads-up displays, fully or partially transparent displays, flexible displays, rollable displays, foldable displays, stretchable displays, laser printers, telephones, mobile phones, tablets, phablets, personal digital assistants (PDAs), wearable devices, laptop computers, digital cameras, camcorders, viewfinders, other imaging devices, micro-displays (displays that are less than 2 inches diagonal), 3-D displays, virtual reality or augmented reality displays, vehicles, video walls comprising multiple displays tiled together, theater or stadium screens, and signs.
Various control mechanisms may be used to control devices fabricated in accordance with the present disclosure, including passive matrix and active matrix. Many of the devices are intended for use in a temperature range comfortable to humans, such as 18° C to 30° C, and more preferably at room temperature (20-25° C), but could be used outside this temperature range, for example, from −40° C to 80° C.
Although exemplary embodiments described herein may be presented as methods for producing particular circuits or devices, for example OLEDs, it is understood that the materials and structures described herein may have applications in devices other than OLEDs. For example, other optoelectronic devices such as organic solar cells and organic photodetectors may employ the materials and structures. More generally, organic devices, such as organic transistors, or other organic electronic circuits or components, may employ the materials and structures.
In some embodiments, the OLED has one or more characteristics selected from the group consisting of being flexible, being rollable, being foldable, being stretchable, and being curved. In some embodiments, the OLED is transparent or semi-transparent. In some embodiments, the OLED further comprises a layer comprising carbon nanotubes.
In some embodiments, the OLED further comprises a layer comprising a fluorescent emitter, a delayed fluorescent emitter, a phosphorescent emitter, a thermally activated delayed fluorescent (TADF) emitter, or a phosphorescent sensitized fluorescent emitter. In some embodiments, the OLED comprises a RGB pixel arrangement or white plus color filter pixel arrangement. In some embodiments, the OLED is a mobile device, a handheld device, or a wearable device. In some embodiments, the OLED is a display panel having less than 10 inch diagonal or 50 square inch area. In some embodiments, the OLED is a display panel having at least 10 inch diagonal or 50 square inch area. In some embodiments, the OLED is a lighting panel.
In some embodiments of the emissive region, the emissive region further comprises a host. In some embodiments, the compound can be an emissive dopant. In some embodiments, the compound can produce emissions via phosphorescence, fluorescence, thermally activated delayed fluorescence, i.e., TADF (also referred to as E-type delayed fluorescence; see, e.g., U.S. application Ser. No. 15/700,352, which is hereby incorporated by reference in its entirety), triplet-triplet annihilation, or combinations of these processes.
The OLED disclosed herein can be incorporated into one or more of a consumer product, an electronic component module, a lighting panel, or an imaging device. The organic layer can be an emissive layer, and the compound can be an emissive dopant in some embodiments, while the compound can be a non-emissive dopant in other embodiments.
The organic layer can also include a host. In some embodiments, two or more hosts are preferred. In some embodiments, the hosts used may be a) bipolar, b) electron transporting, c) hole transporting, or d) wide band gap materials that play little role in charge transport. In some embodiments, the host can include a metal complex. The host can be an inorganic compound.
One aspect of the disclosure improves upon one or more techniques based on a re-entrant shadow mask disclosed in U.S. Pat. No. 6,013,538 issued on Jan. 11, 2000 to Burrows et al., the contents of which are incorporated herein by reference in their entirety.
In some embodiments, at least one of the anode, the cathode, or a new layer disposed over the organic emissive layer functions as an enhancement layer. The enhancement layer comprises a plasmonic material exhibiting surface plasmon resonance that non-radiatively couples to the emitter material and transfers excited state energy from the emitter material to a non-radiative mode of the surface plasmon polariton. The enhancement layer is provided no more than a threshold distance away from the organic emissive layer, wherein the emitter material has a total non-radiative decay rate constant and a total radiative decay rate constant due to the presence of the enhancement layer, and the threshold distance is where the total non-radiative decay rate constant is equal to the total radiative decay rate constant. In some embodiments, the OLED further comprises an outcoupling layer. In some embodiments, the outcoupling layer is disposed over the enhancement layer on the opposite side of the organic emissive layer. In some embodiments, the outcoupling layer is disposed on the opposite side of the emissive layer from the enhancement layer but still outcouples energy from the surface plasmon mode of the enhancement layer. The outcoupling layer scatters the energy from the surface plasmon polaritons. In some embodiments this energy is scattered as photons to free space. In other embodiments, the energy is scattered from the surface plasmon mode into other modes of the device such as but not limited to the organic waveguide mode, the substrate mode, or another waveguiding mode. If energy is scattered to a non-free-space mode of the OLED, other outcoupling schemes could be incorporated to extract that energy to free space. In some embodiments, one or more intervening layers can be disposed between the enhancement layer and the outcoupling layer.
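The threshold-distance definition above can be illustrated numerically. In the sketch below, the distance dependences assumed for the total radiative and non-radiative decay rate constants (a Purcell-like exponential enhancement and a 1/d³ quenching term, with made-up magnitudes) are illustrative assumptions, not values from this disclosure; the threshold distance is simply the distance at which the two rate constants are equal.

```python
import numpy as np

# Hypothetical distance dependences (illustrative only, not from this
# disclosure) for an emitter near a plasmonic enhancement layer:
def k_radiative(d_nm):
    # Assumed Purcell-like enhancement decaying away from the layer.
    return 1.0 + 5.0 * np.exp(-d_nm / 10.0)

def k_nonradiative(d_nm):
    # Assumed quenching into the surface plasmon mode, ~ 1/d^3.
    return 5.0e4 / d_nm**3

# The threshold distance is where the two total rate constants are equal.
d = np.linspace(1.0, 50.0, 4901)
threshold = d[np.argmin(np.abs(k_nonradiative(d) - k_radiative(d)))]
# Emitters closer to the layer than `threshold` lose most of their
# excited-state energy to the non-radiative surface plasmon mode.
```

Under these assumed rate curves the crossover falls near 35 nm; any real threshold distance depends on the actual materials and geometry.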
Examples of intervening layer(s) include dielectric materials, including organic materials, inorganic materials, perovskites, and oxides, and may include stacks and/or mixtures of these materials.
The enhancement layer modifies the effective properties of the medium in which the emitter material resides resulting in any or all of the following: a decreased rate of emission, a modification of emission line-shape, a change in emission intensity with angle, a change in the stability of the emitter material, a change in the efficiency of the OLED, and reduced efficiency roll-off of the OLED device. Placement of the enhancement layer on the cathode side, anode side, or on both sides results in OLED devices which take advantage of any of the above-mentioned effects. In addition to the specific functional layers mentioned herein and illustrated in the various OLED examples shown in the figures, the OLEDs according to the present disclosure may include any of the other functional layers often found in OLEDs.
The enhancement layer can be comprised of plasmonic materials, optically active metamaterials, or hyperbolic metamaterials. As used herein, a plasmonic material is a material in which the real part of the dielectric constant crosses zero in the visible or ultraviolet region of the electromagnetic spectrum. In some embodiments, the plasmonic material includes at least one metal. In such embodiments, the metal may include at least one of Ag, Al, Au, Ir, Pt, Ni, Cu, W, Ta, Fe, Cr, Mg, Ga, Rh, Ti, Ru, Pd, In, Bi, Ca, alloys or mixtures of these materials, and stacks of these materials. In general, a metamaterial is a medium composed of different materials where the medium as a whole acts differently than the sum of its material parts. In particular, optically active metamaterials are defined as materials which have both negative permittivity and negative permeability. Hyperbolic metamaterials, on the other hand, are anisotropic media in which the permittivity or permeability are of different sign for different spatial directions. Optically active metamaterials and hyperbolic metamaterials are strictly distinguished from many other photonic structures such as Distributed Bragg Reflectors (“DBRs”) in that the medium should appear uniform in the direction of propagation on the length scale of the wavelength of light. Using terminology that one skilled in the art can understand: the dielectric constant of the metamaterials in the direction of propagation can be described with the effective medium approximation. Plasmonic materials and metamaterials provide methods for controlling the propagation of light that can enhance OLED performance in a number of ways.
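The zero crossing of the real part of the dielectric constant that defines a plasmonic material can be checked with a simple Drude model. The plasma energy used below (9.0 eV, roughly the bulk plasma energy of silver) and the damping constant are illustrative assumptions:

```python
import numpy as np

def drude_eps_real(energy_ev, plasma_ev=9.0, gamma_ev=0.02):
    """Real part of the Drude dielectric function, eps(w) = 1 - wp^2/(w^2 + g^2).
    plasma_ev and gamma_ev are assumed, illustrative values."""
    w, wp, g = energy_ev, plasma_ev, gamma_ev
    return 1.0 - wp**2 / (w**2 + g**2)

# Scan photon energies and locate where the real dielectric constant
# crosses zero: the plasmonic condition described in the text.
energies = np.linspace(0.5, 12.0, 2301)
eps = drude_eps_real(energies)
crossing = energies[np.argmin(np.abs(eps))]  # ~ the plasma energy
```

In this model the crossing sits at the plasma energy; for a real metal the crossover wavelength also depends on interband transitions, which the Drude model omits.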
In some embodiments, the enhancement layer is provided as a planar layer. In other embodiments, the enhancement layer has wavelength-sized features that are arranged periodically, quasi-periodically, or randomly, or sub-wavelength-sized features that are arranged periodically, quasi-periodically, or randomly. In some embodiments, the wavelength-sized features and the sub-wavelength-sized features have sharp edges.
In some embodiments, the outcoupling layer has wavelength-sized features that are arranged periodically, quasi-periodically, or randomly, or sub-wavelength-sized features that are arranged periodically, quasi-periodically, or randomly. In some embodiments, the outcoupling layer may be composed of a plurality of nanoparticles, and in other embodiments the outcoupling layer is composed of a plurality of nanoparticles disposed over a material. In these embodiments the outcoupling may be tunable by at least one of varying a size of the plurality of nanoparticles, varying a shape of the plurality of nanoparticles, changing a material of the plurality of nanoparticles, adjusting a thickness of the material, changing the refractive index of the material or an additional layer disposed on the plurality of nanoparticles, varying a thickness of the enhancement layer, and/or varying the material of the enhancement layer. The plurality of nanoparticles of the device may be formed from at least one of metal, dielectric material, semiconductor materials, an alloy of metal, a mixture of dielectric materials, a stack or layering of one or more materials, and/or a core of one type of material that is coated with a shell of a different type of material. In some embodiments, the outcoupling layer is composed of at least metal nanoparticles wherein the metal is selected from the group consisting of Ag, Al, Au, Ir, Pt, Ni, Cu, W, Ta, Fe, Cr, Mg, Ga, Rh, Ti, Ru, Pd, In, Bi, Ca, alloys or mixtures of these materials, and stacks of these materials. The plurality of nanoparticles may have an additional layer disposed over them. In some embodiments, the polarization of the emission can be tuned using the outcoupling layer. Varying the dimensionality and periodicity of the outcoupling layer can select a type of polarization that is preferentially outcoupled to air. In some embodiments the outcoupling layer also acts as an electrode of the device.
The terms “photosensitive element” and “photoresponsive element” as used in this disclosure refer to any electronic device whose electrical properties change in response to light. Examples of photosensitive elements include, but are not limited to, photodetectors, photodiodes, phototransistors, or photogates. Various exemplary embodiments of devices or systems may be presented herein including one or more particular photosensitive elements, for example photodetectors. These exemplary embodiments are not limiting; as would be understood by one skilled in the art, any photosensitive element in an exemplary device may be substituted with another photosensitive element, sometimes with the addition or subtraction of circuitry.
In some aspects of the present invention, a mirror and/or optical filtering device can include one or more thin-film layers whose concerted action reflects some incident light, transmits some incident light, and (potentially) absorbs some incident light. The amount of transmission, reflection, and absorption depends in general on the wavelength of light and is dictated by the filter design. Examples include, but are not limited to, metals, dielectric interfaces, quarter-wave stacks, rugate filters, graded-index filters, irregular discrete thin-film dielectrics, optical resonators, and multi-cavity optical resonators.
In some aspects of the present invention, software executing the instructions provided herein may be stored on a non-transitory computer-readable medium, wherein the software performs some or all of the steps of the present invention when executed on a processor.
Aspects of the invention relate to algorithms executed in computer software. Though certain embodiments may be described as written in particular programming languages, or executed on particular operating systems or computing platforms, it is understood that the system and method of the present invention are not limited to any particular computing language, platform, or combination thereof. Software executing the algorithms described herein may be written in any programming language known in the art, compiled or interpreted, including but not limited to C, C++, C#, Objective-C, Java, JavaScript, MATLAB, Python, PHP, Perl, Ruby, or Visual Basic. It is further understood that elements of the present invention may be executed on any acceptable computing platform, including but not limited to a server, a cloud instance, a workstation, a thin client, a mobile device, an embedded microcontroller, a television, or any other suitable computing device known in the art.
Parts of this invention are described as software running on a computing device. Though software described herein may be disclosed as operating on one particular computing device (e.g. a dedicated server or a workstation), it is understood in the art that software is intrinsically portable and that most software running on a dedicated server may also be run, for the purposes of the present invention, on any of a wide range of devices including desktop or mobile devices, laptops, tablets, smartphones, watches, wearable electronics or other wireless digital/cellular phones, televisions, cloud instances, embedded microcontrollers, thin client devices, or any other suitable computing device known in the art.
Similarly, parts of this invention are described as communicating over a variety of wireless or wired computer networks. For the purposes of this invention, the words “network”, “networked”, and “networking” are understood to encompass wired Ethernet, fiber optic connections, wireless connections including any of the various 802.11 standards, cellular WAN infrastructures such as 3G, 4G/LTE, or 5G networks, Bluetooth®, Bluetooth® Low Energy (BLE) or Zigbee® communication links, or any other method by which one electronic device is capable of communicating with another. In some embodiments, elements of the networked portion of the invention may be implemented over a Virtual Private Network (VPN).
Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
The storage device 320 is connected to the CPU 350 through a storage controller (not shown) connected to the bus 335. The storage device 320 and its associated computer-readable media provide non-volatile storage for the computer 300. Although the description of computer-readable media contained herein refers to a storage device, such as a hard disk or CD-ROM drive, it should be appreciated by those skilled in the art that computer-readable media can be any available media that can be accessed by the computer 300.
By way of example, and not to be limiting, computer-readable media may comprise computer storage media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
According to various embodiments of the invention, the computer 300 may operate in a networked environment using logical connections to remote computers through a network 340, such as a TCP/IP network, for example the Internet or an intranet. The computer 300 may connect to the network 340 through a network interface unit 345 connected to the bus 335. It should be appreciated that the network interface unit 345 may also be utilized to connect to other types of networks and remote computer systems.
The computer 300 may also include an input/output controller 355 for receiving and processing input from a number of input/output devices 360, including a keyboard, a mouse, a touchscreen, a camera, a microphone, a controller, a joystick, or other type of input device. Similarly, the input/output controller 355 may provide output to a display screen, a printer, a speaker, or other type of output device. The computer 300 can connect to the input/output device 360 via a wired connection including, but not limited to, fiber optic, Ethernet, or copper wire, or via wireless means including, but not limited to, Bluetooth, Near-Field Communication (NFC), infrared, or other suitable wired or wireless connections.
As mentioned briefly above, a number of program modules and data files may be stored in the storage device 320 and RAM 310 of the computer 300, including an operating system 325 suitable for controlling the operation of a networked computer. The storage device 320 and RAM 310 may also store one or more applications/programs 330. In particular, the storage device 320 and RAM 310 may store an application/program 330 for providing a variety of functionalities to a user. For instance, the application/program 330 may comprise many types of programs, such as a word processing application, a spreadsheet application, a desktop publishing application, a database application, a gaming application, an internet browsing application, an electronic mail application, a messaging application, and the like. According to an embodiment of the present invention, the application/program 330 comprises a multiple functionality software application for providing word processing functionality, slide presentation functionality, spreadsheet functionality, database functionality, and the like.
The computer 300 in some embodiments can include a variety of sensors 365 for monitoring the environment surrounding the computer 300 and the environment internal to it. These sensors 365 can include a Global Positioning System (GPS) sensor, a photosensitive sensor, a gyroscope, a magnetometer, a thermometer, a proximity sensor, an accelerometer, a microphone, a biometric sensor, a barometer, a humidity sensor, a radiation sensor, or any other suitable sensor.
In one embodiment, the dynamics of a neuron i in a cellular neural network are given by:

dx_i/dt = −x_i + Σ_j A_ij f(x_j) + Σ_k B_ik u_k + I_i
where j represents the group of other neurons neuron i is networked with (typically a local neighborhood in the case of cellular neural networks). By extending the neighborhood to cover all other neurons in the network and allowing only diagonal feedforward template terms (B_ij = B_ii δ_ij), “regular,” non-cellular neural networks are obtained. As such, hardware that can emulate a cellular neural network can also emulate regular artificial neural networks. Note that variations of this equation exist and can also be implemented in hardware.
Throughout this disclosure, examples may be presented in the context of one or more particular types of neural network, including but not limited to a cellular neural network, a retinomorphic neural network, a recurrent neural network, a feedforward neural network, a convolutional neural network, a generative neural network, a discriminative neural network, or a Hopfield neural network. It is understood that exemplary embodiments presented in one context are not meant to be limiting on the disclosure, and that the systems and methods disclosed herein may in some embodiments be advantageously adapted to any kind of neural network. In particular, cellular neural networks (or retinomorphic networks) are a restricted subset of general artificial neural networks, which are usually implemented via software. While these restrictions make some of the embodiments more convenient to implement, the examples in no way limit the hardware proposed here. Many features described in the embodiments can be extended to general neural network topologies by one skilled in the art.
This equation explains how a cell state x_i evolves in time. Four effects drive the evolution: 1) a decay in time, 2) the current state of the neighbors (and itself) x_j through “feedback weights” A_ij and a nonlinear transformation f, 3) the input state (this is the input “image”) of the neighbors (and itself) u_k through “input weights” B_ik, and 4) some pumping (bias) I_i. In some alternate models, the last three terms are nonlinearly transformed together. In one embodiment of a cellular neural network, the decay, pumping, action of the neighbors, and nonlinearity ensure that the cells evolve to one of a maximum or minimum state. As used herein, the term “neighbor” refers to any other cell the current cell can influence, including but not limited to physical neighbors. The weights are set according to what long-term behavior is desired. One example is identification of closed curves by erasing open ones.
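The state equation and its four driving terms can be sketched in a few lines of code. The template values below are hypothetical; f is the standard piecewise-linear saturation used in cellular neural networks:

```python
import numpy as np

def cnn_step(x, u, A, B, I, dt=0.01):
    """One forward-Euler step of the cellular neural network state equation:
    dx_i/dt = -x_i + sum_j A_ij f(x_j) + sum_k B_ik u_k + I_i."""
    f = np.clip(x, -1.0, 1.0)       # piecewise-linear output nonlinearity
    dx = -x + A @ f + B @ u + I     # decay + feedback + input + bias (pumping)
    return x + dt * dx

# Tiny 3-cell example with illustrative (hypothetical) template values.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)          # initial cell states
u = np.array([1.0, -1.0, 0.5])      # input "image"
A = 0.1 * np.eye(3)                 # feedback template: self-feedback only
B = np.eye(3)                       # diagonal feedforward, B_ij = B_ii * delta_ij
I = np.zeros(3)                     # bias term

for _ in range(1000):               # integrate to t = 10
    x = cnn_step(x, u, A, B, I)
```

With these weights each cell settles to a fixed point determined by its own input, illustrating the decay/feedback/input balance described above; template design for tasks such as erasing open curves uses the same update with non-diagonal A and B.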
As disclosed herein, a link has been identified between the mathematics governing neural networks, including but not limited to cellular neural networks, and the dynamics of electro-optic circuits comprising organic light-emitting diodes (OLEDs), the fundamental building block of many displays today. In some embodiments, methods of the disclosed approach may be used with semiconductor light emitting diodes rather than OLEDs. By doing all or part of the computation at the sensor or display level, the motion-to-photon time could be greatly improved as compared to traditional architectures.
The disclosed approach is based on mapping form to function. Standard neuromorphic photonics exploits the natural operation of lasers, linear optical elements, photodetectors, and modulators to physically emulate brain-like signal processing. Similarly, the high density and planar nature of OLED arrays make them suited to neurophotonic processing in highly parallel ways. By doing all or part of the computation at the sensor or display level, the motion-to-photon time can be greatly improved as compared to a traditional architecture. To realize this, two approaches will be considered: general neuromorphic computing with active-matrix OLEDs, and analog “retinomorphic” image processing or display with smart pixels. In some embodiments, the neuromorphic opto-electronic device disclosed herein enables new processes, both on its own and even more so if networked.
In one embodiment, the disclosed approach forms a key building block of scalable photonic cellular neural networks. These networks may be implemented as fast massively parallel computing units. Such networks can be especially good at image processing tasks. Because embodiments of the disclosed photonic neuron are made of components already present in display units, this would be highly useful, for example in augmented reality applications.
For example, a general-purpose neuromorphic processor can be engineered using active-matrix OLEDs, organic photodetectors, and thin-film transistors. In one exemplary embodiment, a 5×5 mm, 1 million pixel display could be reconfigured as a single layer of 1,000 neurons where each neuron is connected to every other neuron in the array. This has advantages over purely electronic neuromorphic computing in that 1) the matrix architecture allows for extremely high interconnect capabilities, 2) summations may be performed optically in parallel, and 3) electronic components can implement weighting factors. In other embodiments, weighting factors may also be implemented in the optical layer. Also, while slower than state-of-the-art silicon photonic neuromorphic computing, the active-matrix OLED approach does not require advanced optics, filters, and lasers to be developed and integrated onto expensive substrates. Systems based on this approach can readily be prototyped with existing technology, largely off-the-shelf, and such systems rely on established manufactured components. All of this greatly reduces the time to commercialization. Further examples of hybrid neuromorphic computing systems and methods may be found in U.S. patent application Ser. No. 16/376,744, filed Apr. 5, 2019, and incorporated herein by reference in its entirety.
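A minimal numerical sketch of one update cycle of such a layer follows. The model (weights applied electronically per pixel, summation performed in parallel at photodetectors, a tanh response standing in for the electronic nonlinearity) is an illustrative assumption, not a description of any particular fabricated device:

```python
import numpy as np

def oled_layer_update(activations, weights):
    """One update cycle of a hypothetical active-matrix OLED neuron layer.
    activations: (n,) neuron outputs encoded as OLED brightness.
    weights: (n, n) electronically applied weighting factors."""
    emitted = weights * activations   # each pixel emits weight * activation
    summed = emitted.sum(axis=1)      # optical summation at each photodetector
    return np.tanh(summed)            # electronic nonlinearity (illustrative)

# 1,000 fully connected neurons, as in the example above; values are random.
n = 1000
rng = np.random.default_rng(1)
a = rng.random(n)
W = rng.standard_normal((n, n)) / np.sqrt(n)
a_next = oled_layer_update(a, W)
```

The key point of the sketch is that the row sums (the expensive matrix-vector product) are what the optics perform in parallel; only the per-pixel weighting and the nonlinearity remain electronic.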
More specialized architectures are also disclosed. Inspired by the human eye, one embodiment of a system forgoes the active matrix and makes each pixel independently “smart.” With photodetectors as inputs, such OLED pixel arrays could natively implement “retina-like” nearest-neighbor image processing at the source of the image. Since this forms a short time-constant dynamical system that is effectively agnostic to refresh rate, many layers could be stacked before latency becomes an issue, allowing complex computations such as convolutions to be performed quickly. In deep learning image recognition, for example, over 98% of the operations occur in the first few layers of the network. The converse problem—ultrafast reactive displays—could also be tackled, for instance by considering coherent imaging schemes within a silicon photonic backplane and the OLED as a pure display.
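Nearest-neighbor, retina-like processing of this kind can be sketched as each smart pixel applying a fixed 3×3 template to its own photodetector input and those of its eight physical neighbors. The Laplacian edge-enhancement template below is a standard illustrative choice, not taken from this disclosure:

```python
import numpy as np

# Hypothetical 3x3 nearest-neighbor template (Laplacian edge enhancement).
TEMPLATE = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)

def retinomorphic_step(image):
    """Apply the 3x3 template at every pixel (zero padding at the borders),
    emulating each smart pixel combining its 8 neighbors' inputs."""
    padded = np.pad(image, 1)
    out = np.zeros_like(image, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += TEMPLATE[di, dj] * padded[di:di + image.shape[0],
                                             dj:dj + image.shape[1]]
    return out

# A flat region gives zero response; an isolated bright pixel stands out.
flat = np.ones((5, 5))
spot = np.zeros((5, 5))
spot[2, 2] = 1.0
```

Because every pixel computes its output simultaneously from purely local signals, a step like this runs in constant time regardless of image size, which is what makes stacking several such layers attractive.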
Embodiments of the disclosed device are made using the same fabrication procedure as for bidirectional OLED microdisplays on silicon. However, extra components, mainly photodetectors and transistors, are assembled to give individual display elements the described functionality. The simplest extra electronic circuits to be added are well known: one or more RC circuits, transimpedance amplifiers, current sources, and voltage limiters. The need for each of these components maps directly to the dynamics of the device.
In some embodiments, neurons disclosed herein could appear in a full-fledged network. A neural network in a display/sensor would be a natural fit for augmented reality tasks. Such a network could, for instance, perform real-time massively parallel image processing tasks at the image capture level. This could keep latency below the threshold that causes “cybersickness” in humans, enabling augmented reality and hence the fabrication of a low-latency neuromorphic processing system.
The input layer of neurons receives signals from outside the network instead of, or in addition to, signals from other neurons. “Input neuron” as used herein means a neuron that also receives input from a source other than a bias or other neurons. Any signal that can be converted to an electrical or optical form compatible with the input layer or the next layer of neurons can serve as such an input.
Specialized sensors can therefore be added to some or all of the neurons in a network as inputs. These can come from separate devices (e.g. an electronic sensor feeding electronic data) or can be co-integrated if made out of the same materials as the display. The latter is a feature of thin-film optically-connected neurons. For example, photodiodes that take in visual information can be thin-film organic or inorganic devices of the same type as those in optically-connected neurons. Organic upconverters can take non-visible near-infrared light and output visible light for neural processing. Organic spectrometers can convert chemical information to localized optical signals.
In some embodiments, being able to store values for neuron elements (nonlinear units, weights, short-term memory, long-term memory, gain circuits) may be used in reconfigurable networks. Beyond local RAM, this memory can be co-integrated at the physical level at any of the levels disclosed herein. For example, an active-matrix architecture allows analog voltage values to be stored on capacitor plates. This can be used to set the responsivity of a phototransistor or the transmission of a liquid crystal filter. Analog values can also be stored in the fraction of coexisting phases of matter whose optical and electrical properties vary continuously based on the fraction.
In some embodiments, one or more neurons may comprise one or more reconfigurable elements, for example nonlinear units, weights, short-term memory, long-term memory, or gain circuits. Each reconfigurable element may be changed either in real time or during an offline training session. The elements may be reconfigured via local learning rules, global learning rules, or external inputs. In such embodiments, a network may include a way to transmit information, in electronic or optical form, to each reconfigurable element in each neuron.
Neurons may be connected to one another via electrical connections, optical connections, or a combination of the two. In some embodiments, the primary output of an artificial neuron as disclosed herein may be an LED, which may be connected via an optical connection (for example a light pipe or waveguide) to one or more inputs of one or more other artificial neurons. In some embodiments, two neurons may be connected to one another via an electrical connection between an electrical output of a first neuron and an electrical input of a second neuron. Such electrical connections may be direct electrical connections, for example a simple wire or trace. However, the term “electrical connection” as contemplated herein also encompasses indirect electrical connections, for example an electrical connection through one or more intermediary devices. An indirect electrical connection is defined as any connection between two nodes where a change in potential at the first node results in a change in potential at the second node.
In order to be reconfigurable, each neuron may contain an electronic or photonic memory circuit and an electronic biasing circuit, composed of e.g. RAMs coupled to LUTs, much like in the hardware typified by in-memory computing. Reconfiguration information is stored and transmitted in digital form to other neurons or to the outside world. In order to simplify the memory circuits, neurons may be limited to volatile memory units that are loaded upon boot time from an external non-volatile memory unit.
In some embodiments, an array of neurons may be organized in a 2D substrate (or in “2.5D” if arranged in a stacked configuration), each with their corresponding memory units. However, it is understood that artificial neural networks need a level of plasticity (i.e. change of internal neural states or synaptic weights) in order to function properly. The plasticity rate, i.e. the rate of change in the configuration of each neuron, is often much slower than the data processing rate performed by the neural network.
The present disclosure contemplates three kinds of plasticity rules: local, nearest-neighbor, and global. The rules offer different neural-network functionalities at different speeds.
In the local plasticity rule, a circuit in a neuron will reconfigure, or “update”, itself based on the real-time data it receives as input or sends as output to other neurons. This update rule may be implemented by an electronic circuit present in a neuron, which may be fixed by fabrication or field-programmable, as in a Field Programmable Gate Array (FPGA). It is expected that a local plasticity rule would operate as fast as possible, on the order of the data rate. An example of such a rule is spike-timing-dependent plasticity (STDP), which allows the network to have unsupervised learning capabilities.
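As a concrete illustration, a pair-based STDP update of the kind such a local circuit could implement may be sketched in software (the time constants and learning rates below are illustrative placeholders, not values from this disclosure):

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise. Times in ms; rates are placeholders."""
    dt = t_post - t_pre
    if dt > 0:                              # pre before post: strengthen
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:                            # post before pre: weaken
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)        # keep the weight in range

w = stdp_update(0.5, t_pre=10.0, t_post=15.0)   # causal pair: weight grows
```

Because the update depends only on spike times local to the neuron, it can run at the data rate, which is the property the passage above highlights.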
A nearest-neighbor plasticity rule is a generalization of a local plasticity rule. Here, the update rule affects not only the neuron itself, but also the neighboring neurons. Such a strategy is conducive to direct wired connections between neurons. This enables a “small-world network” topology that mimics many networks studied in both engineering and neuroscience. Systems with small-world connections display enhanced signal-propagation speed, computational power, and synchronizability. These are all desired features of a distributed reconfigurable network. Nearest-neighbor plasticity rules do not need to operate as fast as local ones. An example of a nearest-neighbor rule is rewiring a cluster of neurons to produce the same overall outputs in the event that one of them ceases to function properly.
A global plasticity rule generalizes the nearest-neighbor rules. One example of a global rule is based on the inputs and outputs of a subset of neurons in the neural network, and has the reach to affect every neuron in the network. Because such a rule is global and general, it is also the slowest. An example of such a rule in action is supervised learning, wherein the weights of hidden layers of neural networks are changed based on whether the output layer is close or not to a predetermined target.
In some embodiments, a network of neurons may comprise an addressing scheme for effective transmission of messages, parameters, or inputs to individual neurons. Generally speaking, there are two possible addressing schemes to implement reconfiguration methods with the lowest latency possible: random access or sequential access.
In a random-access connectivity pattern, a reconfiguration signal may be transmitted to any neuron in the 2D or 2.5D substrate within a deterministic amount of time. This can be implemented with well-known protocols such as Address-Event Representation (AER), used extensively in the IBM TrueNorth chip, for example. AER is itself a simplified version of the networking protocols of the World Wide Web, where messages are encapsulated in packets and redirected within a network via dedicated “router” circuits. This allows low-latency messages to be transmitted from one neuron to another in the network, or from the outside world to one neuron, but it imposes significant overhead because of the encapsulation and routing. Therefore, it is fast only if the rate of messages transmitted in the network is low and sparse; otherwise, there can be congestion points in each router. A random-access communication scheme is well-suited for small networks, where most communication happens between neighboring neurons and message transmission to the edges of the network is rarely required.
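The address-event idea can be sketched in a few lines (a toy software model; real AER hardware encodes spikes as asynchronous address packets on a shared bus):

```python
from collections import deque

class AERRouter:
    """Toy address-event model: each event names its target neuron, and a
    router circuit queues it for delivery to that neuron's input."""
    def __init__(self, n_neurons):
        self.queues = [deque() for _ in range(n_neurons)]

    def send(self, address, event):
        self.queues[address].append(event)      # encapsulate and route

    def deliver(self, address):
        q = self.queues[address]
        return q.popleft() if q else None       # None if no pending event

router = AERRouter(16)
router.send(3, ("spike", 1.5))
router.deliver(3)   # -> ("spike", 1.5)
```

The per-event queueing is what produces the congestion behavior noted above when traffic is dense rather than sparse.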
Sequential access is an alternative to random access that offers better performance in the case where all neurons need reconfiguration at a single step. With the prior knowledge that all neurons need updating with as high a refresh rate as possible, a more appropriate addressing scheme is to not use any routing at all, which avoids encapsulation and header overhead. Instead, all the data may be packaged into a sequence of bits to be streamed to the neural network in bulk, and serialization and deserialization circuits may be used to unpack the stream and update one fraction of the neurons in a single step. As an example, this can be implemented like a scanline driving scheme commonly employed in flat panel displays, where a scan line selects a row to be updated, and data lines feed the required data in parallel to an entire row of neurons. A full neural network refresh can be performed by sequencing the updates row by row until the entire network has been reconfigured, much like a full frame refresh in a flat panel display.
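A minimal software model of the scanline scheme, assuming the reconfiguration data arrives as a flat serialized stream that is unpacked one row at a time:

```python
def scanline_reconfigure(weights, stream):
    """Sequential access: deserialize a flat value stream and update the
    network row by row, like a scanline refresh in a flat-panel display."""
    rows, cols = len(weights), len(weights[0])
    assert len(stream) == rows * cols, "stream must cover the full network"
    for r in range(rows):                                   # scan line selects row r
        weights[r] = list(stream[r * cols:(r + 1) * cols])  # data lines load in parallel
    return weights

grid = [[0.0] * 3 for _ in range(2)]
scanline_reconfigure(grid, [1, 2, 3, 4, 5, 6])   # -> [[1, 2, 3], [4, 5, 6]]
```

Because there is no per-message header or routing, the full-network refresh time is deterministic, which is the advantage claimed over random access when every neuron needs updating.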
Neurons in a network can be subdivided into two non-disjoint categories based on their input-output capability. They can take signal inputs from the world via optical or electronic sensors and detectors. They can also output signals to the world via light emitters, direct wires, or radio antennas.
Evidence from both the neuroscience and machine learning fields suggests that useful networks have an “input” layer, which is dedicated to taking inputs from the world, followed by one or more hidden layers, which do not have access to the world and cannot be probed directly, finally connected to an “output” layer, which displays the results of the computation or cognition to the world. Based on that, three methods are used for slicing the neural network into input and output layers. For simplicity, the input and output layers are referred to as I/O neurons, and the strategy can be applied to either input or output.
In a first method, I/O neurons can be organized at the perimeter of the network, forming a one-dimensional I/O. This is amenable to signals that are a one-dimensional time series, or a scalar time series that was deserialized for this purpose. An example of a one-dimensional time series is data coming from a set of sensors in parallel. An example of a scalar time series is an audio stream.
Another possible organization method is to arrange a 2D array of neurons in the network as potentially I/Os. This can be a subset of the entire neural network or its entirety. In this scheme, the neural network will be able to process 2D data arrays, e.g. images or video frames. It can also process a one-dimensional time series that was deserialized, allowing for finite impulse response filters or Fourier and other transforms to be performed in real-time.
A third method is to abandon the one-to-one neuromorphic mapping between the hardware neurons and the artificial neurons. In this case, a much larger artificial neural network is segmented into smaller chunks that fit the network implemented in hardware. At each processing step, each neuron is reconfigured to implement the chunk. Then, inputs are fed electronically or optically to the network and outputs are collected after the required processing time. The collected output is stored in memory either within the network (fast), or in a central processor outside (slow). The outputs are recorded because they may become inputs in subsequent steps. The process is repeated until all of the artificial network has been emulated. The final output is then displayed to the outside world via the central processor coordinating this operation. This scheme has a high latency in comparison with the others, as it requires breaking up a neural network into chunks, and reconfiguring the entire network for each processing step. But it offers the most flexibility in what kinds of network it can simulate.
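The chunked emulation loop described above can be sketched as follows (a simplified software model in which a rectifying threshold stands in for the neuron nonlinearity):

```python
def emulate_in_chunks(layers, x, hardware_size):
    """Emulate a larger feedforward network on hardware holding only
    `hardware_size` neurons at once: reconfigure, run, store, repeat."""
    for W in layers:                          # W: one weight row per neuron
        outputs = []
        for start in range(0, len(W), hardware_size):
            chunk = W[start:start + hardware_size]   # reconfigure the hardware
            for row in chunk:
                s = sum(w * xi for w, xi in zip(row, x))
                outputs.append(max(0.0, s))          # simple rectifying threshold
        x = outputs                           # stored outputs feed the next step
    return x
```

The repeated reconfiguration inside the inner loop is exactly the source of the higher latency, and the trade for flexibility, that the paragraph above describes.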
An embodiment of a thin-film optoelectronic neuron using an optical weighting scheme is shown in
As understood herein, a summation element is an electrical, optical, or other element that takes in a plurality of inputs and provides a sum or combination of the inputs as an output. One example of a summation element is a summing amplifier or voltage adder. In other embodiments, a photodetector may act as a summation element, for example by accepting light having first and second intensities at first and second wavelengths to which the photodetector is sensitive, resulting in an electrical output roughly proportional to the sum of the intensities.
As understood herein, a nonlinear element is an electrical, optical, or other element having at least one input and at least one output, wherein a magnitude of at least one output is not directly proportional to a magnitude of at least one input. One example of a nonlinear element is a single-input, single-output thresholding element whose output transitions from a low state to a high state, or vice versa, when the input rises above or falls below a predetermined threshold. In one embodiment, an inverter, for example a thin-film transistor inverter, may be used as a nonlinear element. In one embodiment, any other circuit having a nonlinear transfer function or electrical gain may be used as a nonlinear element. In a nonlinear element, the output cannot be described simply as the sum of the weighted inputs, but instead follows some nonlinear function of any given set of inputs. In various embodiments, a nonlinear element may be internal to a neuron or external to a neuron.
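The two elements can be modeled together in a few lines (an idealized sketch: a responsivity-weighted sum followed by a hard threshold; real devices have smoother transfer functions):

```python
def photodetector_sum(intensities, responsivities):
    """Summation element: output current roughly proportional to the
    responsivity-weighted sum of the impinging intensities."""
    return sum(p * r for p, r in zip(intensities, responsivities))

def threshold_element(x, threshold=1.0, high=1.0, low=0.0):
    """Single-input, single-output thresholding nonlinearity: the output is
    not proportional to the input, it switches states at the threshold."""
    return high if x >= threshold else low

current = photodetector_sum([0.4, 0.9], [1.0, 0.8])   # 0.4 + 0.72 = 1.12
output = threshold_element(current)                    # above threshold -> high
```

Chaining the two functions gives the canonical artificial-neuron form: a sum of weighted inputs passed through a nonlinearity.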
An optoelectronic implementation of this principle is shown in
and hence the weighting factor w_jk(V_jk) = ∫dλ R_jk(λ) T_jk(λ, V_jk) in this scheme.
Here, optical power Pk and the detector Rjk are written as scalars. In practice these scalars result from emission and detection profiles (in free space), or mode overlap (with waveguides).
In one embodiment, the variable attenuators 413 may comprise liquid crystals like the ones currently used to attenuate light emitted from individual pixels or subpixels of an LCD display. Other display technologies whose operating mechanisms involve modulating light transmission can also be used, including but not limited to interferometric modulation, micro-electromechanical devices, etc. Electrochromic elements can also be used for this purpose. In some embodiments, waveguides and/or microlenses, for example 3D-printed microlenses, may be used to help route light.
The weighted, demultiplexed inputs 414 may then be fed into one or more photodetectors 415, with the resulting signal passed through optional electronics 416 (for example a continuous-time amplifier, buffer, digital controller, fire-and-reset transistors for “spiking” amplifiers, etc.). Whether or not electronics are used, LED 418 can serve the function of the nonlinear element 403 in the conventional neuron: LED 418 emits no light until the input voltage (the sum of the weighted inputs) crosses its turn-on threshold, after which emission increases until the underlying electronics, should they be present, reach a saturation point.
With reference to
and so w_jk(V_jk) = A_k(V_jk) ∫dλ R_jk(λ) T_jk(λ) in this scheme. The tunable current gain elements may in some embodiments comprise a thin-film amplifier or transimpedance amplifier.
With reference to
and so w_jk(V_jk) = ∫dλ R_jk(λ, V_jk) T_jk(λ) in this scheme. The detector responsivity may be adjusted, for example, by changing the detector bias or by changing the gate voltage of an individual phototransistor.
In some embodiments, a bias may be applied to the neuron. A bias as understood herein is a fixed input to a neuron. Such an input may be applied optically, for example as a fixed light input, or electrically, for example as a fixed current or voltage input. In some embodiments, the recovered current can be further processed locally using for example analog electronics or local digital lookup tables.
Summing may be performed in the optical or electrical domain or both depending on the weighting scheme. If optical weighting is used, then a single photodiode can effectively sum a portion or all of the weighted signals. If electrical weighting or in-detector weighting is used, all the resulting synaptic currents can be summed by wiring the weighted outputs together. Switches can be used at this level to direct currents from individual detectors to positive and negative lines which are then subtracted before thresholding. Summing may also be accomplished with dedicated active electronics such as summing amplifiers or adders.
Excitatory and inhibitory synapses can be defined at any of the above levels. In an optical weighting configuration, dedicated optical channels can be used. In some embodiments, within the same optical channel, spectrally similar detectors with different responsivity magnitudes can be used to produce a net current with a certain polarity, akin to balanced photodetectors (except that, with the same amount of light impinging on both detectors, differentiation must happen at the responsivity level). In one embodiment, identical detectors can sum their currents only after experiencing different amplification. In one embodiment, a single photodetector can be used in conjunction with an electrical switch to control whether its current adds to or subtracts from other synapses. In some embodiments, the sign of the weight can be defined in a local analog or digital processing unit.
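The positive/negative-rail subtraction described above can be sketched as follows (a simplified model in which each synapse carries an explicit sign switch):

```python
def signed_sum(currents, signs):
    """Sum synaptic currents onto positive and negative rails (+1 excitatory,
    -1 inhibitory) and subtract the rails before thresholding."""
    pos = sum(i for i, s in zip(currents, signs) if s > 0)
    neg = sum(i for i, s in zip(currents, signs) if s < 0)
    return pos - neg

net = signed_sum([0.5, 0.2, 0.4], [+1, -1, +1])   # 0.9 - 0.2 = 0.7
```

Flipping a sign entry models the electrical switch that redirects a detector's current between the two rails.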
Light routing action in networks implemented using the devices and methods disclosed herein may be implemented in various different ways. In one embodiment, the waveguiding action of the display itself (or lack thereof, if patterned appropriately) is used to route light from the output of one neuron to an input of another. This can be accomplished by vertical stacking of elements and/or horizontal dielectric waveguides. In another embodiment, modulable filters, such as liquid crystal elements, may be positioned between the neurons to modulate transmission between different neurons. In some embodiments, wavelength-division multiplexing, i.e. using different colors and chromatic filters to transmit multiple signals across the same physical medium at once, can be used. For example, a given LED can transmit primarily or exclusively blue light, and blue filters can be positioned only over those detectors meant to receive the light emitted from the given LED, for example desired weight cells, neurons, or +/− photodetectors. In various embodiments, two, three, four, five, six, seven, eight, nine, ten, fifteen, twenty, or more distinct colors of light may be used, and all the different color channels can be configured to share the same physical path (e.g. waveguide). In some embodiments, visible light may be used, but in other embodiments, infrared light, ultraviolet light, or any wavelength or set of wavelengths that can be generated, filtered, and/or detected may be used, alone or in combination with visible light.
In some embodiments, multiple polarizations of light may be used, alternatively or in combination with wavelength-division multiplexing, to convey multiple channels along the same medium. These can be obtained by filtering light after emission and before detection, as well as by using emitters and detectors of different spectral profiles. This can be achieved with distinct materials or different device geometries (e.g. cavity effects, material thicknesses, etc.). Wavelength and polarization filtering can be achieved with chemical (e.g. thin-film molecular filters) or physical (e.g. nanophotonic or plasmonic structures) means. Several of the above methods can be combined to further diversify the available spectral responses, and hence the number of logical channels.
In addition, multiple LEDs or OLEDs could be used having different output spectra. Arrays of photodetectors could be made to detect only specific wavelengths or the light from specific OLEDs; this could be accomplished by use of color filters applied to the photodetectors or by use of materials with different spectral sensitivities in different photodetectors. As discussed elsewhere in this disclosure, having neurons process different wavelengths of light in parallel increases the processing power of the network.
In some embodiments, one or more neurons as disclosed herein may be integrated into an OLED display, which may for example be driven by thin-film transistors or otherwise, for example in microdisplay form. Many physical layers of this scheme can be implemented in the growth direction (i.e., the axis normal to the display surface) to yield a multilayer neural network. This can also be used to simply exploit the third dimension for more connectivity options in a network with effectively fewer layers. In general, many states can be associated with the same neuron (for example by subpixel subdivision) to achieve an effective multilayer network.
The disclosed systems have significant advantages over purely electronic approaches. For example, it is much easier to broadcast to many distant neurons using light, because there are fewer interconnect problems, especially when using color multiplexing. Summation may be implemented passively, simply by collecting light with photodetectors. In AR applications, some or all of the computing can be done at the display level itself. In some embodiments, a global switch can be configured to reset the neurons to some predefined state. Finally, typical display electronics can be used to adjust some or all pumping and weight values, as well as to read output voltages electronically if necessary. This allows the sequential use of different weight configurations, conditioned on previous ones, on the refresh-rate timescale, which permits a wider variety of algorithms to be implemented. Spatially variant topologies are also easy to create this way by generating different weight templates in different areas of the LED display.
Artificial neural networks are currently experiencing a renaissance under the appellation of deep learning. A key enabler for this is hardware that implements efficient linear multiply-accumulate (MAC) operations co-located with memory, which more closely emulates neural computing models than Von Neumann processors. In datacenters, where throughput is key and other metrics such as Size, Weight, and Power (SWaP) are not critical, digital solutions such as Google's TPU have been deployed to great success. At the edge, however, inputs from the environment (e.g., video from cameras) often have a much higher throughput than current mobile electronic hardware can process. In order to deal with this, data is typically digitized and sent to the cloud for processing, leading to unwanted latency and bandwidth bottlenecks in networked systems. For many environmental inputs, particularly for analog time series from sensors, signals of interest are often low-bandwidth compared to total throughput. Accordingly, a more attractive solution for this problem is to employ low-power, low-latency neural pre-processing of sensor readouts by analog neural networks. Data is then highly reduced and efficient digital processing becomes possible for edge devices. This approach can be especially effective for image sensors, where full digitization implies row/column-wise serial readout. Neural processing directly on the image acquisition plane has been pursued in order to reduce overall system latency and power consumption. Application-specific integrated circuits (ASICs) based on the cellular neural network paradigm, specialized chips such as eye trackers, and spike-based event sensors have been developed to this end.
A key operation in any AI pre-processing is a weighted sum of several inputs. In the context of a 2D pixel array in an image sensor, one example is a convolutional operation, where the result at each pixel is a weighted sum of the values of the pixel and its neighbors. There are many scaling advantages in doing such an operation in the analog domain, because the processing complexity increases as O(N²) for an N×N square neighborhood. Wavelength-division multiplexed (WDM) photonics can leverage the spectral dimension of light to accommodate the equivalent of O(N) electrical wires in a single optical waveguide. An added benefit of moving from wires to waveguides is the elimination of interchannel electromagnetic cross-talk and intercomponent output impedance loading. WDM tunable interconnections still require a resonant element calibrated to each wavelength of interest within one pixel area. Thin-film technology enables the possibility to integrate these resonant elements in the vertical dimension, allowing spectral selectivity within the footprint of a single photodetector. An analogous idea has been proposed in the field of neuromorphic silicon photonics with a single bus waveguide, but here it is demonstrated that it carries the same advantages in ‘smart pixel sensors’. This concept and its contrast to conventional sensing methods is illustrated in
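To make the O(N²) scaling concrete, the per-pixel weighted sum of a convolutional operation can be written out directly (a plain software sketch treating out-of-bounds neighbors as zero):

```python
def convolve_pixel(image, kernel, r, c):
    """Result at pixel (r, c): weighted sum of the pixel and its neighbors.
    An N x N kernel implies N*N multiply-accumulates per output pixel."""
    n = len(kernel)
    off = n // 2
    acc = 0.0
    for dr in range(n):
        for dc in range(n):
            rr, cc = r + dr - off, c + dc - off
            if 0 <= rr < len(image) and 0 <= cc < len(image[0]):
                acc += kernel[dr][dc] * image[rr][cc]   # one MAC
    return acc

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
convolve_pixel(img, box, 1, 1)   # -> 45.0 (sum of all nine neighbors)
```

Each output pixel costs N² MACs in the inner loops, which is the count that analog or WDM summation amortizes.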
Below is described a weighted addition model and its optoelectronic implementation. The model is generalized to sources of arbitrary spectral profiles. This allows the analysis to extend to sensors.
Weighted addition is mathematically defined as:

y(t) = Σ_i w_i x_i(t)   (5)
“Signal” vectors x(t) are dotted with “weight” vectors w_i, which are typically updated on considerably slower timescales than the signals. In multiwavelength photonics, photodetectors can perform summation, producing a current I_PD proportional to the total impinging optical power P(ν) via the responsivity R_PD(ν):
I_PD(t) = ∫_{−∞}^{∞} dν R_PD(ν) P(ν, t)   (6)
where the amplitude modulation of the power occurs on a much slower timescale than the carrier frequencies ν (slowly varying envelope approximation), which allows it to have unambiguous time dependence. The first step in mapping Equation 6 to Equation 5 is to define separate signals. The spectral dimension was used to do so:

P(ν, t) = Σ_i ρ_i(ν) P_i(t)   (7)
Here, ρ_i(ν) is the normalized spectral profile of the source whose power amplitude P_i(t) is modulated. For incoherent sources, the expression above holds for arbitrary ρ_i(ν), since powers from different sources add linearly. For coherent, narrowband sources where ρ_i(ν) → δ(ν − ν_i), but where the different carriers are mutually incoherent, Equation 7 holds only insofar as the separation between the ν_i is large compared to the bandwidth of P_i(t), so as to avoid coherent interchannel beating.
A structure can be inserted between the sources and the detector to exhibit some transmission profile T(ν). Then, the photocurrent becomes:

I_PD(t) = Σ_i P_i(t) ∫_{−∞}^{∞} dν T(ν) ρ_i(ν) R_PD(ν)   (8)
and hence an effective weight:
w_i ≡ ∫_{−∞}^{∞} dν T(ν) ρ_i(ν) R_PD(ν)   (9)
performs the optoelectronic MAC operation for optical power Pi. A further condition to make the signals distinct (mutually distinguishable at the detector) is that:
∫_{−∞}^{∞} dν T(ν) ρ_i(ν) ρ_j(ν) R_PD(ν) ∝ δ_ij   (10)
To implement reconfigurable MACs, then, the task is to find a structure where T(ν) = T(ν, Δ) can be actuated under a set of control signals Δ. The metric of interest is the contrast between weights corresponding to different signal bands, i.e. the maximum difference between w_i and w_j≠i. Equation 9 treats weighting as an absolute change in optical power that can be detected. For weighted addition, this can be normalized to actuatable detected power to account for fixed insertion losses. With a fixed maximum weight w_i,max, the weighting range is simply normalized to (0, 1) by letting w_i → w_i/w_i,max. The case of nonzero minimum weight can be handled through a balanced detection scheme.
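The weight integral of Eq. 9 can be evaluated numerically to see how a transmission profile T(ν) sets independent weights on spectrally separated channels (the Gaussian profiles, frequency grid, and flat responsivity below are illustrative assumptions, not values from this disclosure):

```python
import math

def effective_weight(T, rho, R, nus):
    """Riemann-sum evaluation of w_i = integral of T(nu)*rho_i(nu)*R(nu)."""
    dnu = nus[1] - nus[0]
    return sum(T(v) * rho(v) * R(v) for v in nus) * dnu

nus = [220.0 + 0.01 * k for k in range(2001)]            # 220-240 THz grid
norm = 0.2 * math.sqrt(math.pi)                          # Gaussian normalization
rho1 = lambda v: math.exp(-((v - 225.4) / 0.2) ** 2) / norm
rho2 = lambda v: math.exp(-((v - 232.4) / 0.2) ** 2) / norm
R = lambda v: 1.0                                        # flat responsivity
T = lambda v: 0.3 if abs(v - 225.4) < 1.0 else 1.0       # attenuate channel 1 only

w1 = effective_weight(T, rho1, R, nus)                   # ~0.3
w2 = effective_weight(T, rho2, R, nus)                   # ~1.0 (unaffected)
```

Because the two profiles do not overlap under T(ν), the distinctness condition of Eq. 10 is satisfied and each channel is weighted independently.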
The building block for controlling T(v) was chosen as the optical resonator, in particular the Fabry-Perot resonator. Optical cavities defined out-of-plane have long been investigated for wavelength-selective optical devices. The high sensitivity of a cavity's reflection or transmission profile on its optical path length is well-known in interferometry. Wavelength-selective field enhancement is key in enhancing absorption in resonant cavity-enhanced detectors or emission in resonant cavity-enhanced light-emitting diodes. Solid-state actuation is interesting for stacked architectures, and cavity optical lengths have successfully been tuned through e.g. morphological changes in phase-change materials, forming the basis of displays and tunable passband sensors.
Fabry-Perot cavities are most often modelled as a pair of mirrors with an interstitial propagation medium. Symmetric mirrors maximize transmission at resonance, and are in this case characterized by both a power reflectance R(ν) and a phase shift ϕ_r(ν). A uniform cavity medium is described by length l, real refractive index n, and absorption coefficient α. The normal-incidence transmission of such an arrangement is:

T_FP(ν) = (1 − R)² e^{−αl} / [(1 − R e^{−αl})² + 4 R e^{−αl} sin²(ϕ/2)]

with round-trip phase accumulation

ϕ(ν) = 4πnlν/c + 2ϕ_r(ν)
The observation here is that if the reflectances R(ν) have a stopband characteristic, i.e. a reflectivity value R0 on some finite range of the spectrum and zero elsewhere as displayed in
Trivially, for lossless cavities, T_FP(ν, 0) = 1, while T_FP(ν, R0) depends on n and l. This is especially effective for the coherent case ρ_i → δ(ν − ν_i), since then w_i → T(ν_i, R0). Since the main effect of n and l on the transmission profile is simply to translate it in frequency space, this may not be as effective a weighting mechanism if the linewidth of the source is commensurate with or larger than the cavity linewidth, or if the pixel is operated as a sensor. In general, however, a change in index also leads to a change in absorption (Kramers-Kronig). A changing absorption α(Δ) principally impacts the magnitude of the cavity transmission, and can therefore have a larger impact on the integral of Eq. 5 regardless of the profile of ρ_i. However, unlike changes in n and l, changing α also adds an unintentional weighting e^{−α(Δ)l} to out-of-band signals experiencing R(ν) = 0.
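For numerical experimentation, the transmission of a symmetric lossy Fabry-Perot cavity can be modeled with the standard textbook Airy formula (a sketch under the same symmetric-mirror assumptions; the parameter values are illustrative):

```python
import math

def airy_transmission(nu_thz, R0, n, l_um, alpha_per_um, phi_r=0.0):
    """Standard Airy transmission of a symmetric lossy Fabry-Perot cavity.
    Frequency in THz, length in um, absorption coefficient in 1/um."""
    c = 299.792458                                  # speed of light in um*THz
    a = math.exp(-alpha_per_um * l_um)              # single-pass attenuation
    phi = 4 * math.pi * n * l_um * nu_thz / c + 2 * phi_r
    return ((1 - R0) ** 2 * a) / ((1 - R0 * a) ** 2
                                  + 4 * R0 * a * math.sin(phi / 2) ** 2)

# With R0 = 0 the cavity reduces to a single pass: T = exp(-alpha * l).
single_pass = airy_transmission(225.4, 0.0, 1.5, 20.0, 0.01)
```

A lossless cavity (α = 0) transmits fully at resonance, and changing n or l shifts the resonance comb in frequency, matching the qualitative behavior discussed above.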
Intuitively speaking, the cavity causes signals matched to it to experience multiple round trips, and hence to be more sensitive to its absorption state compared to the other cavities. For instance, exactly at resonance, the change in transmission due to a change in the absorption state of the cavity medium is:

∂_Δ T_FP(ν_res, R0) = −l e^{−α(Δ)l} [(1 − R0)² (1 + R0 e^{−α(Δ)l}) / (1 − R0 e^{−α(Δ)l})³] ∂_Δ α(Δ)
whereas out-of-band signals seeing only a single pass through the cavity experience:
∂_Δ T_FP(ν, 0) = −l e^{−α(Δ)l} ∂_Δ α(Δ)
Hence, an increased factor of sensitivity of:

(1 − R0)² (1 + R0 e^{−α(Δ)l}) / (1 − R0 e^{−α(Δ)l})³

can be observed. This ratio saturates to (1 + R0)/(1 − R0)
in regimes where mirror losses dominate over the propagation loss α(Δ)l. This ratio monotonically increases with R0, so the practical limit is defined by fabrication limitations, the physical thickness acceptable for a filter (which scales with reflectance), or detector sensitivity/integration time, as detected light levels are reduced by smaller cavity linewidths. A smaller R0 might be desirable if the cavity medium operates in a loss-dominated regime (non-unity transmission at resonance), since in this case increasing R0 increases baseline absorption in the low-absorption state.
Numerical simulations of the transmission of a concrete resonator stack were performed to explore inline photonic weighting. The example considers signals on infrared O-band carriers, where apodized rugate filters of porous silicon can create narrowband rejection filters and tunable optical materials such as Ge2Sb2Te5 are well characterized. The concept is straightforwardly extended to other wavelength bands, filter types, and materials. A minimal two-channel microcavity weight stack is investigated, but more cavities can be cascaded to weight multidimensional signals. Assuming index-matching from/to incoming and outgoing media, the structure considered here is of the form: (1) Filter A, with a single rejection band about v0,A=225.4 THz (1330 nm), (2) Tunable film, (3) Filter B, with dual rejection bands about v0,A and v0,B, (4) Tunable film, and (5) Filter C, with a single rejection band about v0,B=232.4 THz (1290 nm).
Narrowband rejection filter synthesis is described below, and is based on apodized sinusoidal index profiles. The tunable films were Ge2Sb2Te5 (GST), which can be switched from a more transparent (t), amorphous phase to a more opaque (o), crystalline phase under appropriate stimulus. The optical constants of the two states are reported in the table of
range would be in the 100's of THz, and only a single peak should be observable (if the resonance fell in the stopband at all). The ˜ 2 THz spacing observed here is more consistent with an effective cavity length of ˜ 20 μm. This matches the apodization length of the filters, which are expected to allow concomitant light penetration.
The transmission response to the tunable film states is different for both channels. The two channel regions are highlighted and presented in
The reciprocal behavior (blue curve) is observed when the tunable film matched to the v2≅232 THz-centered cavity is moved away from transparency. Outside of cavity stopbands, for example around 230 THz, the red and blue curves collapse onto each other, indicating that independent weighting is not possible there, highlighting the importance of the "independent" cavities.
Even for a small GST thickness of 5 nm, the cavities are operating in an absorption-loss-dominated regime, as evidenced by the resonance transmission peaks being lower than unity. Increasing R0 reduces Tmax from ≈0.9 to ≈0.65. The base crosstalk due to a single propagation into the other cavity, however, remains proportionally the same, as plotted in
Analysis of the footprint advantages of the stacked architecture over lateral subpixels was then performed. While intuitively light waves sharing the same propagation path should only increase pixel density, the finite thickness of realistic filters leads to increased beam divergence from the input beam to the detector backplane, as compared to lateral subpixels with thin single-color filters. In the case of single-mode guiding, where there is no such divergence, fundamental limits relating to interwaveguide coupling or practical fabrication considerations will limit density.
The specific metric considered is excess computational density:
This must be contrasted with the default situation of laterally defined cavities in conventional sensing pixels. This benchmark is:
since the Ns sensors performing the Ns operations occupy an overall area of NsP2, with P the pitch between the square pixels. A realistic pitch for commercial sensors is P≈5 μm, leading to a density ≈0.04 channels/μm2. Hence, for a stacked architecture to be more advantageous from the point of view of footprint, the condition is
Filter equations as described below were used to estimate the thicknesses of stacked filters, extending the example from the last section (porous silicon filters in the infrared) with realistic values.
If the light routed down the weight stack can be waveguided in a single-mode fashion, for instance if the stacks also exhibit lateral index confinement, then the number of multiply-accumulate operations per area is fundamentally only limited by (1) the pitch between stacks to avoid cross-stack coupling, and (2) how spectrally narrow the cavity mirrors can be made. As an example, silicon, which is nonabsorbing for infrared light, has demonstrated porous rugate filters with n1=0.014, which in the O-band between 1260 and 1360 nm can translate to Δλ ≈10 nm. Hence, about ten cascaded spectral channels in the O-band alone are conceivable if such structures can be etched laterally into single-mode nanopillars, and potentially more if filters are made to overlap somewhat and coherent crosstalk is managed. In this case, the tightest confinement occurs for an average index close to pure silicon (≈3.5) and an interface with air. Coupling crossover lengths above ˜ mm occur with waveguide separations above 1 μm, and hence 1 stack per μm2 is conceivable, leading to 10 channels per μm2. This is 250× more dense than the equivalent subpixel arrangement.
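The back-of-envelope density figures quoted above can be reproduced directly; all values (5 μm pixel pitch, 10 nm stopband width, 1 μm stack pitch) are the assumptions stated in the text:

```python
# Benchmark: laterally defined subpixels at commercial sensor pitch
P = 5.0                      # um, pitch between square pixels
subpixel_density = 1 / P**2  # channels per um^2
assert round(subpixel_density, 3) == 0.040

# Stacked single-mode architecture in the O-band
band_nm = 1360 - 1260        # O-band width in nm
filter_width_nm = 10         # per-channel stopband width, from n1 = 0.014
n_channels = band_nm // filter_width_nm      # ~10 spectral channels per stack
stack_pitch = 1.0            # um, set by cross-stack coupling
stack_density = n_channels / stack_pitch**2  # channels per um^2
print(round(stack_density / subpixel_density))  # -> 250
```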
In practice, if waveguides are vertically-etched, constraints on etching aspect ratio will limit pillar density. The number of MACs per area is then:
where Ns is the number of spectral channels per stack, and (Ls/AR)2 the (approximate) effective area of a stack, which is set by its height Ls and the achievable etch aspect ratio AR. For instance, an aspect ratio of 30:1 (achievable in deep reactive ion etching) means that for every 30 units of depth etched to form the stack, the lateral dimensions of the corresponding hole are 1 unit of depth, which determines the pitch between stacks (equivalently, one stack per etch area). Assuming a stack of Ns roughly identical cavities (one per spectral channel) with dual-band mirrors of length L, a stack height of:
is required. The rightmost relation is obtained by considering the scaling of L with n1, which is proportional to Δv from rugate filter equations, neglecting penalties from apodization and the cavity size itself. The required filter spectral width Δv can further be tied to the number of channels via Δv≈B/NS for a spectral band B in which the channels are defined. For the O-band, B≈18 THz.
Fabrication of such tight lateral confinement structures may be difficult, or undesired in order to allow for fabrication with no patterning. In this case, beam divergence must be considered in discussions of channel density. The main consideration here is ensuring the wavefronts maintain sufficient coherence for the cavities to operate as expected, a rule of thumb being that the Rayleigh range of the input beam is of order ˜Ls,eff, the distance the beam must travel. The effective travel distance in this case is a version of Equation 17 in which travel through the cavity is relevant, as the unguided cavity round-trips cause beam divergence:
Here, Lc is the effective cavity length including mirror penetration depth. Apodization of only a few cycles has been shown to effectively suppress sidebands in rugate filters, which means effective cavity lengths can be close to physical tunable medium thickness. Unless reflectivities are very large (>0.99), this term is small compared to the other if the tunable film material is thin (μm-size filters L vs nm-size tunable films Lc). The input beam waist w0 that will satisfy the interference condition set above leads to a corresponding computational density of:
The result in the O-band is presented in
For both etched pillars and unguided light, there is a range of reflectivities for which the unpatterned stack is more computationally dense than the equivalent subpixels, i.e. for stacks containing less than the Ns value yielding
(dashed line), tiling the plane with stacks allows one to perform more MAC operations than would be possible using laterally-separated detectors. This analysis has limits. For instance, in the unguided case the exact threshold above which interference fails is unknown. At the same time, there exist other degrees of freedom, such as angle-selective photodetectors, that mitigate spatial crosstalk and can potentially allow spot sizes tighter than the Gaussian beam limit would predict.
Optical filters today are usually designed using sophisticated computer-driven algorithms. Depending on the materials and techniques available, different options are available to generate optical stopband filters. Fourier techniques generate smooth index profiles given a target reflectance. Smooth index profiles can be implemented, for instance, through co-deposition of different materials and glancing angle deposition. They can also be approximated with discrete layers, whether by using multiple layers stepping the index or through frequency-modulated digitization. Different ways of generating notch filters from limited sets of materials have also been considered, such as thickness modulation. General-purpose methods such as the needle method and more recently reinforcement learning have also been applied to the problem of finding the optimal constrained profile to generate a reflection response. Beyond deposition, some stopband filters such as porous silicon have been manufactured through electrochemical or reactive ion etching.
When manufacturing a weight stack, an important consideration will be how to alternate the addition of a filter component and a tunable index component. PCMs, for instance, can be sputtered, making them compatible with deposition-based filter manufacturing technologies. Graded index techniques that rely on microstructure voids in the material may be susceptible to pore infiltration from the tunable film material, although this is also a way to incorporate materials in cavities after fabrication as is done in sensing applications.
Tunable Films and their Actuation:
In some embodiments, integrated microelectromechanical devices can effect large changes in l (and effective n). This is exploited in traditional PICs and in display applications. Phase change materials as considered above have been heavily studied for applications in non-volatile photonics. These materials crystallize when annealed above a crystallization temperature Tc, and amorphize if quickly quenched from a melting temperature Tm. There are large changes in the real and imaginary index of refraction between the two forms. The heat can be delivered via Joule heating by a current through the film, from a nearby heater, or from absorbed optical energy. Configurational changes at the molecular level are also known to be able to affect optical properties, a phenomenon known as "chromism". Photochromism in particular occurs when the change in color is caused by photons. Some photochromic molecules, such as diarylethenes and furylfulgides, are thermally stable at room temperature. Hence, they retain their state in the absence of radiation, making them of interest for photoswitching and optical memory applications. They exist as thin films suspended in polymers or even in crystalline form, suggesting that integration is feasible. Other films have been incorporated into thin-film stacks to modify optical properties. The electronic analogue of photochromism, electrochromism, has been investigated for displays and smart lighting applications. Recently, changing carrier concentrations at thin-film interfaces, leading to changes in complex index from plasma dispersion, has also been considered for light modulation applications.
For reconfigurability, the cavities also need to be actuated in some way. Regular planar structures easily interface with electrical traces for actuation. Introducing independent electrical leads to multiple vertically-stacked cavities, while not impossible, introduces more fabrication complexity. Light-controlled films are therefore of interest. The "threshold" behavior of GSTs (amorphization and crystallization being triggered only if an energy above some threshold is dissipated) simplifies the synthesis of "set", "reset", and "probe" pulses. Their wavelength-selective optical actuation inside optical cavities has been demonstrated, which is of interest for differential addressing without electrical contacts. Similar to what is proposed here, PCMs have been used to define neuromorphic photonic synapses, and have also been successfully integrated into stacks for display applications.
Residual reflectance outside a given stopband due to insufficient apodization, unmatched interfaces, or other filter imperfections will cause “super-cavities” to form between the resonators. When this occurs, the stack cannot be decomposed as a series of independent cavities and free propagation as was assumed throughout this manuscript. The result is spurious shifts in the transmission curves as the cavity states are actuated. This coherent crosstalk has been extensively studied in the context of microring weight banks. The consequence is another cross-weight penalty that reduces the effective weighting range. This can nonetheless be characterized and effective weighting can be achieved similarly to what was done in this manuscript for absorptive crosstalk.
In deep learning and neuromorphic engineering, neural excitation and inhibition are usually captured by, respectively, positive and negative weights. Add-drop microring resonators, being 4-port devices, can efficiently route "transmitted" and "dropped" light, spatially separated from the input, onto the two arms of a balanced photodetector to determine the sign of the generated photocurrent. There is a two-fold reason why this cannot be replicated in the proposed stacked resonators: (1) a microcavity only has two spatially separated ports, and hence it is difficult to collect reflected light without disturbing the input, and (2) absorption tuning, which is present to some degree in all solid-state approaches through the Kramers-Kronig relations, does not conserve photon number. It is, however, always possible to implement negative weights by splitting the input onto two weight stacks, each of which sits on top of one arm of a balanced photodetector. The cost of doing this is a 2× power penalty over the regular arrangement, arising from the required complete attenuation of a given signal in one of the stacks. The component density is also reduced by 2×, and if optical weight tuning is employed, it now requires spatial resolution. A balanced detection scheme also allows weight 0 to be represented even if low transmission states have nonzero transmission.
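A minimal sketch of this balanced-detection signed weighting, with hypothetical transmission values, illustrates both the 2× power penalty and the representation of weight 0 despite a nonzero transmission floor:

```python
def signed_weight(t_plus, t_minus, p_in):
    """Balanced-detection signed weighting: the input power is split equally
    onto two weight stacks; the photocurrent difference gives the signed
    product. t_plus/t_minus are stack transmissions in [0, 1] and unit
    responsivity is assumed."""
    i_plus = 0.5 * p_in * t_plus    # arm 1 photocurrent
    i_minus = 0.5 * p_in * t_minus  # arm 2 photocurrent
    return i_plus - i_minus

# Weight +0.5: fully attenuate the negative arm (source of the 2x penalty)
print(signed_weight(0.5, 0.0, 1.0))   # -> 0.25, i.e. half of 0.5
# Weight 0 with a nonzero transmission floor of 0.1 in both arms
print(signed_weight(0.1, 0.1, 1.0))   # -> 0.0
```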
Consider a simple structure exhibiting a pure stopband, the apodized rugate filter. The optical properties of rugate filters are intuitively understood by solving the wave equation in a medium of index:
with average index n0 and modulation index n1 << n0 of period Λ. The result is a complex normal incidence reflection coefficient:
with coupling constant κ=(πv/c)n1, a wave/grating momentum mismatch Δk=2k0−2π/Λ (with k0=2πn0/λ=(2πv/c)n0), and s2=κ2−(Δk/2)2. From this, the peak reflectance amplitude |r0|=R0 and center frequency v0 (when Δk=0), as well as the half-max width of the stopband Δv (the range for which s is real, which occurs for |Δk|<2κ), can be related to the stack design parameters n0, n1, Λ, and L as:
where L is the thickness of the filter.
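The reflection coefficient itself is elided above; the following sketch uses the standard coupled-mode-theory expression for a uniform sinusoidal grating, which is assumed equivalent to the document's result (here R0 denotes the power reflectance |r|², which equals tanh²(κL) at the stopband center). The average index n0=1.8 and filter length are illustrative:

```python
import cmath

def rugate_reflection(nu, n0, n1, Lam, L):
    """Coupled-mode reflection coefficient of a uniform sinusoidal index
    grating at normal incidence (standard CMT form, assumed to match the
    document's elided expression). nu in Hz, lengths in m."""
    c = 299792458.0
    kappa = cmath.pi * nu * n1 / c                 # coupling constant
    dk = 2 * (2 * cmath.pi * nu * n0 / c) - 2 * cmath.pi / Lam  # mismatch
    s = cmath.sqrt(kappa**2 - (dk / 2) ** 2)       # imaginary outside stopband
    num = -kappa * cmath.sinh(s * L)
    den = s * cmath.cosh(s * L) + 1j * (dk / 2) * cmath.sinh(s * L)
    return num / den

# Filter centered at 225.4 THz (1330 nm); porous-silicon-like values (assumed)
n0, n1 = 1.8, 0.014
nu0 = 225.4e12
Lam = 299792458.0 / (2 * n0 * nu0)   # Bragg condition: Lambda = c/(2 n0 v0)
L = 20e-6                            # 20 um filter
R0 = abs(rugate_reflection(nu0, n0, n1, Lam, L)) ** 2
# At center, R0 should equal tanh^2(kappa * L)
```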
The narrowband filters above can be deposited sequentially to achieve the separated cavities. However, it is also possible to synthesize dual band rejection filters to improve compactness. Now, the profiles above are superposed as:
where the same ni and Λi as for the single filters were used. Using different modulation depths is known to help with equalization of the optical densities.
An issue with the structures above is that non-negligible reflectance occurs away from this main stopband due to higher harmonics, truncation of the stack, and interfaces at the boundaries. Harmonics and truncation artefacts can be handled through multiplication of the index profile with a windowing function. A popular apodization function in optics is the following quintic, which exhibits null first and second derivatives:
where d is the distance over which apodization is performed and z∈[0,d]. Many other apodization functions (sine, exponential sine, Gaussian, etc.) have also been proposed. Apodization of the profile has the side effect of reducing the filter's optical density and increasing light penetration depth. For this demonstration, consider the simplest apodization scheme: the windowing of Equation 26 is applied across the entire filter, and the filter length is doubled to compensate for the peak reflectance degradation. This function could also be used to index match at boundaries, which is not considered here.
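A common quintic with null first and second derivatives at both endpoints is 10t³−15t⁴+6t⁵ with t=z/d; it is assumed here to correspond to the document's Equation 26. A finite-difference check confirms the smooth endpoints:

```python
def quintic_window(z, d):
    """Quintic apodization profile rising from 0 at z=0 to 1 at z=d, with
    vanishing first and second derivatives at both ends (a common choice,
    assumed to match the document's Equation 26)."""
    t = z / d
    return 10 * t**3 - 15 * t**4 + 6 * t**5

# Endpoint values and derivative check by finite differences
d, h = 1.0, 1e-5
for z in (0.0, d):
    f1 = (quintic_window(min(z + h, d), d) - quintic_window(max(z - h, 0.0), d)) / h
    print(round(quintic_window(z, d), 6), round(f1, 4))  # value, ~derivative
```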
The reflectance profiles of the constituent filters are overlaid in
In some embodiments various thin-film stacks such as bandpass optical filters, broadband pass optical filters, long wave pass optical filters, short wave pass optical filters, dichroic optical filters, rejection optical filters, edge pass optical filters, and neutral density optical filters can be combined to create multiple different behaviors corresponding to different weighting matrices in a photonic neural network.
Since the optical transmission of a thin-film stack depends on the thicknesses of the layers or their indices of refraction, the ability to change the thickness of a thin film or its index of refraction enables this tunability. MEMS devices such as Qualcomm's IMOD can move two thin-film stacks relative to each other, changing the gap thickness. On the other hand, some thin films such as blue phase liquid crystals can change the index of refraction of the film under, for example, an applied voltage.
The physics of IMOD MEMS devices is such that under applied voltage, the electrostatic attraction between two mirrors separated by a small gap increases, and the equilibrium distance between them is reduced. There is a range in which this deformation is continuous (the "released" curves, analog operation), and a critical voltage above which the two mirrors collapse onto each other (the "unstable/actuated" curves, for bistable/digital operation). The behavior of such a structure can be changed depending on the reflective behavior of both mirrors, such that the transmission is modified only on a small slice of the spectrum, leaving the rest unaffected. This occurs when the mirrors are both narrowband, i.e. they only behave as mirrors on a small slice of the spectrum. Note that the transmission peak does not actually vanish, but rather is moved laterally outside of the range of interest.
In general, for a narrowband mirror, if only a discrete set of materials are available there is no intuitive structure that will only reflect wavelengths on a narrow part of the spectrum, leaving others unaffected. Optimization techniques have been developed to solve this problem.
Fabry-Pérot transmission is defined by:
and the general dielectric partially reflective mirror is defined by:
including the air-first layer interface, propagation in a given layer and next interface (repeated), and propagation in the last layer and last layer-air interface.
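The description above matches the standard 2×2 characteristic-matrix method; a minimal sketch at normal incidence with lossless layers follows. The quarter-wave mirror indices are illustrative assumptions, not values from the document:

```python
import cmath

def stack_rt(n_layers, d_layers, lam, n_in=1.0, n_out=1.0):
    """Normal-incidence reflection/transmission amplitudes of a dielectric
    stack via the standard 2x2 characteristic-matrix method (lossless sketch).
    n_layers: refractive indices, d_layers: thicknesses (units of lam)."""
    M = [[1, 0], [0, 1]]
    for n, d in zip(n_layers, d_layers):
        delta = 2 * cmath.pi * n * d / lam        # phase thickness of layer
        m = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0]*m[0][0] + M[0][1]*m[1][0], M[0][0]*m[0][1] + M[0][1]*m[1][1]],
             [M[1][0]*m[0][0] + M[1][1]*m[1][0], M[1][0]*m[0][1] + M[1][1]*m[1][1]]]
    B = M[0][0] + M[0][1] * n_out
    C = M[1][0] + M[1][1] * n_out
    r = (n_in * B - C) / (n_in * B + C)
    t = 2 * n_in / (n_in * B + C)
    return r, t

# Quarter-wave Bragg mirror at 1330 nm from two assumed indices (e.g. TiO2/SiO2)
lam0, nH, nL, pairs = 1330.0, 2.3, 1.45, 6
ns = [nH, nL] * pairs
ds = [lam0 / (4 * nH), lam0 / (4 * nL)] * pairs
r, t = stack_rt(ns, ds, lam0)
R, T = abs(r) ** 2, abs(t) ** 2   # power T carries a factor n_out/n_in (= 1 here)
print(round(R + T, 6))            # lossless stack: R + T -> 1.0
```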
A large suite of materials can be used to construct the thin film filters, including AlF3, MgF2, NdF3, LaF3, YF3, GdF3, Al2O3, L5, SiO2, Substance M2, YbF3, Substance M3, Substance M5, HfO2, PbF2, Na3AlF6, Na5Al3F14, Y2O3, Substance M1, Substance L5/L5HD, CeF3, ZrO2, Ta2O5, Nb2O5, Substances H1/H4/H4HD, Substance H8, ZnS, TiO2, Substance H2, ITO, ZnSe, SiO, and Cr2O3. The same materials deposited in different ways can also exhibit different optical constants, thus offering a larger design space.
Furthermore, Bragg mirrors and/or rugate filters can also be utilized. Bragg mirrors are composed of thin-film stacks of alternating high- and low-index materials. The structure strongly reflects in the vicinity of one wavelength, with some reflection harmonics outside of this range. In one example, a thin-film filter made from a continuous index variation can be utilized, where the reflection can be made strong in only one region of the spectrum.
Referring now to
Referring now to
Referring now to
The calculations from
With reference now to
With reference now to
With reference now to
The maximum number of channels is directly related to the spectral narrowness of the mirror, and thus the smallest manufacturable layer. The maximum analog tuning range sets how sharp a resonance is needed, and the optics and mechanics can be decoupled. The resonance sharpness is set by mirror reflectivities and is also bounded by manufacturability. The achievable resonance sharpness also determines how much light is available for processing.
Applications include both analog and digital applications such as OLEDs, WDM neural network synapses, grayscaling, color/hyperspectral mixing at the input, reconfigurable routing, and time-multiplexed high-spatial-resolution hyperspectral filters, for example. Furthermore, the stacked IMODs can be engineered so as to minimally interfere with each other.
Additional embodiments of neuromorphic opto-electronic devices (503, 504) are shown in
Device 503 includes a plurality of cavities 530a-n apportioned between a plurality of stacked selective mirrors 531a-n. In some embodiments, the cavities 530 comprise multi-cavities. In some embodiments, optical filtering devices are utilized in place of the selective mirrors 531. Each selective mirror 531 includes one of a plurality of photodetectors 525a-n (collectively 525) positioned within the cavities 530. In some embodiments, the photodetectors 525 comprise resonant-cavity-enhanced photodetectors. The selective mirrors 531 are each configured to interact with light of separate specific wavelengths. Since the light does not need to propagate down below the stack, the mirrors in this case can be asymmetric, where the lower mirrors can reach up to 100% reflection.
Device 504 includes a plurality of cavities 530b-n apportioned between a plurality of stacked selective mirrors 531b-n. In some embodiments, the cavities 530 comprise multi-cavities. In some embodiments, optical filtering devices are utilized in place of the selective mirrors 531. Each selective mirror 531 includes one of a plurality of photodetectors 525b-n positioned within the cavities 530 to detect light in each cavity. The selective mirrors 531 are each configured to interact with light of separate specific wavelengths. The device can further include a bulk photodetector 525a positioned below the stacked selective mirrors 531 for the last channel, assuming all other channels have been absorbed at this depth.
The photodetectors 525 can be any suitable photosensitive element, including phototransistors, photogates, and avalanche photodetectors with intrinsic gain. Additionally, electronics can be utilized after the photodetector 525 to provide gain, either directly, such as with a transimpedance amplifier, or in an integrating scheme, such as shown in
Consider transmission through the absorptive stack as a function of the absorption coefficient. Loss from the mirrors can be defined by:
|r(λ)|² + |t(λ)|² = IL(λ) < 1 (30)
single-pass phase can be defined by:
which can be written as:
where α=4πκ(λ)/λ and |r(λ)|² equals 0 if λ is far from λ0.
For nonvolatile control of absorption, electrochromic devices or photochromic thin films can be used.
In some embodiments, the absorption-modulated stack device 505 of
By using photochromic weight actuation, electrical contacting of each element in the stack is not necessary. Furthermore, the processing signals are in the same form as the actuation signals; thus the device can display optical memristive behavior, and optical learning rules (plasticity) can be implemented.
A mathematical model of the dynamics of photochromic molecules is presented below. The transition between the two states follows a linear rate equation, with the rate constant proportional to the optical intensity, absorption coefficients, and quantum yield of the two forms, all of which depend on the photon wavelength.
The photokinetics can be defined by:
with xo/c (z, t)=No/c(z, t)/N and molecule conservation xo+xc=1.
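Since the rate equation itself is elided above, the following sketch assumes the standard two-state linear model ẋo = −koc·xo + kco·xc with xo + xc = 1, where the rate constants koc, kco lump intensity, absorption cross section, and quantum yield; the numerical values are hypothetical:

```python
def photochromic_steady_state(k_oc, k_co):
    """Steady state of the two-state linear rate model
    x_o_dot = -k_oc*x_o + k_co*x_c with x_o + x_c = 1 (assumed model).
    Returns x_o at steady state."""
    return k_co / (k_oc + k_co)

def integrate_xo(k_oc, k_co, x0=1.0, t_end=10.0, dt=1e-3):
    """Forward-Euler integration of the same rate equation."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-k_oc * x + k_co * (1 - x))
    return x

k_oc, k_co = 3.0, 1.0   # hypothetical rate constants (1/s)
print(round(photochromic_steady_state(k_oc, k_co), 3))  # -> 0.25
print(round(integrate_xo(k_oc, k_co), 3))               # -> 0.25
```

The analytic steady state kco/(koc+kco) and the integrated trajectory agree, as expected for a linear rate equation.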
The net absorption coefficient is defined by:
αλ(z,t) = ϵoλ·xo(z,t)·C + ϵcλ·(1−xo(z,t))·C + αmatrixλ (35)
where
The net refractive index is defined by:
Absorption is well-characterized by the molar extinction coefficient ϵ. The index change is small (˜0.01) with respect to the interface mismatch, but large enough for cavity actuation.
The photon flux (photons per unit time) is defined as:
the absorption cross section (how many photons absorbed) as:
with the quantum yield ΦX→X̂λ (how many absorbed photons yield a photoreaction; unitless), the concentration C, the molar amount of active units w% (unitless), the density of the active film, the molar mass of the photochromic unit, and Avogadro's number,
with the cavity optical timescale defined by:
If τc << τx, the intracavity intensity buildup is effectively instantaneous. For weak absorption, the cavity modes remain intact, leading to an effective complex index (with no z-dependence).
It was then considered what happens when the photonic environment around the photochromic film is modified through insertion into an optical cavity. Assuming the film is thin enough, the spatial dependence of the molecules can be brushed away (even without this approximation, the general result still holds). The main difference from simple propagation through a film is that there is an optical field buildup for wavelengths resonant with the cavity, and hence the chemical rate is increased. Furthermore, the cavity transmission is also modified by the presence (and optical state) of the film, and is in fact more sensitive to the film state than the transmission through the film without the cavity.
Speedup is also possible as shown in
Another possibility is to consider nondestructive (index) actuation, where transmission occurs in wavelength ranges where the chemical does not convert from one form to another (for instance, in the infrared). For example, choosing a signal wavelength with ϵλIR ≈ 0 but with n(xc=1)−n(xc=0)=Δn, there would be no crosstalk and no loss (enabling low-power processing), and the transmission state would be left intact. The weighting depends strongly on the effective cavity length. This can be defined by:
Referring now to
For example, assuming no UV cavities (RmUV = 0 ∀m) and assuming independent visible cavities (Rmλn = Rδmn), the transmission is purely exponential for m < n. The coloring irradiance seen by deeper layers is screened by earlier ones in a fixed manner described by:

IλUV|m ≈ IinλUV e−2ϵ···
Given the state-dependence of the visible reaction, the simplest approach to set states involves "cancelling" the coloring of the upper layers with appropriate tones. UV irradiance can be used to hold the uppermost layers at a steady state of xc = 1−xo = 0.2 for Ivis ≈ 100 W/m², where IUV = [ẋcvis(0.2)/ẋoUV(0.8)]·Ivis ≈ 15 W/m². Hence, the coloring rate at the lowermost layer (N=9) is ˜600 s−1. Thus, the reconfiguration times are Δt ˜ 1's of ms, but can be improved with smaller stacks, higher pump irradiances, or less conservative algorithms. Assuming all weights can be updated during this time, the energy would be Δt(Σi=1N Ivis + IUV) ≈ 0.1N J/m². The system is further defined by:
If the signals are able to change the weight state, in-memory memristive behavior can be achieved. There is a use for weights that are modified by the signals they are intended to process. In neuroscience, for example, synaptic connections can be strengthened automatically if a signal causes a neuron to fire and reduced if it does not, an effect known as spike-timing-dependent plasticity. This effect is captured by the memristor equations:
ẇ = f(w, vMR) (55)

iMR = g(w, vMR)·vMR (56)
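The memristor equations (55)-(56) can be sketched with toy choices for the state dynamics f and conductance law g; both are hypothetical placeholders, as real devices have material-specific laws:

```python
def memristor_step(w, v, dt, f, g):
    """One explicit-Euler update of the generic memristor model
    w_dot = f(w, v);  i = g(w, v) * v.
    The current uses the incoming state; the state is clipped to [0, 1]."""
    i = g(w, v) * v
    w_new = min(1.0, max(0.0, w + dt * f(w, v)))
    return w_new, i

# Toy realization: state drifts with signed voltage, conductance tracks state
f = lambda w, v: 0.5 * v        # hypothetical state dynamics
g = lambda w, v: 0.1 + 0.9 * w  # hypothetical conductance law

w = 0.2
for _ in range(10):             # repeated positive pulses potentiate the weight
    w, i = memristor_step(w, 1.0, 0.1, f, g)
print(round(w, 2))              # -> 0.7
```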
Nanoelectronic devices attempt to implement this effect for memristive computing. Photochromic weights present similar dynamics and can therefore act as "photonic memristors" based on:
Referring now to
For a tunable optical film comprising a phase-change material: when heated above a melting temperature and quickly quenched, its atomic structure can be scrambled, putting the film into an "amorphous" phase that is more transparent. When heated above a crystallization temperature for a longer period of time, the structure can be annealed back into a crystalline form that is more opaque. Refractive index changes are also observed. There is a wide range of phase change materials available, as well as other thin film materials with tunable optical properties such as electrochromic devices, photochromic devices, and epsilon-near-zero materials, for example. Typically, individual films will require thicknesses of order wavelength to act as mirrors/cavities. Mirror film materials should have low absorption in the wavelength ranges of interest, and dielectrics are typically used. For absorption-based tuning, materials such as phase-change materials, chromic molecules (e.g. electrochromic, photochromic, thermochromic), III-V modulator or detector materials, and carrier-based films (e.g. pn junctions, epsilon-near-zero stacks, etc.), or any other suitable material can be utilized.
The spectra from the
with xo + xc = 1, and xo/c(z, t) = No/c(z, t)/N(z)
In nanophotonic structures, the single mode profile can be defined as:
Iλ(r,t) → I0λ(t)·|Eλ(r)|² (65)
and the modal index/loss as:
Dcore·n̂ = Dslot·n̂ (68)

εcoreE⊥,core = εslotE⊥,slot (69)
If εslot < εcore, then E⊥,slot > E⊥,core. Since the slot (and cladding) material will be photochromic, this results in an interaction strength enhancement.
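A one-line numerical illustration of this boundary-condition enhancement, with assumed indices for the core and slot materials:

```python
# Continuity of normal D across a dielectric interface (Eqs. 68-69):
# eps_core * E_core = eps_slot * E_slot, so a low-index slot sees the
# normal field enhanced by the permittivity ratio.
eps_core = 3.48 ** 2   # silicon core (assumed index 3.48)
eps_slot = 1.50 ** 2   # photochromic polymer slot (assumed index 1.50)
enhancement = eps_core / eps_slot
print(round(enhancement, 2))   # -> 5.38
```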
A variety of options were considered when designing a neuromorphic opto-electronic system 600 equivalent to the electrical implementation of Kim et al. Options for the interconnects included an 8-channel WDM, a bidirectional synaptic waveguide, and scalability to more channels in the same waveguide. Options for the weighted sum included bandstop filter synaptic weighting, fast switching (1 μs), a single photodetector and phototransistor, and scalability to more channels in the same area. Options for the nonlinearity included a nonlinear analog circuit with direct OLED modulation (ReLU), fast dynamics (up to 250 MHz), the OLED being the dominant (static) power cost, and a good ON/OFF power ratio.
Advantages of optical interconnects for the OLED-based neuromorphic processor include re-use of the information transmission medium (WDM in waveguides) providing an N:1 advantage, use of a single photodetector 525 for summing channels (one component for many channels) providing an N:1 advantage, and scalability, where the area × fan-in tradeoff is eliminated by stacking optical filters 531, which can increase fan-in without jeopardizing area and thus speed. The advantages over CMOS increase with increasing pixel resolution. Exposure latency can also be improved over CMOS electronics.
The system 600 can be manufactured in any suitable way. In some embodiments, the system 600 is manufactured via a first die (die 1) including a BiCMOS backplane with analog electronics, amplifiers, digital logic and memory, thin-film IMODs weights with MEMS or other thin film filter weight banks, and thin-film OLEDs deposited on top. A second die (die 2) including 3D-printed microoptic lenses and waveguides on a transparent substrate is then manufactured. A flip-chip bond placing die 2 on top of die 1 with micrometric alignment tolerance is then done. Alternatively, in some embodiments, if 3D-printing is non-abrasive, die 2 and flip-chip bonding will not be required.
In some embodiments, limitations on fabrication and/or performance may limit the number of available wavelengths for wavelength division multiplexing (WDM) weighting. When time multiplexing is implemented, the number of wavelengths required for computation can be reduced at the cost of additional time steps per operation. Simple bounds emerge for the general case of arbitrarily weighted convolution kernels. Convolution kernels with redundant weights allow for a reduction in the wavelengths and/or time steps used.
Referring now to
NH ≤ Nλ ≤ WBB·HBB (36)
where NH is the number of pixels in the pixel neighborhood, Nλ is the number of required wavelengths, and WBB, HBB are width and height, respectively of the pixel neighborhood bounding box.
NH ≤ NT ≤ WBB·HBB (37)
where NH is the number of pixels in the pixel neighborhood, NT is the number of required time steps, and WBB, HBB are width and height, respectively of the pixel neighborhood bounding box.
Performing different row operations on different time steps allows for a smaller number of wavelengths, where Nλ=WBB is the number of required wavelengths, NT=HBB is the number of time steps, and WBB, HBB are the width and height of pixel neighborhood bounding box. In some embodiments, it is assumed that WDM weights are adjustable between multiplexing time steps. Furthermore, this is generalizable to any pixel neighborhood. Additionally, in some embodiments, wavelength and time multiplexing can be freely interchanged.
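The bounds (36)-(37) and the mixed WDM/TDM case can be sketched for an arbitrary pixel neighborhood; the cross-shaped kernel below is illustrative:

```python
def multiplexing_bounds(kernel_mask):
    """Bounds (36)-(37): the number of wavelengths (or time steps) needed to
    weight a convolution kernel lies between the number of active pixels N_H
    and the bounding-box area W_BB * H_BB. kernel_mask is a list of rows of
    0/1 flags marking pixels in the neighborhood."""
    rows = [i for i, row in enumerate(kernel_mask) if any(row)]
    cols = [j for row in kernel_mask for j, v in enumerate(row) if v]
    n_h = sum(sum(row) for row in kernel_mask)       # active pixels
    h_bb = rows[-1] - rows[0] + 1                    # bounding-box height
    w_bb = max(cols) - min(cols) + 1                 # bounding-box width
    return n_h, w_bb, h_bb

# Cross-shaped 5-pixel neighborhood in a 3x3 bounding box
mask = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
n_h, w_bb, h_bb = multiplexing_bounds(mask)
print(n_h, w_bb * h_bb)   # -> 5 9   (5 <= N_lambda <= 9, pure WDM)
print(w_bb, h_bb)         # -> 3 3   (mixed: N_lambda = W_BB, N_T = H_BB)
```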
In some embodiments, wavelength multiplexing and time multiplexing can be traded off, allowing greater flexibility in system design where both dimensions behave identically for a convolution operation. In some embodiments, simple bounds exist at the limit of only wavelength division multiplexing (WDM) or time division multiplexing (TDM), as well as the intermediary case of WDM in one spatial dimension and TDM in another spatial dimension. Furthermore, convolution kernels with redundant weights allow for reduction in wavelengths and/or time steps used.
The integrated neuromorphic photonic system 600 of
Combination with Other Materials
The materials described herein as useful for a particular layer in an organic light emitting device may be used in combination with a wide variety of other materials present in the device. For example, emissive dopants disclosed herein may be used in conjunction with a wide variety of hosts, transport layers, blocking layers, injection layers, electrodes and other layers that may be present. The materials described or referred to below are non-limiting examples of materials that may be useful in combination with the compounds disclosed herein, and one of skill in the art can readily consult the literature to identify other materials that may be useful in combination.
Various materials may be used for the various emissive and non-emissive layers and arrangements disclosed herein. Examples of suitable materials are disclosed in U.S. Patent Application Publication No. 2017/0229663, which is incorporated by reference in its entirety.
A charge transport layer can be doped with conductivity dopants to substantially alter its density of charge carriers, which will in turn alter its conductivity. The conductivity is increased by generating charge carriers in the matrix material, and depending on the type of dopant, a change in the Fermi level of the semiconductor may also be achieved. The hole-transporting layer can be doped with p-type conductivity dopants, while n-type conductivity dopants are used in the electron-transporting layer.
A hole injecting/transporting material to be used in the present disclosure is not particularly limited, and any compound may be used as long as the compound is typically used as a hole injecting/transporting material.
An electron blocking layer (EBL) may be used to reduce the number of electrons and/or excitons that leave the emissive layer. The presence of such a blocking layer in a device may result in substantially higher efficiencies and/or longer lifetime as compared to a similar device lacking a blocking layer. Also, a blocking layer may be used to confine emission to a desired region of an OLED. In some embodiments, the EBL material has a higher LUMO (closer to the vacuum level) and/or higher triplet energy than the emitter closest to the EBL interface. In some embodiments, the EBL material has a higher LUMO (closer to the vacuum level) and/or higher triplet energy than one or more of the hosts closest to the EBL interface. In one aspect, the compound used in the EBL contains the same molecule or the same functional groups as one of the hosts described below.
The light emitting layer of the organic EL device of the present disclosure preferably contains at least a metal complex as light emitting material, and may contain a host material using the metal complex as a dopant material. Examples of the host material are not particularly limited, and any metal complexes or organic compounds may be used as long as the triplet energy of the host is larger than that of the dopant. Any host material may be used with any dopant so long as the triplet criterion is satisfied.
A hole blocking layer (HBL) may be used to reduce the number of holes and/or excitons that leave the emissive layer. The presence of such a blocking layer in a device may result in substantially higher efficiencies and/or longer lifetime as compared to a similar device lacking a blocking layer. Also, a blocking layer may be used to confine emission to a desired region of an OLED. In some embodiments, the HBL material has a lower HOMO (further from the vacuum level) and/or higher triplet energy than the emitter closest to the HBL interface. In some embodiments, the HBL material has a lower HOMO (further from the vacuum level) and/or higher triplet energy than one or more of the hosts closest to the HBL interface.
An electron transport layer (ETL) may include a material capable of transporting electrons. The electron transport layer may be intrinsic (undoped), or doped. Doping may be used to enhance conductivity. Examples of the ETL material are not particularly limited, and any metal complexes or organic compounds may be used as long as they are typically used to transport electrons.
In tandem or stacked OLEDs, the charge generation layer (CGL), which is composed of an n-doped layer and a p-doped layer for injection of electrons and holes, respectively, plays an essential role in device performance. Electrons and holes are supplied from the CGL and the electrodes. The electrons and holes consumed in the CGL are refilled by the electrons and holes injected from the cathode and anode, respectively; the bipolar currents then gradually reach a steady state. Typical CGL materials include the n and p conductivity dopants used in the transport layers.
As previously disclosed, OLEDs and other similar devices may be fabricated using a variety of techniques and devices. For example, in OVJP and similar techniques, one or more jets of material are directed at a substrate to form the various layers of the OLED.
Each of the following publications is incorporated by reference herein in its entirety:
The disclosures of each and every patent, patent application, and publication cited herein are hereby incorporated herein by reference in their entirety. While this invention has been disclosed with reference to specific embodiments, it is apparent that other embodiments and variations of this invention may be devised by others skilled in the art without departing from the true spirit and scope of the invention. The appended claims are intended to be construed to include all such embodiments and equivalent variations.
This application claims priority to U.S. provisional application No. 63/280,436 filed on Nov. 17, 2021, U.S. provisional application No. 63/318,957 filed on Mar. 11, 2022, and U.S. provisional application No. 63/318,968 filed on Mar. 11, 2022, each hereby incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
63280436 | Nov 2021 | US
63318957 | Mar 2022 | US
63318968 | Mar 2022 | US