The disclosure relates generally to image sensors, and more particularly to non-scattering nanostructures of silicon pixel image sensors.
The present background section is intended to provide context only, and the disclosure of any concept in this section does not constitute an admission that said concept is prior art.
An image sensor is a semiconductor device that converts light into an electrical signal that can be viewed on a display device. When light passes through a camera lens and hits an image sensor, the sensing pixels of the image sensor convert the light into an electrical charge. The strength of the charge is proportional to the light's intensity. Silicon-based image sensors are semiconductor devices that use silicon to detect and measure light. When a photon of sufficient energy hits a silicon atom, the silicon atom releases an electron, creating an electron-hole pair. The silicon crystal then generates an electron flux in response to the photon flux.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the disclosure and therefore it may contain information that does not constitute prior art.
In various embodiments, the systems and methods described herein include systems, methods, and apparatuses for non-scattering nanostructures of silicon pixel image sensors. In some aspects, the techniques described herein relate to a pixel of an image sensor including: a back reflector formed on a substrate layer of the pixel, the back reflector configured to reflect electromagnetic radiation incident on the pixel; a photodetector formed within the substrate layer of the pixel and configured to generate photoelectrons based on the electromagnetic radiation; a passivation layer formed over the substrate layer and including a thin film dielectric; a nanostructure formed on the passivation layer and configured to allow the electromagnetic radiation, as well as electromagnetic radiation reflected from the back reflector through the substrate, to pass through the nanostructure with zero to minimal scattering, steering the electromagnetic radiation towards the photodetector; and a microlens positioned on the nanostructure, the microlens including at least one of a flat coat layer or a curved lensing layer.
In some aspects, the techniques described herein relate to a pixel, further including an anti-reflection layer adjacent to the nanostructure, wherein the anti-reflection layer includes at least one dielectric thin film.
In some aspects, the techniques described herein relate to a pixel, wherein: the anti-reflection layer includes a first refractive index, and the substrate includes a second refractive index, the second refractive index being higher than the first refractive index.
In some aspects, the techniques described herein relate to a pixel, wherein the back reflector: is configured to reflect at least a portion of the electromagnetic radiation back towards the substrate, and includes at least one metal layer, at least one dielectric layer, or any combination of metal layers and dielectric layers.
In some aspects, the techniques described herein relate to a pixel, wherein: the nanostructure is configured to reflect a photon of the electromagnetic radiation that is reflected off the back reflector through the substrate with zero to minimal scattering, the nanostructure includes a first layer and a second layer, and an aspect of the first layer differs from an aspect of the second layer, the aspect of the first layer including at least one of size, shape, height, or placement.
In some aspects, the techniques described herein relate to a pixel, wherein a width of an element of the nanostructure is less than or equal to one third of the lowest wavelength of the target electromagnetic radiation spectrum.
In some aspects, the techniques described herein relate to a pixel, wherein a height of an element of the nanostructure is greater than or equal to the lowest wavelength of the target electromagnetic radiation spectrum divided by an effective refractive index of the nanostructure medium.
In some aspects, the techniques described herein relate to a pixel, wherein a spacing between a first element and a second element of the nanostructure is less than or equal to half of the lowest wavelength of the target electromagnetic radiation spectrum.
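The width, height, and spacing rules of the preceding paragraphs can be summarized with simple arithmetic. The following Python sketch is illustrative only: the 850 nm minimum target wavelength and the effective refractive index of 2.0 are assumed example values, not parameters taken from this disclosure.

```python
# Illustrative sizing of nanostructure elements from the stated design rules:
#   element width   <= lambda_min / 3
#   element spacing <= lambda_min / 2
#   element height  >= lambda_min / n_eff
# The wavelength and effective index below are assumptions for illustration.

def nanostructure_bounds(lambda_min_nm: float, n_eff: float) -> dict:
    """Return the dimensional bounds implied by the design rules above."""
    return {
        "max_element_width_nm": lambda_min_nm / 3.0,
        "max_element_spacing_nm": lambda_min_nm / 2.0,
        "min_element_height_nm": lambda_min_nm / n_eff,
    }

# Assumed example: an NIR band whose lowest wavelength is 850 nm, n_eff = 2.0.
bounds = nanostructure_bounds(lambda_min_nm=850.0, n_eff=2.0)
```

For these assumed inputs, element widths would be capped near 283 nm and element spacing near 425 nm, consistent with the 50-400 nm feature sizes discussed later in this disclosure.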
In some aspects, the techniques described herein relate to a pixel, wherein: the image sensor includes a second nanostructure of a second pixel of the image sensor, and an aspect of the second nanostructure differs from an aspect of the nanostructure.
In some aspects, the techniques described herein relate to a pixel, wherein the aspect of the second nanostructure includes at least one of an element spacing, an element width, an element height, or an element type, the element type including at least one of a pillar, a hole, or a grating.
In some aspects, the techniques described herein relate to a pixel, wherein the nanostructure is configured to provide a first chief ray angle (CRA) correction and the second nanostructure is configured to provide a second CRA correction different than the first CRA correction based respectively on a location of the pixel and a location of the second pixel in the image sensor.
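The location-dependent chief ray angle (CRA) correction recited above can be illustrated with a simple geometric sketch. The thin-lens model and the 4 mm focal length below are assumptions for illustration, not parameters of this disclosure; the point is only that pixels farther from the optical center see larger chief ray angles and may therefore warrant different nanostructure corrections.

```python
import math

# Illustrative thin-lens geometry (an assumption, not from this disclosure):
# the CRA at a pixel grows with its radial distance from the optical center.

def chief_ray_angle_deg(radial_pos_mm: float, focal_len_mm: float) -> float:
    """CRA seen by a pixel at radial_pos_mm, for an assumed focal length."""
    return math.degrees(math.atan(radial_pos_mm / focal_len_mm))

# A center pixel sees ~0 degrees; an edge pixel needs a larger correction.
cra_center = chief_ray_angle_deg(0.0, 4.0)
cra_edge = chief_ray_angle_deg(2.0, 4.0)
```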
In some aspects, the techniques described herein relate to a pixel, wherein the microlens of the pixel includes a single lens, a two lens microlens array, or a four lens microlens array.
In some aspects, the techniques described herein relate to a pixel, wherein the microlens includes at least one of a microlens nanostructure or curved organic material.
In some aspects, the techniques described herein relate to a pixel, wherein the nanostructure induces a phase shift of π radians on the target electromagnetic radiation spectrum.
In some aspects, the techniques described herein relate to a method of fabricating a pixel of an image sensor, the method including: forming a metal layer on a substrate layer of the pixel and configuring the metal layer to reflect electromagnetic radiation incident on the pixel; forming a photodetector on a silicon layer of the pixel and configuring the photodetector to generate photoelectrons based on the electromagnetic radiation; forming a passivation layer over the silicon layer, the passivation layer including a thin film dielectric; forming a nanostructure on the passivation layer and configuring the nanostructure to allow the electromagnetic radiation to pass through the nanostructure and to steer the electromagnetic radiation linearly towards the photodetector; and forming a microlens on the nanostructure, the microlens including at least one of a flat coat layer or a curved lensing layer.
In some aspects, the techniques described herein relate to a method, further including forming an anti-reflection layer adjacent to the nanostructure, wherein the anti-reflection layer includes at least one dielectric thin film.
In some aspects, the techniques described herein relate to a method, wherein: the anti-reflection layer includes a first refractive index, and the photodetector includes a second refractive index, the second refractive index being higher than the first refractive index.
In some aspects, the techniques described herein relate to an image sensor including: one or more pixels, each pixel of the one or more pixels including: a back reflector including a metal layer formed on a substrate layer of the pixel, the back reflector configured to reflect electromagnetic radiation incident on the pixel; a photodetector formed on a silicon layer of the pixel and configured to generate photoelectrons based on the electromagnetic radiation; a nanostructure formed adjacent to the silicon layer and configured to allow the electromagnetic radiation to pass through the nanostructure and to steer the electromagnetic radiation linearly towards the photodetector; an anti-reflection layer adjacent to the nanostructure, wherein the anti-reflection layer includes at least one dielectric thin film; and a microlens positioned on the nanostructure, the microlens including at least one of a flat coat layer or a curved lensing layer.
In some aspects, the techniques described herein relate to an image sensor, wherein at least one pixel includes a passivation layer formed over the silicon layer and including a thin film dielectric.
In some aspects, the techniques described herein relate to an image sensor, wherein: the anti-reflection layer includes a first refractive index, and the photodetector includes a second refractive index, the second refractive index being higher than the first refractive index.
A computer-readable medium is disclosed. The computer-readable medium can store instructions that, when executed by a computer, cause the computer to perform substantially the same or similar operations as described herein. Similarly, non-transitory computer-readable media, devices, and systems for performing substantially the same or similar operations as described herein are further disclosed.
The systems and methods described herein include multiple advantages and benefits. For example, the systems and methods increase the quantum efficiency (QE) of image sensors (e.g., silicon pixel image sensors), where QE is a measure of the effectiveness of an imaging device in converting incident photons into electrons. Based at least on the nanostructures described herein, the systems and methods increase the signal-to-noise ratio (SNR) of image sensors. Also, the systems and methods reduce or minimize (e.g., eliminate) scattering of photons within a given pixel area (e.g., the area of the photodetector) based at least on the nanostructures described herein. The nanostructures are configured to direct incident light linearly towards the photodetector, thus minimizing the scattering of light that occurs in some systems. Accordingly, the nanostructures minimize crosstalk, in which scattered photons enter a lens of a first photodetector and travel to a second photodetector adjacent to the first photodetector; minimizing crosstalk increases the SNR and QE of a photodetector. Also, the systems and methods include forming nanostructures separately from the photodetector based on a passivation layer. As a result, the photodetector is not affected by the chemical etching of the nanostructures, thus avoiding the dark states that cause dark noise or dark counts within a photodetector when nanostructures are formed by chemically etching the photodetector itself rather than a separate layer. Also, the nanostructures described herein are configured to trap a target electromagnetic spectrum within the pixel area based on the non-scattering properties of the nanostructures and a metal-backed reflector layer.
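The QE and SNR relationships invoked above can be illustrated numerically. The photon, electron, and dark counts in the following Python sketch are assumed example values, not measured data from this disclosure; the shot-noise-limited SNR model is likewise a standard simplification assumed here for illustration.

```python
import math

# Illustrative arithmetic for the QE and SNR benefits discussed above.
# All counts are assumed example values, not measurements.

def quantum_efficiency(electrons_out: float, photons_in: float) -> float:
    """QE: fraction of incident photons converted into collected electrons."""
    return electrons_out / photons_in

def shot_limited_snr(signal_electrons: float, dark_electrons: float = 0.0) -> float:
    """Shot-noise-limited SNR; dark counts add noise without adding signal."""
    return signal_electrons / math.sqrt(signal_electrons + dark_electrons)

qe = quantum_efficiency(electrons_out=8000, photons_in=10000)
snr_clean = shot_limited_snr(8000)        # no dark counts
snr_dark = shot_limited_snr(8000, 2000)   # dark counts degrade SNR
```

Under this model, eliminating dark counts (e.g., by etching the nanostructures into a separate layer rather than the photodetector) raises the SNR for the same signal level.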
The above-mentioned aspects and other aspects of the present systems and methods will be better understood when the present application is read in view of the following figures in which like numbers indicate similar or identical elements. Further, the drawings provided herein are for purpose of illustrating certain embodiments only; other embodiments, which may not be explicitly illustrated, are not excluded from the scope of this disclosure.
These and other features and advantages of the present disclosure will be appreciated and understood with reference to the specification, claims, and appended drawings wherein:
While the present systems and methods are susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described. The drawings may not be to scale. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the present systems and methods to the particular form disclosed, but to the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present systems and methods as defined by the appended claims.
The details of one or more embodiments of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
Various embodiments of the present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments are shown. Indeed, the disclosure may be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. The term “or” is used herein in both the alternative and conjunctive sense, unless otherwise indicated. The terms “illustrative” and “example” are used herein to indicate examples, with no implication as to quality level. Like numbers refer to like elements throughout. Arrows in each of the figures depict bi-directional data flow and/or bi-directional data flow capabilities. The terms “path,” “pathway” and “route” are used interchangeably herein.
Embodiments of the present disclosure may be implemented in various ways, including as computer program products that comprise articles of manufacture. A computer program product may include a non-transitory computer-readable storage medium storing applications, programs, program components, scripts, source code, program code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like (also referred to herein as executable instructions, instructions for execution, computer program products, program code, and/or similar terms used herein interchangeably). Such non-transitory computer-readable storage media include all computer-readable media (including volatile and non-volatile media).
Embodiments of the present disclosure are described below with reference to block diagrams and flowchart illustrations. Thus, it should be understood that each block of the block diagrams and flowchart illustrations may be implemented in the form of a computer program product, an entirely hardware embodiment, a combination of hardware and computer program products, and/or apparatus, systems, computing devices, computing entities, and/or the like carrying out instructions, operations, steps, and similar words used interchangeably (for example the executable instructions, instructions for execution, program code, and/or the like) on a computer-readable storage medium for execution. For example, retrieval, loading, and execution of code may be performed sequentially, such that one instruction is retrieved, loaded, and executed at a time. In some example embodiments, retrieval, loading, and/or execution may be performed in parallel, such that multiple instructions are retrieved, loaded, and/or executed together. Thus, such embodiments can produce specifically configured machines performing the steps or operations specified in the block diagrams and flowchart illustrations. Accordingly, the block diagrams and flowchart illustrations support various combinations of embodiments for performing the specified instructions, operations, or steps.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment disclosed herein. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “according to one embodiment” (or other phrases having similar import) in various places throughout this specification may not be necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In this regard, as used herein, the word “exemplary” means “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not to be construed as necessarily preferred or advantageous over other embodiments. Also, depending on the context of discussion herein, a singular term may include the corresponding plural forms and a plural term may include the corresponding singular form. Similarly, a hyphenated term (e.g., “two-dimensional,” “pre-determined,” “pixel-specific,” etc.) may be occasionally interchangeably used with a corresponding non-hyphenated version (e.g., “two dimensional,” “predetermined,” “pixel specific,” etc.), and a capitalized entry (e.g., “Counter Clock,” “Row Select,” “PIXOUT,” etc.) may be interchangeably used with a corresponding non-capitalized version (e.g., “counter clock,” “row select,” “pixout,” etc.). Such occasional interchangeable uses shall not be considered inconsistent with each other.
It is further noted that various figures (including component diagrams) shown and discussed herein are for illustrative purposes only and are not drawn to scale. Similarly, various waveforms and timing diagrams are shown for illustrative purposes only. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, if considered appropriate, reference numerals have been repeated among the figures to indicate corresponding and/or analogous elements.
The terminology used herein is for the purpose of describing some example embodiments only and is not intended to be limiting of the claimed subject matter. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood that when an element or layer is referred to as being “on,” “connected to” or “coupled to” another element or layer, it can be directly on, connected, or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly connected to” or “directly coupled to” another element or layer, there are no intervening elements or layers present. Like numerals refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The terms “first,” “second,” etc., as used herein, are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless explicitly defined as such. Furthermore, the same reference numerals may be used across two or more figures to refer to parts, components, blocks, circuits, units, or modules having the same or similar functionality. Such usage is, however, for simplicity of illustration and ease of discussion only; it does not imply that the construction or architectural details of such components or units are the same across all embodiments or that such commonly-referenced parts/modules are the only way to implement some of the example embodiments disclosed herein.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this subject matter belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the term “module” refers to any combination of software, firmware and/or hardware configured to provide the functionality described herein in connection with a module. For example, software may be embodied as a software package, code and/or instruction set or instructions, and the term “hardware,” as used in any implementation described herein, may include, for example, singly or in any combination, an assembly, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, but not limited to, an integrated circuit (IC), system on chip (SoC), an assembly, and so forth.
The following description is presented to enable one of ordinary skill in the art to make and use the subject matter disclosed herein and to incorporate it in the context of particular applications. While the following is directed to specific examples, other and further examples may be devised without departing from the basic scope thereof.
Various modifications, as well as a variety of uses in different applications, will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of embodiments. Thus, the subject matter disclosed herein is not intended to be limited to the embodiments presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In the description provided, numerous specific details are set forth in order to provide a more thorough understanding of the subject matter disclosed herein. It will, however, be apparent to one skilled in the art that the subject matter disclosed herein may be practiced without necessarily being limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the subject matter disclosed herein.
All the features disclosed in this specification (e.g., any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
Various features are described herein with reference to the figures. It should be noted that the figures are only intended to facilitate the description of the features. The various features described are not intended as an exhaustive description of the subject matter disclosed herein or as a limitation on the scope of the subject matter disclosed herein. Additionally, an illustrated example need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular example is not necessarily limited to that example and can be practiced in any other examples even if not so illustrated, or if not so explicitly described.
Furthermore, any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112, Paragraph 6. In particular, the use of “step of” or “act of” in the Claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph 6.
It is noted that, if used, the labels left, right, front, back, top, bottom, forward, reverse, clockwise and counterclockwise have been used for convenience purposes only and are not intended to imply any particular fixed direction. Instead, the labels are used to reflect relative locations and/or directions between various portions of an object.
Any data processing may include data buffering, aligning incoming data from multiple communication lanes, forward error correction (“FEC”), and/or other functions. For example, data may be first received by an analog front end (AFE), which prepares the incoming signal for digital processing. The digital portion (e.g., DSPs) of the transceivers may provide skew management, equalization, reflection cancellation, and/or other functions. It is to be appreciated that the processes described herein can provide many benefits, including saving both power and cost.
Moreover, the terms “system,” “component,” “module,” “interface,” “model,” or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Unless explicitly stated otherwise, each numerical value and range may be interpreted as being approximate, as if the word “about” or “approximately” preceded the value or range. Signals and corresponding nodes or ports might be referred to by the same name and are interchangeable for purposes here.
While embodiments may have been described with respect to circuit functions, the embodiments of the subject matter disclosed herein are not limited. Possible implementations may be embodied in a single integrated circuit, a multi-chip module, a single card, system-on-a-chip, or a multi-card circuit pack. As would be apparent to one skilled in the art, the various embodiments might also be implemented as part of a larger system. Such embodiments may be employed in conjunction with, for example, a digital signal processor, microcontroller, field-programmable gate array, application-specific integrated circuit, or general-purpose computer.
As would be apparent to one skilled in the art, various functions of circuit elements may also be implemented as processing blocks in a software program. Such software may be employed in, for example, a digital signal processor, microcontroller, or general-purpose computer. Such software may be embodied in the form of program code embodied in tangible media, such as magnetic recording media, optical recording media, solid-state memory, floppy diskettes, CD-ROMs, hard drives, or any other non-transitory machine-readable storage medium, such that, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the subject matter disclosed herein. When implemented on a general-purpose processor, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits. Described embodiments may also be manifest in the form of a bit stream or other sequence of signal values electrically or optically transmitted through a medium, stored magnetic-field variations in a magnetic recording medium, etc., generated using a method and/or an apparatus as described herein.
Some aspects of the systems and methods described herein may be configured to detect electromagnetic radiation. The detected electromagnetic radiation may include at least a partial range of infrared light, near-infrared light, visible light, ultraviolet light, etc. A near-infrared (NIR) sensitive silicon (Si) photodetector (PD) may be used for active 3D imaging (time of flight (ToF), structured light, augmented reality (AR)), machine vision applications (internet of things (IoT), security, robotics) and automotive applications (advanced driver-assistance system (ADAS), in-cabin sensors). A microlens may direct incident light towards the photodetector. The photodetector (e.g., photodiode) may be configured to convert photon energy into an electrical signal.
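The time-of-flight (ToF) application mentioned above rests on simple range arithmetic: a direct-ToF sensor estimates distance from the round-trip travel time of a light pulse. The following Python sketch is illustrative only; the 10 ns round-trip time is an assumed example value.

```python
# Illustrative direct-ToF range arithmetic: distance = c * t_round_trip / 2
# (divide by 2 because the pulse travels out and back).

SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_s: float) -> float:
    """Distance implied by a measured round-trip time of a light pulse."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# An assumed 10 ns round trip corresponds to roughly 1.5 m.
d = tof_distance_m(10e-9)
```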
A microlens can be a relatively small lens that is usually less than a millimeter in diameter. Some microlenses may be made of a variety of organic materials, plastic materials, polymers, etc. Polymer microlens arrays (MLAs) may be used in a variety of applications, including optical sensors, 3D displays, lighting devices, and 4D light fields. MLAs can increase the light collection efficiency of image sensors. MLAs can collect and focus light onto the photosensitive areas of the image sensor.
The systems and methods described herein may include nanostructures. In some examples, the systems and methods may include a microlens configured to focus light towards a nanostructure. Nanostructures (e.g., nanostructured patterns) may include nanoscale structures created on the surface of a substrate. Nanopatterning can be the process of creating these patterns, which may have at least one dimension smaller than 100 nanometers (nm). Nanostructures may include materials with at least one dimension in the nanoscale range (<100 nm). In some cases, nanostructures of the systems and methods described herein may include at least one dimension in the range of 50 to 400 nm. Nanostructures can be made of a single material or multiple materials. Nanostructures can have different shapes, including square, round, oval, spherical, triangular, pyramidal (e.g., 3-sided, 4-sided, etc.), rod, hexagonal, cubic, amorphous, needle-like, crystalline, and/or any given shape. Nanostructures can include multiple layers (e.g., stacked vertically). These layers can be similar and/or different in size, shape, height, placement, structure, etc.
In some cases, at least a portion of the nanostructure may include metamaterials. In some cases, nanostructures may be referred to as metamaterials. Metamaterials are artificial materials that interact with light and other forms of energy in ways that are not found in nature. Metamaterials can be configured into a patterned array not found in nature. These patterns can be of materials at the nanoscale and/or microscale, and may include a repeating pattern, a non-repeating pattern, a periodic pattern, and/or an aperiodic pattern. The shape, geometry, size, orientation, and arrangement of the materials may produce a number of effects, especially on the surface of a thin film, also known as a metasurface. A metamaterial pattern may use elements and patterns having a size that varies within a fraction of a target wavelength. A metamaterial pattern can vary depending on the target wavelength. Metamaterials can be 3D structures made up of at least two different materials. Metamaterials may be designed around unique micro- and nanoscale patterns or structures called “meta-atoms.” These structures provide optical properties that can be shaped on length scales below the wavelength of light. Metasurfaces can transmit or reflect light to focus and steer it, as well as perform other types of wave manipulation. Metamaterials can also interact strongly with light, significantly changing the properties of light over a subwavelength thickness. Metamaterials can be used in many applications, including anechoic chambers, scattering control, photodetectors, microbolometers, solar cells, microwave energy harvesting, sensor and radar cross-section reduction applications, etc.
An electromagnetic metasurface may refer to a kind of artificial sheet material with sub-wavelength thickness. Metasurfaces can be either structured or unstructured, with subwavelength-scale patterns in the horizontal dimensions. In electromagnetic theory, metasurfaces modulate the behaviors of electromagnetic waves through specific boundary conditions, rather than through the constitutive parameters in three-dimensional (3D) space commonly exploited in natural materials and metamaterials. Metasurfaces may also refer to the two-dimensional counterparts of metamaterials. There can also be 2.5D metasurfaces that involve the third dimension as an additional degree of freedom for tailoring their functionality.
In some examples, the systems and methods may include one or more thin film semiconductors. A thin film semiconductor is a layer of semiconductor material that is grown or deposited on a substrate. Thin films are typically a few nanometers to a few microns thick. For example, thin films can range from a few atoms to 100 micrometers thick. A thin film layer that is less than 2 micrometers thick is considered relatively thin, while a thin film layer that is more than 20 micrometers thick is considered relatively thick. Thin film deposition is the process of creating and applying thin film coatings to a substrate material. Deposition processes include vaporization and condensation (e.g., vaporizing the solid material and condensing it onto the substrate), gas or vapor reaction (e.g., the gas or vapor reacts with the substrate to create a solid thin film), thermal evaporation (e.g., to deposit pure metals, non-metals, oxides, and nitrides), magnetron sputtering (e.g., using a physical phenomenon to expel microscopic particles of solid materials from their surface), physical vapor deposition (PVD), chemical vapor deposition (CVD), and electron beam (e-beam) evaporation. The coatings can be made of many different materials, such as metals, oxides, and compounds. In thin film deposition, material is added to the substrate in the form of thin film layers. These layers can be structural or act as spacers that can later be removed. Deposition techniques may control layer thickness within a few tens of nanometers. Thin film deposition can be done by processing above the substrate surface, typically a silicon wafer with a thickness of 200-700 μm.
In some examples, the nanostructures of the systems and methods may modify or modulate the phase of incident electromagnetic radiation (e.g., incident light). Phase modulation is a process that changes the phase angle of a carrier wave to match a data signal. This adjustment allows for efficient data transmission over different mediums. Phase modulators may manipulate the phase of electromagnetic radiation (e.g., heat, light). Phase modulation of electromagnetic wavelengths may include focusing, steering, scattering, phase shifting, polarizing, etc. Phase-modulating may reduce aberrations, improve efficiency, alter the polarization of electromagnetic radiation, and/or change the field of view, among other benefits. Such phase-modulating may allow for optimization of a pixel array (e.g., image sensor array) at the sensor level, optimizing the electromagnetic radiation for a specific location or predetermined target wavelength.
A metasurface thin film thus may use nanoscale and microscale elements to alter electromagnetic waves traveling through the thin film. By careful choice of patterns, a metasurface may create local conditions within the thin film layer that alter the index of refraction, and thus alter the magnitude of interaction with electromagnetic radiation passing through the thin film. By doing so, the phases of electromagnetic radiation passing through the thin film may be modulated. Because some of the metasurface elements are subwavelength, the metasurface elements may produce changes at a much smaller size scale than conventional optics. Additionally, although refractive indexes may be used, changes in refractive index may be related to differences in the dielectric constant. As such, a high-index material may have a higher dielectric constant, while a low-index material may have a lower dielectric constant.
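The relation between refractive index and dielectric constant noted above can be illustrated with a minimal sketch, assuming non-magnetic materials at optical frequencies, where n ≈ √εr. The dielectric-constant values below are approximate and used only for illustration:

```python
import math

def refractive_index(eps_r: float) -> float:
    """Refractive index of a non-magnetic material from its relative
    dielectric constant: n = sqrt(eps_r)."""
    return math.sqrt(eps_r)

# Approximate optical-frequency dielectric constants (illustrative values only):
eps_si = 12.1    # crystalline silicon, a high-index material
eps_sio2 = 2.1   # silicon dioxide, a low-index material

print(f"n(Si)   ~ {refractive_index(eps_si):.2f}")    # ~3.48
print(f"n(SiO2) ~ {refractive_index(eps_sio2):.2f}")  # ~1.45
```

As the sketch shows, the higher-dielectric-constant material (silicon) yields the higher refractive index, consistent with the high-index/low-index pairing used throughout this disclosure.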
One or more aspects of the systems and methods described herein may be based on a chief ray angle (CRA). The CRA may be the angle between the optical axis and the lens chief ray. The lens chief ray is the ray from the object point that passes through the center of the aperture stop of the optical system, along the line between the entrance pupil's center and the object point. The CRA can affect image quality factors, such as color shading and vignetting. The magnitude of impact from CRA mismatch can be approximated using the difference of squares. The CRA, as specified by the sensor manufacturer, can depend on the construction of the sensor.
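The CRA at a pixel can be approximated with a simple geometric sketch, assuming chief rays diverge from the center of the exit pupil toward the image plane. The function name and the 10 mm exit-pupil distance below are hypothetical, chosen only to show that CRA grows with image height:

```python
import math

def chief_ray_angle_deg(image_height_mm: float, exit_pupil_distance_mm: float) -> float:
    """Approximate CRA at a given image height, modeling the chief ray as a
    straight line from the exit pupil center to the pixel."""
    return math.degrees(math.atan2(image_height_mm, exit_pupil_distance_mm))

# Hypothetical lens: pixels farther from the optical axis see a larger CRA.
for r in (0.0, 1.0, 2.0, 3.0):  # image height in mm
    print(f"r = {r} mm -> CRA ~ {chief_ray_angle_deg(r, 10.0):.1f} deg")
```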
The systems and methods described herein (e.g., nanostructures, anti-reflective layers, lenses, microlenses, metalenses, metamaterials) may implement one or more types of optical gratings. An optical grating, also known as a diffraction grating, may be an optical element that splits electromagnetic radiation (e.g., heat, light) into multiple beams that move in different directions. Optical gratings may be made up of parallel lines or grooves that are engraved on a surface. A type of grating can have many evenly spaced parallel slits. When polychromatic light hits the grating, each wavelength of light is diffracted at a slightly different angle. For example, red light is bent further than blue light because red light has a longer wavelength. Thus, a diffraction grating breaks up the colors of white light. Diffraction gratings are used in many scientific and technological applications. Diffraction gratings are typically better than prisms because they are more efficient, give a linear dispersion of wavelengths, and are free of absorption effects. Diffraction gratings work for both transmission and reflection of electromagnetic radiation. Examples of natural diffraction gratings include butterfly wings, the Australian opal, and the feathers of some birds like hummingbirds. Diffraction gratings may include binary gratings and/or blazed gratings.
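The wavelength-dependent bending described above follows the standard grating equation, d·sin(θm) = m·λ, at normal incidence. The following sketch uses a hypothetical 1600 nm grating period to show that the longer (red) wavelength diffracts at a larger angle than the shorter (blue) one:

```python
import math

def diffraction_angle_deg(period_nm: float, wavelength_nm: float, order: int = 1) -> float:
    """Diffraction angle of order m from the grating equation
    d * sin(theta_m) = m * lambda, assuming normal incidence."""
    s = order * wavelength_nm / period_nm
    if abs(s) > 1.0:
        raise ValueError("order is evanescent: no propagating diffraction angle")
    return math.degrees(math.asin(s))

# Hypothetical 1600 nm period grating, first order:
print(f"blue (450 nm): {diffraction_angle_deg(1600, 450):.1f} deg")
print(f"red  (650 nm): {diffraction_angle_deg(1600, 650):.1f} deg")  # larger angle
```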
Binary gratings may include a periodic structure that causes optical amplitude and/or phase changes. A binary grating is a periodic structure that has a binary change in phase or amplitude within a single period. Binary line gratings are often used in spectrometers. Binary gratings may have grating periods greater than one wavelength, but use subwavelength structures within one period. This allows binary gratings to achieve high efficiencies of 70-80%. Binary phase gratings are similar to binary amplitude gratings, but the opaque regions are replaced by etched grooves. Diffraction gratings separate the wavelength components of electromagnetic radiation (e.g., heat, light) by directing each wavelength into a unique output angle. Diffraction gratings such as binary gratings are commonly used in monochromators and spectrometers, but also have other applications, such as optical encoders and wavefront measurement. Binary line gratings can diffract electromagnetic radiation in several directions due to the symmetry of the structure, whereas blazed and slanted gratings may diffract in a single direction.
Blazed gratings are a type of diffraction grating optimized for maximum efficiency in a given diffraction order. Blazed gratings are also known as echelette gratings or saw-tooth gratings. Blazed gratings have a triangular, sawtooth-shaped cross section, forming a step structure. The steps are tilted at the blaze angle with respect to the grating surface. The blaze angle is optimized to maximize efficiency for the wavelength of the light used. Blazed gratings operate at a specific wavelength, known as the blaze wavelength. This means that the majority of the optical power will be in the designed diffraction order while minimizing power lost to other orders. Blazed gratings are relatively efficient and are a good choice for applications with high signal strength constraints.
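The blaze wavelength mentioned above can be sketched using the common Littrow configuration, where m·λB = 2·d·sin(θB). The 1200 nm period and 15° blaze angle below are hypothetical values chosen for illustration:

```python
import math

def blaze_wavelength_nm(period_nm: float, blaze_angle_deg: float, order: int = 1) -> float:
    """Blaze wavelength in the Littrow configuration:
    m * lambda_B = 2 * d * sin(theta_B)."""
    return 2.0 * period_nm * math.sin(math.radians(blaze_angle_deg)) / order

# Hypothetical grating: 1200 nm period, blazed at 15 degrees, first order:
print(f"blaze wavelength ~ {blaze_wavelength_nm(1200, 15.0):.0f} nm")  # ~621 nm
```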
The fill factor of an image sensor is the ratio of the light-sensitive area of a pixel to the total pixel area. A higher fill factor means the sensor is more sensitive to light (e.g., more efficient), while a lower fill factor means the sensor is less sensitive and requires longer exposure times. Fill factor ratios vary by device, but generally range from 30-80% of the pixel area.
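The fill-factor ratio is straightforward to compute; the 2 μm pixel and 1.6 μm photosensitive region below are hypothetical dimensions used only to show the arithmetic:

```python
def fill_factor(active_area_um2: float, pixel_area_um2: float) -> float:
    """Fill factor = light-sensitive area / total pixel area."""
    return active_area_um2 / pixel_area_um2

# Hypothetical 2 um x 2 um pixel with a 1.6 um x 1.6 um photosensitive region:
ff = fill_factor(1.6 * 1.6, 2.0 * 2.0)
print(f"fill factor = {ff:.0%}")  # 64%, within the typical 30-80% range
```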
In some examples, the nanostructures of the systems and methods described herein may include holes (e.g., hole-based thin film), pillars (e.g., pillar-based thin film), optical gratings, etc. A hole-based thin film is a thin film with an array of nanometer and/or micrometer-sized holes. A hole is a void area with air or a relatively low refractive index medium surrounded by a relatively high refractive index medium. Electromagnetic radiation (e.g., infrared, thermal) may travel unimpeded through such holes. A hole-based nanostructure may use an array of nanometer- and/or micrometer-sized holes to modulate electromagnetic radiation (e.g., focus, steer, scatter, phase shift, polarize infrared wavelengths). A pillar-based thin film is a thin film with nanostructures and/or microstructures made of isolated pillars (e.g., pillars of high refractive index surrounded by a low refractive index medium). Thin films can be layers of material deposited on a bulk substrate to give the substrate additional properties. Thin films can be used in a variety of surface coatings to alter opto-electronic properties or increase wear or corrosion resistance. A pillar-based nanostructure may include a surface with a semiconductor layer. The semiconductor may be etched with an array of pillars that are nanometers to tens of microns high.
The techniques described herein include logic to provide non-scattering nanostructures of silicon pixel image sensors. The logic includes any combination of hardware (e.g., at least one memory, at least one processor), logical circuitry, firmware, and/or software to fabricate, incorporate, and/or implement non-scattering nanostructures of silicon pixel image sensors.
In some examples, the non-scattering nanostructures of silicon pixel image sensors may include low refractive index components and/or high refractive index components. The systems and methods provide non-scattering nanostructures with subwavelength elements in the periodicity and/or in the dimensions. In some cases, the systems and methods provide nanostructures that do not scatter and/or do not diffract light. Additionally, or alternatively, the systems and methods provide nanostructures that scatter or diffract light with minimal scattering angle (e.g., zeroth diffraction order). In some cases, the non-scattering nanostructures of silicon pixel image sensors may include low refractive index nanometer structures, low refractive index micrometer structures, high refractive index nanometer structures, and/or high refractive index micrometer structures.
In some examples, the nanostructures described herein may be configured to focus, deflect, guide, bend, and/or route electromagnetic radiation. In some cases, the nanostructures include an antireflective surface. The antireflective surface may be a series of dielectric or organic materials such as photoresists, electron-beam resists, germanium, zinc selenide, zinc-sulfide, etc. In some cases, the nanostructures may provide up to a Pi radian phase shift. The phase profile of a given nanostructure may vary spatially within a given pixel and/or vary spatially from a first nanostructure of a first pixel to a second nanostructure of a second pixel.
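The phase shift of up to π radians noted above can be sketched with the usual thin-element approximation, Δφ = 2π·(n − n_ambient)·h / λ, where h is the element height. The a-Si index of 3.5, the air ambient, and the 940 nm target wavelength below are assumptions for illustration only:

```python
import math

def phase_shift_rad(n_element: float, n_ambient: float,
                    height_nm: float, wavelength_nm: float) -> float:
    """Phase delay of light crossing an element of index n_element,
    relative to the surrounding medium (thin-element approximation):
    dphi = 2*pi*(n_element - n_ambient)*h / lambda."""
    return 2.0 * math.pi * (n_element - n_ambient) * height_nm / wavelength_nm

# Element height of a hypothetical a-Si pillar (n ~ 3.5) in air that
# produces a pi-radian shift at a 940 nm target wavelength:
h = 940.0 / (2.0 * (3.5 - 1.0))  # solve dphi = pi for h
print(f"h ~ {h:.0f} nm, phase = {phase_shift_rad(3.5, 1.0, h, 940.0) / math.pi:.2f} pi")
```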
The nanostructures may include a structure/surface with one or more layers. In some cases, the nanostructures may include a structure/surface with two or more different materials. In some examples, the nanostructures may focus the incoming light with a predefined focal length. In some cases, the nanostructures may partially or entirely cover a photodetector (e.g., partially or entirely cover a pixel of an image sensor).
Some silicon-based photodetectors may include lenses with nanostructures. Silicon photodetectors can be used for electromagnetic radiation detection in one or more spectrums (e.g., infrared, near-infrared, visible, ultraviolet, x-rays, etc.). Silicon photodetectors can absorb green and blue wavelengths at relatively thin configurations, but may need to be thicker to absorb red wavelengths, and thicker still to absorb infrared wavelengths. Accordingly, the dimensions of a silicon photodetector may be configured to increase the photoelectron count.
Some silicon photodetectors may be configured with scattering elements (e.g., scattering nanostructures) configured to scatter light to increase photoelectron count at the photodetector. However, scattering elements can cause crosstalk where some of the photons being scattered at a first photodetector travel to other photodetectors adjacent to the first photodetector.
In some cases, scattering nanostructures may be formed based on chemical etching. The scattering nanostructures may be chemically etched within the pixel area (e.g., photodetector region), thus exposing the photodetector to the chemical etching. However, exposing a silicon-based photodetector to chemical etching can result in dark states that cause dark noises (e.g., dark current, dark counts) within the photodetector, further decreasing the sensitivity or signal-to-noise ratio of the photodetector. Also, the configuration of some pixels increases parasitic light sensitivity, involves higher process costs, and lowers yields.
Based on the systems and methods described herein, an image sensor may include a semiconductor substrate that includes multiple pixel regions (e.g., an image sensing pixel array). A pixel may include a first surface (e.g., a substrate layer), and a second surface (e.g., silicon layer, nanostructure layer, etc.) opposing the first surface. In some cases, the image sensor may include multiple transistors adjacent to the first surface of the semiconductor substrate in each of the pixel regions (e.g., in each pixel). In some cases, the image sensor may include a metallic back reflector in the first surface. In some examples, the image sensor may include non-scattering nanostructures on the second surface in the pixel region, where the non-scattering nanostructures may be configured to linearize a direction (e.g., minimize scattering) of incident light. In some cases, the non-scattering structures may be composed of low-index and/or high-index materials. For example, the non-scattering nanostructures may be fabricated with silicon (Si) (e.g., crystalline, amorphous, or poly-crystalline silicon (c-Si, a-Si, p-Si)), germanium (Ge), silicon oxide (SiO), silicon nitride (SixNy), silicon oxynitride (SixOyNz), hafnium oxide (HfO), aluminum oxide (AlO), titanium oxide (TiO), and/or airgap. In some cases, at least a first portion of a nanostructure may include a relatively high refractive index material (e.g., Si, Ge, TiO, SiN) and at least a second portion of the nanostructure may include a relatively low refractive index material (e.g., SiO, AlO, HfO). In some examples, the high refractive index portion may be positioned adjacent to or beside the low refractive index portion in relation to incident light. The non-scattering nanostructures may be etched. In some cases, the non-scattering nanostructures may include features that extend into the pixel area.
The non-scattering nanostructures may include multiple layers of nanostructures to perform the function of reflecting light back to the photodetector for a broad range of wavelengths and/or for a broad range of incident light angles. The nanostructures can be formed as square, round, oval, and/or any arbitrary shapes. The size of the non-scattering nanostructures may be subwavelength relative to the incident light. A maximum of the size of the non-scattering nanostructures (e.g., element width, element diameter, element length, element height) may be a fraction (e.g., half, third, fourth, etc.) of the wavelength of a target spectrum (e.g., a fraction of the lowest wavelength of the target spectrum). Based on the systems and methods, a pixel may include a passivation layer made of a thin film (e.g., one or more dielectric thin film layers). The passivation layer may be fabricated with silicon (Si), silicon oxide (SiOx), silicon nitride (SixNy), silicon oxynitride (SixOyNz), hafnium oxide (HfO), aluminum oxide (AlO), and/or titanium oxide (TiO). In some cases, the second surface of a given pixel may include one or more dielectric thin films following the non-scattering structures for antireflective purposes. In some cases, the refractive index of the second surface may be lower than the refractive index of the pixel area (e.g., photodetector, semiconductor substrate). In some implementations, the systems and methods may include a trench insulator (e.g., shallow trench insulator, relatively shallow trench insulator) defining an active pixel area between the substrate and a gate structure. In some cases, an insulating layer may fill the trench. The insulating layer may be formed in the semiconductor substrate. In some cases, the trench may extend from the second surface to the first surface partially or entirely.
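The subwavelength sizing rule above can be sketched as a simple upper bound: a maximum element dimension equal to a chosen fraction of the shortest wavelength in the target spectrum. The near-infrared band below is a hypothetical target spectrum used only for illustration:

```python
def max_element_size_nm(wavelengths_nm, fraction: float = 0.5) -> float:
    """Upper bound on a non-scattering element dimension: a fraction
    (e.g., 1/2, 1/3, 1/4) of the shortest wavelength in the target spectrum."""
    return fraction * min(wavelengths_nm)

# Hypothetical near-infrared target spectrum, 800-1000 nm:
nir_band = [800.0, 850.0, 940.0, 1000.0]
print(max_element_size_nm(nir_band, fraction=0.5))   # 400.0 (half of 800 nm)
print(max_element_size_nm(nir_band, fraction=0.25))  # 200.0 (fourth of 800 nm)
```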
In some cases, the trench insulator may extend at least partially through the pixel area (e.g., from a top of the pixel area towards the nanostructure to a bottom of the pixel area towards the back reflector). In some cases, a pixel of the image sensor may include a single microlens or a microlens array (e.g., two lens microlens array, four lens microlens array, etc.). The microlens may be formed on the second surface of the semiconductor substrate. In some cases, the microlens may be made of a set of nanostructures or curved organic materials (e.g., curved organic plastics, polymers, acrylics, polycarbonates, etc.). In some cases, the fabrication of a pixel of an image sensor may include at least one of a gate structure, a semiconductor substrate formed over the gate structure, an inter-layer dielectric layer (e.g., a second inter-layer dielectric layer, with a first inter-layer dielectric layer part of the gate structure layer) formed over the semiconductor substrate, a passivation layer formed over the inter-layer dielectric, at least one nanostructure layer formed over the passivation layer, an antireflective layer formed over the at least one nanostructure layer, and a microlens formed over the antireflective layer. In some cases, the gate structure may include a back end of line with metal interconnect layers that is patterned on a wafer (e.g., on a front end of line layer). In some cases, the gate structure may include an inter-layer dielectric (e.g., a first inter-layer dielectric layer deposited on or adjacent to the back end of line). In some cases, the back reflector may be incorporated in the gate structure (e.g., in the back end of line layer or inter-layer dielectric layer). The photodetector may be formed in or on the semiconductor substrate. In some cases, the second inter-layer dielectric adjacent to the passivation layer may include at least one metal shield (e.g., back-side metal shield).
Based on the systems and methods described herein, quantum efficiency (QE) is increased (e.g., greater than 50% quantum efficiency with incident light at or relatively near near-infrared wavelengths). The systems and methods provide a relatively higher QE based on a relatively thinner photodetector (e.g., from 0.01 to 10 micrometers), where the thickness of the photodetector is measured from a top of the photodetector towards the nanostructure to a bottom of the photodetector towards the back reflector (e.g., relatively equivalent to the substrate), with the incident light traveling in a linear direction in line with the microlens, the nanostructure, the photodetector, and the back reflector. In some cases, the systems and methods may include a photodetector that is part of or is a sub-section of the substrate. For example, the systems and methods may include a substrate that is Si, and a part of the substrate may be doped to make a photodetector out of the Si substrate.
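The benefit of the back reflector for a thin photodetector can be sketched with a Beer-Lambert absorption model that ignores interface reflections and carrier-collection losses. The absorption coefficient below is a hypothetical value for silicon near the near-infrared, used only to show that doubling the optical path roughly doubles the absorbed fraction when absorption per pass is small:

```python
import math

def absorbed_fraction(alpha_per_um: float, thickness_um: float, passes: int = 1) -> float:
    """Fraction of light absorbed in a layer per Beer-Lambert,
    1 - exp(-alpha * d * passes), ignoring reflections and collection losses."""
    return 1.0 - math.exp(-alpha_per_um * thickness_um * passes)

alpha = 0.02  # hypothetical Si absorption coefficient near NIR, per micrometer
d = 3.0       # hypothetical thin photodetector thickness, micrometers

single = absorbed_fraction(alpha, d, passes=1)
double = absorbed_fraction(alpha, d, passes=2)  # back reflector folds the path
print(f"single pass: {single:.1%}, with back reflector: {double:.1%}")
```

Trapping the reflected light between the back reflector and the non-scattering nanostructure extends this beyond two passes, which is how a thin photodetector can approach the QE of a much thicker one.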
Based on the systems and methods, crosstalk (e.g., photons incident at a first pixel being detected at an adjacent second pixel) and dark noise are minimized.
In one or more examples, lens 105 may be configured to focus, deflect, guide, bend, and/or route incident electromagnetic energy (e.g., visible light radiation, infrared radiation, thermal radiation) onto image sensor 110. In some cases, lens 105 may include a filter configured to allow one or more selected spectral bands to pass while blocking other spectral bands. In some cases, lens 105 may include at least one microlens (e.g., microlens array), at least one metalens (e.g., one or more multi-layered metalenses), at least one nanostructure lens (e.g., one or more metamaterial lenses of pillars, holes, and/or gratings in the nanometer scale), at least one curved lens (e.g., of organic materials), at least one flat lens, etc.
In some implementations, image sensor 110 may be configured to detect incident light (e.g., photons) and convert the light to an electrical signal (e.g., photoelectrons). In some cases, image sensor 110 may include a cooling system that chills image sensors (e.g., image sensor 110) to reduce thermally-induced noise below the level of the imaging signal at the scene being detected.
In some examples, image processor 115 may be configured to process image data. In some cases, image processor 115 may include a thermal image processor configured to process thermal image data detected by image sensor 110. In some examples, image processor 115 may include an ADC to detect and convert electromagnetic radiation detected by image sensor 110 to a digital signal. In some cases, image sensor 110 may include a DSP to process the digital signal. In some examples, image processor 115, in conjunction with memory 120, may record image data.
In one or more examples, memory 120 may be configured to store data generated by image sensor 110 and/or data processed by image processor 115. In some examples, memory 120 includes DRAM to store image data (e.g., the electrical signal of the IR sensor, the digital signal of the image processor 115, etc.). Additionally, or alternatively, memory 120 includes cache memory (e.g., SRAM) to hold data being processed by image processor 115.
In some cases, display 125 may be configured to display image data detected by image sensor 110 and processed by image processor 115 (e.g., display a field of view of imaging system 100, display a captured image stored in memory 120). In some cases, image processor 115 may perform object detection based on image data detected by image sensor 110. In some examples, display 125 may be configured to show objects detected by image sensor 110 and processed by image processor 115.
The systems and methods described herein include multiple advantages and benefits. For example, the systems and methods provide up to a twofold improvement in QE while reducing crosstalk and up to a three-fold reduction in parasitic light sensitivity. The simulated light field map shows that while scattering nanostructures (BST) scatter light everywhere in the photodiode (PD) and surrounding diodes or transistors (SD), non-scattering nanostructures (NST) trap the light within the PD without scattering to the storage nodes. Based on the improved sensitivity of the systems and methods, the performance of associated sensors is increased. For example, the systems and methods provide up to a multi-fold increase in range resolution and extension for time of flight (ToF) sensors. For single photon avalanche diode (SPAD) sensors, the systems and methods can provide over a two-fold increase in photon detection efficiency (PDE) performance while increasing photon-timing precision (e.g., decreasing timing jitter). In global shutter implementations, the systems and methods can provide over a two-fold increase in SNR and high dynamic range performance. The systems and methods operate with relatively low transmit power, making the systems and methods suitable for low-light applications. The systems and methods can be extended to multiple spectrums (e.g., visible spectrum, infrared spectrum, near-infrared spectrum, etc.). This multi-spectrum applicability makes the systems and methods suitable for automotive applications (e.g., ADAS, in-cabin sensors, etc.). With infrared applications, the systems and methods trap and direct infrared light and/or near-infrared light just as with other spectrums, providing a two-fold improvement in QE in infrared and near-infrared applications. The systems and methods provide the ability to provide a QE above 90% (e.g., for red spectrums).
In some cases, pixel 200 may be a silicon pixel. A silicon pixel can include a silicon-based image sensor (e.g., photodetector 210) that is part of a 2D array of charge-collection sites, or picture elements (pixels), located below the surface of a silicon chip. When an image is focused on the surface, the silicon generates electron-hole pairs, and the pixels collect the electrons as signal information.
In some examples, inter-layer dielectric layer 220 (e.g., a second inter-layer dielectric) may be formed adjacent to, on, and/or in a semiconductor substrate that includes photodetector 210. Passivation layer 225 may be formed on and/or adjacent to inter-layer dielectric layer 220. Inter-layer dielectric layer 220 may include at least one metal shield (e.g., back-side metal shield). As shown, inter-layer dielectric layer 220 may include a first metal shield below and to the left of photodetector 210 and/or include a second metal shield below and to the right of photodetector 210 relative to the depicted pixel 200. As shown, the one or more metal shields of inter-layer dielectric layer 220 may be formed and/or positioned to allow light to pass into the pixel area or admit (e.g., not block) light entering the pixel area of photodetector 210. As shown, inter-layer dielectric layer 220 may include a gap that allows light to pass freely into the pixel area of photodetector 210. Although depicted in
In some examples, lens 240 may include one or more microlenses. In some cases, lens 240 may include a single microlens (e.g., one microlens per pixel). In some cases, lens 240 may include a two lens microlens array or a four lens microlens array (e.g., two or four microlenses per pixel). In some cases, pixel 200 is one pixel of multiple pixels of a given image sensor. In some cases, one or more pixels of the image sensor may include one microlens per pixel. Additionally, or alternatively, one or more pixels of the image sensor may include two microlenses per pixel (e.g., two lens microlens arrays). Additionally, or alternatively, one or more pixels of the image sensor may include four microlenses per pixel (e.g., four lens microlens arrays). Additionally, or alternatively, one or more pixels of the image sensor may include at least one metalens per pixel.
In some examples, trench insulator 215 may be optional in pixel 200 based on the systems and methods described herein. In some cases, pixel 200 may include relatively shallow (e.g., relatively low height) trench insulators. In some cases, based on the linear or relatively linear steering of photons towards photodetector 210, which minimizes crosstalk to adjacent pixels, pixel 200 may be configured without trench insulators. As shown, trench insulator 215 may extend into the pixel area from inter-layer dielectric layer 220 and/or passivation layer 225. Additionally, or alternatively, trench insulator 215 may extend from the gate structure layer.
In some examples, a layer of pixel 200 (e.g., gate structure layer) may include back reflector 205. In some cases, back reflector 205 may include a metal layer (e.g., formed on or as part of a gate structure layer of pixel 200). As shown, a layer of pixel 200 may include anti-reflective layer 235. In some cases, anti-reflective layer 235 may include and/or may be formed with multiple thin dielectric films. In some cases, anti-reflective layer 235 may be configured to minimize light escaping the pixel area of photodetector 210 (e.g., escaped light 255).
In the illustrated example, light incident on pixel 200 (e.g., incident light 245) enters lens 240. Incident light 245 passes through anti-reflective layer 235, nanostructure 230, and passivation layer 225 to enter the pixel area. In the illustrated example, incident light 245 passes between the depicted gap of inter-layer dielectric layer 220. Once in the pixel area, incident light 245 may be referred to as captured light 250. At least a portion of captured light 250 may be absorbed and detected by photodetector 210 (e.g., photons converted to photoelectrons). As shown, at least a portion of captured light 250 may pass through photodetector 210 and/or pass beside photodetector 210. As shown, a portion of captured light 250 that passes through photodetector 210 and/or passes beside photodetector 210 may reflect off of back reflector 205. In some cases, at least a portion of captured light 250 that reflects off back reflector 205 may be absorbed and detected by photodetector 210.
As shown, at least a portion of captured light 250 that reflects off back reflector 205 may pass through passivation layer 225 (e.g., between the depicted gap of inter-layer dielectric layer 220) and reach nanostructure 230. Based on the systems and methods described herein, nanostructure 230 may be configured to permit incident light 245 to pass through nanostructure 230 from anti-reflective layer 235 and/or lens 240. However, nanostructure 230 may be configured to reflect captured light 250 that reflects off back reflector 205 and/or passes through photodetector 210. Accordingly, the design of elements of nanostructure 230 (e.g., a width of an element of nanostructure 230, a height of an element of nanostructure 230, and/or a spacing between a first element and a second element of nanostructure 230) may be configured to minimize light escaping the pixel area of photodetector 210 (e.g., escaped light 255).
As shown, lens 305 may be a single microlens, lens 315 may be a two-lens microlens array, lens 325 may be a four-lens microlens array, and lens 335 may be a metalens (e.g., single layer metalens, multi-layer metalens, etc.). In some cases, an imaging device based on the systems and methods described herein may include one or more lenses based on lens configurations 300. For example, an imaging device may include at least one pixel with a single microlens, at least one pixel with a two-lens microlens array, and/or at least one pixel with a four-lens microlens array.
As shown, first element 405 and second element 410 may be configured with pitch 415 (e.g., spacing between first element 405 and second element 410). In some cases, two or more elements of nanostructure 400 (e.g., first element 405 and second element 410) may be spaced periodically. Additionally, or alternatively, two or more elements of nanostructure 400 (e.g., first element 405 and second element 410) may be spaced aperiodically. In some cases, a first spacing between first element 405 and second element 410 (e.g., pitch 415) may differ from a second spacing of two other elements of nanostructure 400. In some examples, a spacing between elements (e.g., pitch 415) may be based on a wavelength of a target spectrum (e.g., incident light 245). In some cases, element spacing may be less than (e.g., less than or equal to) one-half the wavelength of the target spectrum (e.g., λ/2). In some cases, element spacing may range from 200 nm to 1500 nm.
In the illustrated example, first element 405 may be configured with width 425. In some cases, two or more elements of nanostructure 400 (e.g., first element 405 and second element 410) may be configured with width 425. Additionally, or alternatively, at least one element of nanostructure 400 (e.g., second element 410) may be configured with a second width different from width 425. In some examples, an element width (e.g., width 425) may be based on a wavelength of a target spectrum (e.g., incident light 245). In some cases, the width of an element may be formed at or relatively near one-third the wavelength of the target spectrum (e.g., at or relatively near λ/3). In some cases, the width of an element may be between 100 nm and 1000 nm.
In the illustrated example, second element 410 may be configured with height 420. In some cases, two or more elements of nanostructure 400 (e.g., first element 405 and second element 410) may be configured with height 420. Additionally, or alternatively, at least one element of nanostructure 400 (e.g., first element 405) may be configured with a second height different from height 420. In some examples, an element height (e.g., height 420) may be based on a wavelength of a target spectrum (e.g., incident light 245). In some cases, the height of an element of nanostructure 400 may be proportional to a wavelength of the target spectrum and inversely proportional to an effective refractive index of nanostructure 400. In some cases, the height of an element may be configured to be greater than (e.g., greater than or equal to) the wavelength of the target spectrum over two times the effective refractive index of nanostructure 400 (e.g., λ/(2*neff)). In some cases, the height of an element may be between 50 nm and 5000 nm.
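The geometric rules described above (element spacing at or below λ/2, element width at or near λ/3, and element height at or above λ/(2·neff)) can be sketched as a simple sizing calculation. This is an illustrative sketch only; the function name and the clamping to the quoted nanometer ranges are assumptions for illustration, not a fabrication recipe.

```python
def size_nanostructure(wavelength_nm: float, n_eff: float) -> dict:
    """Illustrative sizing of nanostructure elements for a target wavelength.

    Applies the rules described above: spacing <= lambda/2, width near
    lambda/3, height >= lambda / (2 * n_eff), each clamped to the ranges
    quoted in the description (an assumed interpretation of those ranges).
    """
    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    spacing = clamp(wavelength_nm / 2, 200, 1500)          # 200 nm to 1500 nm
    width = clamp(wavelength_nm / 3, 100, 1000)            # 100 nm to 1000 nm
    height = clamp(wavelength_nm / (2 * n_eff), 50, 5000)  # 50 nm to 5000 nm
    return {"spacing_nm": spacing, "width_nm": width, "height_nm": height}

# Example: a 550 nm (green) target spectrum with an assumed effective index of 2.0
dims = size_nanostructure(550, 2.0)
print(dims)  # spacing 275 nm, width ~183 nm, height 137.5 nm
```

For a visible-light target spectrum, all three dimensions fall inside the quoted ranges, so the clamping only matters toward the extremes of the 200 nm to 1500 nm band mentioned above.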
In the illustrated example, first element 405 may be configured with a relatively low refractive index (e.g., silicon oxide), while an element adjacent to first element 405 may be configured with a relatively high refractive index (e.g., silicon, Ge, TiO, SiN). Additionally, or alternatively, second element 410 may be configured with a relatively high refractive index, while an element adjacent to second element 410 may be configured with a relatively low refractive index.
Based on the systems and methods described herein, nanostructures for pixels at or relatively near the center of the image sensor may have a CRA correction different from the CRA correction of nanostructures at a first radius out from the center of the image sensor. Both of these may differ from the CRA correction of nanostructures at a second radius out from the center that is further than the first radius, and so on.
For illustrative purposes, lens 505 is depicted with multiple concentric rings. In some cases, the concentric rings are depicted to indicate the position of pixels/nanostructures relative to image sensor 510. As shown, the pixels of image sensor 510 are located at different positions relative to image sensor 510 and/or lens 505. For example, nanostructure 515 is at or relatively near a center position of image sensor 510 and/or lens 505, nanostructure 520 is at the first concentric ring or a first radius of image sensor 510 and/or lens 505, nanostructure 525 is at the second concentric ring or a second radius of image sensor 510 and/or lens 505 further from the center than the first radius, and nanostructure 530 is at the third concentric ring or a third radius of image sensor 510 and/or lens 505 further from the center than the second radius. Other pixels/nanostructures of image sensor 510 may be located at different positions, at relatively similar positions (e.g., a similar radius from the center of image sensor 510 and/or lens 505), at positions nearer the center of image sensor 510 and/or lens 505, and/or at positions further away from the center of image sensor 510 and/or lens 505 relative to at least one pixel/nanostructure depicted in system 500. It is noted that the size of the illustrated pixels/nanostructures may not be to scale with the size of lens 505 and/or sensor 510.
In some examples, pixels of an image sensor may be configured with customized or semi-customized nanostructure configurations. In some cases, a first nanostructure may be based on a first CRA correction, a second nanostructure may be based on a second CRA correction, and so on, based on the respective locations of the nanostructures relative to the pixel locations on the image sensor.
In the illustrated example, nanostructure 515 may be configured with a first nanostructure configuration (e.g., a sensor center zone configuration with a CRA correction of 0 degrees), nanostructure 520 may be configured with a second nanostructure configuration (e.g., a CRA correction of 7.5 degrees), nanostructure 525 may be configured with a third nanostructure configuration (e.g., a CRA correction of 9.4 degrees), and nanostructure 530 may be configured with a fourth nanostructure configuration (e.g., a CRA correction of 12 degrees). In some examples, fewer or more nanostructure configurations may be implemented with different CRA corrections than those depicted. Accordingly, the nanostructures of system 500 may be configured to steer light directly towards a photodetector instead of scattering the incident light throughout the pixel area, thus reducing crosstalk, dark noise, etc.
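One way to picture the per-zone CRA correction described above is as a lookup that maps a pixel's radial position to a correction angle. In the sketch below, only the four example correction angles (0, 7.5, 9.4, and 12 degrees) come from the description; the zone radii and the use of linear interpolation between zones are illustrative assumptions, and a real sensor may use a lens-specific CRA profile instead.

```python
import bisect

# Zone boundaries as fractions of the sensor half-diagonal (assumed values);
# CRA correction angles are the example values given in the description.
ZONE_RADII = [0.0, 0.33, 0.66, 1.0]     # center, first, second, third ring
CRA_DEGREES = [0.0, 7.5, 9.4, 12.0]

def cra_correction(radius_fraction: float) -> float:
    """Return a CRA correction (degrees) for a pixel at a normalized radius,
    interpolating linearly between the example zone corrections."""
    r = max(0.0, min(1.0, radius_fraction))
    i = bisect.bisect_right(ZONE_RADII, r) - 1
    if i >= len(ZONE_RADII) - 1:
        return CRA_DEGREES[-1]
    t = (r - ZONE_RADII[i]) / (ZONE_RADII[i + 1] - ZONE_RADII[i])
    return CRA_DEGREES[i] + t * (CRA_DEGREES[i + 1] - CRA_DEGREES[i])

print(cra_correction(0.0))   # center pixel: 0.0 degrees
print(cra_correction(1.0))   # edge pixel: 12.0 degrees
```

A customized (rather than semi-customized) design could instead assign each pixel its own correction directly from its exact radius, rather than quantizing into zones.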
Nanostructure 600 may include at least one nanostructure. In some cases, a top surface of nanostructure 600 may include one or more nanostructure elements. Additionally, or alternatively, a bottom surface of nanostructure 600 may include one or more nanostructure elements. In some examples, at least one element of nanostructure 600 may be formed based on dry etching and/or wet etching.
In the illustrated example, nanostructure 600 may include nanostructure elements such as blazed gratings 605, pillars 610, binary gratings 615, and/or holes 620. In some examples, nanostructure 600 may include one or more patterns of diffraction grating (e.g., a pattern of blazed grating 605 and/or binary grating 615). In some cases, pillars 610 may include one or more patterns of pillars (e.g., repeating patterns, periodic patterns, aperiodic patterns). In some implementations, holes 620 may include one or more patterns of holes (e.g., repeating patterns, periodic patterns, aperiodic patterns).
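The element types above (gratings, pillars, holes) amount to different fill patterns over the pixel aperture. As a toy illustration, a periodic pillar layout can be generated as a set of (x, y) centers; the aperture size and pitch values below are assumptions for illustration only, not taken from the figures.

```python
def pillar_centers(aperture_nm: float, pitch_nm: float) -> list:
    """Generate (x, y) centers for a periodic square grid of pillars.

    Aperiodic layouts (also described above) could instead perturb or
    resample these positions; this sketch covers only the periodic case.
    """
    n = int(aperture_nm // pitch_nm)
    return [(i * pitch_nm, j * pitch_nm) for i in range(n) for j in range(n)]

# Example: an assumed 1000 nm aperture with 250 nm pitch -> 4 x 4 = 16 pillars
centers = pillar_centers(1000, 250)
print(len(centers))  # 16
```

A hole pattern is the complement of a pillar pattern (removing material at each center rather than adding it), so the same layout generator could serve both cases.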
In the illustrated example, light may enter NST 715. NST 715 may steer the incident light towards photodetector 710, resulting in less scattering of incident light. In some cases, some of the light steered by NST 715 may reflect off back reflector 705 towards photodetector 710. As shown, light 735 steered by NST 715 and/or reflected by back reflector 705 may be retained generally within the dotted lines (e.g., contained within an area or volume based on at least one dimension of back reflector 705), indicating the systems and methods minimize the scattering of incident light, thus increasing the SNR and QE of photodetector 710.
At 805, method 800 may include forming a back reflector on a substrate layer of a pixel of an image sensor. For example, a back reflector may be formed on a substrate layer of a pixel of an image sensor where the back reflector includes a metal layer that is positioned to reflect electromagnetic radiation incident on the pixel back towards a photodetector of the pixel.
At 810, method 800 may include forming on a silicon layer of the pixel a photodetector configured to generate photoelectrons based on the electromagnetic radiation. For example, a photodetector may be formed on a silicon layer of the pixel, where the photodetector generates photoelectrons based on the electromagnetic radiation absorbed by the photodetector.
At 815, method 800 may include forming over the silicon layer a passivation layer that includes at least one thin film dielectric. For example, a passivation layer may be formed over the silicon layer, where the passivation layer may include one or more thin film dielectric layers.
At 820, method 800 may include forming over the passivation layer a nanostructure configured to steer the electromagnetic radiation linearly towards the photodetector. For example, a nanostructure may be formed on the passivation layer. The nanostructure may be configured to allow the electromagnetic radiation to pass through the nanostructure and to steer the electromagnetic radiation linearly towards the photodetector.
At 825, method 800 may include forming over the nanostructure a microlens that includes at least one of a flat coat layer or a curved lensing layer. For example, a microlens may be positioned on the nanostructure, where the microlens includes at least one of a flat coat layer or a curved lensing layer.
At 905, method 900 may include forming a back reflector on a substrate layer of a pixel of an image sensor. For example, a back reflector may be formed on a substrate layer of a pixel of an image sensor where the back reflector includes a metal layer that is positioned to reflect electromagnetic radiation incident on the pixel back towards a photodetector of the pixel.
At 910, method 900 may include forming on a silicon layer of the pixel a photodetector configured to generate photoelectrons based on the electromagnetic radiation. For example, a photodetector may be formed over the substrate layer on a silicon layer of the pixel, where the photodetector generates photoelectrons based on the electromagnetic radiation absorbed by the photodetector.
At 915, method 900 may include forming over the silicon layer a passivation layer that includes at least one thin film dielectric. For example, a passivation layer may be formed over the silicon layer, where the passivation layer may include one or more thin film dielectric layers.
At 920, method 900 may include forming over the passivation layer a nanostructure configured to steer the electromagnetic radiation linearly towards the photodetector. For example, a nanostructure may be formed on the passivation layer. The nanostructure may be configured to allow the electromagnetic radiation to pass through the nanostructure and to steer the electromagnetic radiation linearly towards the photodetector. It is noted that some systems may include the passivation layer, while other systems may not include the passivation layer.
At 925, method 900 may include forming over the nanostructure a microlens that includes at least one of a flat coat layer or a curved lensing layer. For example, a microlens may be positioned on the nanostructure, where the microlens includes at least one of a flat coat layer or a curved lensing layer.
At 930, method 900 may include forming an anti-reflection layer over the nanostructure. For example, an anti-reflection layer may be formed over (e.g., adjacent to) the nanostructure, where the anti-reflection layer includes at least one layer of a dielectric thin film. In some cases, the anti-reflection layer may include a first refractive index (e.g., relatively low refractive index) and the substrate (e.g., silicon substrate including the photodetector) may include a second refractive index (e.g., relatively high refractive index), where the second refractive index has a higher refractive index value than the first refractive index.
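The layer ordering built up by methods 800 and 900 can be summarized as a simple bottom-to-top stack check. The layer names and the refractive-index values below are illustrative assumptions; only the ordering of the steps and the rule that the anti-reflection layer's first refractive index be lower than the substrate's second refractive index come from the description above.

```python
# Bottom-to-top pixel stack per methods 800/900 (names and index values assumed).
PIXEL_STACK = [
    ("back reflector", None),          # metal layer on the substrate
    ("photodetector substrate", 3.9),  # assumed index for a silicon substrate
    ("passivation", 1.46),             # thin film dielectric (assumed SiO2-like)
    ("nanostructure", None),           # mixed low/high-index elements
    ("anti-reflection", 1.46),         # dielectric thin film over the nanostructure
    ("microlens", None),               # flat coat and/or curved lensing layer
]

def check_ar_index(stack) -> bool:
    """Verify the anti-reflection layer's (first) refractive index is lower
    than the substrate's (second) refractive index, as described above."""
    indices = {name: n for name, n in stack}
    return indices["anti-reflection"] < indices["photodetector substrate"]

print(check_ar_index(PIXEL_STACK))  # True
```

Because the passivation layer is optional in some systems (as noted at 920), the same check would apply to a stack with that entry removed.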
In the examples described herein, the configurations and operations are example configurations and operations, and may involve various additional configurations and operations not explicitly illustrated. In some examples, one or more aspects of the illustrated configurations and/or operations may be omitted. In some embodiments, one or more of the operations may be performed by components other than those illustrated herein. Additionally, or alternatively, the sequential and/or temporal order of the operations may be varied.
Certain embodiments may be implemented in one or a combination of hardware, firmware, and software. Other embodiments may be implemented as instructions stored on a computer-readable storage device, which may be read and executed by at least one processor to perform the operations described herein. A computer-readable storage device may include any non-transitory memory mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a computer-readable storage device may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and other storage devices and media.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. The terms “computing device,” “user device,” “communication station,” “station,” “handheld device,” “mobile device,” “wireless device” and “user equipment” (UE) as used herein refer to a wireless communication device such as a cellular telephone, smartphone, tablet, netbook, wireless terminal, laptop computer, a femtocell, High Data Rate (HDR) subscriber station, access point, printer, point of sale device, access terminal, or other personal communication system (PCS) device. The device may be either mobile or stationary.
As used within this document, the term “communicate” is intended to include transmitting, or receiving, or both transmitting and receiving. This may be particularly useful in claims when describing the organization of data that is being transmitted by one device and received by another, but only the functionality of one of those devices is required to infringe the claim. Similarly, the bidirectional exchange of data between two devices (both devices transmit and receive during the exchange) may be described as ‘communicating’, when only the functionality of one of those devices is being claimed. The term “communicating” as used herein with respect to a wireless communication signal includes transmitting the wireless communication signal and/or receiving the wireless communication signal. For example, a wireless communication unit, which is capable of communicating a wireless communication signal, may include a wireless transmitter to transmit the wireless communication signal to at least one other wireless communication unit, and/or a wireless communication receiver to receive the wireless communication signal from at least one other wireless communication unit.
Some embodiments may be used in conjunction with various devices and systems, for example, a Personal Computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a Personal Digital Assistant (PDA) device, a handheld PDA device, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, a consumer device, a non-mobile or non-portable device, a wireless communication station, a wireless communication device, a wireless Access Point (AP), a wired or wireless router, a wired or wireless modem, a video device, an audio device, an audio-video (A/V) device, a wired or wireless network, a wireless area network, a Wireless Video Area Network (WVAN), a Local Area Network (LAN), a Wireless LAN (WLAN), a Personal Area Network (PAN), a Wireless PAN (WPAN), and the like.
Some embodiments may be used in conjunction with one way and/or two-way radio communication systems, cellular radio-telephone communication systems, a mobile phone, a cellular telephone, a wireless telephone, a Personal Communication Systems (PCS) device, a PDA device which incorporates a wireless communication device, a mobile or portable Global Positioning System (GPS) device, a device which incorporates a GPS receiver or transceiver or chip, a device which incorporates an RFID element or chip, a Multiple Input Multiple Output (MIMO) transceiver or device, a Single Input Multiple Output (SIMO) transceiver or device, a Multiple Input Single Output (MISO) transceiver or device, a device having one or more internal antennas and/or external antennas, Digital Video Broadcast (DVB) devices or systems, multi-standard radio devices or systems, a wired or wireless handheld device, e.g., a Smartphone, a Wireless Application Protocol (WAP) device, or the like.
Some embodiments may be used in conjunction with one or more types of wireless communication signals and/or systems following one or more wireless communication protocols, for example, Radio Frequency (RF), Infrared (IR), Frequency-Division Multiplexing (FDM), Orthogonal FDM (OFDM), Time-Division Multiplexing (TDM), Time-Division Multiple Access (TDMA), Extended TDMA (E-TDMA), General Packet Radio Service (GPRS), extended GPRS, Code-Division Multiple Access (CDMA), Wideband CDMA (WCDMA), CDMA 2000, single-carrier CDMA, multi-carrier CDMA, Multi-Carrier Modulation (MDM), Discrete Multi-Tone (DMT), Bluetooth™, Global Positioning System (GPS), Wi-Fi, Wi-Max, ZigBee™, Ultra-Wideband (UWB), Global System for Mobile communication (GSM), 2G, 2.5G, 3G, 3.5G, 4G, Fifth Generation (5G) mobile networks, 3GPP, Long Term Evolution (LTE), LTE advanced, Enhanced Data rates for GSM Evolution (EDGE), or the like. Other embodiments may be used in various other devices, systems, and/or networks.
Although an example processing system has been described above, embodiments of the subject matter and the functional operations described herein can be implemented in other types of digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
Embodiments of the subject matter and the operations described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described herein can be implemented as one or more computer programs, i.e., one or more components of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, information/data processing apparatus. Alternatively, or in addition, the program instructions can be encoded on an artificially-generated propagated signal, for example a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information/data for transmission to suitable receiver apparatus for execution by an information/data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (for example multiple CDs, disks, or other storage devices).
The operations described herein can be implemented as operations performed by an information/data processing apparatus on information/data stored on one or more computer-readable storage devices or received from other sources.
The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, for example an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, for example code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or information/data (for example one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (for example files that store one or more components, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input information/data and generating output. Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and information/data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive information/data from or transfer information/data to, or both, one or more mass storage devices for storing data, for example magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Devices suitable for storing computer program instructions and information/data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, for example EPROM, EEPROM, and flash memory devices; magnetic disks, for example internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described herein can be implemented on a computer having a display device, for example a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information/data to the user and a keyboard and a pointing device, for example a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, for example visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
Embodiments of the subject matter described herein can be implemented in a computing system that includes a back-end component, for example as an information/data server, or that includes a middleware component, for example an application server, or that includes a front-end component, for example a client computer having a graphical user interface or a web browser through which a user can interact with an embodiment of the subject matter described herein, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital information/data communication, for example a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (for example the Internet), and peer-to-peer networks (for example ad hoc peer-to-peer networks).
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits information/data (for example an HTML page) to a client device (for example for purposes of displaying information/data to and receiving user input from a user interacting with the client device). Information/data generated at the client device (for example a result of the user interaction) can be received from the client device at the server.
While this specification contains many specific embodiment details, these should not be construed as limitations on the scope of any embodiment or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described herein in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain embodiments, multitasking and parallel processing may be advantageous.
Many modifications and other examples set forth herein will come to mind to one skilled in the art to which these embodiments pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the embodiments are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 63/523,917 filed Jun. 28, 2023, which is incorporated by reference herein for all purposes.
Number | Date | Country
---|---|---
63523917 | Jun 2023 | US