The instant application relates to image data processing and more specifically to three-dimensional modeling.
The eye is an organ that reacts to light and enables vision. Light enters the eye through the cornea, passes through the pupil, and then through the lens. The lens changes shape for near focus (accommodation) under the control of the ciliary muscle. Cells within the eye detect visible light and convert it into electrical signals that are transmitted to the brain, which interprets these signals as sight and vision.
The present disclosure provides new and innovative systems and methods for generating eye models with realistic color. In an example, a computer-implemented method includes obtaining refraction data, obtaining mesh data, generating aligned model data by aligning the refraction data and the mesh data, calculating refraction points in the aligned model data, and calculating an approximated iris color based on the refraction points and the aligned model data by calculating melanin information for the aligned model data based on the refraction points for iris pixels in the aligned model data.
In an example, the computer-implemented method includes calculating a color of the iris based on a melanin absorption coefficient, an iris stroma scattering coefficient, and an anisotropy of a scattering phase function.
In an example, the computer-implemented method includes calculating the refraction points based on multiple lighting conditions.
In an example, a computer-implemented method includes obtaining refraction data, obtaining mesh data, generating aligned model data by aligning the refraction data and the mesh data, calculating refraction points in the aligned model data, and calculating an approximated iris color based on the refraction points and the aligned model data by calculating Mie scattering.
Additional features and advantages of the disclosed method and apparatus are described in, and will be apparent from, the following detailed description and the figures. The features and advantages described herein are not all-inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the figures and detailed description. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and not to limit the scope of the inventive subject matter.
The description will be more fully understood with reference to the following figures, which are presented as exemplary aspects of the disclosure and should not be construed as a complete recitation of the scope of the disclosure, wherein:
Turning now to the drawings, techniques are disclosed for new and innovative systems and methods for generating eye models with realistic color. In computer graphics, a variety of techniques exist for capturing humans and reproducing them as realistically and as accurately as possible in three-dimensional (3D) virtual environments. A computer-generated human model that accurately represents its real-life counterpart is often referred to as a digital double or a digital human. Capturing and reproducing humans realistically includes (1) capturing and reproducing the geometric shape of the human and (2) capturing and reproducing organic tissue. These two parts are interdependent, as shape can influence the appearance of the tissue and vice versa. Capturing the geometric shape can include obtaining the anterior shape of a fixated human subject. The geometry can be reproduced in a computer-generated model including vertices and normal vectors of a polygonal approximation of the shape, which can include a triangular or rectangular mesh. By capturing two or more fixated shapes, also called blend shapes, movements in the computer-generated model can be reproduced by interpolating between the two shapes. Capturing the optical properties of the organic tissue is more complex. For estimating the color of organic tissue, a variety of parameters should be considered in order to accurately model how the tissue interacts with light sources. These parameters include, but are not limited to, light absorption, reflection, refraction, and scattering. By imaging the tissue in controlled lighting environments and camera positions, the fundamental parameters that control how a tissue behaves in real-world conditions can be modeled. The parameters can include, but are not limited to, the shape and the optical properties of the eyes, such as light reflectivity, light absorption, and light scattering. The computer-generated model can be used to accurately represent the tissue in any computer-generated lighting environment and camera position.
Recently, progress has been made in capturing and reproducing skin. In particular, the specular reflectivity and sub-surface scattering of skin have been successfully captured (and separated) due to the polarity preservation of specular reflectivity, in contrast with the loss of initial polarity of scattered photons in the epidermis. Similarly, much progress has been made with capturing and analyzing the optical properties of skin by means of collimated light, such as lasers. However, these existing techniques typically are not successful at accurately modeling other features of the human body, such as eyes. For example, existing skin capture techniques using polarized filters do not transfer well to determining the color and optical behavior of the iris, as most photons that enter the eye through the cornea will be either absorbed or scattered before they exit the eye towards an observer. Therefore, by the time the photons reach the iris, the scattering will have altered the photons' polarization and hence the polarized filters used in the skin capture techniques fail to capture useful information. Moreover, specular reflection typically appears as white Purkinje reflections off the cornea and lens. Similarly, other scanning techniques, such as those using collimated light sources (e.g. lasers), cannot be safely used to scan eyes without risking the health and safety of the eyes. Accordingly, new techniques are needed to realistically model eyes for computer-generated environments.
Systems and methods in accordance with embodiments of the invention allow for accurate and realistic modeling of eyes, particularly in the modeling and reproduction of iris colors. By capturing data regarding a human eye in a controlled lighting environment and from a variety of camera positions, the shape and composition of the eye can be determined. This data allows accurate models to be generated that reproduce a realistic likeness of the eyes' color and optical behavior in a computer-generated environment. In particular, the color of the iris is a highly complex optical phenomenon requiring an understanding of the anatomy, the refraction of the cornea (as well as the aqueous humor and the iris' stroma), the absorption by eumelanin and pheomelanin molecules, the scattering by the iris' stroma, and the behavior of photons as they pass through the anterior chamber. In particular, photons that travel through the cornea to the iris experience refraction, absorption, reflection, and scattering. These optical phenomena are extremely complex and interdependent, which makes them difficult to accurately model. However, a variety of information regarding the properties of an eye can be used to approximate the data needed to accurately model the eye. For example, the amount and type of absorption can be dependent on the photon's wavelength, the amount of melanin in the melanosomes (which can be dependent on the productivity of melanocytes), and/or the type of melanin in the melanosomes. The amount of scatter can be dependent on the photon's wavelength, the distance travelled through the stroma (which is dependent on the thickness of the iris' stroma), the scattering phase function or anisotropy (which is dependent on the scatterer's size and structure), and/or the incident and exiting angle of the photon when entering and exiting the iris. The amount of reflection (and iris ambient occlusion) can be dependent on a surface normal vector of the cornea (for Purkinje specular reflections) and/or a surface normal vector of the iris stroma (for iris specular reflection and ambient occlusion). As described in more detail herein, models can include absorption coefficients (μ_a), scatter coefficients (μ_s), and anisotropy coefficients (g), along with independent modeling of specular reflection and ambient occlusion from the scattering and absorption in the iris.
The modeling devices and processes described herein provide an improvement over existing techniques for determining eye color and generating accurate computer models. In particular, the modeling devices and processes are an improvement in computer-related technology and technological processes by allowing computing devices to produce accurate and realistic eye models that can be utilized in a variety of contexts, including generating computer models. Additionally, the modeling devices and processes allow for the automation of modeling eyes, which previously could not be automated.
A variety of computing systems and processes for generating eye models with realistic color in accordance with aspects of the disclosure are described in more detail herein.
Client devices 110 can obtain and/or generate a variety of data, such as images and/or scans of eyes, as described herein. Modeling server systems 120 obtain data regarding one or more eyes and generate models of the eyes as described herein. The modeling server system 120 can also provide modeling data to a variety of remote server systems 130. In a variety of embodiments, the modeling server system 120 provides modeling data for integration into computer-generated models. In a number of embodiments, the modeling server system 120 provides middleware or other computer software that can be used by remote server systems 130 to generate, incorporate, and/or manipulate eye models as described herein. Remote server systems 130 can obtain and provide modeling data as described herein. The network 140 can include a LAN (local area network), a WAN (wide area network), a telephone network (e.g. the Public Switched Telephone Network (PSTN)), a Session Initiation Protocol (SIP) network, a point-to-point network, a star network, a token ring network, a hub network, wireless networks (including protocols such as EDGE, 3G, 4G LTE, Wi-Fi, 5G, WiMAX, and the like), the Internet, and the like. A variety of authorization and authentication techniques, such as username/password, Open Authorization (OAuth), Kerberos, SecureID, digital certificates, and more, may be used to secure the communications. It will be appreciated that the network connections shown in the operating environment 100 are illustrative, and any means of establishing one or more communications links between the computing devices may be used.
Any of the computing devices shown in
The processor 210 can include one or more physical processors communicatively coupled to memory devices, input/output devices, and the like. As used herein, a processor may also be referred to as a central processing unit (CPU). Additionally, as used herein, a processor can include one or more devices capable of executing instructions encoding arithmetic, logical, and/or I/O operations. In one illustrative example, a processor may implement a Von Neumann architectural model and may include an arithmetic logic unit (ALU), a control unit, and a plurality of registers. In many aspects, a processor may be a single core processor that is typically capable of executing one instruction at a time (or processing a single pipeline of instructions) and/or a multi-core processor that may simultaneously execute multiple instructions. In a variety of aspects, a processor may be implemented as a single integrated circuit, two or more integrated circuits, and/or may be a component of a multi-chip module in which individual microprocessor dies are included in a single integrated circuit package and hence share a single socket.
Memory 230 can include a volatile or non-volatile memory device, such as RAM, ROM, EEPROM, or any other device capable of storing data. Communication devices 220 (e.g. input/output devices) can include a network device (e.g., a network adapter or any other component that connects a computer to a computer network), a peripheral component interconnect (PCI) device, storage devices, disk drives, sound or video adaptors, photo/video cameras, printer devices, keyboards, displays, etc.
Although specific architectures for computing devices in accordance with embodiments of the invention are conceptually illustrated in
Modeling Eyes with Accurate Colors
Generating a model of an eye can include generating a three-dimensional mesh of the anterior surface of the eye, including the sclera, limbus, and/or cornea. In a variety of embodiments, a camera can project one or more projections, projected from one or more angles, onto a fluorescein tinted tear film. These projections can be captured by a coaxial telecentric photographic sensor. The captured projections can be transformed into a three-dimensional mesh of the surface of the object (e.g. eye) being imaged. For example, each measured point of the projections can be mapped to a vertex in the three-dimensional mesh. In a number of embodiments, the three-dimensional mesh is accurate to within 10 microns of the object being imaged. The normal vectors of each vertex and polygon can be calculated based on the three-dimensional mesh using any of a variety of techniques, such as interpolation. In many embodiments, the generated three-dimensional mesh has a capture diameter of approximately 20-25 mm and approximately 500,000 measured points that form the vertices of the mesh. Each vertex can have a measured normal vector. In several embodiments, the camera also captures a near-infrared image of the object (e.g. eye).
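Where the disclosure leaves the normal-vector interpolation technique open, one common approach is area-weighted averaging of adjacent face normals. The following is a minimal sketch of that approach; the function name and the use of NumPy are illustrative, not part of the disclosure.

```python
import numpy as np

def vertex_normals(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Approximate per-vertex normals by area-weighted averaging of
    adjacent face normals (one common interpolation technique; the
    disclosure does not prescribe a specific method)."""
    normals = np.zeros_like(vertices)
    # Cross product of two triangle edges gives a face normal scaled
    # by twice the face area, so summing cross products area-weights.
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    face_n = np.cross(v1 - v0, v2 - v0)
    for i in range(3):
        np.add.at(normals, faces[:, i], face_n)
    # Normalize; guard against degenerate (zero-length) normals.
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lengths, 1e-12)
```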
Generating accurate eye models can also include capturing the texture of the eye including, but not limited to, the color and optical properties of the eye. In several embodiments, the texture of the eye can be generated based on image data captured using one or more imaging sensors oriented coaxial to an eye fixation point along with one or more lights, with at least one of the lights coaxial to at least one of the imaging sensors. In many embodiments, the imaging sensor includes a red-green-blue (RGB) sensor with a red peak of approximately 600 nm, a green peak of approximately 520 nm, and a blue peak of approximately 460 nm. The imaging sensor(s) can be located behind an optical lens system. In several embodiments, the other lights are positioned at various angles from temporal, nasal, superior, and inferior sides (e.g. left, right, up, down) of the eye. The angle to the optical axis can be measured for each light. The generated image sequence can include a strobe sequence of images from the different lights. In a number of embodiments, there are five images in the image sequence, with one image being captured for each light in the imaging system. In a variety of embodiments, the image sequence can be repeated one or more times to correct any possible optical flow (e.g. small movements) of the subject and/or the eye during the capture sequence. In addition to capturing the image sequence, a color chart can also be captured using the same strobe sequence with the same lights at the same angles and distances. The color chart can be used to normalize differences in intensities of the lights for each wavelength (e.g. perform color correction). A variety of devices and techniques for capturing images, including imaging the texture of an eye, are disclosed in PCT International Patent Application No. PCT/US2020/058575, titled “Coaxial Multi-Illuminated Ocular Imaging Apparatus” and filed Nov. 2, 2020, the disclosure of which is hereby incorporated by reference in its entirety.
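As a sketch of the color-chart normalization step, per-light, per-channel gains can be fitted against the chart's reference values; the array shapes and the diagonal-gain model here are assumptions, since the disclosure only states that the chart is used to normalize intensity differences per wavelength.

```python
import numpy as np

def light_gains(chart_captures: np.ndarray, reference_rgb: np.ndarray) -> np.ndarray:
    """Per-light, per-channel correction gains from color-chart captures.
    chart_captures: shape (num_lights, num_patches, 3), measured patch RGB.
    reference_rgb: shape (num_patches, 3), the chart's known values.
    A scalar least-squares gain per light and channel is assumed."""
    num_lights = chart_captures.shape[0]
    gains = np.empty((num_lights, 3))
    for k in range(num_lights):
        for c in range(3):
            m = chart_captures[k, :, c]
            # Least-squares gain minimizing ||g*m - reference||^2.
            gains[k, c] = (m @ reference_rgb[:, c]) / max(m @ m, 1e-12)
    return gains  # corrected image = capture * gains[k] for light k
```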
The image sequence can include one or more (e.g. five) images of the coaxially fixated eye, illuminated from one or more (e.g. five) different angles of incidence, θ_z, where the z-axis can be defined as the coaxial lens axis and the x-y plane is parallel to the eye's iris plane and perpendicular to the coaxial lens axis. The following can be defined for the x-y plane:
In several embodiments, the angle of the lights to the camera axis can be calculated based on a Purkinje reflection (e.g. the first Purkinje reflection) and/or the normal vector angle of the cornea at that point. Purkinje reflections are specular reflections where the angle of the incoming light and the corneal surface normal equals the angle of the reflected light and the corneal surface normal. As the imaging sensor records light that travels parallel to the axis of the lens (e.g. due to the telecentric lens), we can conclude that the angle of the light and the lens axis is two times the angle of the corneal surface normal and the lens axis.
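This doubling relationship can be illustrated with a short sketch; the vector inputs are assumptions about how the mesh normal and lens axis would be represented.

```python
import numpy as np

def light_angle_from_purkinje(corneal_normal: np.ndarray, lens_axis: np.ndarray) -> float:
    """Angle between a light and the telecentric lens axis, inferred from
    the first Purkinje reflection: for a specular reflection recorded
    along the lens axis, the light's angle to the axis is twice the
    corneal surface normal's angle to the axis."""
    cosang = np.dot(corneal_normal, lens_axis) / (
        np.linalg.norm(corneal_normal) * np.linalg.norm(lens_axis))
    theta_normal = np.arccos(np.clip(cosang, -1.0, 1.0))
    return 2.0 * theta_normal
```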
Generating a model of the eye can include aligning the three-dimensional mesh and the image sequence along a common reference point. In several embodiments, the common reference point is the iris of the eye. The diameter of the iris can be used to scale the three-dimensional mesh and/or one or more of the images in the image sequence such that the iris is of approximately equal size in each piece of data. In a variety of embodiments, veins in the eye (such as veins in the sclera) can be used as secondary reference points. In particular, as the imaging devices generating the three-dimensional mesh and the image sequence should be telecentric to the eye itself, the images and mesh should exhibit minimal distortion relative to each other and capture the eye at approximately the true size of the eye. The color of the eye can be determined based on the sensor intensity for the imaging devices capturing the three-dimensional mesh and/or the image sequence.
In a variety of embodiments, the intensity I for pixel (n,m) of an imaging sensor can be defined as:
where r, g, and b are intensity functions for the red, green, and blue intensities respectively, where

(r,g,b) ∈ ([0,255], [0,255], [0,255]) ⊂ (ℕ, ℕ, ℕ)

and r, g, and b are functions of the light angle θ, the initial intensity I₀, and the corneal mesh coordinate c_n,m ∈ (x,y,z). In many embodiments, θ is expressed as the sum of the angle with the z-axis and the angle in the x-y plane: θ_z + θ_xy. In several embodiments, c_n,m is the first intersection of the perpendicular line from pixel (n,m) with the corneal mesh. In a number of embodiments, θ is expressed based on the normal vector γ_n,m at corneal mesh coordinate c_n,m. The intensity for pixel (n,m) can be expressed as the following matrix, where each row in the matrix corresponds to a captured image:

In many embodiments, θ_xy ∈ {0°, 90°, 180°, 270°} for the images corresponding to the side lights (e.g. left, right, above, and below images) and θ_xy = θ_z = 0 for the image corresponding to the coaxial light (e.g. the coaxial image), and I_n,m can be expressed as:
As the light moves from the light source, through the eye, and back out through the eye into the imaging device, refraction of the photons in the light occurs, particularly as the photons move through the different structures within the eye. Before the photons reach the camera sensor, they first pass from the light source through the cornea and aqueous humor to the iris. At this point, the photons can be absorbed, reflected, or scattered. It is in the iris that the events take place that are most significant to determining the (perceived) eye color. The photon's path is refracted several times: 1) at the anterior corneal edge when going from air to corneal stroma, 2) at the posterior corneal edge when going from corneal stroma to aqueous humor, and 3) at the anterior iris edge, when going from the salty tear water of the aqueous humor to the iris' stroma. The light rays (e.g. photons) can be backward-tracked from the camera sensor via the iris to the light sources to calculate the refraction in the sensor pixels of the imaging device.
The projection from sensor pixel to iris pixel, (n,m)→(i,j), can be defined based on a line FG, perpendicular to the sensor, where G ∈ (x,y,z) is the center of sensor pixel (n,m) and F ∈ (x,y,z) is the first intersection with the corneal mesh, such that x_G = x_F = x_n and y_G = y_F = y_m. Based on this, a photon's exiting angle at F can be calculated. The surface normal at F can be determined based on the capture of the three-dimensional mesh and the image sequence. The angle between the surface normal at F and the z-axis can be defined as γ_F,out, which corresponds to the angle at which the photon exited the cornea on its way to the camera sensor. In a variety of embodiments, γ_F,out can be expressed as:
If the refractive index n is defined by n = c/v, where c is the speed of light in vacuum and v is the phase velocity of light in the medium, then Snell's Law states that:

n_in · sin γ_in = n_out · sin γ_out
For point F, where n_out = n_air (e.g. the refractive index of air) and n_in = n_cornea (e.g. the refractive index of the cornea), the path through the cornea has angle γ_F,in with the surface normal at F:
In terms of θ_F,normal:

γ_F,in = sin⁻¹(0.73 · sin(π/2 − θ_F,normal))

or

γ_F,in = sin⁻¹(0.73 · cos θ_F,normal)
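A sketch of this refraction step, using only Snell's law and the indices stated in the text (n_air = 1.0, n_cornea = 1.37, so n_air/n_cornea ≈ 0.73); the function names are illustrative.

```python
import numpy as np

N_AIR, N_CORNEA = 1.0, 1.37  # refractive indices used in the text

def snell_in(gamma_out: float, n_out: float, n_in: float) -> float:
    """Solve n_in*sin(gamma_in) = n_out*sin(gamma_out) for gamma_in."""
    return float(np.arcsin((n_out / n_in) * np.sin(gamma_out)))

def gamma_F_in(theta_F_normal: float) -> float:
    """Refraction at the anterior cornea with gamma_F_out = pi/2 -
    theta_F_normal as in the text; since n_air/n_cornea ≈ 0.73, this
    reproduces gamma_F_in = asin(0.73 * cos(theta_F_normal))."""
    return snell_in(np.pi / 2 - theta_F_normal, N_AIR, N_CORNEA)
```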
Photons also experience some degree of refraction as they travel through the cornea (e.g. distance EF). The amount of refraction is based on the thickness of the cornea. A typical cornea thickness is approximately 0.6 mm or 600 μm. With this value, the distance of EF in mm can be calculated as follows:
and in terms of θ_F,normal:
In practice, the cornea is not of uniform thickness. In a variety of embodiments, the thickness of the cornea is between 520 μm and 670 μm. The above calculations can be refined to model the varying thickness by measuring the distance between first and second Purkinje reflections as the Purkinje reflections are related to corneal thickness. In general, the error margin of the refraction through the cornea is:
For point E, where n_out = n_cornea = 1.37 and n_in = n_aq.humor = 1.33, the path through the aqueous humor has angle γ_E,in with the surface normal at E:
The normal at F is approximately equal to the normal at E because of the proximity of F and E. Therefore, the angles to the normal of the photon's path can be assumed to be approximately equal:

γ_E,out ≈ γ_F,in

and the earlier equation can be expressed in terms of γ_F,out as follows:

and in terms of θ_F,normal:

γ_E,in = sin⁻¹(0.75 · cos θ_F,normal)
The length DE, the path through the aqueous humor, can be calculated based on the height of the cornea h_c and the distance from cornea to the lens. In a variety of embodiments, h_c = 3.4 mm can be used as a constant value. However, the corneal height is typically around 4.2 mm in young people and as low as 2.4 mm in older people. In several embodiments, h_c can be calculated based on the distance between the first (anterior cornea) and third (lens) Purkinje reflections.
With angle γ_E,in, a line from point E angled in direction γ_E,in can be established, with point D defined as the intersection of this line with the iris plane. Point D ∈ (x,y,z), the center point of pixel (i,j), is defined as:
As

θ_→EF = θ_F,normal − γ_F,in = θ_F,normal − sin⁻¹(0.73 · cos θ_F,normal)

and the length EF as:

For point D, the angle of vector DE can be expressed as:

θ_→DE = θ_E,normal − γ_E,in ≈ θ_F,normal − sin⁻¹(0.75 · cos θ_F,normal)

and length DE as:
With D (the midpoint of pixel (i,j)), E (the intersection at the posterior surface of the cornea), and the refraction angles at the anterior and posterior surfaces of the cornea defined for the image sequence, the refracted photon paths of the different lights coming into the pixel midpoints of (i,j) can be calculated.
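The back-tracing of one sensor pixel can be sketched as follows. This is a simplified 2-D (radial, z) version under stated assumptions: a uniform corneal thickness of 0.6 mm, the constant corneal height h_c = 3.4 mm, and straight segments at the refracted angles; the disclosure's exact length formulas for EF and DE appear only in the figures and are not reproduced here.

```python
import numpy as np

N_AIR, N_CORNEA, N_AQUEOUS = 1.0, 1.37, 1.33
CORNEA_THICKNESS_MM = 0.6   # typical value used in the text
CORNEA_HEIGHT_MM = 3.4      # h_c constant assumed in the text

def backtrace_to_iris(F: np.ndarray, theta_F_normal: float) -> np.ndarray:
    """Back-trace one sensor pixel's ray from its corneal entry point F
    down to the iris plane, in a 2-D (radial, z) section; valid for the
    off-axis region where theta_EF > 0. F = (radial, z) of the anterior
    intersection. Hypothetical geometry, not the disclosure's formulas."""
    # Refraction at the anterior cornea (air -> cornea).
    g_F_in = np.arcsin((N_AIR / N_CORNEA) * np.cos(theta_F_normal))
    theta_EF = theta_F_normal - g_F_in           # direction of segment EF
    # Segment EF through the cornea (uniform vertical-thickness assumption).
    length_EF = CORNEA_THICKNESS_MM / np.sin(theta_EF)
    E = F + length_EF * np.array([np.cos(theta_EF), -np.sin(theta_EF)])
    # Refraction at the posterior cornea (cornea -> aqueous humor),
    # with the normals at E and F assumed approximately equal.
    g_E_in = np.arcsin((N_CORNEA / N_AQUEOUS) * np.sin(g_F_in))
    theta_DE = theta_F_normal - g_E_in           # direction of segment DE
    # Walk down to the iris plane at z = F_z - h_c (sketch geometry).
    dz = E[1] - (F[1] - CORNEA_HEIGHT_MM)
    length_DE = dz / np.sin(theta_DE)
    D = E + length_DE * np.array([np.cos(theta_DE), -np.sin(theta_DE)])
    return D  # midpoint of iris pixel (i, j)
```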
The refraction of photons' paths from the light sources to the iris can be calculated, in particular the intensity I_i,j(r,g,b) of a photon travelling from a light source to iris pixel (i,j), with light angle θ_i,j,light. In several embodiments, the intensity can be back-calculated from the mid-point of iris pixel (i,j) to the light source. In a number of embodiments, this calculation can be based on Snell's Law. Turning now to
Lengths CD and BC can be calculated as:
As described herein, image synchronization (e.g. image matching) can be based on cornea measurements and subsequent refraction calculations through reverse ray-tracing of each camera sensor pixel's center point from the camera through the cornea to the iris and then from the iris through the cornea to each light source. These image synchronization techniques allow for a variety of additional modeling including, but not limited to, iris ambient occlusion calculations, melanin absorption coefficient calculations, and iris stroma scatter coefficient calculations.
The strongest specular reflections of the eye are typically the three Purkinje reflections that reflect off the corneal anterior surface, off the corneal interior surface, and off the lens. However, these reflections are typically not useful for determining eye models in accordance with embodiments of the invention. In a variety of embodiments, the specular reflection directly off the anterior edge of the iris is used in the generation of eye models. In several embodiments, the iris is modeled as a smooth surface; however, due to the unordered collagen fibril stroma of the iris, the surface of the iris is typically not smooth. The uneven surface of the iris causes brighter pixels where the light reflects directly off the fibers into the camera, such as via the θ_out angle. This can also cause certain pixels to be darker where higher-positioned fibers shade occluded fibers. In order to address these issues, the specular reflection and/or the occlusion shadows can be separated from the other optical phenomena, such as absorption and scattering.
Typically, specular reflection and shading are mostly independent of the wavelength of light, whereas absorption and scattering in the iris are mostly dependent on the wavelength of light. In addition, the contrast between occlusion shadow and specular reflection will be significantly lower on the coaxially lit image, where θ_z = 0, compared to the side-lit images where |θ_z| > 0.
In many embodiments, pheomelanin colors can be represented in RGB, with r = 255, 100 < g < 200, and b = 0. This correlates to wavelengths λ between 600 and 625 nm. Accordingly, pheomelanin can be represented in RGB as (r,g,b) ∈ [(2.55·g, g, 0), (1.28·g, g, 0)]. In several embodiments, eumelanin colors can be represented in RGB, with 200 < r < 255, g = 0, and b = 0. This correlates to wavelengths λ between 700 and 780 nm. Accordingly, eumelanin can be represented in RGB as (r,g,b) = (r, 0, 0).
In a variety of embodiments, scattering in the iris' stroma is inversely proportional to the fourth power of the wavelength (nm). When the source light is white, i.e. (r,g,b) = (c,c,c) for a constant c ∈ [0,255]:
In several embodiments, the maximum scatter is (r,g,b) = (0.289·255, 0.554·255, 255) = (74, 141, 255). In a number of embodiments, the minimum scatter is (r,g,b) = (1.156, 2.216, 4) ≈ (1, 2, 4). More generally, for scattering:

if b = x, then (r,g,b) = (0.289·x, 0.554·x, x)
In many embodiments, the following steps can be taken to separate specular reflection from melanin absorption and scattering. To determine separate scattering for the intensity pixel matrix I_i,j(r,g,b) ∈ (ℕ,ℕ,ℕ), the scatter can be split off as follows:

A scatter map matrix can be defined as:

S_i,j(r,g,b) = (0.289·s_i,j, 0.554·s_i,j, s_i,j)
The specular and ambient occlusion can be separated next. For the intensity pixel matrix I′_i,j(r,g,b) ∈ (ℕ,ℕ,ℕ), specular and ambient occlusion can be separated as follows:

A specular and ambient occlusion map matrix can be defined as:

A_i,j(r,g,b) = (a_i,j, a_i,j, a_i,j)
After separating scattering, specular and ambient occlusion, the remaining contribution to model in the eye is mostly melanin, which includes either or both of eumelanin and pheomelanin. As described herein, neither eumelanin nor pheomelanin has blue in it: pheomelanin has both green and red components, whereas eumelanin has only red. Based on this observation, for the intensity pixel matrix I″_i,j(r,g,b) ∈ (ℕ,ℕ,ℕ), the pheomelanin can be split off as follows:

Then the pheomelanin map matrix can be defined as:

P_i,j(r,g,b) = (t·p_i,j, p_i,j, 0)

and the eumelanin map matrix can be defined as:

E_i,j(r,g,b) = I‴_i,j(r,g,b)
As described above, the primary objective is to split off the specular and ambient occlusion in order to more accurately model the iris. In several embodiments, the above-described techniques model the complex light effects on the iris as linear functions in (r,g,b) space, which is a simplification of the true behavior. To complete the splitting of the specular and ambient occlusion, A_i,j(r,g,b) can be split from the intensity matrix as follows:

I_i,j(r,g,b) = E_i,j(r,g,b) + P_i,j(r,g,b) + S_i,j(r,g,b)
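The split equations themselves appear only in the figures; the following is a minimal per-pixel sketch of the linear model they imply. Treating the specular/ambient term a as estimated separately (e.g. from the coaxial/side-lit contrast) and the default pheomelanin red:green ratio t are assumptions, not values from the text.

```python
import numpy as np

def separate_pixel(rgb: np.ndarray, a: float, t: float = 2.0):
    """Split one pixel's intensity into scatter, specular/ambient,
    pheomelanin, and eumelanin terms under the linear model
    I = S + A + P + E with S = (0.289s, 0.554s, s), A = (a, a, a),
    P = (t*p, p, 0), E = (e, 0, 0). `a` is assumed already estimated;
    `t` lies in the text's 1.28..2.55 pheomelanin range."""
    r, g, b = rgb
    s = max(b - a, 0.0)                      # blue carries scatter + A only
    p = max(g - a - 0.554 * s, 0.0)          # green: scatter + A + pheomelanin
    e = max(r - a - 0.289 * s - t * p, 0.0)  # red: everything else is eumelanin
    S = np.array([0.289 * s, 0.554 * s, s])
    A = np.array([a, a, a])
    P = np.array([t * p, p, 0.0])
    E = np.array([e, 0.0, 0.0])
    return S, A, P, E
```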
The intensity due to scattering and absorption can be separated in a second pass. For simplicity, this process will be described with respect to a single pixel 610 (i,j) as shown in
The incident and outgoing angle at each pixel 610 (i,j) can be calculated as described herein. Similarly, the initial intensity, I₀(r,g,b), the distance travelled through the aqueous humor, d_i,j,a, and the distance travelled through the cornea, d_i,j,c, for each pixel (i,j) have been calculated as described herein. In many embodiments, the initial intensity can include intensity loss due to travel through air before reaching the cornea. Additionally, the observed intensity at the imaging sensor, I_n,m(r,g,b), can be measured at the time the image(s) in the image sequence are captured. In many embodiments, when the image sequence includes five images as described herein:
The scatter intensity model I_n,m(r,g,b) can be defined as:

I_n,m(r,g,b) = E_n,m(r,g,b) + P_n,m(r,g,b) + S_n,m(r,g,b)

where E_n,m(r,g,b) is the eumelanin reflection intensity in the θ_i,j,out direction, P_n,m(r,g,b) is the pheomelanin reflection intensity in the θ_i,j,out direction, and S_n,m(r,g,b) is the Mie scattering intensity in the θ_i,j,out direction.
The absorption coefficient μ_a [cm⁻¹] of melanosomes can differ significantly depending on the density of the melanosomes. The general shape of the melanosome absorption spectrum can be approximated as:
where λ [nm] is the wavelength of the incident light.
In a variety of embodiments, μ_a = 1.70·10¹²·λ^−3.48 for melanosomes in skin, while μ_a = 6.49·10¹²·λ^−3.48 for melanosomes in the retina. Melanosomes in the iris can be approximated as:

μ_a = M′·λ^−3.48 with 1.70·10¹² < M′ < 6.49·10¹²
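A one-line sketch of this approximation; the default chosen for M′ is only an illustrative midpoint within the stated range.

```python
def melanosome_absorption(wavelength_nm: float, m_prime: float = 4.0e12) -> float:
    """mu_a [cm^-1] ≈ M' * lambda^-3.48 for iris melanosomes, with
    1.70e12 < M' < 6.49e12 per the text; the default M' is illustrative."""
    return m_prime * wavelength_nm ** -3.48

# Example: absorption at the green sensor peak (~520 nm).
mu_a_green = melanosome_absorption(520.0)
```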
In several embodiments, the likelihood that melanin will reflect is given by Beer's law:

R_m = 1 − e^(−μ_a,m·d_m)

where μ_a,m is the absorption coefficient of melanin and d_m is the thickness of the melanin layer. In a variety of embodiments:

5 μm < d_m < 10 μm
Eumelanin and pheomelanin reflections at (n,m) can occur in direction θ_i,j,out:

E_n,m(r,g,b) + P_n,m(r,g,b) = I₀(r,g,b)·(T_cornea + T_aq.humor)·(1 − e^(−μ_a,m·d_m))·f

where f is the coefficient of diffuse reflection in the θ_i,j,out direction.
Light is subject to absorption when traveling through the cornea and aqueous humor. The photon survival rate T can be given by Beer's law:
T_aq.humor = T(d_i,j,aq) = e^(−μ_a,a·d_i,j,aq)

where μ_a,a is the absorption coefficient of water, and

T_cornea = T(d_i,j,c) = e^(−μ_a,c·d_i,j,c)

where μ_a,c is the absorption coefficient of the cornea.

In several embodiments, T_cornea is constant and T_aq.humor is a linear function of the distance travelled in the aqueous humor, d_i,j,a.
The melanin can be modeled as a top layer filter through which the photon travels before reaching the stroma of the iris. Some of these photons will interact with the melanin filter, depending on how much melanin is present. The survival rate of the photons traveling through melanin can also be given by Beer's law:
T_melanin = T_m = e^(−μ_a,m·d_m)

where μ_a,m is the absorption coefficient of melanin and d_m is the thickness of the melanin layer. In a variety of embodiments:

5 μm < d_m < 10 μm

as described herein.
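Under Beer's law the per-medium survival terms compose multiplicatively along the photon's path, as this sketch shows; all coefficient and thickness values in the example call are illustrative, not values from the text.

```python
import numpy as np

def survival(mu_a: float, d_cm: float) -> float:
    """Beer's law photon survival rate T = exp(-mu_a * d)."""
    return float(np.exp(-mu_a * d_cm))

# Survival through the full path is the product of the per-medium terms
# (cornea, aqueous humor, melanin layer); values below are placeholders.
T_total = (survival(mu_a=0.05, d_cm=0.06)        # cornea, ~600 um thick
           * survival(mu_a=0.02, d_cm=0.30)      # aqueous humor path
           * survival(mu_a=1400.0, d_cm=7e-4))   # melanin layer, ~7 um
```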
Mie scattering describes the scattering of an electromagnetic plane wave by a homogeneous sphere. In a variety of embodiments, the eye is approximately modeled as a homogeneous sphere. In several embodiments, the scattering of the photons Sn,m(r,g,b) can be expressed as:
S_n,m(r,g,b) = I₀(r,g,b)·(e^(−μ_a,m·d_m))·β(λ)·γ(θ)
where β(λ) is the Mie scattering coefficient and γ(θ) is the scattering phase function. In many embodiments, β(λ) can be approximated as:
where n is the refractive index of the iris' stroma and N is the molecular number density of the iris' stroma.
In a number of embodiments, γ(θ) can be approximated as:
where g is the anisotropy of the scattering, which indicates the direction and shape of the scattering.
If the refractive index of the iris equals that of the cornea, n = 1.37, then:
In many embodiments, the scatter density N and anisotropy g, as well as the absorption coefficient μ_a,m of melanin, can be calculated for each pixel (i,j). These values can be used to define the optical properties of the image sequence.
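Since the disclosure gives the λ⁻⁴ dependence and the anisotropy g but the approximation formulas for β(λ) and γ(θ) appear only in the figures, the sketch below uses two standard stand-ins: the Rayleigh small-particle form for the scattering coefficient and the Henyey-Greenstein phase function. Both are assumptions about the intended functions, and the number density default is a placeholder.

```python
import numpy as np

def beta_rayleigh(wavelength_nm: float, n: float = 1.37, N: float = 1e19) -> float:
    """Scattering coefficient in the Rayleigh small-particle limit,
    beta(lambda) = 8*pi^3*(n^2 - 1)^2 / (3*N*lambda^4), a standard form
    consistent with the stated lambda^-4 dependence. N [cm^-3] is a
    placeholder molecular number density, not a value from the text."""
    lam_cm = wavelength_nm * 1e-7
    return 8 * np.pi ** 3 * (n ** 2 - 1) ** 2 / (3 * N * lam_cm ** 4)

def phase_hg(cos_theta: float, g: float) -> float:
    """Henyey-Greenstein phase function, a common choice when only the
    anisotropy g is specified (the disclosure does not name the function)."""
    return (1 - g ** 2) / (4 * np.pi * (1 + g ** 2 - 2 * g * cos_theta) ** 1.5)
```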
A variety of scatter models can be used to model the scattering of light off the iris.
Although the process 900 is described with reference to the flowchart illustrated in
Once generated, the eye models herein can be integrated into a variety of computer-generated models. For example, the eye models can be used to provide accurate, realistic eyes for computer gaming, virtual environments, and/or any other computer-generated models.
Although the process 1000 is described with reference to the flowchart illustrated in
As described herein, the concentration of melanin pigmentation in the iris can be used to determine the structural color of the iris under different lighting environments. In a variety of embodiments, the following variables are used to determine melanin concentration:
The melanin extinction coefficients, including the extinction coefficient of eumelanin ε_eu(λ) [(cm)⁻¹(mg/ml)⁻¹] and the extinction coefficient of pheomelanin ε_pheo(λ) [(cm)⁻¹(mg/ml)⁻¹], can be defined based on the red, green, and blue variables. In several embodiments, the eumelanin and pheomelanin concentrations fall within a range defined by a high threshold value and a low threshold value. A step size can be defined in order to increase or decrease c_eu and c_pheo after each ray simulation to minimize I_model − I_simulated.
The eumelanin and pheomelanin absorption coefficients, μ_eu [cm⁻¹] and μ_pheo [cm⁻¹], can be calculated by multiplying the extinction coefficient by the concentration. The absorption coefficient indicates the level of absorption per distance for a particular wavelength. For example, the wavelength can be for red, green, blue, infrared, and/or any other wavelength as appropriate.
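A minimal sketch of this multiplication; the numeric values in the example call are illustrative only.

```python
def absorption_coefficient(epsilon: float, concentration: float) -> float:
    """mu [cm^-1] = extinction coefficient [(cm)^-1 (mg/ml)^-1]
    * concentration [mg/ml], evaluated per wavelength."""
    return epsilon * concentration

# Illustrative only: eumelanin at one red wavelength.
mu_eu_red = absorption_coefficient(epsilon=3.0, concentration=1.5)
```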
As described above, a variety of refractive indices are utilized for modeling the path of light rays through the various layers of the eye; indices for the cornea, aqueous humor, anterior base layer (ABL), stroma, and/or iris pigment epithelium (IPE) can be defined:
The following variables can be used to define the thickness (e.g. height) of the iris layers, ABL and stroma.
The scatter coefficient μs and the anisotropy coefficient g can be calculated using Mie Theory. In several embodiments, the scatter coefficient remains a fixed constant for each wavelength. In many embodiments, the scatter coefficient and/or anisotropy coefficient are automatically recalculated during each modeling loop as described in more detail below. In a variety of embodiments, the following variables can be used to define the scatter coefficient:
It should be noted that any of the above variables can be calculated as described herein and/or predefined values as known in the state of the art can be utilized. Further, more or fewer variables can be used depending on the specific requirements of particular applications of embodiments of the invention.
The ABL ratio can be defined as the fraction of melanin in the ABL over the total melanin in the iris:
for all pixels and/or pixel groups, r_A(i,j), where:

In many embodiments, the ABL melanin ratio r_A(i,j) remains constant during the process 1100.
Light rays from a light source can be simulated (1112). The simulation can include simulating one or more light rays (e.g. photons) at a time from the light source through the iris layers, where they are either absorbed or transmitted out of the iris towards the camera lens. In many embodiments, a Monte Carlo simulation can be used to randomize the simulation. The Monte Carlo simulation can use random numbers u_i, for i = 1, 2, 3, . . . with values uniformly distributed in the interval [0,1], which are generated on the fly during the simulation. The following random numbers can be used in the simulation:
It should be noted that more or fewer variables can be used depending on the specific requirements of particular applications of embodiments of the invention.
In a variety of embodiments, a ray can be simulated by starting a new ray at the boundary of the ABL coming from the light source. The ray travels from boundary to boundary. At a boundary, one or more of a variety of events can take place, including (1) a reflection event (e.g. the ray is reflected back into the incident layer) and/or (2) a refraction event (e.g. the ray passes through the boundary). In between boundaries, one of the following events takes place: (1) an absorption event (e.g. the ray ends and the simulation of this ray is complete), (2) a scattering event (e.g. the ray changes direction and travels to the next boundary), or (3) neither (e.g. the ray continues in a straight line to the next boundary). In some embodiments, the reflection and refraction events can be diffuse events.
An example simulation of a ray traveling through an eye in accordance with embodiments of the invention is described in more detail with respect to FIG. 12. The simulation 1200 starts with a ray 1212 originating at pixel 1210 (i+3,j) in the aqueous humor layer 1226. As the ray 1212 travels through the ABL boundary 1234, a refraction event occurs, altering the path of the ray 1212 through the ABL layer 1224, while other rays may be reflected. The ray 1212 passes through the ABL layer 1224 undisturbed, while other rays may be absorbed by melanin. As the ray 1212 travels through the stromal boundary 1232, a second refraction event occurs, altering the path of the ray 1212 through the stromal layer 1222, while other rays may be reflected. The ray 1212 passes through the stromal layer 1222 undisturbed, while other rays may be scattered by collagen fibrils or absorbed by melanin through an attenuation event. As the ray 1212 interacts with the IPE boundary 1230, the ray 1212 is reflected and continues as reflected ray 1214 through the stromal layer 1222, while other rays may pass through the IPE boundary 1230 and be absorbed in the IPE layer 1220. The reflected ray 1214 passes through the stromal layer 1222 undisturbed, while other rays may be scattered by collagen fibrils or absorbed by melanin through an attenuation event. The reflected ray 1214, as it crosses the stromal boundary 1232, experiences a refraction event that alters the path of the reflected ray 1214 in the ABL layer 1224, while other rays may be reflected. The reflected ray 1214 can pass through the ABL boundary 1234, through the aqueous humor layer 1226, and be detected at pixel 1210 (i,j), while other rays may be absorbed by melanin.
Melanin concentrations can be calculated (1114). The melanin concentrations can be determined based on the number and/or intensity of the rays that exit the iris as described herein. Based on the rays, the melanin coefficients c_eu(i,j) and c_pheo(i,j) can be calculated and/or updated for all pixels (i,j) through which each ray traveled. For example, for each simulated ray that exits the iris towards the camera sensor at pixel (x,y), the difference (I_reality(x,y) − I_simulated(x,y)) can be minimized by increasing or decreasing the melanin concentrations c_eu(i,j) and c_pheo(i,j), where pixels (i,j) are all the pixels through which this particular ray traveled.
The simulation of light rays and the calculation of melanin coefficients can be repeated (1116) until the simulation reaches a desired number of iterations (e.g. a threshold number of rays have been simulated) and/or a desired accuracy is reached. In many embodiments, the desired accuracy is determined based on whether the amount of change between simulations is below a threshold value. The number of simulated rays can be between a minimum and/or a maximum threshold. For example, the minimum threshold may be 10,000 rays and the maximum threshold may be 10,000,000 rays, although any number of rays can be simulated as appropriate. If an accuracy threshold has been reached (1116), iris color data can be calculated (1118). If an accuracy threshold has not been reached (1116), the process 1100 returns to step 1112. The iris color data can be calculated for any lighting environment (1118) based on the calculated melanin concentrations as described herein.
Although the process 1100 is described with reference to the flowchart illustrated in
The following is an example algorithm for simulating a light ray as described with respect to
Every new ray encounters the ABL boundary from the aqueous humor first.
At any boundary, first decide whether to reflect or transmit the ray. A diffuse perturbation can be applied when the ray (re)enters the ABL and stromal layers. The ray successfully completes when the ray (re)enters the aqueous humor layer. The ray ends when the ray enters the IPE layer. Pseudocode 1300 conceptually showing this calculation is shown in
When entering the ABL layer, the ray can either be absorbed (the ray ends) or be transmitted (the ray continues in a straight line to the next boundary). When entering the stromal layer, the ray can be absorbed (the ray ends), be scattered (the ray changes direction and then continues in a straight line to the next boundary), or be transmitted (the ray continues in an undisturbed straight line to the next boundary). Pseudocode 1320 conceptually showing this calculation is shown in
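The pseudocode in the figures is not reproduced here; the following is a hedged Python sketch of the same event structure, with the geometry reduced to a layer index plus an up/down flag, and with placeholder incident cosines and refractive indices (including a hypothetical n_IPE).

```python
import numpy as np

rng = np.random.default_rng()

# Refractive indices per layer; values are placeholders, not the text's.
N_IDX = {"aqueous": 1.33, "abl": 1.37, "stroma": 1.38}
N_IPE = 1.40  # hypothetical IPE index

def fresnel_reflectance(n_i: float, n_t: float, cos_i: float) -> float:
    """Unpolarized Fresnel reflection coefficient R at a flat boundary."""
    sin_t2 = (n_i / n_t) ** 2 * (1.0 - cos_i ** 2)
    if sin_t2 >= 1.0:
        return 1.0  # total internal reflection
    cos_t = np.sqrt(1.0 - sin_t2)
    r_s = ((n_i * cos_i - n_t * cos_t) / (n_i * cos_i + n_t * cos_t)) ** 2
    r_p = ((n_i * cos_t - n_t * cos_i) / (n_i * cos_t + n_t * cos_i)) ** 2
    return 0.5 * (r_s + r_p)

def trace_ray(mu_a_abl: float, mu_a_stroma: float, mu_s: float,
              seg_len: float, cos_i: float = 1.0) -> str:
    """Walk one ray through the layer stack following the event structure
    in the text: Fresnel test at every boundary, absorption in the ABL,
    absorption/scatter/transmission in the stroma, termination in the
    IPE, success on re-exit into the aqueous humor. seg_len stands in
    for the boundary-to-boundary length l_bb."""
    layer, down = "abl", True  # every new ray enters the ABL from above
    for _ in range(10_000):    # safety bound on the number of events
        if layer == "abl":
            # Absorption while traversing the ABL (Beer's law test).
            if rng.random() < 1.0 - np.exp(-mu_a_abl * seg_len):
                return "absorbed"
            nxt = "stroma" if down else "aqueous"
        else:  # stroma
            mu = mu_a_stroma + mu_s
            if rng.random() < 1.0 - np.exp(-mu * seg_len):  # attenuated
                if rng.random() < mu_a_stroma / mu:
                    return "absorbed"
                # Scattered: crude 50/50 stand-in for the Rayleigh
                # phase-function direction sampling described below.
                down = rng.random() < 0.5
            nxt = "ipe" if down else "abl"
        if nxt == "ipe":
            # IPE boundary: reflect back into the stroma or end in the IPE.
            if rng.random() < fresnel_reflectance(N_IDX["stroma"], N_IPE, cos_i):
                down = False
                continue
            return "absorbed"
        # Fresnel test at the remaining boundaries.
        if rng.random() < fresnel_reflectance(N_IDX[layer], N_IDX[nxt], cos_i):
            down = not down    # reflected back into the incident layer
        elif nxt == "aqueous":
            return "exited"    # ray completes toward the camera
        else:
            layer = nxt        # transmitted into the next layer
    return "absorbed"
```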
The new direction ν(α_R, β_R) for a ray can be determined using a Rayleigh scattering phase function. Pseudocode 1340 conceptually showing this calculation is shown in
In many embodiments, a Fresnel test can be performed to determine whether a ray should be reflected or transmitted (e.g. refracted) through a boundary. As each layer has a different refractive index, either reflection or transmission can occur at all boundaries. This reflection or transmission can be calculated based on a reflection coefficient R, where ϑ_i is the incident angle from the surface normal. If a boundary is flat, ϑ_i = α (the polar angle) and
If u₁ ≤ R, then the ray is reflected; otherwise the ray is transmitted.
The angle of reflection/transmission can be determined. When a ray enters (through reflection or transmission) either the ABL or the stromal layer, the ray can be diffusely perturbed due to the internal arrangement of the tissues. To account for this effect, a warping function based on the cosine distribution can be applied, yielding the diffused vector ν_d(α_d, β_d):

ν_d(α_d, β_d) = (cos⁻¹((1 − u₂)^(1/2)), 2π·u₃)

where α_d is the polar angle and β_d is the azimuthal angle.
In several embodiments, the cosine perturbation can include a bias towards
In many embodiments this is determined based on the average of two angles:
where β_i is the incident azimuthal angle.
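A sketch of this cosine-distribution perturbation; the text names the incident azimuth β_i, while the second angle in the average (taken here as the refracted ray's azimuth β_r) is an assumption, since that detail appears only in the figures.

```python
import numpy as np

def diffuse_perturbation(u2: float, u3: float, beta_i: float, beta_r: float):
    """Cosine-distribution warp for the diffused vector
    nu_d(alpha_d, beta_d) = (arccos((1 - u2)**0.5), 2*pi*u3), with the
    azimuth biased toward the average of two angles. beta_r (the
    refracted azimuth) is an assumed choice for the second angle."""
    alpha_d = np.arccos(np.sqrt(1.0 - u2))   # polar angle
    beta_d = 2.0 * np.pi * u3                # azimuthal angle
    bias = 0.5 * (beta_i + beta_r)           # assumed bias center
    return alpha_d, (beta_d + bias) % (2.0 * np.pi)
```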
Rejection sampling can be used to prevent a perturbed direction of propagation that would invalidate the result of the Fresnel test performed at a particular boundary. For example, if the Fresnel test indicates a refraction, the ensuing diffuse perturbation using the cosine distribution is rejected if it turns the path of the ray into a reflection.
When a ray is traversing the ABL, possible absorption due to the presence of eumelanin and pheomelanin pigments can be determined. In a variety of embodiments, the absorption coefficient can be calculated for all pixels along the path length based on the average of the absorption coefficients of eumelanin and pheomelanin. The probability of absorption can be calculated as follows:
When the ray travels through the stroma, it may be absorbed, scattered, or transmitted undisturbed by the tissue in the stroma. The attenuation type, i.e. absorbed, scattered, or transmitted, can be modeled based on the scatter probability, the absorption probability, and the distance that the ray travels through the stroma. In many embodiments, the absorption coefficient can be calculated for all pixels along the path length, taking the average of the absorption coefficients of eumelanin and pheomelanin:

μ_a,stroma(λ) = μ_a,stroma(λ) + (ε_eu(λ)·c_eu(i,j)·(1 − r_A) + ε_pheo(λ)·c_pheo(i,j)·(1 − r_A))/2

The attenuation coefficient can be calculated as the sum of the absorption and scatter coefficients:

μ(λ) = μ_a,stroma(λ) + μ_s(λ)

where μ_s(λ) is a constant as described herein.
The attenuation probability can be calculated as:

P_μ(λ) = 1 − exp(−μ(λ)·l_bb)
The absorption probability can be calculated as:
The attenuation results can be determined by:
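A sketch of this attenuation decision. The attenuation test uses the stated P_μ(λ); splitting attenuated rays into absorbed versus scattered by the ratio μ_a/(μ_a + μ_s) is an assumption consistent with standard Monte Carlo transport, as the disclosure's exact absorption-probability formula appears only in the figures.

```python
import numpy as np

def attenuation_event(mu_a_stroma: float, mu_s: float, l_bb: float,
                      u_a: float, u_b: float) -> str:
    """Pick the attenuation result for one boundary-to-boundary segment
    of length l_bb: attenuate with probability
    P_mu = 1 - exp(-(mu_a + mu_s) * l_bb), then split attenuated rays
    into absorbed vs. scattered by mu_a / (mu_a + mu_s) (assumed split)."""
    mu = mu_a_stroma + mu_s
    if u_a >= 1.0 - np.exp(-mu * l_bb):
        return "transmitted"  # ray continues undisturbed
    return "absorbed" if u_b < mu_a_stroma / mu else "scattered"
```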
When the ray completes its path, the ray exits the ABL towards the camera pixel. At this stage, the ray can be evaluated for all the pixels (i,j) that the ray traversed, and a determination to either increase or decrease c_eu(i,j) and c_pheo(i,j) can be made. In many embodiments, c_eu(i,j) and c_pheo(i,j) are increased or decreased to make this type of ray occur more or less often in future simulations.
The ray-occurrence probability of this ray can be calculated as:
For a particular pixel, when the real intensity is higher than the simulated intensity, the ray-occurrence probability can be increased, which corresponds to a melanin concentration decrease.
To decrease the ray-occurrence probability, the concentration of melanin can be increased.
In a variety of embodiments, the concentration of melanin in the pixel set (i,j) (e.g. the pixels through which the ray traversed before exiting at pixel (x,y)) can be increased or decreased to a level such that δ_λ(x,y) approximates zero.
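A minimal sketch of this adjustment along a ray's pixel trail; the fixed step size and the symmetric treatment of the two pigments are assumptions, since the text only requires driving δ_λ(x,y) toward zero.

```python
def update_concentrations(c_eu, c_pheo, trail, delta, step=0.01):
    """Nudge melanin concentrations along a ray's pixel trail so the
    simulated intensity approaches the captured one. delta is
    I_reality(x,y) - I_simulated(x,y) at the exit pixel; a positive
    delta calls for a higher ray-occurrence probability (less melanin),
    a negative delta for a lower one (more melanin)."""
    for (i, j) in trail:                 # pixels the ray traversed
        if delta > 0:
            c_eu[i][j] = max(c_eu[i][j] - step, 0.0)
            c_pheo[i][j] = max(c_pheo[i][j] - step, 0.0)
        elif delta < 0:
            c_eu[i][j] += step
            c_pheo[i][j] += step
    return c_eu, c_pheo
```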
It will be appreciated that all of the disclosed methods and procedures described herein can be implemented using one or more computer programs, components, and/or program modules. These components may be provided as a series of computer instructions on any conventional computer readable medium or machine-readable medium, including volatile or non-volatile memory, such as RAM, ROM, flash memory, magnetic or optical disks, optical memory, or other storage media. The instructions may be provided as software or firmware and/or may be implemented in whole or in part in hardware components such as ASICs, FPGAs, DSPs, or any other similar devices. The instructions may be configured to be executed by one or more processors, which when executing the series of computer instructions, performs or facilitates the performance of all or part of the disclosed methods and procedures. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various aspects of the disclosure.
Although the present disclosure has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. In particular, any of the various processes described above can be performed in alternative sequences and/or in parallel (on the same or on different computing devices) in order to achieve similar results in a manner that is more appropriate to the requirements of a specific application. It is therefore to be understood that the present disclosure can be practiced otherwise than specifically described without departing from the scope and spirit of the present disclosure. Thus, embodiments of the present disclosure should be considered in all respects as illustrative and not restrictive. It will be evident to one skilled in the art to freely combine several or all of the embodiments discussed here as deemed suitable for a specific application of the disclosure. Throughout this disclosure, terms like "advantageous", "exemplary" or "preferred" indicate elements or dimensions which are particularly suitable (but not essential) to the disclosure or an embodiment thereof, and may be modified wherever deemed suitable by the skilled person, except where expressly required. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.
The instant application claims priority to U.S. Provisional Patent Application No. 63/237,674, entitled “Systems and Methods for Modeling Realistic Eye Color” and filed Aug. 27, 2021, the disclosure of which is hereby incorporated by reference in its entirety.