The present invention concerns a multi-spectral light-field device. The present invention also concerns an imaging system comprising such a device, and a system for object identification, in particular for moving object identification, comprising such a device.
Object identification is useful in many applications, e.g. well-being, health, cosmetics, etc. The object of interest has a number of properties, such as a shape, a color, a size, a surface texture, a material of which it is made, etc. Light emanates from a point of an object (or “object point”) as a light-field containing a directional light distribution function, which is related to the object's reflection function.
A conventional camera captures only the reflected intensity of a single object point in a single pixel of its (two-dimensional) image sensor. Thus, the conventional camera captures only the footprint of the object and a limited number of colours.
Light-field cameras fragment the light-field of the object by a micro-lens array into mosaic images of varying view-points, in order to retrieve depth information and thus to allow the object's size to be determined as well. Known light-field cameras are capable of portraying the directional distribution of an object point.
In this context, a micro-lens array is an array of micro-lenses, i.e. an array of ultra-thin or very flat optical components, having an optic height in the order of a few millimeters. The micro-lens array is connected, preferably directly connected, to an image sensor.
In this context, the optic height is the height (or the thickness) from the first surface of the first optical component to the image sensor it is cooperating with. In this context, a micro-optical component (or ultra-thin optical component or very flat optical component) is an optical component having an optic height in the order of a few millimeters, e.g. 2 mm, 1 mm or smaller.
It is also known to explore the object's spectral content by using spectral or multi-spectral cameras. Common multi-spectral cameras deliver a three-dimensional intensity map without taking into account the light-field. The three-dimensional intensity map comprises intensity values per object point position and wavelength. The complete object is described within a spectro-spatial data cube. This data cube is often used to determine the object's material.
Besides the shape and the spectral information of an object point, there is another object property suitable to be detected: its type of reflection. The object reflection can vary in a range comprising, among others, a completely diffusive reflection (as in the case of a sand-blasted metal surface), a completely specular reflection (as in the case of a polished metal mirror) and a non-symmetrical reflection (as in the case of an engraved metal foil). Such reflectance properties are connected to the object's surface property, which imprints a certain reflectance intensity function, e.g. the bi-directional reflectance distribution function (“BRDF”).
Spectral imaging based on light-field cameras (or multi-spectral light-field cameras) should help to identify at least the object's shape, size and material, and further its surface properties, in one shot. Thus, such cameras are ideal candidates for object identification devices.
The document US9307127 describes a multi-spectral light-field camera comprising an imaging component (e.g. an imaging lens), a micro-lens array in the focal plane of this imaging lens, an image sensor in the back-focal plane of the micro-lens array and two different sets of color filter arrays. The first filter array is placed close to the stop plane of the imaging lens (i.e. near the diaphragm position of the imaging lens) and the second filter array is directly attached to the image sensor. The light from an object passes through the respective filters of the first filter array and of the second filter array, to simultaneously form a plurality of the object's spectral image types on an image plane of the image sensor. Large ray angles in the stop plane are typical for very small optical devices like smartphone cameras. In order to avoid vignetting, the first filter array has to provide spectral transmission for the complete spectrum. Thus, the used filters are bandpass filters providing only a broad spectral width. For a higher spectral resolution, the filter functions would have to be more specific for the ray angles, too. Therefore, the described solution is complex, as two filter arrays are used. Moreover, it is not adapted as such for providing high spectral resolution when integrated in a handheld device such as a smartphone.
The document EP2464952 describes a spectral light-field camera comprising a pin-hole as first imaging lens, like in a camera obscura. The “imaged” beams pass a dispersive component (such as a grating or a prism) and are relayed by a second lens towards a micro-lens array. An image sensor is placed in the back-focal plane of the micro-lens array. Due to the dispersive component, continuous hyperspectral information of the object is available. The main drawback of this solution is the low light throughput for high-resolution results, since the light transmission is governed by the entrance pin-hole diameter, which also determines the spectral resolution. Moreover, the beam path is long and therefore not suitable for very compact devices such as a smartphone camera.
A disadvantage of light-field cameras is the computational effort for the image reconstruction. Recent attempts try to use machine-learning for this image reconstruction. The document US2019279051 describes a classification method based on deep learning for a (non-spectral) light-field camera. However, and especially for a spectral light-field camera, the complexity of the delivered data may limit the potential of a full image reconstruction in real-time on the limited computational resources of a mobile device.
An aim of the present invention is the provision of a multi-spectral light-field device that overcomes the shortcomings and limitations of the state of the art.
Another aim of the invention is the provision of a multi-spectral light-field device adapted to be integrated in a small, compact and/or handheld (mobile) device, such as a smartphone.
Another aim of the invention is the provision of a multi-spectral light-field device adapted to deliver high resolution images.
An auxiliary aim of the invention is the provision of an object identification system allowing an image reconstruction in real-time on the limited computational resources of a mobile device.
According to the invention, these aims are attained by the object of the attached claims, and especially by the multi-spectral light-field device according to claim 1, whereas dependent claims deal with alternative and preferred embodiments of the invention.
The multi-spectral light-field device according to the invention comprises:
- an imaging component;
- an optical filter, placed between the imaging component and the micro-lens array, and arranged so as to associate a spectral distribution to an angular distribution;
- a micro-lens array, arranged so as to transform this spectral distribution into a spatial distribution on an image plane.
In this context, the expression “spectral distribution” indicates a given amplitude or intensity, as a function of a wavelength and/or of a polarization.
In this context, the expression “angular distribution” indicates a given amplitude or intensity, as a function of the output angle.
In this context, the expression “spatial distribution” indicates a given amplitude or intensity, as a function of the position on the image plane.
The claimed optical filter associates a spectral distribution to an angular distribution, i.e. defines a function linking the spectral distribution to the angular distribution.
The optical filter of the device according to the invention is then placed between the imaging component and the micro-lens array. It is arranged so as to filter the (indistinguishable) spectral content from the imaging component as a function of the incidence angle(s).
In other words, the optical filter is arranged so as to transform the input signal defined on a range of angles into an output signal comprising directional spectral or angular spectral contents, i.e. into a signal comprising angle-dependent spectral contents of the light-field. Those angle-dependent spectral contents are thus spatially distributed on an image plane by the micro-lens array. In one preferred embodiment, the device according to the invention comprises an image sensor in the image plane.
In one preferred embodiment, the filter comprises a substrate supporting one or more layers and/or one or more structures.
According to the invention, the optical filter is arranged so as to transmit to the micro-lens array the wavelengths of the received light rays in dependency of the angle of incidence (AOI) of the light rays on the optical filter. In other words, the optical filter according to the invention is arranged so as to transform an input signal defined on a range of angles and comprising an indistinguishable spectral content into an output signal comprising (different) spectral distributions for each angle of the angular distribution. Those (different) spectrally sorted distributions are then spatially separated on an image plane by the micro-lens array.
In other words, the claimed optical filter makes it possible to create a wavelength-dependent spatial distribution of the light-field on an image plane. The claimed optical filter is therefore an AOI-dependent filter, as its transmission profile or function depends on the light incidence angle.
Moreover, since the claimed optical filter associates a spectral distribution to an angular distribution, the claimed micro-lens array is arranged to transform this spectral distribution into a spatial distribution on the image plane. In other words, thanks to the presence of the claimed optical filter, the input signal for the claimed micro-lens array is not an angular distribution as in the state of the art, but a spectral distribution.
Thanks to the presence of a micro-lens array, the device according to the invention provides the advantage of also retrieving depth information, and thus of allowing the object's size to be further determined, as in light-field cameras. Thanks to the presence of the claimed optical filter, the device according to the invention provides the advantage of retrieving the object's surface properties as well.
In other words, the invention provides the advantage of a device which is at the same time a light-field device and a multi-spectral device, and which is simpler and more compact than the known multi-spectral light-field devices, since the claimed multi-spectral light-field device comprises one single optical filter. Therefore, the claimed multi-spectral light-field device can be easily integrated in a small, compact and/or handheld (mobile) device, such as a smartphone.
The multi-spectral light-field device according to the invention is then capable of collecting all relevant data of an object point in the field of view, comprising BRDF data, in a snap-shot.
In one preferred embodiment, the present invention concerns also an object identification system, comprising the multi-spectral light-field device according to the invention, and a machine-learning module connected to the multi-spectral light-field device, and arranged for identifying the object based on data collected by the multi-spectral light-field device.
In other words, a snap-shot of the multi-spectral light-field device of the invention is the input to a machine-learning module, whose output is the identified object and also (but not necessarily) its properties.
In one preferred embodiment, both the multi-spectral light-field device and the machine-learning module belong to a mobile device. Advantageously, the claimed system is therefore optimized for limited computational resources of this mobile device.
In one preferred embodiment, the machine-learning module is arranged for retrieving multi-spectral 3D-images out of multi-spectral light-field snap-shot images from the multi-spectral light-field device.
In one embodiment, the object identification system comprises:
- a first machine-learning module, arranged for identifying the shape of the object;
- a second machine-learning module, arranged for identifying the object based on its spectral content;
- a third machine-learning module, arranged for evaluating the results of the first and second machine-learning modules, so as to deliver the identified object.
In one embodiment, the first machine-learning module, the second machine-learning module and the third machine-learning module are the same machine-learning module.
In another embodiment, any of the first machine-learning module, the second machine-learning module and the third machine-learning module can be replaced with a hand-designed (or hand-crafted) algorithm.
Exemplary embodiments of the invention are disclosed in the description and illustrated by the drawings.
Alternatively, the imaging component can be made of more than two lens components.
The illustrated optical filter 3 comprises a substrate 30 and one or more layers (of coatings) and/or one or more structures 31, supported by the substrate 30.
The micro-lens array 4 comprises a set of micro-lenses 44 and a substrate 40.
The micro-lens array 4 images the plane of the aperture 21 with coordinates Ax, Ay onto the image sensor 5. Thus, parts of the light-field of each object point OP are captured, wherein the spatial distribution on the image sensor 5 depends on the transmitted angles of the optical filter 3.
The optical filter 3 is placed between the imaging component 2 and the micro-lens array 4. The optical filter 3 has the inherent property of transmitting the spectral distributions, e.g. the wavelengths λi in the illustrated embodiment, in dependency of the angles of incidence, where θ denotes the radial angle and ϕ the azimuthal angle of incidence of the rays on the optical filter 3.
The micro-lens array 4 converts the spectral distribution to a certain spatial position on the image sensor 5, denoted in the following by the coordinates x and y. The power at sensor position L(x, y) depends on the filter transmission function T(λ, ϕ, θ) according to the following formula:
L(x,y)=L(Ax,Ay)T(λ,ϕ,θ) (1)
Since each micro-lens 44 allocates for each point of the aperture 21 a different point in the sensor plane of the image sensor 5, and each aperture point causes a different angle of incidence θi on the optical filter 3, the spectral content of the object point OP is spatially distributed onto the image sensor 5.
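Purely as an illustration of this mapping (and not as part of the claimed subject-matter), the following Python sketch traces an aperture point Ax to a sensor position behind a single micro-lens; the micro-lens focal length, the linear filter dispersion and the transmitted wavelength at normal incidence are assumed values:

    import math

    # Assumed, illustrative values (not taken from the description).
    f_imaging = 1.44e-3      # back focal length of the imaging lens [m]
    f_ml = 50e-6             # back focal length of one micro-lens [m]
    d_lambda_d_theta = 2.0   # assumed filter dispersion [nm per degree of AOI]
    lambda_0 = 550.0         # transmitted wavelength at normal incidence [nm]

    def sensor_spot(aperture_x):
        # Aperture coordinate Ax -> AOI on the filter -> transmitted
        # wavelength -> spatial position behind the micro-lens (formula (1) context).
        theta = math.degrees(math.atan(aperture_x / f_imaging))
        lam = lambda_0 + d_lambda_d_theta * theta
        x = f_ml * math.tan(math.radians(theta))
        return theta, lam, x

    for ax in (0.0, 0.2e-3, 0.4e-3):
        theta, lam, x = sensor_spot(ax)
        print(f"Ax={ax * 1e3:.1f} mm -> AOI={theta:.1f} deg, "
              f"lambda={lam:.0f} nm, x={x * 1e6:.1f} um")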
According to the invention, parts of the spectral and directional content of the light-field of each object point are captured. For object identification, the captured spectral, spatial and angular data are analysed. In one preferred embodiment and as discussed below, a machine-learning module is used for object identification.
In other words, the optical filter 3 is arranged so as to transform an input signal defined on a range of incidence angles into an output signal comprising a spectral distribution associated to an angular distribution. In other words, the output signal comprises angle-dependent spectral contents of the light-field. Those angle-dependent spectral contents are thus spatially distributed on an image plane by the micro-lens array 4.
The optical filter 3 thus makes it possible to create a wavelength- and/or polarization-dependent spatial distribution of the light-field on the image sensor 5.
Advantageously, the multi-spectral light-field device 10 according to the invention is sufficiently compact and therefore it can be integrated into a mobile device such as a smartphone. In one preferred embodiment, the size of the device 10 is approximately 3×3×3 mm3.
Advantageously, the multi-spectral light-field device 10 according to the invention transmits the spectrum for an entire image without specific bands (“hyperspectral”). Therefore, it can be adapted to any type of image sensor 5, whose pixel resolution will determine the spectral resolution.
Advantageously, the multi-spectral light-field device 10 captures information within one frame: therefore, it is a snap-shot camera that can measure the properties of moving objects.
The spectral resolution of the multi-spectral light-field device 10 can be tuned by balancing the F-number of the imaging lens 22, 24 of the imaging component 2, the filter function of the optical filter 3, and the AOI on the micro-lens array 4. Depending on the filter function, its layout and distribution, different embodiments are described in the following.
In a first embodiment, an optical filter 3 characterised by a single filter function is used. This optical filter 3 comprises at least one layer.
In one preferred embodiment, the imaging component 2 is adapted to the optical filter 3. For example, the imaging component 2 is arranged so as to set the range of incidence angles on the optical filter 3, e.g. by adjusting the F-number F# of the imaging component 2 so that the set range of incidence angles on the optical filter 3 includes the angular limits of the filter transmission function. In other words, the imaging component has an F-number such that the range of incidence angles on the optical filter is within the angular acceptance of the optical filter.
The opposite strategy could be used as well, setting up a gradient or step-wise filter function that matches the range of incidence angles of the imaging component 2. In this case, the optical filter 3 has a filter transmission function which is not constant along the filter's radial dimension r to fit a non-constant range of incidence angles along the filter's radial dimension r set by the imaging component 2, as will be discussed later.
An example of adapting the imaging component 2 to the optical filter 3 is described in the following.
For objects at a distance Z=g much larger than the focal length f, for example g>100×f or g>1000×f, the cone angle of the light-field is given by the aperture diameter D, the chief ray angle θ(r), and the focal length f of the imaging lens having an F-number F#=f/D.
For the object point at the position X=Y=0, its light-field's spectral range is λ0(θ0)<λ<λ1(θ1), where the minimum angle of incidence on the optical filter 3 is θ0=0° and the maximum angle is θ1, wherein:

tan θ1=D/(2f)=1/(2F#)
The filter function T(λ, ϕ, θ) is constant, i.e. it does not depend on the optical filter's radial dimension r.
The transmitted spectrum changes with the chief ray angle θ(r), wherein:
tan θ2=tan θ(r)−tan θ1 and tan θ3=tan θ(r)+tan θ1 (4)
In this case, the wavelengths λ<λ2(θ2) would not be transmitted for the largest chief ray angles θ(r). In one embodiment and for a constant filter function, the optical design provides for each point in the optical filter plane a minimum angle of incidence of θ=0°, by a large aperture that fulfils the equation tan θ1≥tan θ(rmax), so that tan θ2 becomes zero. Thus, the common spectral range of central and marginal light-fields is extended to λ0(θ0)<λ<λ(θ(rmax)).
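As a numerical sketch of this angular bookkeeping (with an assumed F-number and chief ray angle), the AOI range [θ2, θ3] of equation (4) can be evaluated as follows:

    import math

    def cone_angles(f_number, chief_ray_deg):
        # tan(theta1) = D/(2f) = 1/(2 F#) for an object at g >> f.
        tan_t1 = 1.0 / (2.0 * f_number)
        tan_tr = math.tan(math.radians(chief_ray_deg))
        theta2 = math.degrees(math.atan(tan_tr - tan_t1))   # equation (4)
        theta3 = math.degrees(math.atan(tan_tr + tan_t1))   # equation (4)
        return theta2, theta3

    print(cone_angles(2.8, 0.0))    # on-axis object point: symmetric cone
    print(cone_angles(2.8, 20.0))   # marginal field point: shifted AOI range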
The transmission filter function of the optical filter 3 of the device according to the invention allocates for the given angular width Δθ a spectral width Δλ. For example, the values Δθ=52°, θ2=0° and θ3=30° correspond to a range of AOI from −30° to 30°.
An AOI-dependent filter can be realized from diffraction and/or interference effects, generating resonances in the scattered field, also known as physical colours. The structure of the optical filter can be homogeneous, i.e. comprising only one set of parameters. For interference filters, this set of parameters comprises e.g. the thicknesses and refractive indexes of the interference layers. For diffractive waveguides, this set of parameters comprises e.g. the thicknesses and refractive indexes of the thin film coatings, the periodicity, the fill factor and the depth of the protrusions. The incident light on the optical filter 3 is characterized in particular by its wave vector kin. The optical filter 3 on the other hand is characterized by a resonance along a given axis x and a resonance wavelength λres, usually obtained from a constructive interference effect. This condition reads:

λres(θin)=λres(0)·√(1−(sin θin/n)²)

where n is the refractive index of the resonance medium and θin is the incidence angle. Thus, a relationship between the incidence angle and the wavelength is achieved.
Such a dispersion can be obtained for example with an optical filter which is an interference filter. In one embodiment, the interference filter comprises stacked dielectric layers, wherein the layers are of high- and low-refractive index and their thickness is in the order of the wavelengths or below. By an appropriate layer design, comprising establishing the number, the thicknesses and the refractive indexes of the interference layers, a resonance is created, which allows only a certain wavelength to pass the filter at a certain input and output angle. Such interference filters provide a maximum angular drift of up to 30 nm to 60 nm for e.g. Δθ/2=30° to 40°. An estimation of the resonance wavelength as a function of the AOI can be made for λres=550 nm.
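Such an estimation is sketched below with the first-order model quoted above, assuming, for illustration only, an effective refractive index n=1.8 of the layer stack:

    import math

    lambda_res = 550.0   # resonance wavelength at normal incidence [nm]
    n_eff = 1.8          # assumed effective refractive index of the layer stack

    for theta_deg in (0, 10, 20, 30, 40):
        s = math.sin(math.radians(theta_deg)) / n_eff
        lam = lambda_res * math.sqrt(1.0 - s * s)
        print(f"AOI {theta_deg:2d} deg -> resonance near {lam:.1f} nm")

With these assumptions the blue-shift at 30° to 40° amounts to a few tens of nanometers, consistent with the 30 nm to 60 nm drift mentioned above.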
In another embodiment, the AOI-dependent optical filter comprises a waveguide with a periodic corrugation, as it can show a larger spectral range (SR). In such a case, a resonance is accomplished when the light is coupled by the periodic corrugation (e.g. a grating) into the plane of the waveguide (an effect known as Wood-Rayleigh anomaly), wherein:

m·λ=P·(n2−n1·sin θin)

where θin is the incidence angle of the wavelength λ, n1 and n2 are the refractive indexes of the superstrate and of the substrate, P is the periodicity of the corrugation and m the diffraction order. An estimation of the resonance wavelength as a function of the AOI can be made for P=350 nm, n1=1 and n2=1.52.
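Evaluating the above resonance condition for these values (a sketch under the stated assumptions, with diffraction order m=1) illustrates the larger spectral range of this filter type:

    import math

    P, n1, n2, m = 350.0, 1.0, 1.52, 1   # periodicity [nm], indexes, diffraction order

    for theta_deg in (-30, -15, 0, 15, 30):
        lam = P * (n2 - n1 * math.sin(math.radians(theta_deg))) / m
        print(f"AOI {theta_deg:+3d} deg -> resonance near {lam:.0f} nm")

With these values, the resonance sweeps from roughly 357 nm to 707 nm over the angular range from −30° to 30°, i.e. across the whole visible spectrum.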
The illustrated angular range extends from −30° to 30°.
Depending on the waveguide materials, the light coupled in transmission at resonance has a high amplitude, while other wavelengths for the same incidence angle have a low amplitude. Therefore, a filtering effect is obtained, which can be narrowband.
In one example, the protrusions and part of the slots (over a length t4 on each side of the protrusion) of the third layer 32′″ are covered by a coating 32″″, made e.g. of Al, and having a thickness t3 over the protrusions of the third layer 32′″.
In one preferred embodiment, the optical filter 3 is a dispersive resonant waveguide grating filter.
When the incidence angle is varied, the resonance condition is spectrally shifted and the transmission peak is shifted, too. The same peak position may, however, result from different combinations of incidence angles. This uncertainty can be lifted by considering the full dispersion of the filter, along both the polar and the azimuthal angle: even when the peak position is the same for the azimuthal angle ϕ=0°, the dispersion differs for other azimuthal angles.
A resonant waveguide grating filter comprises subwavelength structures to couple light into and out of wave-guiding layers, made of metallic or dielectric or a combination of metallic and dielectric materials. The structures can be fabricated by lithography or UV-replication of a UV-curable material.
The manufacturing of the corrugation of the resonant waveguide gratings used as examples in this application is not limited to UV-replication, but can be performed with other methods such as hot embossing, electron beam lithography, photolithography, deep-UV photolithography, laser interference lithography, or focused ion beam milling. The material deposition of the layers can be realized for example by thermal evaporation, by sputtering or by wet solution processing.
The invention is not limited to the described examples of AOI-dependent optical filters 3. Alternatively, the AOI-dependent optical filter 3 can be based for example on resonant plasmonic nanostructures, coated nanoparticles, dielectric or metallic meta-surfaces or diffraction gratings.
The required spectral resolution for a multi-spectral light-field device 10 can be designed as explained in the following. A single micro-lens 44 focuses all rays passing a single aperture position, e.g. the aperture position A1.
Light rays emanating from the object points in the range of OP1 to OP2 may pass through identical aperture positions and superimpose on the image sensor 5, at an image point. The spectral width at the image sensor position (x, y) is thus determined by the back focal length f of the imaging lens 20, 22 and the diameter dML of the micro-lens' aperture:

tan δθ=dML/f
In one embodiment, it is possible to limit the spectral deviation within said image point, if the angular acceptance angle δθ of each micro-lens is in the range of 1°<δθ<2° only. This can be achieved by a small micro-lens diameter dML, e.g. by a micro-lens diameter dML≤100 μm, e.g. dML≤10 μm. In dependency of the optical filter function, the spectral precision may be in the order of δλ≤1 nm.
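The relation between the micro-lens diameter and the spectral precision can be sketched numerically (the filter dispersion below is an assumed value):

    import math

    f = 1.44e-3              # back focal length of the imaging lens [m]
    d_lambda_d_theta = 1.0   # assumed filter dispersion [nm/deg]

    for d_ml in (100e-6, 50e-6, 25e-6, 10e-6):
        d_theta = math.degrees(math.atan(d_ml / f))   # tan(d_theta) = d_ML / f
        print(f"d_ML = {d_ml * 1e6:3.0f} um -> d_theta = {d_theta:.2f} deg, "
              f"d_lambda ~ {d_theta * d_lambda_d_theta:.2f} nm")

For dML≤25 μm the acceptance angle drops below about 1°, in line with the range indicated above.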
The micro-lens array 4 may also have an aperture array to improve its imaging quality. Each micro-lens can have a square, circular or hexagonal basis. The micro-lenses can be placed in a square or hexagonal (closely packed) array. The micro-lens array can also be replaced by an array of diffractive lenses, Fresnel lenses or diffractive optical elements performing the same functionality.
The micro-lens array 4 may consist of a single array of micro-lenses or of several micro-lens arrays, where each micro-lens array may have its own substrate or is processed on the back-side of another micro-lens array. In one embodiment, the micro-lens array 4 is processed directly on top of the image sensor 5.
The illustrated optical filter 3 comprises a substrate 30 and one or more layers and one or more structures 31 on top of the substrate 30.
An array of micro-lenses 44 is then placed on top of this aperture array 430. In one embodiment, the micro-lenses 44 are replicated in a material curable by ultraviolet light, for example in a UV-curable sol-gel material. Alternatively, the micro-lens array can be fabricated by photolithography.
Typical values of the micro-lens array parameters, for spherical micro-lenses in a UV-curable sol-gel material and an imaging lens of focal length f=1.44 mm, considering two different F-numbers, are given in Table 1.
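Since Table 1 itself is not reproduced here, the following sketch only illustrates how such micro-lens parameters can be derived from the F-number for plano-convex spherical micro-lenses, assuming a refractive index n=1.5 for the UV-curable sol-gel material:

    import math

    n = 1.5   # assumed refractive index of the UV-curable sol-gel material

    def ml_params(d_ml, f_number):
        f_ml = f_number * d_ml                           # micro-lens focal length
        R = (n - 1.0) * f_ml                             # radius of curvature (thin plano-convex lens)
        sag = R - math.sqrt(R * R - (d_ml / 2.0) ** 2)   # lens height (sag)
        return f_ml, R, sag

    for F in (2.0, 2.8):                                 # two different F-numbers
        f_ml, R, sag = ml_params(50e-6, F)
        print(f"F#={F}: f_ML={f_ml * 1e6:.0f} um, R={R * 1e6:.0f} um, "
              f"sag={sag * 1e6:.1f} um")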
In one embodiment, if an imaging lens of the imaging component 2 cannot be adapted to the spectral range of the overall device 10, the filter function may have to be adapted to the changing chief ray angle θ(r). In one embodiment, the transmission function of the filter changes along the optical filter's radial dimension r:
T(λ,ϕ,θ)=F(λ,ϕ,θ,r) (10)
In one embodiment, the filter function F(λ, ϕ, θ, r) is a step function.
In one embodiment, the filter function F(λ, ϕ, θ, r) is a gradient function.
Both configurations allow the filter function to follow the changing chief ray angle θ(r), as sketched below.
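The difference between the two layouts can be illustrated as follows, with hypothetical centre wavelengths at the filter centre and edge:

    # Hypothetical values, for illustration only.
    LAMBDA_CENTRE, LAMBDA_EDGE = 550.0, 600.0   # centre wavelength at r=0 and r=r_max [nm]

    def lambda_step(r, r_max):
        # Step-wise layout: discrete zones with different filter functions.
        return LAMBDA_CENTRE if r < 0.5 * r_max else LAMBDA_EDGE

    def lambda_gradient(r, r_max):
        # Gradient layout: the filter function varies continuously with r.
        return LAMBDA_CENTRE + (LAMBDA_EDGE - LAMBDA_CENTRE) * (r / r_max)

    for r in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"r={r:.2f}: step -> {lambda_step(r, 1.0):.0f} nm, "
              f"gradient -> {lambda_gradient(r, 1.0):.0f} nm")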
The change in steps is an approach for filters that are processed by lithography and thin-film coating or other non-replication processes, like interference filters. Each filter function is realized by individual thicknesses of some of the various layers of the high- and low-index material. Different layer thicknesses have to be coated subsequently, which makes the filter fabrication quite costly, as mask design changes are required; thus, only a limited number of different filter functions can be realized.
The transmission function of plasmonic or resonant waveguide filters can be altered e.g. by solely changing the period of the subwavelength structure of the optical filter 3. This change in the period can be established in a cost-effective manner, e.g. by UV-replication and thin film coating. Thus, a change of the filter transmission versus the filter radius, in steps or as a gradient, is feasible.
The parameters of the optical filter 3 can be adapted in order to address other spectral ranges than the visible. In particular, increasing the periodicity to 0.5 μm, 1 μm and above yields resonances in the near infra-red (NIR) and short-wave infra-red (SWIR) ranges.
In one embodiment, the filter function can be processed on a (curved) surface near the imaging component, e.g. near the imaging lens, or directly on the imaging lens. The integration of the optical filter on the imaging lens is cost-effective.
In one embodiment, the filter is processed on a curved surface near the imaging lens 20 or 22.
In another embodiment, the curved surface is part of the imaging lens.
In one preferred embodiment, the imaging lenses 22 and 52 are identical.
For compactness and in order to ensure temporal consistency, it is a further advantage to implement the high-resolution 2D camera 50 onto the same image sensor 5 as the device 10 according to the invention. Since the 2D beam path does not include a lens array, the spatial resolution is (at minimum) as high as given by the image sensor 5. Thus, the 2D camera 50 generates a high-resolution 2D image on the 2D section 550 of this image sensor 5 and the multi-spectral light-field camera 10 generates a multi-spectral light-field image on the light-field section 510 of this image sensor 5.
In order to reduce the packaging effort, it is advantageous to implement the substrate 53 of the multi-spectral light-field camera 10 (without micro-lens array and filter coatings) in the beam paths of both lenses. In other words, in the beam path of the 2D camera device there is the substrate 53 of the optical filter and/or of the micro-lens array 4 of the multi-spectral light-field device 10, without the micro-lens array 4 and without the one or more layers and/or one or more structures 31.
In order to achieve a focused image of the object onto the 2D camera section 550 of the image sensor 5, it is proposed to adjust the aperture 51 of the 2D camera device 50 so as to achieve a longer focal length, such that the image plane of the 2D camera is on the image sensor. The length difference to cover is thus the thickness plus the back focal length of the micro-lens array. The high-resolution 2D camera section 550 and the multi-spectral light-field camera section 510 build together a very compact twin camera.
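As a small numerical sketch of this adjustment (the micro-lens-array thickness and back focal length below are assumed values, not taken from the description):

    # Assumed, illustrative values.
    f_lightfield = 1.44e-3   # focal length of the light-field imaging lens [m]
    t_mla = 0.40e-3          # assumed thickness of the micro-lens array [m]
    bfl_mla = 0.10e-3        # assumed back focal length of the micro-lenses [m]

    # The 2D beam path must be longer by the micro-lens-array thickness
    # plus its back focal length.
    f_2d = f_lightfield + t_mla + bfl_mla
    print(f"2D camera focal length ~ {f_2d * 1e3:.2f} mm")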
All objects captured by the twin camera are captured within one frame and will not suffer from motion blur. Further, the parallax between the sections 510, 550 improves the resolution of the third dimension.
In one embodiment, the imaging system 100 comprises two multi-spectral light-field devices 10 according to the invention.
More than two devices 10 according to the invention can also be used in an imaging system 100.
In this context, the expression “object identification” indicates the act of recognising or naming the object and its properties, in particular its footprint, colour(s), size, spectral content, material, shape, type of reflection, surface properties, etc.
The multi-spectral light-field device 10 according to the invention, alone or in combination with a 2D camera device 50 as in the imaging system 100, takes spectral light-fields of the entire object. Each micro-lens creates a light-field depending on the spatial and spectral object point OP, the chief ray angle θ(r), and the imaging component parameters. For different object distances, the set of parameters changes and the spectral and spatial content is distributed accordingly.
For object identification, the captured light-fields have to be analysed. In one embodiment, a machine-learning module, such as a neural network module, is used for object identification.
In this context, the expression “machine-learning module” indicates a module which needs to be trained in order to learn, i.e. to progressively improve its performance on a specific task.
The machine-learning module in a preferred embodiment is a neural network module, i.e. a module comprising a network of elements called neurons. Each neuron receives input and produces an output depending on that input and an “activation function”. The output of certain neurons is connected to the input of other neurons, thereby forming a weighted network. The weights can be modified by a process called learning which is governed by a learning or training function.
Although the neural network module is a preferred implementation of the machine-learning module, the object identification system 200 is not limited to the use of a neural network module only, but could comprise other types of machine-learning modules, e.g. and in a non-limiting way modules arranged to implement other machine-learning algorithms.
In one embodiment, the neural network module is a deep neural network module, i.e. it comprises multiple hidden layers between the input and output layers, e.g. at least three layers.
The machine-learning module has been trained to recognize the target object. Only image content that is relevant for the object identification is processed, which makes the image processing by the machine-learning module superior to non-compressive image processing.
In another embodiment, multiple two-dimensional cameras are used as reference devices around the multi-spectral light-field device to cover the different viewpoints of the object.
For example, just as for a human eye, the monochrome image of a fruit is sufficient to identify an object, e.g. an apple. Such an identification of an object by its shape has been taught to a first machine-learning module, such as a first neural network module, with learned shape images 120, so as to perform a shape identification (step 130) by the machine-learning module.
From the 2D image 150 it is also possible to define the region of interest or ROI (step 140).
A second machine-learning module, such as a second neural network module, has been taught with a set of different objects (different fruits in this example), so as to perform an identification based on the spectral content of the region of interest.
Evaluating the separate results of both machine-learning modules via a third machine-learning module (step 210) gives as a final result (step 220) the identified object (an apple in this example).
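The overall flow of steps 130 to 220 can be summarised by the following sketch, in which the three models and the ROI selection are hypothetical placeholders rather than the actual trained networks:

    def define_roi(image_2d):
        # Step 140: select the region of interest around the detected object
        # (placeholder; a real implementation would segment the 2D image).
        return image_2d

    def identify(image_2d, lightfield_image, shape_model, spectral_model, fusion_model):
        shape = shape_model(image_2d)                       # step 130: shape identification
        roi = define_roi(image_2d)                          # step 140: region of interest
        spectral = spectral_model(lightfield_image, roi)    # spectral identification
        return fusion_model(shape, spectral)                # steps 210 and 220: final result

    # Usage with trivial stand-ins for the three machine-learning modules:
    result = identify("2d-image", "light-field-image",
                      lambda img: "apple-shaped",
                      lambda lf, roi: "apple-spectrum",
                      lambda shape, spectral: (shape, spectral))
    print(result)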
The advantage of this strategy is a reduction in the computational effort, and the possibility to reuse a once taught machine-learning module to recognize shapes in combination with a newly taught machine-learning module to recognize new properties like e.g. the gluten content.
Possible and not limitative applications of the object identification system 200 are food applications and auto-focusing applications (determination of the focal length).