The present invention relates to a system and method for estimating and correcting intensity and color distortions, caused by scattering, in images of three-dimensional objects.
Imaging in poor atmospheric conditions affects human activities, as well as remote sensing and surveillance. Hence, analysis of images taken in haze is important.
Moreover, research into atmospheric imaging promotes other domains of vision through scattering media, such as water and tissue. Several computer vision approaches have been proposed to handle scattering environments.
An effective approach for analyzing hazy images is based on polarization, and capitalizes on the fact that one of the sources of image degradation in haze is partially polarized. Such analysis yields an estimate for the distance map of the scene, in addition to a dehazed image. Examples of this approach are given in: [24] E. Namer and Y. Y. Schechner, "Advanced visibility improvement based on polarization filtered images", Proc. SPIE 5888, pages 36-45, 2005; and [33] Y. Y. Schechner, S. G. Narasimhan and S. K. Nayar, "Polarization-based vision through haze", App. Opt., 42:511-525, 2003.
[48] Y. Y. Schechner, S. G. Narasimhan and S. K. Nayar, "Instant Dehazing of Images Using Polarization", Proc. Computer Vision & Pattern Recognition, Vol. 1, pp. 325-332, 2001.
Environmental parameters are required to invert the effects of haze. In particular, it is important to know the parameters of stray light (called airlight) created by haze, which greatly decreases image contrast.
A method for determining these parameters from the image data was shown in:
The abovementioned method is applicable for cases of fog or heavy haze (achromatic scattering). The method requires inter-frame weather changes, i.e., long acquisition periods.
Some references by the inventors, which give further background information, are:
The present invention discloses an apparatus and methods for estimating and correcting distortions caused by atmospheric haze in landscape images.
According to the general scope of the current invention, a method of outdoor image correction is provided, comprising the steps of: acquiring a first image at a first polarization state; acquiring a second image at a second polarization state, wherein said first and said second images overlap; estimating haze parameters from said first and second images; and correcting the acquired image using said estimated haze parameters, wherein said estimating of haze parameters from said first and second images does not rely on the appearance of sky in the acquired images.
In some embodiments, the second polarization is essentially perpendicular to the first polarization. Preferably, the first polarization is chosen to essentially minimize the effect of atmospheric haze.
In other embodiments, one of the polarization states comprises partial polarization or no polarization.
According to an exemplary embodiment of the invention, the step of estimating haze parameters comprises the steps of: identifying at least two similar objects situated at known and substantially different distances zi from the image-acquiring camera; and estimating haze parameters p and A∞ from image data associated with said objects and said known distances.
According to an exemplary embodiment of the invention, the step of estimating haze parameters comprises blind estimation of the haze parameter p by analyzing the high spatial frequency content of the first and second images.
In some cases, analyzing the high spatial frequency content of the first and second images comprises wavelet analysis of said first and second images.
According to another exemplary embodiment of the invention the step of estimating haze parameters comprises using the estimated parameter p to estimate the haze parameter A∞.
In some cases, using the estimated parameter p to estimate the haze parameter A∞ comprises identifying at least two locations situated at known and substantially different distances zi from the image-acquiring camera.
In some cases using the estimated parameter p to estimate the haze parameter A∞ comprises identifying at least two similar objects situated at substantially different distances from the image-acquiring camera.
Another object of the invention is to provide a system for correcting scatter effects in an acquired image, comprising: a first and a second camera, wherein said first camera comprises a polarizer; and a computer receiving image data from said first and second cameras, which uses parameters extracted from data acquired by the first camera to correct an image acquired by said second camera.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, suitable methods and materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and not intended to be limiting.
The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
In the drawings:
a) shows the best polarized raw image;
b) shows the result of sky-based dehazing as used in the art;
c) shows the result of a feature-based method assisted by ICA according to one embodiment of the current invention;
d) shows the result of a distance-based method assisted by ICA according to another embodiment of the current invention;
e) shows a distance-based result according to yet another embodiment of the current invention.
a) shows a raw hazy image of Scene 2;
b) shows sky-based dehazing according to the method of the art;
c) shows feature-based dehazing assisted by ICA according to an embodiment of the current invention;
d) shows distance-based dehazing assisted by ICA according to another embodiment of the current invention;
e) shows a distance-based result according to yet another embodiment of the current invention.
The present invention relates to devices, methods and systems implementing a polarization-based approach for extracting the parameters of airlight and of attenuation due to haze from a pair of acquired images, and for using the extracted parameters to correct the acquired image without the need for having a section of the sky in the image.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
The drawings are generally not to scale. In discussion of the various figures described herein below, like numbers refer to like parts.
For clarity, non-essential elements and steps were omitted.
As used herein, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding plural elements or steps, unless such exclusion is explicitly recited.
The order of steps may be altered without departing from the general scope of the invention.
Camera 210 is preferably a digital camera; however, a film-based camera may be used, provided that the film is developed, scanned and digitized.
In order to obtain polarization-dependent images, a polarizer 212 is placed in the light path between the scenery 230 and the sensor 214.
Preferably, polarizer 212 is a linear polarizer mounted so that its polarization axis can be rotated. According to the preferred embodiment of the invention, at least two images are acquired by camera 210 and transferred to computer 220 for analysis. Preferably, a first image is taken when the polarization axis of polarizer 212 is substantially such that maximum light impinges on sensor 214. A second image is acquired when the polarization axis of polarizer 212 is substantially such that minimum light impinges on sensor 214, that is, when the polarization axis of polarizer 212 is substantially rotated by 90 degrees. In some embodiments, polarizer 212 is motorized. Optionally, selection of the first and second images is done automatically. Alternatively, an electronically controlled polarizer such as a Liquid Crystal (LC) polarizer is used. Alternatively, a polarizing beam splitter may be used to split the light into two perpendicularly polarized images, each impinging on a separate sensor.
For color imaging, a color filter 216 is placed in front of sensor 214. Preferably, filter 216 is integrated into sensor 214. Alternatively, filter 216 may be a rotating filter wheel. The colors used are preferably Red, Green and Blue (RGB), but other filter transmission bands such as Near Infra-Red (NIR) may be used.
An imaging unit 218, such as a lens, is used for imaging the light onto the sensor. Preferably, sensor 214 is a pixelated 2-D sensor array such as a Charge-Coupled Device (CCD) or another sensor array. Polarizer 212, filter 216 and lens 218 may be differently ordered along the optical path.
Optionally, polarizer 212 is integrated into the sensor 214. For example, some or all pixels in a pixelated sensor array may be associated with polarizers having different polarization states. For example, some of the pixels may be covered with polarizing filters having different polarization axes. Additionally and alternatively, some pixels may be covered with polarizers having different degrees of polarization efficiency. In some cases, some of the pixels are not covered with a polarizer.
Computer 220 may be a PC, a laptop computer or a dedicated data processor, and may be situated proximal to camera 210, at a remote location, or integrated into the camera. Data may be stored for off-line analysis or displayed in real time. Display 222 is preferably used for viewing the processed image. Additionally or alternatively, the image may be printed. Optionally, processed and/or raw data may be transferred to a remote location for further interpretation, for example further image processing.
Optionally, polarizer 212 is controlled by computer 220.
Optionally, distances from camera 210 to some of the imaged objects are provided by auxiliary unit 260. Auxiliary unit 260 may be a range finder, such as a laser range finder or a radar-based range finder, or a map from which distances may be inferred. Additionally or alternatively, the distance to an object with a known size may be computed from its angular size in the image.
It should be noted that the current invention is not limited to visible light. An Infra-Red (IR) camera may be used, having a suitable sensor and imaging unit 218.
Imaging unit 218 may comprise diffractive, refractive and reflective elements. Optionally, imaging unit 218 may have zoom capabilities. In some embodiments, the zoom is controlled by computer 220.
In some embodiments camera 210 is remotely positioned.
In some embodiments, camera 210 is mounted on a stage 290. Stage 290 may be marked such that the direction of imaging may be measured and relayed to computer 220. In some embodiments, stage 290 comprises a directional sensor capable of measuring the direction of imaging. In some embodiments, stage 290 is motorized. In some embodiments, computer 220 controls the direction of imaging by issuing "pan" and/or "tilt" commands to motorized stage 290.
Optionally, a plurality of cameras is used.
For example, different cameras may be used for different spectral bands.
For example, different cameras may be used, each having a different spatial resolution or a different intensity resolution.
For example, different cameras may be used for imaging at different polarization states. Specifically, a plurality of cameras may be used with different polarization states.
In some embodiments, the images taken for the purpose of extracting the haze parameters needed for correcting the effects of scatter are different from the image to be corrected. It may be advantageous to include, in the images taken for the purpose of extracting haze parameters, objects that enable or ease the process of parameter extraction. For example, the observed area which needs to be corrected may be at substantially the same distance from the camera, or may not include known or similar objects. In this case, other images, comprising enough of the information needed to extract haze parameters, may be used, provided that parameters extracted from these images are relevant for the observed area. The parameters are slowly varying both spatially and in time; thus, images taken in the same general direction and under similar atmospheric conditions may be used.
For example, a relatively wide-angle lens may be used for the purpose of extracting haze parameters, while a relatively telephoto lens is used for the image to be corrected.
For example, a different camera may be used for acquiring images for the purpose of extracting haze parameters.
Additionally or alternatively, images for the purpose of extracting haze parameters may be acquired periodically.
Additionally or alternatively, images for the purpose of extracting haze parameters may be acquired periodically at a different direction, for example in a direction including locations of known distance. For example, when auxiliary unit 260 is a range finder with limited range, and the region of observation is out of range, an image taken at closer range may be used for parameter extraction.
In some embodiments, digital image data is used for parameter extraction, and the extracted parameters are used to perform analog-type image correction, for example in real-time video viewing.
1. Stray Radiation in Haze
a shows an acquired image taken with a digital camera on a hazy day. It should be noted that, generally, viewing such images on a computer monitor better reveals the true colors and resolution of the enclosed small images of
For clarity of display, the images shown herein have undergone the same standard contrast stretch. This contrast stretch operation was done only for display purposes. The methods according to the invention were performed on raw, un-stretched data. The data had been acquired using a Nikon D-100 camera, which has a linear radiometric response. The methods of the current invention are not restricted to high quality images and may be useful for correcting images such as those acquired by a digitized video camera. However, it is preferred to perform these methods on images acquired by a high dynamic range imager having a 10-bit resolution or more. The bit resolution of a camera may be improved by averaging or summing a plurality of images.
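By way of illustration of this last point, the following minimal sketch (numpy, with synthetic frames standing in for real acquisitions; the frame count and noise level are hypothetical) averages a registered stack of 8-bit frames, which suppresses quantization and shot noise and thereby extends the effective bit resolution:

```python
import numpy as np

def average_frames(frames):
    """Average a stack of registered 8-bit frames.

    Averaging N frames reduces independent noise by about sqrt(N),
    adding roughly log2(sqrt(N)) bits of effective intensity resolution.
    The result is kept as an un-stretched float image for analysis.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Example with synthetic data (hypothetical values, for illustration only):
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 255.0, size=(480, 640))
frames = [np.clip(scene + rng.normal(0.0, 2.0, scene.shape), 0, 255).astype(np.uint8)
          for _ in range(16)]
high_res = average_frames(frames)  # 16 frames: about 2 extra effective bits
```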
The reduced contrast and color distortion are clearly seen in
For comparison,
The parameter estimation of the sky-based dehazing relies, thus, on the existence of sky 10 in the Field Of View (FOV). This reliance has been a limiting factor, which inhibited the usefulness of that approach. Often, the FOV does not include a sky area. This occurs, for example, when viewing from a high vantage point, when the FOV is narrow, or when there is partial cloud cover in the horizon.
The methods according to embodiments of the current invention address this problem, i.e., enable successful dehazing despite the absence of sky in the FOV. Moreover, a method is provided that blindly separates the airlight radiance (the main cause for contrast degradation) from the object's signal.
According to the embodiments of the current invention, the parameter that determines this separation is estimated without any user interaction. The method exploits mathematical tools developed in the field of blind source separation (BSS), also known as independent component analysis (ICA).
ICA has already contributed to solving image separation [12, 31, 37, 43] problems, and high-level vision [10, 20, 44], particularly with regard to reflections. The problem of haze is more complex than reflections, since object recovery is obtained by nonlinear interaction of the raw images. Moreover, the assumption of independence upon which ICA relies is not trivial to accommodate, as we later explain. Nevertheless, we show that the radiance of haze (airlight) can be separated by ICA, by the use of a simple pre-process. Dehazing had been attempted by ICA based on color cues [28]. However, an implicit underlying assumption behind Ref. [28] is that radiance is identical in all the color channels, i.e., the scene is gray. This is atypical in nature.
The method described herein is physics-based, rather than being pure ICA of mathematically abstract signals. Thanks to this approach to the problem, common ICA ambiguities are avoided. We successfully performed skyless dehazing in multiple real experiments conducted in haze. We obtained blind parameter estimation which was consistent with direct sky measurements. Partial results were presented in Ref. [36].
2. Theoretical Background
To make the explanation self-contained, this section briefly reviews the known formation model of hazy images. It also describes a known inversion process of this model, which recovers good visibility. This description is based on Ref. [33]. As shown in
D = Lobject t,   (1)
where
t = e^(−βz)   (2)
is the transmittance of the atmosphere. The transmittance depends on the distance z between the object and the camera, and on the atmospheric attenuation coefficient β, where ∞ > β > 0.
The second component is known as path radiance, or airlight. It originates from the scene illumination (e.g., sunlight), a portion of which is scattered into the LOS by the haze.
Ambient light, schematically depicted by thin arrows 252, emanating from illumination source 250, for example the sun (but it may be an artificial light source), is scattered towards the camera by atmospheric perturbations 240, creating airlight A, depicted as dashed arrow 244. Airlight increases with object distance.
Let a(z) be the contribution to airlight from scattering at z, accounting for the attenuation this component undergoes due to propagation in the medium. The aggregate of a(z) yields the airlight
A = A∞[1 − e^(−βz)].   (3)
Here A∞ is the value of airlight at a non-occluded horizon. It depends on the haze and illumination conditions. Contrary to the direct transmission, airlight increases with the distance, and at long range it dominates the acquired image irradiance
Itotal = D + A.   (4)
The addition of airlight [1] is a major cause for reduction of signal contrast.
In haze, the airlight is often partially polarized. Hence, the airlight image component can be modulated by a mounted polarizer. At one polarizer orientation the airlight contribution is least intense. Since the airlight disturbance is minimal here, this is the best state of the polarizer. Denote this airlight component as Amin. There is another polarizer orientation (perpendicular to the former), for which the airlight contribution is the strongest, and denoted as Amax. The overall airlight given in (Eq. 3) is given by
A = Amin + Amax.   (5)
Assuming that the direct transmission is not polarized, the energy of D is equally split between the two polarizer states. This assumption is generally justified, and departures from it cause only minor degradation. Hence, the overall measured intensities at the polarizer orientations mentioned above are
Imin = Amin + D/2;   Imax = Amax + D/2.   (6)
The Degree Of Polarization (DOP) of the airlight is defined as
p = (Amax − Amin)/A,   (7)
where A is given in Eq. (3). For narrow FOVs, this parameter does not vary much. In this work we assume that p is laterally invariant. Eq. (7) refers to the aggregate airlight, integrated over the LOS.
It should be noted that in the context of this invention, "polarization" or "polarization states" refers to a difference in the state of polarizer 212 which effects a change in the acquired image. The indications "min" and "max" refer to states with smaller and larger signals caused by the scattered light, and not necessarily to the absolute minimum and maximum of these contributions.
For example, Imin may refer to a polarization state that does not exactly minimize the scattered radiation. As another example, Imin may refer to an image taken with no polarization at all. In this context, the polarization parameter p means an "effective" polarization parameter associated with the ratio of the scattered light in the different polarization states.
It should also be noted that by acquiring three or more images at substantially different linear polarization states, the signals associated with the extreme states of polarization may be inferred. For example, three images may be taken at 0, 45 and 90 degrees to an arbitrary axis, wherein this axis does not necessarily coincide with the extreme effect on the scattered light. The foundation for this method may be found in Ref. [48].
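As a sketch of such an inference, the standard linear Stokes relations may be used (an illustration of the three-angle setting of Ref. [48], assuming ideal polarizer measurements at 0, 45 and 90 degrees; this is not the only possible formulation):

```python
import numpy as np

def extreme_polarization_images(i0, i45, i90):
    """Infer Imin and Imax from images at polarizer angles 0, 45, 90 degrees.

    Linear Stokes parameters: S0 = I0 + I90, S1 = I0 - I90, S2 = 2*I45 - S0.
    The extreme transmitted intensities are (S0 -/+ sqrt(S1^2 + S2^2)) / 2,
    whatever the (unknown) orientation of the partially polarized airlight.
    """
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0
    amp = np.sqrt(s1 ** 2 + s2 ** 2)
    return (s0 - amp) / 2.0, (s0 + amp) / 2.0   # Imin, Imax
```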
Is p invariant to distance? Implicitly, this would mean that the DOP of a(z) is unaffected by distance. This is not strictly true. Underwater, it has recently been shown [35] that the DOP of light emanating from z may decay with z. This may also be the case in haze, due to multiple scattering. Multiple scattering also causes blur of D. For simplicity, we neglect the consequence of multiple scattering (including blur), as an effective first order approximation, as in Refs. [24, 33]. Note that
0 ≤ p ≤ 1.   (8)
It follows that
Imin = A(1 − p)/2 + D/2,   Imax = A(1 + p)/2 + D/2.   (9)
It is easy to see from Eqs. (4,9) that
Îtotal = Imin + Imax.   (10)
Dehazing is performed by inverting the image formation process. The first step separates the haze radiance (airlight) A from the object's direct transmission D. The airlight is estimated as
Â = (Imax − Imin)/p.   (11)
Then, Eq. (4) is inverted to estimate D. Subsequently, Eq. (1) is inverted based on an estimate of the transmittance (following Eq. 3)
t̂ = 1 − Â/A∞.   (12)
These operations are compounded to dehazing
L̂object = (Itotal − Â)/t̂.   (13)
Two problems exist in this process. First, the estimation (i.e., separation) of airlight requires the parameter p. Secondly, compensation for attenuation requires the parameter A∞. Both of these parameters are generally unknown, and thus provide the incentive for this invention. In past dehazing work, these parameters were estimated based on pixels which correspond to the sky near the horizon.
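For concreteness, a minimal sketch of this inversion chain (Eqs. 11-13) is given below, assuming two registered raw images and the two parameters p and A∞ are already at hand; the clipping safeguards are illustrative choices, not part of the model.

```python
import numpy as np

def dehaze(i_max, i_min, p, a_inf, eps=1e-6):
    """Invert the haze formation model via Eqs. (11)-(13).

    i_max, i_min : float images at the two polarizer orientations
    p            : degree of polarization of the airlight (Eq. 7)
    a_inf        : airlight radiance at the horizon, A-infinity
    """
    i_total = i_max + i_min                         # Eq. (10)
    airlight = (i_max - i_min) / p                  # Eq. (11)
    airlight = np.clip(airlight, 0.0, a_inf - eps)  # keep 0 < t <= 1
    t_hat = 1.0 - airlight / a_inf                  # Eq. (12)
    l_object = (i_total - airlight) / np.maximum(t_hat, eps)  # Eq. (13)
    return l_object, airlight, t_hat
```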
A method to recover the object radiance Lobject when the sky is unseen was theoretically proposed in Ref. [33]. It required the presence in the FOV of at least two different classes of multiple objects, each class having a similar (unknown) radiance in the absence of haze.
For example, the classes can be buildings and trees. However, we found that method to be impractical. It can be difficult to identify two sets, each having a distinct class of similar objects. Actually, sometimes scenes do not have two such classes. Moreover, to ensure a significant difference between the classes, one should be darker than the other.
However, estimating the parameter based on dark objects is prone to error caused by noise. Therefore, in practical tests, we found the theoretical skyless possibility mentioned in Ref. [33] to be impractical.
In the context of this invention, similar objects may be objects or areas in the image having a known ratio of object radiances, that is, Lkobject = ηLmobject, wherein η is the known ratio between the radiances of objects k and m. It should be noted that similar objects are defined by this property, and not necessarily by their nature. A non-limiting example of similar objects is a class of objects having the same physical nature, such as similar structures or vegetation. For example, similar objects may be roads, buildings, etc. In that case it is likely that η = 1.
However, similar objects may be chosen as a road and a building, as long as the value of η is known. In some cases, the value of η is extracted from acquired images, for example images taken on a haze-less day, images taken at a short distance, images corrected by a different dehazing method, etc. Alternatively, the ratio η may be known from identification of the types of objects and prior knowledge about their optical properties.
It should be noted that a similar object may be a composite object, spanning a plurality of image pixels or having non-uniform radiance but with a characteristic radiance. For example, a building may comprise darker windows. In that case, an average radiance may be used. Alternatively, the dark windows may be excluded from the values representing the building.
3. Skyless Dehazing
This section introduces several methods to recover the scene radiance, when the sky is not in view according to some embodiments of the invention. They estimate the parameters p and A∞ in different ways. The method presented in Sec. 3.1 requires one class of objects residing at different distances. The consecutive methods, presented in sections 3.2 and 3.3, assume that the parameter p is known. This parameter can be blindly derived by a method described in Sec. 4. Consequently, there is reduction of the information needed about objects and their distances. The method presented in Sec. 3.2 only requires the relative distance of two areas in the FOV, regardless of their underlying objects. The method described in Sec. 3.3 requires two similar objects situated at different, but not necessarily known distances. Table 1 summarizes the requirements of each of these novel methods.
3.1 Distance and Radiance Based Dehazing
In this section, a method is developed for estimating p and A∞ based on known distances to similar objects in the FOV. Suppose we can mark two scene points: (xk, yk), k=1,2, which, in the absence of scattering, would have a similar (unknown) radiance. For example, these can be two similar buildings which have an unknown radiance Lbuild. The points, however, should be at different distances from the camera z2>z1.
For example, the two circles 11 and 12 in
Similarly, for the object at distance z2,
Let us denote
C1 = I1max − I1min,   C2 = I2max − I2min.   (18)
It can be shown that C2 > C1. Note that since Ikmax and Ikmin are the data measured at coordinates in the FOV, C1 and C2 are known.
Now, from Eqs. (14,15),
C1 = pA∞(1 − V^z1),   (19)
while Eqs. (16,17) yield
C2 = pA∞(1 − V^z2),   (20)
where
V ≡ e^(−β).   (21)
Dividing Eq. (19) by Eq. (20) yields the following constraint
G(V) ≡ C1V^z2 − C2V^z1 + (C2 − C1) = 0.   (22)
Let a zero crossing of G(V) be at V0. We now show that based on this V0, it is possible to estimate p and A∞. Then, we prove the existence and uniqueness of V0.
The parameter estimation is done in the following way. It can be shown that
(I1max + I1min) = LbuildV^z1 + A∞(1 − V^z1)   (23)
and
(I2max + I2min) = LbuildV^z2 + A∞(1 − V^z2).   (24)
Following Eqs. (5,23,24), which form two linear equations in Lbuild and A∞ at V = V0, an estimate for A∞ is obtained by
Â∞ = (I1total·V0^z2 − I2total·V0^z1)/(V0^z2 − V0^z1).   (25)
Based on Eqs. (14,15), define
Â1 ≡ Â∞(1 − V0^z1).   (26)
Using Eqs. (7,25,26),
p̂ = C1/Â1.   (27)
Thus, Eqs. (25,26,27) recover the required parameters p and A∞. Two known distances of similar objects in the FOV are all that is required to perform this method of dehazing, when the sky is not available.
Let us now prove the existence and uniqueness of V0.
Note that G(0) = C2 − C1 > 0, while G(1) = 0, and the derivative G′(V) = z2C1V^(z2−1) − z1C2V^(z1−1) is null only when V^(z2−z1) = z1C2/(z2C1), i.e., at a single point.
Due to these facts, Eq. (22) always has a unique root at V0 ∈ (0, 1).
Typical plots of G(V) are shown in
Due to the simplicity of the function G(V), it is very easy to find V0, for example using standard Matlab tools or other methods known in the art.
The solution V0 can be found when z1 and z2 are known, or even only relatively known.
It is possible to estimate the parameters A∞ and p based only on the relative distance {tilde over (z)}=z2/z1, rather than absolute ones. For example, in
Ṽ = V^z1 = e^(−βz1).   (30)
Then, Eq. (22) is equivalent to the constraint
G̃(Ṽ) ≡ C1Ṽ^z̃ − C2Ṽ + (C2 − C1) = 0.   (31)
Similarly to Eq. (22), Eq. (31) also has a unique root at Ṽ0 ∈ (0, 1). Hence, deriving the parameters is done similarly to Eqs. (25,26,27). Based on Ṽ0, A∞ is estimated as
Â∞ = (I1total·Ṽ0^z̃ − I2total·Ṽ0)/(Ṽ0^z̃ − Ṽ0).   (32)
Similarly to Eq. (26),
Â1 = Â∞(1 − Ṽ0).   (33)
Then, Eq. (27) yields p̂.
Based on these parameters, Eqs. (11,12) yield Â(x,y) and t̂(x,y). Then L̂object(x,y) is derived using Eq. (13), for the entire FOV. This dehazing method was applied to Scene 1, as shown in
The absolute distances z1, z2 or their ratio z̃ can be determined in various ways. One option is to use a map (this can be done automatically using a digital map, assuming the camera location is known). The relative distance can be estimated using the apparent ratio of two similar features that are situated at different distances. Furthermore, the absolute distance can be determined based on the typical size of objects.
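A sketch of the whole procedure of this section follows, under the stated assumptions (two similar objects at known distances z1 < z2, with intensities averaged over each object). The closed-form line for Â∞ solves Eqs. (23,24) as two linear equations once V0 is found; scipy's brentq is an illustrative choice of root finder.

```python
from scipy.optimize import brentq

def estimate_p_and_a_inf(i1_max, i1_min, i2_max, i2_min, z1, z2):
    """Estimate p and A-infinity from two similar objects at known distances.

    Inputs are scalar intensities averaged over each object's pixels;
    requires z2 > z1, C2 > C1, and a scattering medium (V0 inside (0,1)).
    """
    c1 = i1_max - i1_min                                  # Eq. (18)
    c2 = i2_max - i2_min
    g = lambda v: c1 * v**z2 - c2 * v**z1 + (c2 - c1)     # Eq. (22)
    v0 = brentq(g, 1e-9, 1.0 - 1e-9)                      # unique root in (0,1)

    i1_tot = i1_max + i1_min                              # Eqs. (23,24)
    i2_tot = i2_max + i2_min
    a_inf = (i1_tot * v0**z2 - i2_tot * v0**z1) / (v0**z2 - v0**z1)
    a1 = a_inf * (1.0 - v0**z1)    # airlight at object 1, Eq. (26)
    p = c1 / a1                    # Eq. (27), via the DOP definition (Eq. 7)
    return p, a_inf, v0
```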
3.1.1 Unequal Radiance
The analysis can be extended to the case where the similar objects used for calibration of parameters do not have the same radiance, but the ratio of their radiance is known. Suppose the object at z1 has radiance Lbuild, and the object at z2 has radiance ηLbuild. Then, Eqs (16,17) may be re-written as
We maintain the definitions of Eq. (18) unchanged. It is easy to show that Eqs. (19,20,21,22, 23) are not affected by the introduction of η. Hence, the estimation of V0 (and thus β) is invariant to η, even if η is unknown.
Eq. (24) becomes
(1/η)(I2max+I2min)=LbuildVz
Now suppose that η is approximately known, e.g., by photographing these points in very clear conditions, or from very close distances. Then, based on Eqs. (23,36), and using the found V0, the expression analogous to Eq. (25) is
This equation degenerates to Eq. (25) in the special case of η = 1. From this result for Â∞, Eqs. (26,27) remain unchanged, and the estimation of p need not be affected by η.
A similar derivation applies to the case where only the relative distances are known. Then, Eqs. (30,31) are not affected by the introduction of η. Hence, the estimation of Ṽ0 is invariant to η, even if η is unknown. Then, the expression analogous to Eq. (32) is
This equation degenerates to Eq. (32) in the special case of η = 1. From this result for Â∞, Eq. (33) remains unchanged, and the estimation of p need not be affected by η.
3.1.2 Objects that May be at the Same Distance
Parameter calibration can even be made when the objects are at the same distance z, i.e., when z̃ = 1. This is possible if η ≠ 1. In this case, Eq. (38) becomes
It is solvable as long as Ṽ0 ≠ 1, i.e., the distance is not too close and the medium is not completely clear, and as long as η ≠ 0. It can be re-formulated as
Hence, there is a linear relation between Ṽ0 and the sought 1/A∞. To solve for Â∞ we must have an estimate of Ṽ0. There are various ways to achieve this. First, suppose that there is another object point at the same distance. Its measured intensity is I3total ≠ I2total, and its radiance (when there is no scattering medium) is χLbuild, where χ ≠ η is known.
Here,
which is a different linear relation between Ṽ0 and the sought 1/A∞. The intersection of Eqs. (40*,41*) yields both Â∞ and Ṽ0. Then, Eqs. (33,27) yield p̂.
There are other possibilities to solve for A∞. For instance, suppose that we use only two objects, but we know or set Lbuild. This can occur, for example, when we wish that a certain building would be recovered such that it has a certain color Lbuild. Then, Eq. (23) becomes
I1total = LbuildṼ + A∞(1 − Ṽ),   (42*)
while Eq. (36) becomes
(1/η)I2total = LbuildṼ + (1/η)A∞(1 − Ṽ).   (43*)
Eqs. (42*,43*) are two equations, which can be solved for the two unknowns Ṽ and A∞, yielding Â∞ and Ṽ0.
3.2 Distance-Based, Known p
In some cases, similar objects at known distances may not exist in the FOV, or may be hard to find. Then, we cannot use the method presented in Sec. 3.1. In this section we overcome this problem, by considering two regions at different distances z1<z2, regardless of the underlying objects. Therefore, having knowledge of two distances of arbitrary areas is sufficient. The approach assumes that the parameter p is known. This knowledge may be obtained by a method described in Sec. 4, which is based on ICA.
Based on a known p and on Eq. (11),  is derived for every coordinate in the FOV. As an example, the estimated airlight map  corresponding to Scene 1 is shown in
The two rectangles 17 and 18 represent two regions, situated at distances z1 and z2 respectively. Note that unlike Sec. 3.1, there is no demand for the regions to correspond to similar objects.
From Eqs. (2,12),
Â(x,y) = A∞[1 − e^(−βz(x,y))].   (34)
For regions around image coordinates (x1,y1) and (x2,y2), having respective distances z1 and z2, Eq. (34) can be written as
Âi ≡ Â(xi,yi) = A∞(1 − V^zi),   i = 1,2,   (35)
where V is defined in Eq. (21). It follows from Eq. (35) that
A∞(V^z1 − V^z2) = Â2 − Â1   (36)
and
A∞(2 − V^z1 − V^z2) = Â1 + Â2.   (37)
Dividing Eq. (36) by Eq. (37) yields the following constraint
Gp(V) ≡ (α − 1)V^z2 + (α + 1)V^z1 − 2α = 0,   (38)
where
α ≡ (Â2 − Â1)/(Â1 + Â2).   (39)
Recall that Â is known here (since p is known), hence α can be calculated. Since z1 < z2, Â2 > Â1, and thus α > 0.
Similarly to (22), Eq. (38) has a unique solution at V0 ∈ (0, 1). Let us prove the existence and uniqueness of V0.
Due to these facts, Eq. (38) has a unique root at V0 ∈ (0, 1).
Typical plots of Gp(V) are shown in
Similarly to Sec. 3.1, it is possible to estimate A∞ based only on the relative distance z̃ = z2/z1, rather than absolute ones. Then,
G̃p(Ṽ) ≡ (α − 1)Ṽ^z̃ + (α + 1)Ṽ − 2α = 0,   (41)
where Ṽ is defined in Eq. (30). Eq. (41) has a unique solution Ṽ0 ∈ (0, 1). Based on Eq. (35),
Â∞ = Â1/(1 − Ṽ0).   (42)
This dehazing method was applied to Scene 1, as shown in
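A corresponding sketch for this variant is given below; α is the normalized airlight difference of Eq. (39), and brentq is again an illustrative choice of root finder.

```python
from scipy.optimize import brentq

def estimate_a_inf_known_p(a1_hat, a2_hat, z1, z2):
    """Estimate A-infinity from the airlight of two regions at known
    distances z1 < z2, given a known p, so that per region
    A_hat = (Imax - Imin) / p is available from Eq. (11).
    """
    alpha = (a2_hat - a1_hat) / (a1_hat + a2_hat)              # Eq. (39)
    g_p = lambda v: (alpha - 1) * v**z2 + (alpha + 1) * v**z1 - 2 * alpha
    v0 = brentq(g_p, 1e-9, 1.0 - 1e-9)                         # Eq. (38)
    return a1_hat / (1.0 - v0**z1), v0                         # from Eq. (35)
```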
3.3 Feature-Based, Known p
This section discloses a method, according to an embodiment of the invention, to estimate A∞ based on the identification of two similar objects in the scene. As in Sec. 3.1, these can be two similar buildings which have an unknown radiance Lbuild. Contrary to Sec. 3.1, the distances to these objects are not necessarily known. Nevertheless, these distances should be different. As in Sec. 3.2, this method is based on a given estimate of p̂, obtained, say, by the BSS method of Sec. 4. Thus, an estimate of Â(x,y) is at hand.
It is easy to show [33] from Eqs. (11, 12, 13) that
Iktotal = Lbuild + Sbuild·Â(xk,yk),   (43)
where
Sbuild ≡ (1 − Lbuild/A∞)   (44)
is a constant.
Buildings at different distances have different intensity readouts, due to the effects of scattering and absorption. Therefore, they have different values of Îtotal and Â. According to Eq. (43), Îtotal as a function of Â forms a straight line. Such a line can be determined using two data points. Extrapolating the line, its intercept yields the estimated radiance value L̂build. Let the slope of the fitted line be Ŝbuild. We can now estimate A∞ as
Â∞ = L̂build/(1 − Ŝbuild).   (45)
Based on Â∞ and p̂, we can recover L̂object(x,y) for all pixels, as explained in Sec. 3.1.
As an example, the two circles 11 and 12 in
The values of these distances are ignored, as if they are unknown. The corresponding dehazing result is shown in
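The line-fit step admits a direct least-squares sketch (np.polyfit is an illustrative choice; any robust line fit would do). The inputs are per-object averages of Îtotal and of the airlight Â obtained with the known p.

```python
import numpy as np

def estimate_a_inf_feature_based(i_total_objs, a_hat_objs):
    """Estimate A-infinity from two or more similar objects at different
    (unknown) distances, using the linear relation of Eq. (43):
        I_total = L_build + S_build * A_hat.
    """
    a = np.asarray(a_hat_objs, dtype=float)
    i = np.asarray(i_total_objs, dtype=float)
    s_build, l_build = np.polyfit(a, i, deg=1)   # slope and intercept
    a_inf = l_build / (1.0 - s_build)            # Eq. (45)
    return a_inf, l_build, s_build
```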
Unequal Radiance
The analysis can be extended to the case where the similar objects used for calibration of the parameters do not have the same radiance, but the ratio of their radiances is known, in ways similar to the treatment given in Sec. 3.1.
4. Blind Estimation of p
Both Secs. 3.2, 3.3 assume that the parameter p is known. In this section, according to an embodiment of the invention, a method is developed for blindly estimating p based on low-level vision. First, note that Eq. (13) can be rewritten as
This is a nonlinear function of the raw images Imax and Imin, since they appear in the denominator, rather than just superimposing in the numerator. However, the image model illustrated in
4.1 Facilitating Linear ICA
To facilitate linear ICA, we attempt to separate A(x, y) from D(x, y). However, there is a problem: as we detail in this section, ICA relies on independence of A and D.
This assumption is questionable in our context. Thus, we describe a transformation that enhances the reliability of this assumption.
From Eq. (9), the two acquired images constitute the following equation system:
(Imax, Imin)^T = M·(A, D)^T,   (47)
where
M = (1/2)[(1+p), 1; (1−p), 1]   (48)
is the mixing matrix (rows separated by semicolons). Thus, the estimated components are
(Â, D̂)^T = W·(Imax, Imin)^T,   (49)
where
W = M^(−1) = (1/p)[1, −1; (p−1), (p+1)]   (50)
is the separation matrix.
Eqs. (47,49) are in the form used by linear ICA. Since p is unknown, the mixing matrix M and separation matrix W are unknown. The goal of ICA in this context is: given only the acquired images Imax and Imin, find the separation matrix W that yields "good" Â and D̂. A quality criterion must be defined and optimized. Typically, ICA would seek Â and D̂ that are statistically independent (see [3, 5, 15, 30]). Thus, ICA assumes independence of A and D. However, this is a wrong assumption. The airlight A always increases with the distance z, while the direct transmission D decays, in general, with z.
Thus, there is a strong negative correlation between A and D.
To observe this, consider the hazy Scene 2, shown in
The strong negative correlation between A and D, corresponding to this scene is seen in
Nevertheless, due to this global correlation, A and D are highly mutually dependent.
Fortunately, this statistical dependence does not occur in all image components, and therefore ICA may be used. In fact, the high correlation mentioned above occurs in very low spatial frequency components: D decays with the distance only roughly. As noted, local behavior does not necessarily comply with this rough trend. Thus, in some image components (not low frequencies), we can expect significant independence.
Hence, ICA can work in our case, if we transform the images to a representation that is more appropriate than raw pixels. We work only with linear transformations as in [17], since we wish to maintain the linear relations expressed in Eqs. (47)-(50). There are several common possibilities for linear operations that result in elimination of low frequencies.
One such option is wavelet transformation (see for example [38]), but the derivation is not limited to that domain. Other methods of removing low frequencies, such as Fourier-domain high-pass filtering, may be used. Define
Dc(x,y) = W{D(x,y)}   (51)
as the wavelet (or sub-band) image representation of D. Here c denotes the sub-band channel, while W denotes the linear transforming operator. Similarly, define the transformed versions of A, Â, D̂, Imax and Imin as Ac, Âc, D̂c, Icmax and Icmin, respectively (see example in
Applying the same transform to Eq. (49) yields
(Âc, D̂c)^T = W·(Icmax, Icmin)^T,   (52)
where W is the same as defined in Eq. (50).
We now perform ICA over Eq. (52). As we shall see in the experiments, this approach decreases in a very effective way the statistical dependency. Hence, the assumption of statistical independence in sub-band images is powerful enough to blindly deliver the solution. In our case, the solution of interest is the matrix W, from which we derive p.
Based on p, the airlight is estimated, and can then be separated from D(x,y), as described in Sec. 2.
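Any linear high-pass representation can serve as the operator W of Eq. (51). A minimal sketch, using a one-level horizontal Haar detail band written directly in numpy (so as not to assume any particular wavelet library), is:

```python
import numpy as np

def haar_detail(img):
    """One-level horizontal Haar detail band: a linear high-pass transform.

    Being linear, it preserves the mixing relations of Eqs. (47)-(50),
    so it can play the role of the operator W in Eq. (51). Applying it
    to both raw images yields the sub-band images Icmax and Icmin.
    """
    img = img[:, : img.shape[1] // 2 * 2].astype(float)   # crop to even width
    return (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)

# ic_max, ic_min = haar_detail(i_max), haar_detail(i_min)
```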
4.2 Scale Insensitivity
When attempting ICA, we should consider its fundamental ambiguities [15]. One of them is scale: if two signals are independent, then they remain independent even if we change the scale of any of them (or both). Thus, ICA does not reveal the true scale of the independent components. This phenomenon can be considered both as a problem, and as a helpful feature. The problem is that the estimated signals may be ambiguous. However, in our case, we have a physical model behind the mixture formulation. As we shall see, this model eventually disambiguates the derived estimation. Moreover, we benefit from this scale-insensitivity. As will be shown in Sec. 4.3, the fact that ICA is insensitive to scale simplifies the intermediate mathematical steps we take.
An additional ICA ambiguity is permutation, which refers to mutual ordering of sources. This ambiguity does not concern us at all. The reason is that our physics-based formulation dictates a special form for the matrix W, and thus its rows are not mutually interchangeable.
4.3 Optimization Criterion
Minimization of statistical dependency is achieved by minimizing the mutual information (MI) of the sources. The MI of Âc and D̂c can be expressed as (see for example [6])
I(Âc, D̂c) = H(Âc) + H(D̂c) − H(Âc, D̂c).   (53)
Here H(Âc) and H(D̂c) are the marginal entropies of Âc and D̂c, respectively, while H(Âc, D̂c) is their joint entropy. In general, estimating the joint entropy is complex and unreliable.
Let us look at the separation matrix W (Eq. 50). Its structure implies that up to a scale p, the estimated airlight  is a simple difference of the two acquired images. Denote Ãc as an estimation for the airlight component Âc, up to this scale
Ãc = Icmax − Icmin.   (54)
Similarly, denote
D̃c = w1Icmax + w2Icmin   (55)
as the estimation of D̂c, up to a scale p, where
w1 ≡ (p − 1),   w2 ≡ (p + 1).   (56)
Hence, the separation matrix of Ãc and D̃c is
W̃ = [1, −1; w1, w2].   (57)
Minimizing the statistical dependency of Âc and D̂c means that Ãc and D̃c should minimize their dependency too. We thus minimize the MI of Ãc and D̃c,
I(D̃c, Ãc) = H(D̃c) + H(Ãc) − H(D̃c, Ãc)   (58)
as a function of w1 and w2. This way we reduce the number of degrees of freedom of W to two variables, instead of four. This is thanks to the scale insensitivity and the physical model. Minimizing Eq. (58) yields estimates ŵ1 and ŵ2 which are related to w1 and w2 by an unknown global scale factor. This aspect is treated in Sec. 4.6.
As mentioned, estimating the joint entropy is unreliable and complex. Yet, Eqs. (47,49) express pointwise processes of mixture and separation: the airlight in a point is mixed only with the direct transmission of the same point in the raw frames. For pointwise [15] mixtures, Eq. (58) is equivalent to
I(D̃c, Ãc) = H(D̃c) + H(Ãc) − log|det(W̃)| − H(Icmax, Icmin).   (59)
Here H(Icmax, Icmin) is the joint entropy of the raw frames. Its value is a constant set by the raw data, and hence does not depend on W̃. For this reason, we ignore it in the optimization process. Moreover, note from Eq. (54) that Ãc does not depend on w1, w2. Therefore, H(Ãc) is constant and can also be ignored in the optimization process. Thus, the optimization problem we solve is simplified to
(ŵ1, ŵ2) = arg min over (w1,w2) of [H(D̃c) − log|det(W̃)|],   (60)
where W̃ is the matrix given in Eq. (57).
At this point, the following argument can come up. The separation matrix W̃ (Eq. 57) has essentially only one degree of freedom, p, since p dictates w1 and w2. Would it be simpler to optimize directly over p? The answer is no. Such a move implies that p = (ŵ1 + ŵ2)/2.
This means that the scale of ŵ1 and ŵ2 is fixed to the true unknown value, and so is the scale of the estimated sources Â and D̂. Hence scale becomes important, depriving us of the ability to divide W̃ by p. Thus, if we wish to optimize the MI over p, we need to explicitly minimize Eq. (53). This is more complex than Eq. (60).
Moreover, this requires estimation of H(Âc), which is unreliable, since the airlight A has very low energy in high-frequency channels c. The conclusion is that minimizing Eq. (60) while enjoying the scale insensitivity is preferable to minimizing Eq. (53) over p.
4.4 Efficiency by a Probability Model
In this section, further simplifications are performed, allowing for more efficient optimization. Recall that to enable linear ICA, we use high spatial frequency bands. In natural scenes, sub-band images are known to be sparse. In other words, almost all the pixels in a sub-band image have values that are very close to zero. Hence, the probability density function (PDF) of a sub-band pixel value is sharply peaked at the origin. A PDF model which is widely used for such images is the generalized Laplacian (see for example [38])
PDF(D̃c) = c(ρ,σ)·exp[−(|D̃c|/σ)^ρ],   (61)
where ρ ∈ (0, 2) and σ are parameters of the distribution. Here c(ρ,σ) is a normalization constant. The scale parameter σ is associated with the standard deviation (STD) of the distribution. However, we do not need this scale parameter. The reason is that ICA recovers each signal up to an arbitrary intensity scale, as mentioned. Thus, optimizing a scale parameter during ICA is meaningless. We thus set a fixed unit scale (σ = 1) to the PDF in Eq. (61). This means that whatever D̃c(x,y) is, its values are implicitly re-scaled by the optimization process to fit this unit-scale model. Therefore, the generalized Laplacian in our case is
PDF(D̃c) = c(ρ)·exp(−|D̃c|^ρ).   (62)
This prior of image statistics can be exploited for the entropy estimation needed in the optimization [4, 47]. Entropy is defined (see for example [6]) as
H(D̃c) = E{−log[PDF(D̃c)]},   (63)
where E denotes expectation. Substituting Eq. (62) into Eq. (63) and replacing the expectation with empirical averaging, the entropy estimate is
Ĥ(D̃c) = (1/N)·Σx,y |D̃c(x,y)|^ρ − C(ρ).   (64)
Here N is the number of pixels in the image, while C(ρ) = log[c(ρ)]. Note that C(ρ) does not depend on D̃c, and thus is independent of w1 and w2. Hence, C(ρ) can be ignored in the optimization process. The generalized Laplacian model simplifies the optimization problem to
(ŵ1, ŵ2) = arg min over (w1,w2) of {(1/N)·Σx,y |D̃c(x,y)|^ρ − log|w1 + w2|}.   (65)
The cost function is a simple expression of the variables.
4.5 A Convex Formulation
Eq. (65) is simple enough to ease optimization. However, we prefer a convex formulation of the cost function, as it guarantees a unique solution, which can be reached efficiently using for example gradient-based methods or other methods known in the art.
First, note that D̃c(x,y) is a convex function of w1 and w2, as seen in the linear relation given in Eq. (55). Moreover, the term [−log|w1 + w2|] in Eq. (65) is a convex function of w1 and w2 in the domain (w1 + w2) ∈ R+. We may limit the search to this domain. The reason is that, following Eq. (56), (ŵ1 + ŵ2) = 2kp, where k is an arbitrary scale arising from the ICA scale insensitivity. If k > 0, then (w1 + w2) ∈ R+, since by definition p ≥ 0.
If k < 0, we may simply multiply k by −1, thanks to this same insensitivity. Hence, the overall cost function (65) is convex if |D̃c|^ρ is a convex function of D̃c.
The desired situation of having |D̃c|^ρ convex occurs only if ρ ≥ 1. Apparently, we should estimate ρ at each iteration of the optimization, by fitting the PDF model (Eq. 61) to the values of D̃c(x,y). Note that this requires estimation of σ as well. The parameters ρ and σ can be estimated by minimizing the relative entropy [38] (the Kullback-Leibler divergence) between the histogram of D̃c(x,y) and the PDF model distribution. However, this is computationally complex.
Therefore, we preferred using an approximation and set the value of ρ such that convexity is obtained. Note that ρ < 1 for sparse signals, such as typical sub-band images.
The PDF representing the sparsest signal that yields a convex function in Eq. (65) corresponds to ρ = 1. Thus we decided to use ρ = 1 (see also [4, 22, 47]). By this decision, we may have sacrificed some accuracy, but enabled convexity. In contrast to the steps described in the previous sections, the use of ρ = 1 is an approximation. How good is this approximation? To study this, we sampled 5364 different images D̃c(x,y). These images were based on various values of p, c and on different raw frames. Then, the PDF model (Eq. 61) was fitted to the values of each image. The PDF fit yielded an estimate of ρ per image. A histogram of the estimates of ρ over this ensemble is plotted in
The approximation turns the minimization of I(Â, D̂) into the following problem:
(ŵ1, ŵ2) = arg min over (w1,w2) of {(1/N)·Σx,y |D̃c(x,y)| − log|w1 + w2|}.   (66)
Eq. (66) may be used for our optimization. It is unimodal and efficient to solve. For convex functions such as this, convergence speed is enhanced by use of local gradients.
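A sketch of this convex program is given below, run on the sub-band images; the optimizer and the initial guess (corresponding to p = 0.5 via Eq. 56) are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_w(ic_max, ic_min):
    """Minimize Eq. (66): mean|w1*Icmax + w2*Icmin| - log|w1 + w2|.

    Returns (w1, w2) up to the global ICA scale ambiguity, which the
    estimator of Eq. (67) later cancels.
    """
    x1, x2 = ic_max.ravel(), ic_min.ravel()

    def cost(w):
        w1, w2 = w
        if w1 + w2 <= 0.0:          # restrict to (w1 + w2) > 0, Sec. 4.5
            return np.inf
        return np.mean(np.abs(w1 * x1 + w2 * x2)) - np.log(w1 + w2)

    res = minimize(cost, x0=np.array([-0.5, 1.5]), method="Nelder-Mead")
    w1, w2 = res.x
    return w1, w2

# p_hat = (w1 + w2) / (w2 - w1)   # Eq. (67), scale invariant
```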
4.6 Back to Dehazing
The optimization described in Secs. 4.3, 4.4 and 4.5 yields the estimates ŵ1 and ŵ2. We now use these values to derive an estimate for p. Apparently, from Eq. (56), p̂ is simply the average of ŵ1 and ŵ2. However, ICA yields ŵ1 and ŵ2 up to a global scale factor, which is unknown. Fortunately, the following estimator
p̂ = (ŵ1 + ŵ2)/(ŵ2 − ŵ1)   (67)
is invariant to that scale. This process is repeated in each color channel.
Once p̂ is derived, it is used for constructing W in Eq. (50). Then, Eq. (49) separates the airlight Â and the direct transmission D̂. This recovery is not performed on the sub-band images. Rather, it is performed on the raw image representation, as in prior sky-based dehazing methods.
We stress that in this scheme, we bypass all inherent ICA ambiguities: permutation, sign and scale. Those ambiguities do not affect us, because we essentially recover the scene using a physics-based method, not pure signal-processing ICA. We use ICA only to find p̂, and this is done in a way (Eq. 67) that is scale invariant.
A Note about Channel Voting
In principle, the airlight DOP p should be independent of the wavelet channel c. However, in practice, the optimization described above yields a different estimated value p̂ for each wavelet channel. The reason is that some channels comply with the independence assumption of Sec. 4.1 better than other channels. Nevertheless, there is a way to overcome poor channels. Channels that do not obey the assumptions yield a random value for p̂. On the other hand, channels that are "good" yield a consistent estimate. In the preferred embodiment, the optimal p̂ is determined by voting. Moreover, this voting is constrained to the range p̂ ∈ [0, 1], due to Eq. (66); any value outside this range is ignored. It should be noted that other methods of identifying the most probable p are known in the art. For example, statistical analysis may fit a distribution to the values of p̂, from which the most probable one is selected. As an example, the process described in this section was performed on Scene 1. The process yielded a set of p̂ values, one for each channel.
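One simple way to realize such a vote (a histogram mode over the valid range; the bin width is an illustrative choice) is:

```python
import numpy as np

def vote_p(p_estimates, bins=20):
    """Pick the most consistent p from the per-channel estimates.

    Estimates outside [0, 1] are ignored, per the constraint noted above;
    the center of the most populated histogram bin is returned.
    """
    p = np.asarray(p_estimates, dtype=float)
    p = p[(p >= 0.0) & (p <= 1.0)]
    counts, edges = np.histogram(p, bins=bins, range=(0.0, 1.0))
    k = int(np.argmax(counts))
    return 0.5 * (edges[k] + edges[k + 1])
```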
5. Inhomogeneous Distance
To separate A from D using ICA, both must be spatially varying. Consider the case of a spatially constant A. This occurs when all the objects in the FOV are at the same distance z from the camera. In this case, Ac, and hence I(Ac, Dc), are null, no matter what the value of the constant A is. Hence, ICA cannot derive it. Therefore, to use ICA for dehazing, the distance z must vary across the FOV. Distance non-uniformity is also necessary in the other methods (not ICA-based), described in Sec. 3, for estimating p and A∞. We note that scenarios having laterally inhomogeneous z are the most common and interesting ones. Ref. [25] noted that the main challenge of vision in bad weather is that object distances generally vary across the FOV, making the degradation effects spatially variant. In the special cases where z is uniform, dehazing by Eq. (13) is similar to a rather standard contrast stretch: subtracting a constant from Itotal, followed by global scaling. Hence the methods described herein address the more common and challenging cases; hence also the importance of sometimes providing different images for extracting the haze parameters.
Attention is now drawn to
Raw image data is acquired 110 using camera 210 as described in
Haze parameters p and A∞ are determined for regions of interest in the image in image analysis 120.
The effect of the haze is corrected in image correction stage 150, where the global parameters p and A∞ from image analysis 120 are used to correct the image. The corrected image L̂object(x,y) is then displayed on display 222.
Data Acquisition
Raw image data is acquired 110 using camera 210 as described in
Optionally, distances to at least two objects or locations in the image are supplied by auxiliary unit 260.
Image Analysis
Image analysis 120 may be performed in several ways, all within the general scope of the current invention. Selection of the appropriate method depends on the available data and the image acquired.
Optionally, a plurality of these methods may be performed, and the resulting corrected images shown to the user. Alternatively or additionally, the parameters obtained by the plurality of methods may be combined, for example as a weighted average, and used for the correction stage 150.
Distance and Radiance Based Estimation of Both p and A∞
According to one exemplary embodiment of the current invention, a distance based estimation method 130 for obtaining both global haze parameters p and A∞ is depicted in
In order to perform this data analysis, at least two similar objects are selected in the overlapping area of the acquired images. The two objects are selected such that their distances from camera 210 are known, for example from auxiliary unit 260. The selected objects are known to have similar object radiance Lobject. These objects may be buildings known to be of the same color, areas of vegetation of the same type, etc.
For each selected object “i” situated at distance zi from the camera, Ci is evaluated from Eq. (18) by subtracting the signals corresponding to the two acquired images.
In general, each object may span a plurality of image pixels. Ci may represent the average value of the object's pixels; thus, the selected objects are optionally of different sizes. When more than two similar objects are selected, or a plurality of object pairs is selected, effective global parameters may be obtained by averaging the calculated global parameters, or by best-fitting global parameters to the over-constrained equations.
Eq. (22), having the unknown V, is constructed from the computed values Ci and the known distances zi.
Eq. (22) is generally numerically solved to obtain the solution V0 using methods known in the art.
Based on solution for V0, Eq. (25) is evaluated to yield the desired value of the global haze parameter A∞.
Eq. (26) is then evaluated using the value for V0, and inserted into Eq. (27) to yield the global haze parameter p.
Blind Estimation of p
According to another exemplary embodiment of the invention, global haze parameter p is first determined, and is subsequently used to calculate the global haze parameter A∞.
According to one embodiment of the current invention, a blind estimation method 140 for obtaining the global haze parameter p is depicted in
To perform the analysis, the two acquired images are filtered to remove the low spatial frequency components, preferably using wavelet analysis. The filtered images are represented, per sub-band channel c and pixel (x,y), as Icmax and Icmin.
A linear combination D̃c of Icmax and Icmin is defined by Eq. (55), using the two unknowns w1 and w2.
Values for the unknowns w1 and w2 are obtained by minimizing Eq. (66).
The obtained values w1 and w2 are used to obtain the value of the global haze parameter p from Eq. (67).
The global haze parameter p from blind estimation method 140 may be used for obtaining the haze parameter A∞ using either of estimation methods 142 and 144 described below.
Distance-Based, Known p Estimation of A∞
According to an exemplary embodiment of the current invention, a distance based, known p estimation method 142 for obtaining the haze parameter A∞ is depicted in
The theoretical justification for this exemplary embodiment is detailed in section 3.2.
Using the known global haze parameter p, for each area “i”, an Âi is constructed from the acquired images according to Eq. (11).
In general, each area may span a plurality of image pixels. Âi may represent the average value of the area's pixels; thus, the selected areas are optionally of different sizes. When more than two areas are selected, effective global parameters may be obtained by averaging the calculated global parameters, or by best-fitting global parameters to the over-constrained equations.
A set of equations, having the two unknowns V and A∞, is thus constructed from the known distances zi using Eq. (35) (wherein "(x,y)" stands as "i" for identifying the area).
To solve the abovementioned set of equations, Eq. (38) is constructed, having only one unknown, V.
Eq. (38) is generally numerically solved to obtain the solution V0 using methods known in the art.
Based on solution for V0, Eq. (42) is evaluated to yield the desired value of the global haze parameter A∞.
Feature-Based, Known p Estimation of A∞
According to an embodiment of the current invention, a feature based, known p estimation method 144 for obtaining the haze parameters A∞ is depicted in
Using the known global haze parameter p, for each object “i”, an Âi is constructed from the acquired images according to Eq. (11).
In general, each object may span a plurality of image pixels. Âi may represent the average value of the object's pixels; thus, the selected objects are optionally of different sizes. When more than two objects are selected, or a plurality of object pairs is selected, effective global parameters may be obtained by averaging the calculated global parameters, or by best-fitting global parameters to the over-constrained equations.
From Eq. (43) it is apparent that the total acquired signal from object "k", Îktotal, defined by Eq. (10), is a linear function of Âk, having Lbuild as its intercept and Sbuild as its slope (wherein "(x,y)" stands as "k" for identifying the object).
Thus, using two or more selected objects, it is easy to solve for Lbuild and Sbuild using graphical methods, numerical methods, or fitting methods known in the art.
Inserting the obtained Lbuild and Sbuild into Eq. (45), the value of global haze parameter A∞ may be obtained.
Image Correction
Haze parameters p and A∞ are determined for regions of interest in the image in image analysis 120.
The effect of the haze is corrected in image correction stage 150, where the global parameters p and A∞ from image analysis 120 are used to correct the image. The corrected image L̂object(x,y) is then displayed on display 222.
It should be noted that the global parameters p and A∞ may be separately computed for sections of the image, and applied in the correction accordingly. Separate analysis may be advantageous when the image spans a large angle, such that haze conditions change within the image. It should also be noted that, since the global parameters vary slowly in time, mainly through weather changes and changes in sun position, image analysis 120 may be performed at infrequent intervals and applied to a plurality of images taken under similar conditions. Similarly, distances to objects in the image are generally unchanged and may be obtained once.
For each pixel (x,y) in the image, the haze contribution Â(x,y) is calculated from Eq. (11) using the global haze parameter p. Since haze is spatially slowly varying, the haze contribution may be smoothed. Smoothing may take into account knowledge about the scenery, such as the existence of a sharp edge in the haze contribution, for example at the edge of a mountain range, and may be tailored to create domains of continuously, optionally monotonically, varying haze contribution.
Optionally, the haze contribution may be subtracted from the acquired image to obtain the direct transmission:
D̂ = Itotal − Â,   (68)
and the results displayed to the user.
Preferably, the attenuation t̂ for each pixel in the image is calculated from the haze contribution Â and the global haze parameter A∞ using Eq. (12).
For each pixel in the image, the estimation of the object luminosity is then obtained from Eq. (13) and displayed to the user.
From Eq. (2), it is apparent that the distance to each object in the image can be obtained from t by:
z(x,y) = −log[t(x,y)]/β,   (69)
where the attenuation coefficient β may be obtained by solving Eq. (69) for at least one location in the image to which the distance is known.
A "distance map" z(x,y) may be created and displayed to the user in association with the processed or original image. For example, the distance to an object may be provided by pointing to a location on the image. Alternatively or additionally, distance information, for example in the form of contour lines, may be superimposed on the displayed image.
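A sketch of this distance-map computation follows, assuming a single calibration pixel ref_pixel whose distance z_ref is known (the parameter names are hypothetical):

```python
import numpy as np

def distance_map(t_hat, z_ref, ref_pixel, eps=1e-6):
    """Turn the estimated transmittance t into distances via Eq. (69).

    beta is calibrated from one pixel of known distance z_ref; a range
    finder, a map, or a known object size may supply that distance.
    """
    t = np.clip(t_hat, eps, 1.0)
    beta = -np.log(t[ref_pixel]) / z_ref    # solve Eq. (69) at the reference
    return -np.log(t) / beta                # z(x, y)
```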
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub combination.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.