1. Technical Field
The present invention relates to an imaging system and devices for medical, automotive, photographic and other applications.
2. Related Background Art
Wide-angle lenses are useful for a variety of applications including medical applications such as endoscopes, security cameras, automotive cameras and imaging systems for photography. As the field of view increases, geometric lens distortion becomes unavoidable. Straight lines at off-axis field points become curved. Distortion can be minimized with appropriate lens design. Lenses are also designed with particular geometric distortion properties to match the application. Lenses may be designed to provide the best overall visual appearance and the ability to detect details in particular regions of the image. Such lenses, termed here Tailored Distortion Lenses, reduce distortion optically by using a combination of spherical and aspheric elements. However, a significant amount of distortion can sometimes still be easily seen even in a tailored distortion lens. Further reducing the distortion of a tailored distortion lens through the design of the optics alone is very difficult and may require undesirable compromises of other lens parameters. The remaining distortion can, however, be further reduced or optimized for particular applications, or particular situations within an application, by processing the image data. The processed image is then displayed to the user or saved to a memory for further processing. The resulting image will have less distortion than can be accomplished optically alone. For certain applications, the resulting images provide better views of important features, are more pleasing to view and appear more natural to the users.
Zooming is a common feature in many imaging systems. There are two ways to accomplish zooming: optical or digital. In an optical zoom, the components (lens elements, spacers, etc.) are moved physically to cause a change in the overall focal length of the system. For example, a typical consumer camera employs an optical zoom lens having moving lens groups. For a given image size (film or imaging sensor), this means that the overall field of view of the system also changes during zooming: the field of view is reduced for increased focal length and vice versa. Digital zooming is a process in which a smaller section of an image is enlarged to fill the entire available area of the display. When a smaller section is enlarged, one loses field of view because image areas outside the zoomed area can no longer be displayed. There is a need to zoom in on an area of interest, yet simultaneously be able to see the full field of view.
Corrections of geometric distortion have heretofore required algorithms specific to each lens design. There is a need for an algorithm that accurately and generally describes lens behavior. With such a function there would be a reduced need to design a custom lens for each application. Similarly, color and geometric corrections have heretofore been accomplished in software through empirical measurements and numeric approximation of the lens mapping. Often the lens mapping function is described as a polynomial expansion containing multiple empirically determined terms specific to the particular lens. Measurements for calculating the particular algorithms require data from broad areas of the lens. In other cases a linear approximation of lens behavior is used that is good only for a narrow section of the image. There is a need for a descriptive function of lens behavior that applies to multiple lens designs, is applicable to the entire viewing region of the lens and can be calculated using data from a narrow region of the lens. Such a function would also make embedding processing algorithms easier for certain types of electronic imagers.
The image from a single lens can be modified to simulate the behavior of a different lens. Applications such as endoscopy require particular lens designs for particular procedures or even for portions of procedures. In some cases there is a need to retract a probe and change the lens or exchange the probe for another with a different lens for a different stage of the procedure. This adds to the complexity of procedures and increases the chance for contamination and infection. Complex mechanical lens elements have been used to enable a modification of the field or direction of view to help enable a single probe to be used throughout a procedure or for multiple procedures. The tradeoff is most often a smaller field of view for the ability to image a limited area. These more complex probes can often lead to a surgeon missing particular features as the field of view is narrowed or the field of view is limited to a particular direction. There is a need for endoscopic probes that can smoothly change the field of view or can smoothly change the image from one lens design to another without the need to physically change the probe.
Wide-angle lenses are also finding increasing use in automotive applications. Imaging systems incorporating wide-angle lenses are used for viewing the region behind a vehicle to eliminate blind spots unavoidable with mirrors or with the view out the rear window alone. Geometric distortions in such images may effectively result in blind spots similar to those the systems are intended to eliminate. The most effective image for driver viewing also varies with the driving situation, the particular driver, and even driver preferences. There is a need to be able to transform the image of an automotive camera after the camera has been installed in the vehicle. There is additionally a need for the driver to be able to easily transform the image to match a changing driving situation.
There is a need for the ability to process the images captured by an electronic imager to provide modifications of the geometric distortions in the viewed image. There is a need for an image processing method that accurately describes lens behavior and uses only a minimum number of parameters to characterize the lens mapping function. There is a need for an imaging system that uses lens designs tailored to the processing algorithms used to subsequently manipulate the image.
Attempts to provide such algorithms have been reported in, for example, U.S. Pat. No. RE36,207, U.S. Pat. No. 7,427,263, U.S. Pat. No. 7,042,508 and U.S. Pat. No. 7,042,497. None of these attempts addresses the needs discussed above.
In one embodiment a new “zooming” method is disclosed. This new zooming method allows one to enlarge or shrink the central section of the image while still preserving the overall field of view. Applications of this invention include imaging systems such as endoscopes, automotive cameras and security cameras. Zooming while maintaining the field of view is accomplished with digital processing hardware and software. An image (video frame) is captured by a lens having known optical properties. This image is processed digitally to enlarge or shrink the central portion of the image, producing a transformed image. In one embodiment the calculation is constrained such that the transformed image will have the same overall field of view as the originally acquired image. This means that the off-axis image is squeezed or stretched to accommodate the zooming of the central portion of the image. The transition between center and periphery is smooth and continuous. While the central portion is “zoomed in”, the periphery is “zoomed out” and vice versa. The image is captured by an electronic imager (for example, a CCD or CMOS imager) located at the focal plane of the lens. The electronic image is then manipulated using processing algorithms programmed into the same processor that acquires the images or, optionally, a second processor. The second processor may be remotely located from the processor that initially acquires the images.
The invention is applicable to and claimed for a range of lens systems. Preferred embodiments include a wide-angle lens system. As used herein, a “wide-angle” lens system refers to a lens with a field of view equal to or greater than approximately 100 degrees.
Yet another embodiment of the invention allows for simulation of the image that would be obtained by a target lens system having a particular set of design parameters, using actual image data from a different lens whose design parameters differ from those of the target lens system. The simulated or transformed image data are calculated by applying a transformation to the actual image data. The simulation is effected by applying an image processing algorithm, herein termed a distortion transformation, to map the image obtained using the actual lens system onto a simulated image from the target lens design.
Another embodiment of the invention uses lenses that follow a new parametric equation for the lens mapping function. The constraint of following this lens mapping function, in conjunction with the design parameters of the physical size of the lens, the number of lens elements and the desired field of view, will provide a solution for the lens design. A lens made to these design parameters is then used in an imaging system additionally comprising a sensor and a processor to acquire an image, and means to provide user inputs. The processor is further programmed to accomplish coordinate transformations upon the acquired image based upon user inputs. The transformations allow improved analysis of the images through a reduction or increase in the apparent distortions and the ability to zoom the image, while maintaining all available data. In another embodiment the processing of the image is done remotely from the location at which the image is acquired. In another embodiment separate processors are used to acquire the image and to manipulate the image based on user inputs.
Another embodiment of the invention is a system for accomplishing image coordinate transformation to map the pixels of an imager device attached to a lens system to an arbitrary display format for viewing. Such transformations allow for the correction of distortions introduced by the lens system in conventional display formats and for the projection of images onto custom display formats as used, for example, in simulators and games.
Another embodiment of the present invention is an imaging system comprising a lens system, an image sensor, hardware and software for an image acquisition and processing system that accomplishes image storage and the zooming and coordinate transformation techniques described above, and a display.
Another embodiment of the invention introduces lenses that are specifically designed to follow a novel lens mapping function. The constraint of the lens mapping function and design parameters such as the dimensions of the lens, the number of lens elements and the target field of view, define a recipe for a lens that is particularly suitable for acquiring images that are to be further manipulated through described coordinate transformations.
The term “lens element” means a single transparent mass of refractive material having two opposing refracting surfaces. An aspheric element is a lens element having at least one refracting surface that is neither spherical nor flat. The term “lens component” is defined as (a) a single lens element spaced so far from any adjacent lens element that the spacing cannot be ignored in computing the optical properties of the lens assembly, or (b) two or more lens elements that have their adjacent lens surfaces either in full overall contact or overall so close together that the spacing between adjacent lens surfaces can be ignored in computing the overall lens assembly performance. Lens elements and lens components are identified numerically increasing from object to image.
For standard lenses with negligible distortion, the lens mapping function is as follows:
h(θ)=f*tan(θ) (1)
where f is the effective focal length of the lens. Lenses that follow this mapping function are known as “rectilinear” lenses. Wide-angle and fisheye lenses do not follow this equation well. A commonly used mapping function for fisheye lenses is:
h(θ)=f*θ (2)
Lenses that follow Eq (2) are known as “f-θ” or “equidistant” lenses. This equation is generally used in commercial software for de-warping images taken by fisheye lenses. The recently invented Tailored Distortion lenses by the same inventor do not follow this equation well. Most wide-angle lenses do not follow this equation closely, but it is used as an approximation to the lens mapping function over a narrow field of view. A general approach is to use a high-order polynomial, as follows, to characterize the lens mapping function:
h(θ)=a1*θ+a2*θ^2+a3*θ^3+a4*θ^4+ . . . (3)
where θ is given in radians. The coefficients a1, a2, a3 . . . are empirically determined from the lens design by fitting a polynomial to the actual curve of h vs. θ of the lens. Though Eq (3) is a very general technique for characterizing any lens, it typically requires more than four coefficients to achieve a reasonable fit. Fitting more than four parameters requires test patterns that cover a significant portion of the field of view of the lens.
In the present invention, we introduce a new mapping function for characterizing lenses:
h(θ)=(f/β)*tan(β*θ) for β>0 (4a)
h(θ)=(f/β)*sin(β*θ) for β<0 (4b)
where f is the effective focal length of the lens, and β is termed the “rectilinearity”. The rectilinearity β can be determined from any lens design program such as Zemax® (Zemax is a registered trademark of Zemax Development Corporation). In practice, β is the best-fit value that approximates the actual h(θ) vs. θ curve of the lens. Rectilinearity represents the degree of distortion in lenses. If β=1, Eq (4a) becomes Eq (1), the mapping function of a distortion-free lens. When β approaches 0, either Eq (4a) or (4b) becomes Eq (2), the mapping function of a perfect “f-θ” lens. It must be noted that the image formed by a perfect f-θ lens will still look distorted, i.e. the off-axis objects are “squeezed” relative to the on-axis objects. Using the f-θ lens as a baseline reference, most commercially available wide-angle lenses “squeeze” the off-axis objects even more; their β values are negative. The tailored distortion lenses “squeeze” off-axis objects less and have positive β values. The advantage of Eq (4a) and Eq (4b) over Eq (3) is that they allow one to model the behavior of a lens using only two parameters. This is easier and more efficient to implement in software (either stand-alone or embedded in hardware processors). In subsequent discussions, reference to the lens mapping function will equivalently refer to Eqs (4a) and (4b), or simply Eq (4).
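For illustration only (this sketch is not part of the specification), Eq (4) and its inverse translate directly into Python; the function names are ours, and the β near 0 branch encodes the f-θ limit noted above:

```python
import math

def lens_height(theta, f, beta, eps=1e-9):
    """Image height h(theta) under Eq (4); theta in radians."""
    if abs(beta) < eps:
        return f * theta                             # limit beta -> 0: Eq (2)
    if beta > 0:
        return (f / beta) * math.tan(beta * theta)   # Eq (4a)
    return (f / beta) * math.sin(beta * theta)       # Eq (4b)

def lens_angle(h, f, beta, eps=1e-9):
    """Inverse of lens_height: field angle for a given image height."""
    if abs(beta) < eps:
        return h / f
    if beta > 0:
        return math.atan(beta * h / f) / beta
    return math.asin(beta * h / f) / beta            # requires |beta*h/f| <= 1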
In one embodiment, the lens mapping function is used in a processing algorithm to process or transform the electronic image formed by the lens on the image sensor at its focal plane into a new image, without discarding or losing image information in any part of the image. The entire field of view of the lens is still visible after transformation. The image formed by the lens on the image sensor is digitized into an array of points known as pixels. Each pixel has a pair of coordinates (xx, yy) and a value which measures the light intensity hitting the pixel. For color imagers, the pixel value is represented by a vector with three components (R, G, B), each component representing the intensity of one primary color.
The pixel coordinates (xx, yy) are a pair of indices identifying the location of the pixel within the electronic imager array. For example, a VGA imager has 640 columns and 480 rows of pixels. For ease of discussion, we will use the center of the array as the origin. If a pixel has coordinates (+100, +100), this pixel is located at the intersection of the +100th row and the +100th column from the center. We will further assume that the optic axis of the lens goes through the center of the imager.
The processing algorithm takes the input image (the source image) captured by the electronic imager and generates a new image (the target image). We will use (x, y) to represent the pixel coordinates on the target image, and (xx, yy) to represent the pixel coordinates on the source image. For each (x, y) pair, the algorithm calculates a corresponding (xx, yy) such that the pixel value at (x, y) on the target image is copied from the pixel value at (xx, yy) on the source image. Using the lens mapping function, various useful transformations of the image are possible. Each is discussed in turn below.
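As an illustrative sketch of this copy step (our naming, with nearest-neighbor sampling where a practical implementation would typically interpolate):

```python
import numpy as np

def remap(source, mapping):
    """For each target pixel (x, y), copy the value of the source pixel
    (xx, yy) returned by `mapping`. Coordinates are center-origin, as in
    the text; out-of-bounds source locations are left black."""
    rows, cols = source.shape[:2]
    cy, cx = rows // 2, cols // 2
    target = np.zeros_like(source)
    for row in range(rows):
        for col in range(cols):
            xx, yy = mapping(col - cx, row - cy)
            scol, srow = int(round(xx)) + cx, int(round(yy)) + cy
            if 0 <= scol < cols and 0 <= srow < rows:
                target[row, col] = source[srow, scol]
    return target
```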
In one embodiment, an image (“source image”) taken with lens1 having rectilinearity β1 and focal length f1 is transformed into a new image (“target image”) as if taken with a hypothetical lens2 having β2 and f2. To keep the same overall horizontal field of view, f2 and β2 are not two independent variables. They are related to the horizontal field of view (HFOV) via the following equations (derived from Eqs (4a) and (4b)):
f2=(H*β2)/tan(β2*HFOV/2) for β2>0 (5a)
f2=(H*β2)/sin(β2*HFOV/2) for β2<0 (5b)
where H is the image height at the edge of the horizontal field (half the horizontal width of the image). These equations represent the trade-off between the focal length and the distortion of a wide-angle lens for a fixed horizontal field of view.
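Eq (5) translates directly into a helper for the sketches that follow (illustrative naming; hfov in radians, H in pixels):

```python
import math

def f2_for_fixed_hfov(beta2, H, hfov, eps=1e-9):
    """Focal length f2 that preserves the horizontal field of view."""
    if abs(beta2) < eps:
        return 2 * H / hfov                              # f-theta limit
    if beta2 > 0:
        return (H * beta2) / math.tan(beta2 * hfov / 2)  # Eq (5a)
    return (H * beta2) / math.sin(beta2 * hfov / 2)      # Eq (5b)
```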
The processing steps are shown in FIG. 3.
In another embodiment the operator may decide that a different or transformed image is more appropriate to the purpose or task. The user selects and enters a new set of image parameters 304 that are used to transform the image 302 for viewing 303. The user parameters may be a selection of which of the several transformations described below are to be used. In another embodiment, the user provides parameters that are used along with Eqs (4) and (5) above to produce a transformed image 303. Once the image is viewed, the user may select a different set of parameters 304 and iteratively determine the best set of parameters to manipulate the image for the particular application. Non-limiting examples of means to select the parameters 304 include keyboard entry, thumbwheels, sliders or any of a variety of user inputs known in the art. In another embodiment the parameters are selected through an automated system: a computer may view the image 303 and then automatically select parameters 304 to modify the image to match a preselected set of conditions. In one embodiment the imaging system is integrated into a surveillance system. The transform parameters 304 may then be selected automatically on the basis of activity or motion detected in a region of the image.
Fixed Field of View Zoom Transformation.
In one embodiment an image taken with an imaging system including a lens with a focal length f1 and a rectilinearity parameter β1 is transformed into an image that would be viewed if the image were acquired with a lens having a different rectilinearity β2 or with a different focal length f2. This is a fixed field of view transformation.
In this embodiment, the transformation follows from Eq (4) together with the field of view constraint of Eq (5).
Note that the transformation may be applied selectively along the x coordinates, the y coordinates or both. Non-limiting examples of means to implement the embodiment include software running on the same processor that acquires the image or on a separate processor.
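The specification's numbered implementation steps are not reproduced in this text. The following is a minimal Python sketch of one implementation consistent with Eqs (4) and (5): for each target pixel, the field angle is recovered through the hypothetical lens (f2, β2) and the source radius through the actual lens (f1, β1). It reuses lens_height, lens_angle, f2_for_fixed_hfov and remap from the sketches above; all names are illustrative.

```python
import math

def fixed_fov_zoom_map(f1, beta1, f2, beta2):
    """Radial map for the fixed field of view zoom transformation."""
    def mapping(x, y):
        r = math.hypot(x, y)
        if r == 0:
            return 0.0, 0.0
        theta = lens_angle(r, f2, beta2)    # angle under hypothetical lens2
        rr = lens_height(theta, f1, beta1)  # radius on the source image
        return rr * x / r, rr * y / r       # same azimuth, rescaled radius
    return mapping

# Example: zoom the center while keeping the field of view fixed.
# target = remap(source, fixed_fov_zoom_map(
#     f1, beta1, f2_for_fixed_hfov(beta2, H, hfov), beta2))
```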
Distortion Transformation
In another embodiment an initially acquired image (“source image”) taken with lens1 having rectilinearity β1 and focal length f1 is transformed into a new image (“target image”) that appears as if taken with lens2 having β2 and f2. This is a distortion transformation. If β2>β1, the amount of distortion is reduced in going from the source image to the target image. If β2<β1, the amount of distortion is increased in going from the source image to the target image.
If β2=1, we are transforming the image into a distortion-free image. In a distortion transformation the field of view is not necessarily preserved. The fixed field of view zoom transformation, discussed above, is a special case of the distortion transformation in which f2 and β2 are constrained by Eq (5) to maintain the field of view of the original image.
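In sketch form (again illustrative, reusing the functions defined above), the distortion transformation is the same radial remap with f2 and β2 chosen freely rather than constrained by Eq (5):

```python
# Distortion transformation: same radial map, with f2 and beta2 free.
# beta2 = 1 targets a distortion-free (rectilinear) target image.
target = remap(source, fixed_fov_zoom_map(f1, beta1, f2, 1.0))
```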
TV Distortion Correction
In another embodiment the off-axis horizontal and vertical lines in an image taken with a lens having β1 and f1 are straightened. The curving of those lines is known as TV distortion, and this embodiment is a TV distortion correction. The transformation accomplishes this correction without changing the distance between the horizontal/vertical lines along the y/x axis.
In another embodiment the acquired image is transformed through a spherical projection, in which the object surface is a sphere centered on the lens pupil. Referring to FIG. 7, for a pixel with coordinates (x, y) on the target image, the longitude angle γ and latitude angle φ are:
γ=x/X*hfov (6a)
φ=y/X*hfov (6b)
where X is the width in pixels and hfov is the horizontal field of view of the target image.
The perpendicular distance from this pixel to the optic axis 705 is then:
d=sqrt(sin^2(γ)*cos^2(φ)+sin^2(φ)) (7a)
From this, we can derive the field angle 704 for this pixel:
θ=arcsin(d) (7b)
We can now apply Eq (4a) or (4b) to compute the image height h on the source image for the same incident angle θ. The source pixel coordinates (xx, yy) can then be calculated as:
xx=h(θ)*sin(γ)*cos(φ)/d (8a)
yy=h(θ)*sin(φ)/d (8b)
The processing steps for this calculation consist of applying Eqs (6) through (8) to each pixel (x, y) of the target image.
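A minimal sketch of this calculation, transcribing Eqs (6) through (8) (our naming; lens_height is the Eq (4) sketch above, and angles are in radians):

```python
import math

def spherical_map(f1, beta1, X, hfov):
    """Spherical projection: target pixels index longitude/latitude."""
    def mapping(x, y):
        gamma = x / X * hfov                               # Eq (6a)
        phi = y / X * hfov                                 # Eq (6b)
        d = math.sqrt(math.sin(gamma)**2 * math.cos(phi)**2
                      + math.sin(phi)**2)                  # Eq (7a)
        if d == 0:
            return 0.0, 0.0
        theta = math.asin(d)                               # Eq (7b)
        hh = lens_height(theta, f1, beta1)                 # Eq (4)
        return (hh * math.sin(gamma) * math.cos(phi) / d,  # Eq (8a)
                hh * math.sin(phi) / d)                    # Eq (8b)
    return mapping
```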
In another embodiment the transformed object surface is a cylinder instead of a sphere, centered on the lens pupil. This is a cylindrical projection, shown in FIG. 8. The radius of the cylinder is:
R=X/hfov (9a)
For a target transformed pixel 808 with coordinates (x, y), the longitudinal angle 805 is then:
γ=x/R (9b)
The distance d 809 to the optic axis 803 is:
d=sqrt(R^2*sin^2(γ)+y^2) (9c)
The field angle θ 811 for this pixel is then:
θ=arctan(d/(R*cos(γ))) (9d)
Once θ is known, the source pixel coordinates (xx, yy) can be derived as follows:
xx=h(θ)*R*sin(γ)/d (10a)
yy=h(θ)*y/d (10b)
where h(θ) is from Eq (4a) or (4b). The processing steps for this transformation consist of applying Eqs (9) and (10) to each pixel (x, y) of the target image.
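A corresponding sketch for the cylindrical projection, transcribing Eqs (9) and (10) (illustrative naming, lens_height as above):

```python
import math

def cylindrical_map(f1, beta1, X, hfov):
    """Cylindrical projection: the object surface is a cylinder of
    radius R centered on the lens pupil."""
    R = X / hfov                                           # Eq (9a)
    def mapping(x, y):
        gamma = x / R                                      # Eq (9b)
        d = math.sqrt(R**2 * math.sin(gamma)**2 + y**2)    # Eq (9c)
        if d == 0:
            return 0.0, 0.0
        theta = math.atan(d / (R * math.cos(gamma)))       # Eq (9d)
        hh = lens_height(theta, f1, beta1)
        return hh * R * math.sin(gamma) / d, hh * y / d    # Eq (10)
    return mapping
```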
In another embodiment, the object surface is a “flattened” cylinder. This is a transitional projection.
There are an infinite number of surfaces that can be used for a transitional projection. A requirement is that the surface has zero curvature along the vertical (y) axis and curves along the horizontal (x) axis such that the curvature increases progressively when moving away from the center line of the object surface. In one example, shown in FIG. 10, the cross-section of the object surface is a cycloid 1001.
The parametric equation for a cycloid is as follows:
Xc=a*(t−sin(t))−a*π (11a)
Yc=a*(1−cos(t)) (11b)
where a is the radius of the rolling cylinder that generates the cycloid 1001, and the parameter t is in the range of 0 to 2π. The arc length, as measured from the left endpoint of the cycloid, is given (also in parametric form) as:
S=8*a*sin^2(t/4)−4*a (12)
Now imagine that the lens pupil is at the center (Xc=0, Yc=0) of the cycloid; the maximum horizontal field of view is then π. The entire arc length of the cycloid, from the left endpoint (t=0) to the right endpoint (t=2π), is 8*a. For exemplary purposes only, we set a=π/8. We will further assume that the target image has a horizontal field of view of HFOV=π, which allows a closed-form solution. For other HFOV values, one must solve the following equation for t in order to obtain the value te at the left and right endpoints:
Xc(t)/Yc(t)=tan(HFOV/2) (13)
Once the left endpoint value te is obtained, the arc length corresponding to the half width of the target image is then:
Se=π/2−π*sin^2(te/4) (14)
Se is used to normalize the pixel coordinate to the real coordinate on the cycloid surface. The normalization factor N is 2*Se/X, where X is the horizontal width of the target image in pixels. Referring again to FIG. 10, for a target pixel with coordinates (x, y), the parameter t is:
t=4*arcsin(sqrt((x*N+π/2)/π)) (15)
Knowing t, we can then calculate the field angle 1004:
θ=arccos(Yc/sqrt(Xc^2+Yc^2+(y*N)^2)) (16)
The source pixel coordinates (xx, yy) are then:
xx=h(θ)*Xc/sqrt(Xc^2+(y*N)^2) (17a)
yy=h(θ)*y*N/sqrt(Xc^2+(y*N)^2) (17b)
The calculation process for a transitional projection applies Eqs (15) through (17) to compute (xx, yy) for each target pixel, repeating for all (x, y) in the target image.
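A sketch of this calculation, specialized like the text to a=π/8 and HFOV=π so that Eq (15) applies in closed form (illustrative naming, lens_height as above):

```python
import math

def transitional_map(f1, beta1, X):
    """Transitional (cycloid) projection for HFOV = pi, a = pi/8."""
    a = math.pi / 8
    Se = math.pi / 2                   # Eq (14) with te = 0
    N = 2 * Se / X                     # pixel -> arc length on the cycloid
    def mapping(x, y):
        t = 4 * math.asin(math.sqrt((x * N + math.pi / 2) / math.pi))  # Eq (15)
        Xc = a * (t - math.sin(t)) - a * math.pi                       # Eq (11a)
        Yc = a * (1 - math.cos(t))                                     # Eq (11b)
        yn = y * N
        rho = math.sqrt(Xc**2 + yn**2)
        if rho == 0:
            return 0.0, 0.0
        theta = math.acos(Yc / math.sqrt(Xc**2 + Yc**2 + yn**2))       # Eq (16)
        hh = lens_height(theta, f1, beta1)
        return hh * Xc / rho, hh * yn / rho                            # Eq (17)
    return mapping
```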
In another embodiment, shown in FIG. 5, the imaging system is installed in a vehicle, with a camera 502 mounted on the vehicle, a display 507 in the dashboard and a control means 508 with which the driver selects the transformation parameters, such as f2 and/or β2.
In another embodiment non-limiting exemplary means to select f2 and/or β2 include a slider, a thumbwheel, a button or other user interface means known in the art. In other embodiments the camera 502 may be at other locations on the vehicle, the display 507 may be located outside of the dashboard of the vehicle and the control means 508 may be any of the user interfaces discussed above and located at other locations for example on the steering wheel of the vehicle.
In another embodiment lenses are described that are especially suitable for use in an imaging system that will make use of the aforementioned transformations. The lenses are designed such that the lens mapping function can be characterized sufficiently by Eqs (4a) and (4b). This constraint, along with the other design parameters of physical size of the lens, number of lens elements and field of view, when entered into a design program such as Code V, marketed by Optical Research Associates in Pasadena, Calif., or Zemax, marketed by Zemax Corporation in Bellevue, Wash., will result in a prescription for a lens such as those described below.
Embodiments of lens systems that may be used in this invention are described in the two alternative designs shown in FIGS. 15 and 16.
The second lens element 28 in the doublet (1552, 1652) of the third lens group (1550, 1650) or the fifth lens element of the wide angle objective lens, has a concave object surface (1570, 1670) facing the object (1504, 1604), a convex image surface (1572, 1672) facing the image plane (1508, 1608) and a vertex (1574, 1674). The first and second lens elements 26, 28 are joined with optical cement to form the positively powered cemented doublet (1552, 1652). The term “positively powered” means that the power of the cemented doublet pair (1552, 1652) is greater than zero.
The third lens element 30 has a convex object surface (1576, 1676) facing the object, a convex image surface (1578, 1678) facing the image plane (1508, 1608) and a vertex (1580, 1680).
The third lens element 30 in the third lens group (1550, 1650) has a positive vertex power.
The prescription for the six element design is provided in Tables 1 and 2.
Aspheric Surfaces
Conventional lens elements are made by grinding and polishing glass lens blanks. The two surfaces of a lens element are typically spherical. However, an aspheric element is a lens element that has at least one of its two surfaces described by a general aspheric equation such as Equation 18
where z is the surface sag on the lens surface relative to the vertex of the lens. In Eq. 18, the variable r is the radial distance from the optical axis. The constant c is the curvature (inverse of radius) at the vertex of the surface. The constant k is a “conic” constant. The other coefficients (α1, α2, . . . ) are the aspheric coefficients provided in Table 2. The coefficients characterize the depressions in the lens surface that are made by molding the surface to match a modeled mathematical envelope in a suitable glass or plastic material.
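Equation 18 itself is not reproduced in this text; the sketch below uses the standard even-asphere sag formula employed by lens design programs such as Zemax, which matches the symbols defined above. The assignment of polynomial powers (α1 multiplying r^4, α2 multiplying r^6, and so on, with the r^2 term absorbed into the base curvature) is an assumption, as conventions vary:

```python
import math

def asphere_sag(r, c, k, alphas):
    """Surface sag z(r): conic base term plus even polynomial terms.
    c: vertex curvature (1/radius); k: conic constant;
    alphas: (alpha1, alpha2, ...) -- assumed to multiply r^4, r^6, ...
    """
    base = c * r**2 / (1 + math.sqrt(1 - (1 + k) * c**2 * r**2))
    return base + sum(a * r**(2 * i + 2)
                      for i, a in enumerate(alphas, start=1))
```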
Explanation of Tables
The title block at the top of Table 1 provides the effective focal length f, the f#, the TTL or Total Track Length, the image height (h) at the full field angle, and the ratio TTL/f for the embodiment. The columns of Table 1 are titled “Surface”, “Type”, “Radius”, “Thickness”, “Index (Nd)” and “Abbe Number”. The lens element material is specified by the refractive index and the Abbe number. The “Index” column provides the index of refraction of the material at 588 nm; the “Abbe Number” is provided in the rightmost column. The absence of an “Index” value in a cell of Table 1 signals that the value in the adjacent “Thickness” cell is the distance to the vertex of the next lens surface. The data of Tables 1 and 2 are purposely reported to a limited number of significant digits to protect trade secret information of the inventor. In practice the recipe for the lens would be calculated to a number of significant digits determined by the manufacturing capabilities.
In Table 1, at surface 6, the Index cell is blank. The adjacent “Thickness” cell therefore gives the distance from the preceding lens surface to the next surface, which here is the distance to the STOP. Surface 7, the start of the next row, is the STOP row; its Thickness cell shows the distance from the STOP to the vertex (1566, 1666) of the first doublet lens 26. The distance from surface (1558, 1658) on lens element 24 to the STOP is 4 mm. The distance from the STOP to vertex 1566 is 0.4 mm for the embodiment of FIG. 15.
An imaging system and method of application are described, including lens designs tailored for use with particular transformation algorithms, electronic hardware and algorithms for image transformations. Exemplary applications of the system, including automotive, photographic and medical endoscopic applications, are also described. The system enables improved image views and allows customization of views by the end user even after installation of the imaging system.
This application claims the benefit of U.S. Provisional Patent Application 61/245,565, filed Sep. 21, 2009, entitled “Wide-Angle Imaging System and Device”, currently pending, by the same inventor, and incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
5313306 | Kuban et al. | May 1994 | A |
RE36207 | Zimmermann et al. | May 1999 | E |
6005611 | Gullichsen et al. | Dec 1999 | A |
6795113 | Jackson et al. | Sep 2004 | B1 |
7042497 | Gullichsen et al. | May 2006 | B2 |
7042508 | Jan et al. | May 2006 | B2 |
7274381 | Mojaver et al. | Sep 2007 | B2 |
7427263 | Hoeg et al. | Sep 2008 | B2 |
20040012544 | Swaminathan et al. | Jan 2004 | A1 |
20060285002 | Robinson et al. | Dec 2006 | A1 |
20070126892 | Guan | Jun 2007 | A1 |
20090059041 | Kwon | Mar 2009 | A1 |
20100033551 | Agarwala et al. | Feb 2010 | A1 |
Entry |
---|
Hynek Bakstein and Tomas Pajdla, “Panoramic Mosaicing with a 180° Field of View Lens”, Center for Machine Perception, Czech Technical University, Prague, Czech Republic, Jun. 2002. |
Jie Jiang et al., “Distortion correction for a wide-angle lens based on real-time digital image processing”, Optical Engineering, vol. 42, no. 7, pp. 2029-2039, Jul. 2003, SPIE Digital Library. |