The present invention relates to photographic image processing and reproduction and, more particularly, to a method and apparatus for creating composite, wide angle images.
It has been recorded that as early as the 1880s, attempts were made to include more of an image on a print than was available from a single lens. The early inventions moved the camera as well as the film to allow synchronization with the field of view. The result was a developed film that contained a 360 degree (or smaller) image. The left part of the print was exposed earlier than the right part of the print. This time slippage created image anomalies such as double images of moving objects within the multiple fields of view used to create the composite print or image.
As technology progressed, this type of wide view camera, referred to as moving camera technology, was significantly refined. One embodiment rotates a mirror instead of the camera but still requires multiple images to encompass the desired field of view. The fundamental problem with this type of camera system is that it creates time slippage from left to right across the composite field of view.
One attempt to create composite images without a time shift was developed using a parabolic mirror placed perpendicular to a camera lens. Due to the shape of the mirror, a 360 degree perpendicular image is focused on the camera lens. The primary problem with this camera system is that the 360 degree image appears circular on the camera film or sensor; when projected to a flat print, the resulting image has visible anomalies much like a Mercator map projection of the earth.
More recently, the advent of digital camera technology has enabled photographers to rapidly acquire multiple digital images by rotating the field of view of a camera while collecting images. Computer programs have been developed for combining these multiple images into a composite image. However, notwithstanding the smaller time shift across the composite image, images attempting to capture action events such as automobile racing or basketball games still result in anomalies from fast moving objects.
A broad aspect of the present invention is to provide a multi-overlapping field of view camera apparatus comprising a plurality of lens/sensors.
Another aspect of the present invention is that it defines specific geometries of planar (0-360 degrees in the X or left-right direction and 0 degrees in the Y or up-down direction), multi-planar (0-360 degrees in the X direction and greater than 0 degrees but less than 360 degrees in the Y direction), and spherical (360 degrees in both the X and Y directions) arrays.
A particular aspect of the present invention is that in all cases, the geometries must be rigidly fixed in order to create a composite image without artifacts.
Another particular aspect of the present invention is a method of processing the individual overlapping images obtained from the multi-sensor array fixture and merging them into a composite field of view.
Another particular aspect of the present invention is a method of incorporating artificial intelligence through a complex neural network. Using this technique, the algorithm for registering images is optimized, and the user of the device is enabled to remove perspective error.
Another aspect is the construction of a light chamber for housing a multi-lens and multi-sensor array.
Another aspect is the encryption of linked images to preclude viewing by unauthorized personnel.
The present invention comprises a computer controlled image capturing system for capturing images that encompass a wide field of view while providing distinct images of objects moving at rapid speeds, or for capturing time sequence images as an object traverses a stationary field of view. In one embodiment, the invention incorporates 5 Kodak DX-3900 cameras as imaging devices in a lens/sensor array fixed on a planar platform. In another embodiment, the invention incorporates 9 Kodak DX-4900 cameras as imaging devices in the lens/sensor array fixed on a planar platform. In each embodiment, the cameras can be synchronized and controlled to operate concurrently to capture images within the field of view of each camera at the same instant. Alternately, the cameras can be synchronized to capture images across the field of view of the array with a set time delay between each camera so that multiple images of an object moving rapidly across the array field of view are obtained. The latter embodiment may be useful in tracking flight paths of objects. All of the captured images are transported in parallel to a set of digital signal processing (DSP) elements, where they are analyzed and a composite image is constructed.
The invention includes a mechanism for building the camera such that the lens/sensor geometries are ascertained prior to final assembly of the camera using a spherical-like light chamber. One exemplary chamber is built as a regular (equal-sided) hexadecagon (16 sides) in both the x and y directions. A lens, 10 mm in diameter, is placed in each facet. The angle between the plane of one lens and that of an adjacent lens within the same row is exactly 22.5 degrees. If more than one row is required, the angle to adjacent rows is also exactly 22.5 degrees. The size of the hexadecagon is directly proportional to the sensor size. Another chamber architecture uses equally spaced points on a sphere in the form of a Bucky ball.
The invention also discloses that all the individual lens/sensor images be stored as n-lets. That is, for example, if there are 3 lens/sensors in the array, the individual images are stored as electronically linked triplets. In addition, all but the center image of the triplet are encoded with an encryption key. The encrypted images appear as hidden files, while the un-encrypted image appears as a specific file within the camera. The linked image is only available to those applications and devices that possess the appropriate key and software. In this way, the photo processor is able to analyze, register, and augment the image in accordance with the disclosed imaging algorithms. Yet, those who do not have authorization to use that feature may still use the camera as a single lens device.
The invention 10 is diagrammatically shown in
The software program is resident in a DSP program module 26 and effects control of controller 19 through a DSP processor 24. The image data (pixel data) is received by a mapper 20 which moves the pixel data from each lens/sensor 12 to specific addresses in a global memory 21, which may be RAM memory.
A DSP memory 22 is a conventional memory module and associated processor for Direct Memory Access (DMA) to Global Memory 21. DSP memory 22 is operatively coupled to the DSP array 24 which comprises a plurality of DSP integrated circuits (25). A software program resident in module 26 defines the operation of the DSP array.
A formatter 56 converts the pixel data into a form that can be used by viewers and printers. Typically, the pixel data is placed in a JPEG format.
An output module 58 sends the formatted image data to a viewing or printing device. All of the electronic modules are powered from a common regulated supply 60. While the supply 60 is conventional, it should be noted that each CCD sensor must be regulated to provide equal intensity from each sensor in order to avoid differing light levels.
In one embodiment of the invention, the array 18 uses Kodak DX-3900 cameras for lens/sensors 12. Five cameras are arranged in a geometry such that each camera lens is placed equidistant from a central point and aligned on a radius from the point such that the subtended angle between each lens/sensor 12 is 45 degrees. Power is supplied using a common 6.3 volt lead-acid battery coupled to individual voltage regulators for each camera. Controller 19 is implemented by modifying each DX-3900 and connecting focus and capture leads to relays controlled by module 62, which provides a single signal concurrently to each camera 12 through a single activation switch.
In another embodiment, the array 18 uses nine Kodak DX-4900 cameras with each camera corresponding to one of the lens/sensors 12. In this embodiment, the camera lenses are positioned in a geometry such that each lens/sensor 12 is placed equidistant from a central point and aligned perpendicularly to a radius from such point such that the subtended angle between each lens/sensor 12 is 22.5 degrees. As in the first embodiment, power is supplied using a common 6.3 volt lead-acid battery and individual voltage regulators for each camera. The controller 19 includes additional switching functions for controlling the additional cameras 12 in response to image capture commands from relay activation module 62. In both embodiments, the modification of the cameras to connect the focus and capture controls to controller 19 will be apparent to those ordinarily skilled in the art.
While the invention as described with reference to
As discussed above, the lens/sensors 12, e.g., digital cameras, are arranged into an array such that each lens/sensor field of view slightly overlaps the field of view of each adjacent lens/sensor. In one form, the lens/sensors 12 are placed in a single plane such that the field of view in the X direction, i.e., horizontal, is up to and including 360 degrees for the composite array. The field of view in the Y or vertical direction, is centered at 0 degrees, i.e., the field of view in the Y direction is a function solely of the field of view of each individual lens/sensor 12. An example of this form of array is shown in plan view in
Various architectures can be used for the array 18, such as, for example, three lens/sensors configured in a 180 degree planar array with an angular shift of 45 degrees; five lens/sensors configured in a 180 degree planar array with an angular shift of 45 degrees; nine lens/sensors configured in a 180 degree planar array with an angular shift of 22.5 degrees; and eight lens/sensors configured in a 360 degree planar array with an angular shift of 45 degrees.
All of the above are single plane embodiments. For a multiplanar array, various architectures using different numbers of lens/sensors arranged in multiple planes are possible. Some examples are the following.

Nine lens/sensors may be configured in a multiplanar array: one lens/sensor is in a first plane with a center of focus defined at 0 degrees; three lens/sensors are in a second plane with a center of focus defined at 0 degrees for one lens/sensor and the other two lens/sensors having an angular shift of 45 degrees; and five lens/sensors are in a third plane with the center of focus defined at 0 degrees for one lens/sensor and the other four lens/sensors having an angular shift of 45 degrees. The angle subtended by the planes is 15 degrees, with the third plane defined at 0 degrees.

Eleven lens/sensors may be configured in a multiplanar array: three lens/sensors are in the first plane with a center of focus defined at 0 degrees for one lens/sensor and the other two lens/sensors having an angular shift of 45 degrees; five lens/sensors are in the second plane with a center of focus defined at 0 degrees for one lens/sensor and the other four lens/sensors having an angular shift of 45 degrees; and three lens/sensors are in the third plane with a center of focus defined at 0 degrees for one lens/sensor and the other two lens/sensors having an angular shift of 45 degrees. The second plane is defined as 0 degrees, and the first and third planes subtend the angles +15 and −15 degrees, respectively.

Thirteen lens/sensors may be configured in a multiplanar array: one lens/sensor is in a first plane with a center of focus at 0 degrees; four lens/sensors are in a second plane with a center of focus defined at 0 degrees for one lens/sensor and the other three lens/sensors having an angular shift of 90 degrees; and eight lens/sensors are in a third plane with a center of focus defined at 0 degrees for one lens/sensor and the other seven lens/sensors having an angular shift of 45 degrees. The third plane is defined as 0 degrees, the second plane is at 45 degrees, and the first plane is at 90 degrees.

Eighteen lens/sensors may be configured in a spherical array: one lens/sensor is in a first plane with a center of focus at 0 degrees; four lens/sensors are in a second plane with a center of focus defined at 0 degrees for one lens/sensor and the other three lens/sensors having an angular shift of 90 degrees; eight lens/sensors are in a third plane with a center of focus defined at 0 degrees for one lens/sensor and the other seven lens/sensors having an angular shift of 45 degrees; four lens/sensors are in a fourth plane with a center of focus defined at 0 degrees for one lens/sensor and the other three lens/sensors having an angular shift of 90 degrees; and one lens/sensor is in a fifth plane with a center of focus at 0 degrees. The third plane is defined as 0 degrees, the second plane is at 45 degrees, the first plane is at 90 degrees, the fourth plane is at −45 degrees, and the fifth plane is at −90 degrees.

Twenty-two lens/sensors may be configured in a spherical array: one lens/sensor is in a first plane with a center of focus at 0 degrees; six lens/sensors are in a second plane with a center of focus defined at 0 degrees for one lens/sensor and the other five lens/sensors having an angular shift of 60 degrees; eight lens/sensors are in a third plane with a center of focus defined at 0 degrees for one lens/sensor and the other seven lens/sensors having an angular shift of 45 degrees; six lens/sensors are in a fourth plane with a center of focus defined at 0 degrees for one lens/sensor and the other five lens/sensors having an angular shift of 60 degrees; and one lens/sensor is in a fifth plane with a center of focus at 0 degrees.
I/O module 62 incorporates functions normally found on a conventional digital camera such as focus control, image capture and a view-screen for monitoring images. The module 62 brings all these functions for all lens/sensors 12 into a single module. Additionally, module 62 interfaces with controller 19 to simultaneously apply control signals for image capture and other functions to all lens/sensors. However, the module 62 also includes set-up adjustments to allow individual control of some lens/sensor functions such as, for example, focus, or for setting time delays between actuation of each lens/sensor in order to capture multiple images of a moving object. The controller 19 may be implemented as a group of switching devices responsive to a single signal from module 62 to actuate each lens/sensor 12 concurrently.
The functions related to image capture and pixel data processing are well known and are implemented in the internal electronics of all digital cameras, including the exemplary Kodak cameras. Accordingly, the global memory 21, DSP memory 22 and processing of pixel data are known. The memory modules may be RAM or flash card, either separate or part of an associated computer.
One embodiment of the invention uses a PC in lieu of a dedicated DSP array 24 since DSP array 24 is a programmable processor with program control 26. Preferably, the DSP array uses sequential program architecture although parallel processing could be used. The functions implemented in the DSP array include analysis of each of the images for light consistency by calculating a mean brightness level. The analysis may also include maximum to minimum brightness, maximum to minimum contrast, total white space, total black space, and mean contrast.
These parameters are calculated for the entire image and for the image divided into 9 equal sections or image areas, i.e., a 3 by 3 grid with top, middle, and bottom rows and left, center, and right columns.
The baseline used for coordination is the mean brightness level and is determined by the mean brightness of the center image of the array. All other images are mathematically transformed (pixel data adjusted) so that their mean brightness is made to equal that of the baseline. This is performed on all nine areas of each image. When transforming with different vectors, a smoothing algorithm is also performed so that image overlap occurs in 25% of the next image area. The other parameters are stored for use by the AI subsystem.
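The brightness coordination step described above can be sketched as follows. This is an illustrative reconstruction, not the disclosed implementation; the function and variable names are assumed. Each non-center image (or image area) has its pixel data scaled so that its mean brightness equals the baseline taken from the center image of the array:

```python
# Illustrative sketch of the mean-brightness baseline step (names assumed).
# The center image of the array supplies the baseline mean brightness;
# every other image's pixel data is scaled so its mean matches it.

def mean_brightness(pixels):
    """Mean brightness of a flat list of pixel values."""
    return sum(pixels) / len(pixels)

def match_to_baseline(pixels, baseline):
    """Scale pixel data so its mean brightness equals the baseline."""
    m = mean_brightness(pixels)
    if m == 0:
        return list(pixels)       # avoid dividing by an all-black image
    scale = baseline / m
    return [p * scale for p in pixels]

# Example: a neighboring image that is too dark is lifted to the baseline.
center_image = [120.0] * 16       # baseline mean brightness = 120
neighbor_image = [60.0] * 16      # mean 60, half as bright
adjusted = match_to_baseline(neighbor_image, mean_brightness(center_image))
```

In the disclosed method this adjustment is applied to each of the nine areas of every image, with a smoothing pass across the 25% overlap into the next image area.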
Once corrected for brightness, the adjacent images are merged. The merging process requires several steps. Starting with two adjacent images a single interface line is defined. The present invention uniquely implements merging to form a composite image. Objects are determined by using color differentiation. A line segment is defined as an object and represents a vector where on one side of the vector is one color and on the other side of the vector is another color. The difference in colors is established using a high pass filter and grayscale on the image. The characteristic of the filter is initially a default of 5 pixels but will be enhanced by the AI engine as the device is utilized.
All lenses have distortions in them such as barrel effects or pincushion effects. Each lens in the array 18 is fully characterized at manufacture and these distortions are provided as a matrix of pixel corrections. Distortions generally are common around the edges of a lens so the matrix at the edge has an embedded matrix of more detailed corrections, i.e., the corrections are not linear.
The geometry between each image is defined by the distance, d, between the centroid of the lenses and the angle, alpha, between them. The angle, w, shown in
Thus, for line segment a with the origin at lens/sensor 40:
ya=Cot(w)x
And for segment b with the origin at lens/sensor 42:
yb=Cot(v)x
But for the calculations to follow, the real origin is at the centroid of the array, O. This, then, requires the transformation of axes.
For line segment a, with the origin at O: ya=Cot(w)x+r, where r is the radial dimension between the centroid and the lens/sensor.
For segment b with the origin at O, the transformation is: (x′, y′) = (x + r Cos(S), y + r Sin(S)), where S is the angle between the radii to each lens/sensor.
By then setting the two equations of the line segments equal to each other, the coordinates (and thus the distance using Pythagorean theorem) of all common objects from the centroid of the array can be determined.
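The intersection-and-distance step above can be sketched as follows. The bearing angles, the radial offset, and the simplified intercepts used here are illustrative assumptions rather than the exact axis transformation of the specification; the point is the mechanics of intersecting two bearing lines and taking the Pythagorean distance of the common object from the array centroid:

```python
import math

def intersect(m_a, c_a, m_b, c_b):
    """Intersection of the lines y = m_a*x + c_a and y = m_b*x + c_b."""
    x = (c_b - c_a) / (m_a - m_b)
    return x, m_a * x + c_a

def distance_from_centroid(point):
    """Pythagorean distance of a point from the array centroid O."""
    return math.hypot(point[0], point[1])

# Hypothetical bearing angles w and v (radians) from two adjacent
# lens/sensors, and a hypothetical radial offset r from the centroid.
w, v, r = math.radians(60), math.radians(120), 0.05
p = intersect(1 / math.tan(w), r, 1 / math.tan(v), -r)
d = distance_from_centroid(p)
```

With symmetric bearings, the intersection lands on the x axis of the centroid frame, and d is the object's distance from O.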
All objects that are common to two adjacent images are determined to have a representative distance, d, from the centroid of the array. This is confirmed by evaluating the following error calculation:
All common objects of the similar distances are then grouped together into bins. The width of these bins is deterministic.
Points on the objects are selected on the basis of bin identification. Each bin should be represented with a control point. This implies a state variable that is the triplet [dbin(n), xn, yn]. The same point is found in the adjacent image and represented as [dbin(n+1), xn+1, yn+1].
An nth-order polynomial transformation is applied to image n+1 in order to merge it to the control points. For every order of the polynomial, four control points are required. The assumption is that the resulting image will be rectilinear. The expansion of the polynomial determines the number of coefficients. For example, for order 2 there are 6 coefficients (1, x, y, xy, x², y²); for order 3 there are 10 coefficients (1, x, y, xy, x², y², x²y, xy², x³, y³); for order 4 there are 15 coefficients; and so on.
Curve fitting can be implemented using one of three techniques, i.e., linear least squares evaluation, Levenberg-Marquardt algorithm or Gauss-Newton algorithm.
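The coefficient counts quoted above follow from enumerating every monomial x^i·y^j whose total degree i + j does not exceed the polynomial order. A short sketch (illustrative, not from the specification) confirms 6, 10, and 15 terms for orders 2, 3, and 4:

```python
def monomial_exponents(order):
    """All (i, j) exponent pairs for terms x**i * y**j with i + j <= order."""
    return [(i, total - i)
            for total in range(order + 1)
            for i in range(total + 1)]

# Order 2 yields the six terms 1, y, x, y^2, xy, x^2, matching the text;
# the curve fit (e.g., linear least squares) then solves for one
# coefficient per term using the control-point pairs.
counts = {n: len(monomial_exponents(n)) for n in (2, 3, 4)}
```

The count is the triangular number (order + 1)(order + 2)/2, which is why each additional order adds progressively more coefficients.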
A significant number of the transformations will not fall on points coincident with the (x, y) pixelation grid. This is corrected by using interpolation. Three techniques are used, in order of increasing complexity: nearest neighbor interpolation, where the value of an interpolated point is the value of the nearest point; bilinear interpolation, where the value of an interpolated point is a combination of the values of the four closest points; and bicubic interpolation, where the value of an interpolated point is a combination of the values of the sixteen closest points.
While computationally expensive, the bicubic method is the default technique. It is believed that the bicubic method can be enhanced by weighting functions that give more emphasis to pixels closer to the transformation point and less emphasis to pixels further away. Computer programs that can be used as part of the merging process include Panofactory 2.1 and Matlab 6.1. It will be appreciated that computer manipulation of pixel data for the merging process is necessary due to the large number of pixels that must be processed in order to merge multiple images into a composite image using the above described technique.
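The bilinear case, for example, can be sketched as follows; this is an illustrative implementation, not the disclosed one. The value at a fractional grid position is the distance-weighted combination of the four closest pixels (the bicubic default works analogously over a 4 × 4 neighborhood with cubic weights):

```python
def bilinear(grid, x, y):
    """Interpolated value at fractional (x, y) from the 4 closest grid points."""
    x0, y0 = int(x), int(y)          # top-left corner of the enclosing cell
    fx, fy = x - x0, y - y0          # fractional offsets within the cell
    v00 = grid[y0][x0]
    v10 = grid[y0][x0 + 1]
    v01 = grid[y0 + 1][x0]
    v11 = grid[y0 + 1][x0 + 1]
    return (v00 * (1 - fx) * (1 - fy) + v10 * fx * (1 - fy)
            + v01 * (1 - fx) * fy + v11 * fx * fy)

# The center of a 2x2 cell averages all four corner values.
grid = [[0.0, 10.0], [20.0, 30.0]]
center_value = bilinear(grid, 0.5, 0.5)   # 15.0
```

Nearest neighbor simply returns the closest of the four values, trading accuracy for speed.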
Due to the characteristics of the polynomial transformations, the composite image will not appear rectilinear, and it must be cropped in order to be rectilinear.
It is recognized that many algorithm parameters are statistically based and may not represent the best solution for a given set of images. There are numerous variations in parametric corrections such as:
In order to optimize the set, other groupings of these parameters can be implemented and the results displayed to an observer for comparison grading. The grading is recorded in the knowledgebase for future reference. Artificial intelligence (AI) can then evaluate a best set of parameters. Even the individual lens corrections are evaluated and entered into the permanent part of the knowledgebase.
As such, a multi-dimensional neural network is implemented. The memories associated with each node are hierarchical in nature. Issues such as individual lens distortions which create unique polynomials will not change once they have been locked in. Issues such as light compensation, on the other hand, will change with emphasis made on more recent memories (settings).
The artificial intelligence engine is a multi-dimensional neural network. It is a fixed architecture but the weighting functions and thresholds for each perceptron node will be unique to the individual camera, photographer, and/or scenic choice.
The fundamental equations of each node shall be:
temp = (X1*w1) + (X2*w2) + … + (Xn*wn)
If (temp > T) then output = temp, else output = 0
where X1 … Xn are input elements, w1 … wn are weighted elements, and T is the overall threshold for that node.
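The node equations can be expressed directly in code; this sketch is illustrative, with the example inputs, weights, and threshold chosen arbitrarily rather than taken from the specification:

```python
def node_output(inputs, weights, threshold):
    """Perceptron node: temp = sum(Xi * wi); emit temp if above T, else 0."""
    temp = sum(x * w for x, w in zip(inputs, weights))
    return temp if temp > threshold else 0

# A node with threshold T = 1.0 fires only when the weighted sum exceeds T.
firing = node_output([1.0, 2.0], [0.5, 0.5], 1.0)   # 1.5, above threshold
silent = node_output([1.0, 2.0], [0.1, 0.1], 1.0)   # 0, below threshold
```

Per the architecture described above, the weights and threshold of each such node are what the knowledgebase tunes to the individual camera, photographer, or scenic choice.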
While the background software and initialized AI engine are fixed, the dynamic nature of the knowledgebase provides a camera that implements custom software as it is needed. The neural network is implemented using a fixed-perceptron architecture available in most high-end mathematics software toolboxes, e.g., Matlab 6.1.
Besides the actual image registration and light average tuning with the AI engine as described above, the problem of perspective error is also linked to the AI engine. The reason it is separated from the other parameters is that it is much more of a psychological phenomenon than a mathematical issue. It is due to the cognitive way in which the human eye sees things and how an individual wants to see scenes. For example,
There are several ways to deal with the natural but sometimes un-esthetic mapping of images of the type shown in
Another method to deal with spatial distortion of the type shown in
Near-field objects are then translated to the composite image without correction. Finally, the pixel data (objects) are interpolated as required. This does imply that the outer pixels have less resolution than the inner pixels. It also implies that, in order to maintain rectangular coordinates, there is not necessarily a 1:1 mapping of pixels. Pixels are, in essence, created through interpolation or removed through averaging. The compromise between pixel density and perspective error is aided by creating images with a very large number of pixels per unit area. The second embodiment, using lens/sensors from a Kodak DX-4900, for example, has 4.1 million pixels for a 35 mm equivalent. In this manner the oversampling interpolation (pixel creation) and undersampling (pixel averaging) are done with minimal informational loss in the result. Note that when an object is only in one image, the object is indeterminate and is defaulted to the estimate of the closest known object that is bi-located. Selections within the AI engine will ascertain whether or not this option was a good one.
The result of the process described can be presented to a person who selects which of the approaches is preferred. This selection is recorded in the knowledgebase. The degree of compensation is also provided as a set of options, presented until the user makes no further change, and the degree which the user selects is likewise recorded.
While a user generally selects a full image, it is possible with an AI implementation to select sections of the composite image for augmenting perspective error.
Turning now to
In other words, for a 10 mm side, the inradius is 25.1 mm and the circumradius is 25.6 mm. This is graphically shown in
Each included angle is 360/16 or 22.5°.
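These values follow from the standard regular-polygon radius formulas; a short check (illustrative, not from the specification) for the 16-sided chamber with 10 mm facets:

```python
import math

def inradius(side, n_sides=16):
    """Apothem (inradius) of a regular polygon with the given side length."""
    return side / (2 * math.tan(math.pi / n_sides))

def circumradius(side, n_sides=16):
    """Circumradius of a regular polygon with the given side length."""
    return side / (2 * math.sin(math.pi / n_sides))

included_angle = 360 / 16        # 22.5 degrees per facet
r_in = inradius(10.0)            # ~25.1 mm, matching the text
r_circ = circumradius(10.0)      # ~25.6 mm, matching the text
```

Because the radii scale linearly with the side length, the chamber size is directly proportional to the sensor size, as stated earlier.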
Since the imaging array in this embodiment is not spherical, it can be layered as shown in
The rays of light are focused at the mid-point and then inverted to the sensors 74 on the opposite side. In this manner, then, the maximum number of lens/sensor pairs is 8. Analysis of imaging prototypes has indicated that, for at least one type of image (i.e., printed 4″×6″ and 4″×7″ formats), the maximum number of facets would be 7 in 3 layers. Due to the geometries, the 8th space would be blank.
The presupposition in
Four embodiments of this architecture are possible, i.e.,
Alternate embodiments shown in
Applicants have found that the nearer the format is to a 4×6, the more the Bucky architecture is superior. As the format elongates, the pixel density of the Bucky architecture decreases.
While many light sensors are configured in a 2:3 ratio, this does not have to be the case. Square sensors can be built and are recommended for this particular invention. The reason for this is the ability to reconfigure the sensors to any aspect ratio.
Tables I and II show calculated values for the lens/sensor designs discussed above and a comparative analysis of the picture/image obtained from each design.
Legend:
w = wide, h = high, x = by, FOV = Field Of View, FL = Focal Length, Mpixel = Megapixel
Typically, digital cameras record the images taken on some type of removable storage media. These use various technologies, the most common being Compact Flash and Smart Media, both of which use a type of non-volatile flash memory. The media can then be removed from the camera and put into a reader for viewing. Most cameras also allow the uploading of the images via an output port on the camera directly to a reader or computing device.
The camera system disclosed in this application uses a set of lens/sensors creating multiple images of the same scene. By controlling the angle of these lens/sensors and controlling the capture time of each lens/sensor, a composite image can be created that is marked by high pixel density and low distortion and error.
In order to create these images, a special algorithm is required as described in the application. It is possible that this software will be provided in something other than the camera. For example, it may reside in a PC application program or in a special purpose high quality printer. In order to safeguard the printing for specified manufacturers, an organizational schema of the image data is defined. This organization will link the images, by array organization, to the single scene. In addition, all but the center image will be 128 bit encrypted. Only the authorized software and printer manufacturers will be provided the appropriate key to unlock all of the images and allow them to be registered and merged.
This image record will be maintained throughout the process including any transmissions of the data through all forms of data transmission techniques including the internet.
While the invention has been described in what is presently considered to be a preferred embodiment, many variations and modifications will become apparent to those skilled in the art. For example, while digital imaging is preferred, the invention could use an array of film-based cameras. After the scenes are captured, the film is later removed and developed. The images are then scanned into digital images using commercially available digital scanners. The digital images are then input into the Mapper through a USB port. The set geometries of the film-based camera array design are then used as input data to the DSP program. All other functions of the invention are then executed as described. Accordingly, it is intended that the invention not be limited to the specific illustrative embodiment but be interpreted within the full spirit and scope of the appended claims.
This application claims the benefit of U.S. provisional applications, Application No. 60/479,410 filed Jun. 19, 2003; Application No. 60/479,411 filed Jun. 19, 2003; and Application No. 60/486,410 filed Jul. 10, 2003.