When the pixel density of a camera becomes high relative to sensor size, the overall size of an image capture sensor may be increased to maintain favorable noise and low-light capture characteristics. However, large optical sensors, such as CCD (charge-coupled device) and CMOS (complementary metal-oxide semiconductor) sensors, are difficult to produce and are very expensive. By combining a number of smaller sensors, all behind the same lens, a larger composite sensor can be constructed. Ideally, these smaller sensors would all be tiled directly edge-to-edge, with the last row of pixels on one sensor lining up perfectly with the first row on the next sensor. However, most modern sensors are manufactured with an active sensor area and some amount of non-active bezel for sensor mounting and ancillary sensor processing electronics. This bezel prevents two sensors from sitting directly next to each other in the same plane without a significant gap appearing in the combined image.
In the drawings, like reference numbers generally indicate identical or similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
Provided herein are system, apparatus, device, method and/or product embodiments, and/or combinations and sub-combinations thereof, for an optical image splitter device with a plurality of optical image splitters. Each of the optical image splitters includes a plurality of reflective mirror patterns interleaved with a plurality of transmissive areas, and is configured to split an image into a plurality of reflected image portions and a plurality of transmitted image portions. These image portions are subsequently split into a plurality of additional reflected and transmitted image portions. These additional image portions are captured by optical sensors as partial images to be subsequently recombined to form the image.
A digital camera is a camera that captures photographs in digital memory. Most cameras produced today are digital, largely replacing those that capture images on photographic film. Digital still and digital movie cameras share an optical system, typically using a lens with a variable aperture to focus light onto a sensor located in a body of the camera. The aperture and shutter admit the correct amount of light to the sensor, just as with film, but the image pickup device is electronic rather than chemical. The image sensor can be a single sensor or a plurality of sensors, where each sensor captures at least a portion of the image that is subsequently stored in digital memory.
Where a plurality of sensors is employed, post-processing methods stitch the partial images back together using known image processing techniques. However, most modern sensors include a non-active bezel for sensor mounting that prevents two sensors from sitting directly next to each other in the same plane without a significant gap appearing in the combined image. The technology described herein segments the image into multiple portions using an optical image splitter device that includes a plurality of glass image splitters. As the image is split a plurality of times, non-contiguous portions of the image are directed to sensors arranged in common sensor planes configured around a perimeter of the optical image splitter device. Capturing non-contiguous portions in a common plane eliminates gaps in the combined image, as will be discussed in greater detail hereafter.
To create a seamless image across multiple sensors, the technology described herein redirects portions of the image to different planes such that two sensors capturing adjacent areas of the image are not overlapping in physical space. In the various embodiments described herein, geometric portions (e.g., stripes or rectangular patterns) of reflective surfaces (coatings), less than the entire image, can be inserted into an image-forming light path so that some portions of the light are reflected while the remaining portions can pass through (transmitted). By using mirrored surfaces (substantially 100% reflection), rather than prior art 50% transmissive reflective mirrors, light loss can be reduced throughout the system. By attaching the reflective mirror surfaces to transparent glass shapes (e.g., glass blocks), the mirror edge results in an almost seamless transition from one sensor to the next.
As described herein, “front” refers to the lens side and “back” refers to a side further from the lens. Angles listed assume that the optical axis of the lens is at 0 degrees. Angles all refer to rotation in the horizontal plane. Throughout the descriptions, figures and claims, the terms “transmit” and “pass-through” may be used interchangeably. In addition, throughout the descriptions, figures and claims, the terms “clear”, “transparent” and “transmissive” may be used interchangeably. For purposes of simplicity, vertical stripe (VS) patterns (reflective and transmissive) are numbered right-to-left (VS1-VS4) and horizontal stripe (HS) patterns are numbered top-to-bottom (HS1-HS4).
In one example embodiment, the image is first split by spaced vertical reflective stripes and then split by multiple sets of spaced horizontal reflective stripes. In this embodiment, multiple distinct planes are available for partial image captures, allowing these sensors to be scaled horizontally and vertically without ever occupying the same physical space. Any sensor with a bezel that is less than approximately one-half the active area can be used to make as large an array as desired with no gap appearing between adjacent image areas. The contiguous bezels, when added together between two sensors, need to be less than the active area of the sensor.
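The bezel constraint above can be expressed as a short numeric check. The following sketch is illustrative only; the function name and example dimensions are hypothetical and not part of any described embodiment.

```python
def can_tile_gapless(active_width: float, bezel_width: float) -> bool:
    """Check whether a sensor can be tiled into an arbitrarily large,
    gapless array using the split-plane approach described above.

    Adjacent image areas are captured on sensors in different planes,
    so only the two contiguous bezels between neighboring sensors must
    together fit behind one active sensor area.
    """
    return 2 * bezel_width < active_width

# Example: a 20 mm active area with a 4 mm bezel tiles without gaps,
# while a 12 mm bezel (more than half the active area) does not.
print(can_tile_gapless(20.0, 4.0))   # True
print(can_tile_gapless(20.0, 12.0))  # False
```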
An image plane 203, emanating as an image from the back of a camera lens, is split into multiple optical paths. For illustration purposes only, the optical paths are shown as four (4) straight single vectors of light. However, as will be shown in
The image passes through the left leg of the structure with portions incident on vertical mirror stripes 102 being reflected (R1) towards the right leg of the structure and portions incident on vertical alternating transmissive (clear) areas 202 being transmitted (T1).
Camera 304 may be any digital, digital high resolution or digital ultra-high resolution camera without departing from the scope of the technology described herein. For example, camera 304 can include, and/or can be integrated as part of, but is not limited to a DSLR (digital single-lens reflex), a DTLR (digital twin-lens reflex), video, 3D (three-dimensional), compact, smartphone, mirrorless, action (adventure), panoramic (180-360 degrees) camera, etc. While the various embodiments described are specifically directed to visible light applications, non-visible (e.g., infrared or thermal) imaging components may be interchanged without departing from the scope of the technology described herein.
To create a seamless image across multiple sensors, the technology described herein redirects portions of the image to different planes using a collection of three-dimensional triangular glass shapes (e.g., P1, P2, P3 and P4) located within the camera body 304. However, those skilled in the relevant art(s) will recognize that other numbers of triangular shapes are possible, and can readily apply the teachings herein to any suitable number of triangular shapes, without departing from the scope of the technology described herein. The collection comprises two similarly sized isosceles triangle shapes P1 and P3 and two smaller similarly sized isosceles triangle shapes P2 and P4. While the triangular shapes are shown in a two-dimensional top view, each triangular shape (P1-P4) is three-dimensional (e.g., as shown in
Lens 302 receives an image from image plane 301 and outputs an image 303 (width W) as an image from the back of the lens. A transparent isosceles right triangular shape P1, with a hypotenuse length of twice the width (2W) of image 303 and a height that matches a desired height of the image, is placed behind the lens 302. P1 is vertically centered, but horizontally shifted such that the optical axis of the lens (center of the lens 302) falls approximately one-quarter of the way across the hypotenuse of the triangular shape.
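For illustration, the P1 proportions described above follow directly from the image width W. This sketch is a non-authoritative aid; the function and field names are hypothetical.

```python
import math

def p1_dimensions(image_width: float) -> dict:
    """Derive P1 proportions from image width W, per the description:
    the hypotenuse is twice the image width (2W), each leg of the
    isosceles right triangle is hypotenuse / sqrt(2), and the optical
    axis crosses one-quarter of the way along the hypotenuse."""
    hypotenuse = 2.0 * image_width
    return {
        "hypotenuse": hypotenuse,
        "leg": hypotenuse / math.sqrt(2.0),
        "axis_offset": hypotenuse / 4.0,  # distance from hypotenuse end
    }

# For a 10 mm wide image: 20 mm hypotenuse, ~14.14 mm legs,
# optical axis 5 mm from the end of the hypotenuse.
print(p1_dimensions(10.0))
```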
A first image splitter is applied to one leg of P1. The first image splitter includes one or more vertical mirrored (reflective) layer patterns 320 formed on a vertical surface of a near leg of the isosceles right triangle P1, approximately 315 degrees from the lens. The one or more mirrored (reflective) layer patterns 320 are spaced across the vertical surface by an equal number of similarly sized transmissive patterns. For example, a two-stripe embodiment would include two reflective patterns interleaved with two transmissive sections (no reflection) of the same size and shape (see
A second image splitter is applied to a second leg of P1. The second image splitter includes one or more mirrored (reflective) layers with alternating same sized clear (transmissive) areas formed as horizontal mirror stripes 316 on a vertical surface of a far leg of the isosceles right triangle P1 (see
A first smaller (leg width W) isosceles right triangle P2 with a hypotenuse length matching the second leg length of P1 and a height matching the height of P1 is adhered to horizontal stripes 316. One leg of P2 is parallel to the optical axis of the lens while the other is perpendicular.
An isosceles right triangle P3, the same size/shape as P1, has a near leg adhered to an outward surface of reflective patterns 320. The far leg of P3 is located at a 45 degree angle to the optical axis of the lens. One or more horizontal mirrored layers 314 are formed on a back vertical surface of the far leg as horizontal mirror stripes spaced across the surface with alternating transmissive areas. The one or more mirrored (reflective) layer patterns 314 are spaced across the vertical surface by an equal number of similarly sized transmissive patterns. For example, a two-stripe embodiment would include two reflective patterns interleaved with two transmissive sections (no reflection) of the same size and shape (see
A second smaller isosceles right triangle P4 with a hypotenuse length matching the far leg length of P3 and a height matching the height of P3 is adhered to horizontal stripes 314. One leg of P4 is parallel to the optical axis of the lens, the other is perpendicular. The perpendicular face is collinear with the back element of the lens. The reflective patterns (mirrors) and spaces may need to be scaled up in size as they move further away from the lens as the rays coming from the lens may have some divergence (i.e., not perfectly collimated).
Sensors 318 (1-4) are located in four planes on outside surfaces of P1-P4, as described in greater detail hereafter, and include N sensors, where N equals the number of reflective patterns (mirrored stripes) plus the number of alternating transmissive patterns. Each of the four planes includes an array of N sensors. For example, a four-stripe (two reflective stripe patterns and two transmissive stripe patterns) configuration would include an array of four sensors in each of the four planes for a total of N² (4²=16) sensors.
While a 16-sensor array is pictured throughout the drawings, including image splitters with two mirrored stripe patterns and two transparent stripe patterns, the design can scale to any N×N number of sensors by adding more mirrored and transparent stripe patterns (in equal numbers). Likewise, the number of sensors can be scaled down to fewer sensors using fewer mirrored and transparent stripe patterns.
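The routing of image tiles to the four sensor planes can be sketched as follows. This is a simplified model under stated assumptions: odd-numbered vertical stripes (counted right-to-left) reflect, the horizontal splitter on the reflected path mirrors odd-numbered horizontal stripes, and the splitter on the transmitted path mirrors even-numbered horizontal stripes; all names are illustrative.

```python
def route_tiles(n: int = 4) -> dict:
    """Assign each (vs, hs) tile of an n-by-n image grid to one of the
    four sensor planes (reflect-reflect, reflect-transmit,
    transmit-reflect, transmit-transmit).  Indices are 1-based,
    matching the VS/HS numbering convention in the text.
    """
    planes = {"318-1": [], "318-2": [], "318-3": [], "318-4": []}
    for vs in range(1, n + 1):
        for hs in range(1, n + 1):
            if vs % 2 == 1:                 # VS1, VS3 reflect (R1)
                plane = "318-1" if hs % 2 == 1 else "318-2"
            else:                           # VS2, VS4 transmit (T1)
                plane = "318-3" if hs % 2 == 0 else "318-4"
            planes[plane].append((vs, hs))
    return planes

# Each of the four planes receives four non-adjacent tiles of a
# 4x4 grid, i.e. one-quarter of the image per plane.
for plane, tiles in route_tiles(4).items():
    print(plane, tiles)
```

Because adjacent tiles always land on different planes, sensors capturing neighboring image areas never compete for the same physical space.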
Imaging sensors 318 (1-4) as typically found in digital cameras include CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) devices. However, any high-resolution sensor can be substituted for image sensors 318 (1-4), including visible light sensors or non-visible radiation sensors (infrared or thermal), without departing from the scope of the technology described herein. For example, in one embodiment, the disclosed image splitters could be used in conjunction with a traditional 50% transmissive reflective image splitter to achieve a multispectral image with infrared going through one path and visible light going through the other.
In addition, while shown as multiple sensors in
One or more sensors 318-1 (e.g., an array) are connected in a first plane to a far front face of the hypotenuse of P1. Sensors described throughout the descriptions may be applied directly to P1-P4 surfaces or included as part of a printed circuit board (PCB) containing the sensor and associated electronics and are connected using, for example, solder paste or other known mounting method. The sensors have a bezel size that is less than one-half of the active sensor area. In an alternative example, when all sensors are mounted in the same direction, the bezel can be uneven as long as it is less than ½ the active area.
An array of any number of sensors can be used in this design. Four (4) sensors per board yields a combined system with 16 sensors (as pictured). The sensors are mounted to the board such that the vertical spacing edge-to-edge between sensors is slightly less than the vertical height of the active area of the sensor. It should be noted that this is just one configuration of this camera system. Other configurations, where the sensors are mounted on the top or bottom face of the camera and the mirrors reflect at a 45 degree angle running from the bottom to the top of the camera, are also possible using these same core principles as described herein. The sensors are justified to the top left corner of P1.
One or more sensors 318-2 (e.g., an array) are connected in a second plane (perpendicular to the first plane) to a near end face of the parallel leg of P2. The sensors have a bezel size that is less than one-half of the active sensor area. An array of any number of sensors can be used in this design. Four (4) sensors per board yields a combined system with 16 sensors (as pictured). The sensors are mounted to the board such that the vertical spacing edge-to-edge between sensors is slightly less than the vertical height of the active area of the sensor. The sensors are justified to the bottom right corner of P2.
One or more sensors 318-3 (e.g., an array) are connected in a third plane (perpendicular to the first plane and parallel to second plane) on a far end face of the hypotenuse of P3. The sensors have a bezel size that is less than half of the active sensor area. An array of any number of sensors can be used in this design. Four (4) sensors per board yields a combined system with 16 sensors (as pictured). The sensors are mounted to the board such that the vertical spacing edge-to-edge between sensors is slightly less than the vertical height of the active area of the sensor. The sensors are justified with the bottom left sensor lining up one-half way across the hypotenuse of P3 along the bottom edge.
One or more sensors 318-4 (e.g., an array) are connected in a fourth plane (parallel to the first plane and perpendicular to second plane) on a far end face of the parallel leg of P4. The sensors have a bezel size that is less than half of the active sensor area. An array of any number of sensors can be used in this design. Four (4) sensors per board yields a combined system with 16 sensors (as illustrated). The sensors are mounted to the board such that the vertical spacing edge-to-edge between sensors is slightly less than the vertical height of the active area of the sensor and mounted directly opposite the lens. The sensors are justified to the top right corner of P4.
Once the portions are captured on respective sensors, they need to be combined to recreate the original image. Image portions that incur orientation changes, such as those occurring during reflection, are reoriented to their original orientation. The image portions are then stitched together using image-processing techniques and subsequently stored in digital memory. Image stitching, or photo stitching, is the process of combining multiple photographic images with overlapping fields of view to produce a segmented panorama or high-resolution image. Commonly performed through the use of computer software, most approaches to image stitching require nearly exact overlaps between images and identical exposures to produce seamless results. Some digital cameras can stitch their photos internally.
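The recombination step described above can be sketched as follows. This is a minimal model, assuming each captured portion arrives tagged with its grid position and the number of reflections it underwent; the names are hypothetical.

```python
import numpy as np

def recombine(portions, tile_h, tile_w, n=4):
    """Reassemble an n-by-n grid of captured tiles into one image.

    `portions` maps (row, col) grid positions (0-based, in original
    image coordinates) to (tile_array, n_reflections).  An odd number
    of reflections leaves a tile mirrored on its vertical axis, so it
    is flipped back before placement; an even number (e.g., the
    reflect-reflect path) already restores the original orientation.
    """
    out = np.zeros((n * tile_h, n * tile_w), dtype=float)
    for (row, col), (tile, n_reflections) in portions.items():
        if n_reflections % 2 == 1:
            tile = np.fliplr(tile)  # undo the residual mirror flip
        out[row * tile_h:(row + 1) * tile_h,
            col * tile_w:(col + 1) * tile_w] = tile
    return out
```

Stitching proper (exposure matching and sub-pixel alignment at the seams) would follow this placement step.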
While specific embodiments described herein cite specific arrangements of three-dimensional transparent structures and sensors, variations in transparent structures (e.g., shape, size, index of refraction, etc.), associated sensors and their respective arrangements are considered within the scope of the technology described herein using known optics principles.
In one alternate example embodiment, P1 can split the image into horizontal portions first followed by subsequent splitting into vertical portions without departing from the scope of the technology described herein.
A combination structure is illustrated that includes a plurality of glass blocks adhered together allowing an image (or portion) to be transmitted through the entire combination. Image 303 passes through the left leg of P1 with those portions incident on vertical mirror stripes 320 being reflected (R1) towards the right leg of P1 and portions incident on vertical alternating clear (transmissive) areas being transmitted (T1) (passed through) towards P3.
In a two (2) stripe embodiment, a first (left) vertical quarter image portion and a third vertical quarter image portion are reflected (R1). As a consequence, a total of one-half of the image arrives incident on a right leg surface of P1. The remaining one-half of the image is transmitted (T1) as a second quarter image portion and a fourth quarter image portion to a first leg of P3.
The reflected (R1) first vertical quarter image portion and the third vertical quarter image portion are subsequently reflected (R2) as they become incident on horizontal mirror stripes 316 as four non-contiguous image portions towards four sensor array 318-1 (shown as a reflect-reflect sensor). As such, a first total of one-quarter of the image is captured on these four separate sensors, one-sixteenth of the image each.
The reflected (R1) first vertical quarter image portion and the third vertical quarter image portion are also incident on alternating clear (transmissive) areas and are transmitted (T2) (passed through) as four non-contiguous image portions towards four sensor array 318-2 (shown as a reflect-transmit sensor). As such, a second total of one-quarter of the image is captured on these four sensors, one-sixteenth of the image each.
The second vertical quarter image portion and a fourth vertical quarter image portion are transmitted (T1) to P3 as they pass through the clear transmissive stripe patterns as the remaining half of the image. The second vertical quarter image portion and the fourth vertical quarter image portion are subsequently reflected (R3) as they become incident on horizontal mirror stripes 314 as four non-contiguous image portions towards four sensor array 318-3 (shown as a transmit-reflect sensor). As such, a third total of one-quarter of the image is captured on these four sensors, one-sixteenth of the image each.
The second (middle) vertical quarter image portion and the fourth (right) vertical quarter image portion are also incident on the clear transmissive areas alternating with horizontal mirror stripes 314 and are transmitted (T3) to P4 as four non-contiguous image portions towards four sensor array 318-4 (shown as a transmit-transmit sensor). As such, a fourth total of one-quarter of the image is captured on these four sensors, one-sixteenth of the image each.
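The split accounting in the walkthrough above (one-half after the first split, one-quarter per plane, one-sixteenth per sensor) can be checked with a short simulation. Stripe parities follow the VS/HS numbering convention defined earlier; the function and mask names are illustrative.

```python
import numpy as np

def split_image(img):
    """Simulate the two-stage stripe split on an image whose height
    and width are divisible by four.  Returns one array per sensor
    plane with non-captured pixels zeroed.

    Vertical quarters 1 and 3 (counted right-to-left) reflect (R1);
    quarters 2 and 4 transmit (T1).  The reflected path's horizontal
    splitter mirrors rows 1 and 3 (top-to-bottom); the transmitted
    path's splitter mirrors rows 2 and 4.
    """
    h, w = img.shape
    vs = 4 - (np.arange(w)[None, :] * 4) // w   # 1 at right, 4 at left
    hs = (np.arange(h)[:, None] * 4) // h + 1   # 1 at top, 4 at bottom
    v_refl = vs % 2 == 1
    masks = {
        "318-1": v_refl & (hs % 2 == 1),   # reflect-reflect
        "318-2": v_refl & (hs % 2 == 0),   # reflect-transmit
        "318-3": ~v_refl & (hs % 2 == 0),  # transmit-reflect
        "318-4": ~v_refl & (hs % 2 == 1),  # transmit-transmit
    }
    return {name: np.where(mask, img, 0.0) for name, mask in masks.items()}
```

Summing the four returned arrays reproduces the input exactly, confirming that the planes partition the image with no overlap and no gaps.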
The combination arrangements illustrated in
As shown, an optical path is formed from image 303 emanating from the back of a camera lens (not shown) to a sensor array 318-1 with two reflection phases R1 and R2, as shown in
Image 303 is incident on a surface of vertical image splitter 502. The function of this splitter is to split the image into four vertical image portions. However, only image portions incident on two reflective vertical stripe patterns 320 (VS1 and VS3) are reflected from the splitter and form a first reflected image R1. In this case, when reflected, the image is flipped on the vertical axis, as is known.
Reflected vertical stripe image portions (VS1 and VS3) are subsequently split as they become incident on horizontal image splitter 504. However, only image portions incident on the two reflective horizontal stripe patterns (HS1 and HS3) reflect from the splitter and form a resulting reflected image R2. As before, when reflected, the image is flipped on the vertical axis, returning it to its original orientation.
Each of the four resulting image portions shown as resulting image 506 are based on the intersection of VS1 and VS3 with HS1 and HS3. These four images are formed on sensor array 318-1. As such, a total of a quarter of the image is captured on these four sensors, one-sixteenth of the image each.
As shown, an optical path is formed from image 303 emanating from the back of a camera lens (not shown) to a sensor array 318-2 with one reflection phase R1 and one transmit (pass-through) phase T2, as shown in
Image 303 is incident on a surface of vertical image splitter 502. The function of this splitter is to split the image into four vertical portions. However, only image portions incident on two reflective vertical stripe patterns (VS1 and VS3) are reflected from the splitter and form a first reflected image R1. In this case, when reflected, the image is flipped on the vertical axis.
Reflected vertical stripe image portions (VS1 and VS3) are subsequently split as they become incident on horizontal image splitter 504. However, only image portions incident on the two transmissive horizontal stripe patterns (HS2 and HS4) pass through the splitter and form an image pass-through T2. Because of the single reflection, the image remains flipped on the vertical axis.
Each of the four resulting image squares shown as resulting image 602 are based on the intersection of VS1 and VS3 with HS2 and HS4. These four partial images are formed on sensor array 318-2. As such, a total of a quarter of the image is captured on these four sensors, one-sixteenth of the image each.
As shown, an optical path is formed from image 303 emanating from the back of a camera lens to a sensor array 318-3 with one transmit (pass-through) phase T1 and one reflect phase R3 as shown previously in
The image is transmitted to vertical image splitter 502 comprising alternating vertical reflective and clear stripe patterns. The function of this splitter is to split the image into four vertical image portions. However, only image portions incident on the two clear (transmissive) vertical stripe patterns (VS2 and VS4) pass through the splitter and form a first image pass-through T1. Vertical stripe patterns (VS2 and VS4) are subsequently split as they become incident on horizontal image splitter 702. However, only image portions incident on the two reflective horizontal stripe patterns (HS2 and HS4) reflect from the splitter and form reflected image R3. Because of the single reflection, the final image is flipped on the vertical axis.
Each of the four squares in resulting image 704 are based on the intersection of VS2 and VS4 with HS2 and HS4. These four partial images are formed on sensor array 318-3. As such, a total of a quarter of the image is captured on these four sensors, one-sixteenth of the image each.
As shown, an optical path is formed from image 303 emanating from the back of a camera lens to a sensor array 318-4 with two transparent structure transmit (pass-through) phases T1 and T3 as shown previously in
Image 303 is transmitted to vertical image splitter 502 comprising alternating vertical reflective and clear stripe patterns. The function of this splitter is to split the image into four vertical image portions. However, only image portions incident on the two transmissive vertical stripe patterns (VS2 and VS4) pass through the splitter and form a first image pass-through T1. Vertical stripe patterns (VS2 and VS4) are subsequently split as they become incident on horizontal image splitter 702. However, only image portions incident on the two transmissive horizontal stripe patterns (HS1 and HS3) pass through the splitter and form a second image pass-through T3. This image retains its original orientation.
Each of the four pass-through squares in resulting image pass-through 802 are based on the intersection of VS2 and VS4 with HS1 and HS3. These four partial images are formed on sensor array 318-4. As such, a total of a quarter of the image is captured on these four sensors, one-sixteenth of the image each.
A mirrored (reflective) layer with alternating transmissive areas is created by vertical mirror stripes 320 oriented 315 degrees from the optical axis of the lens. For illustration purposes, the stripes are depicted as separate from the transparent structures. However, in practice they are applied directly to a surface through a known coating process, such as by lithography.
A second mirrored (reflective) layer with alternating transmissive areas is created by horizontal mirror stripes 316 oriented 45 degrees from the optical axis of the lens and 90 degrees offset from the vertical mirror stripes 320. For illustration purposes, the stripes are depicted as separated from a surface of the transparent structure. However, in practice they are applied directly to a surface through a coating process, such as by lithography.
A third mirrored (reflective) layer with alternating clear (transmissive) areas is created by horizontal mirror stripes 314 oriented 45 degrees from the optical axis of the lens and 270 degrees offset from the vertical mirror stripes 320. For illustration purposes, the stripes are depicted as separated from the transparent structure. However, in practice they are applied directly to a surface through a coating process, such as by lithography.
Sensor arrays 318 (1-4) are arranged at or near intersecting outward ends of the horizontal stripes 314 and 316.
Image splitter system 1302 includes four mounted printed circuit boards (PCBs), each with four sensors. The PCBs are adhered or mounted using known attachment methods. In addition, while shown as square PCBs, the PCBs may be of any shape and may include additional board real estate not shown for housing various electronics to support the sensor arrays. Alternately, the individual sensors can be mounted directly to the glass structures and wired to one or more separate PCBs.
As shown in
As shown in
As shown in
As shown in
In some instances, light passing very thin edges (edges of reflective stripes) bends around the edge (diffraction) and causes the colors to separate out into their component parts at the edge of the shadow. This may lead to lower color fidelity at the seams between sensors or even a small amount of lost pixel data. AI gap filling may be employed in an image-processing pipeline to interpolate the transition between different portions of the image. Gap filling is an established branch of AI image processing that covers imperfections in the image in the space between sensors.
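As a toy stand-in for the AI gap filling mentioned above, lost seam columns can be filled by linear interpolation between the nearest valid neighbors. A learned inpainting model would replace this in practice; all names here are hypothetical.

```python
import numpy as np

def fill_seam_columns(img, seam_cols):
    """Fill lost pixel columns at sensor seams by linearly
    interpolating between the nearest valid column on each side.
    `seam_cols` is a set of 0-based column indices known to be bad
    (e.g., shadowed by a mirror-stripe edge)."""
    out = img.astype(float).copy()
    for col in sorted(seam_cols):
        left, right = col - 1, col + 1
        while left in seam_cols:     # skip past adjacent bad columns
            left -= 1
        while right in seam_cols:
            right += 1
        t = (col - left) / (right - left)
        out[:, col] = (1 - t) * out[:, left] + t * out[:, right]
    return out
```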
One improvement provided by the technology described herein is that the image from a lens with an image circle larger than the sensors themselves can be divided up to hit multiple smaller sensors.
Another improvement provided by the technology described herein is that only a single lens with a single nodal point is required to combine the images from each sensor into one seamless image. Current ultra-high resolution systems, using available sensors, require time-consuming manual image warping and stitching because each lens has a different perspective of the scene and the images therefore include parallax differences between foreground and background objects.
As may be used herein, the terms “substantially” and “approximately” provide an industry-accepted tolerance for their corresponding terms and/or relativity between items. Such an industry-accepted tolerance ranges from 90-100 percent and corresponds to, but is not limited to, component values, component variations, or optical reflection/transmission values. Such relativity between items ranges from a difference of a few percent to magnitude differences.
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present invention as contemplated by the inventor(s), and thus, are not intended to limit the present invention and the appended claims in any way.
The present invention has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. For example, for non-rectangular sensors, as long as they tessellate, a camera system may be created by recreating these shapes. For example, hexagonal sensors with hexagonal mirror and transparent areas would also tile, though they may, in some embodiments, require more splits to avoid the bezels.
The foregoing description of the specific embodiments will reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.