Typical imaging devices are solid-state devices that include an array of photodetector cells, which are commonly referred to as pixels. Two common examples of imaging devices are charge-coupled device (CCD) imagers and complementary metal-oxide-semiconductor (CMOS) imagers.
A pixel includes an electrical potential well (e.g., a photodiode, photo-capacitor, photogate, etc.) that collects electromagnetic radiation (e.g., photons). Pixels include discrete collection areas that collect photons during the times when the pixel is actively sensing light (referred to as an exposure period). While photons are being sensed, each pixel generates a charge that is proportional to the number and/or intensity of the collected photons. Pixels are able to perform these operations because they include photo-conversion components that convert electromagnetic radiation into a charge (i.e., an electrical signal). After a charge is generated by a pixel, the pixel spatially confines the charge and stores it until the charge is transferred and the pixel is reset. How much charge a pixel will store can be controlled by adjusting the pixel's exposure period.
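For illustration only, the following minimal Python sketch models this charge-accumulation behavior; the photon flux, quantum efficiency, and full-well values are assumptions chosen for the example and are not taken from any particular embodiment.

    def collected_charge(photon_flux, exposure_s, quantum_efficiency=0.6,
                         full_well_e=10_000):
        """Charge (in electrons) a pixel accumulates during one exposure period.

        photon_flux: photons per second arriving at the pixel's collection area (assumed).
        exposure_s: exposure period in seconds.
        quantum_efficiency: fraction of photons converted to electrons (assumed).
        full_well_e: saturation level of the pixel's potential well (assumed).
        """
        electrons = photon_flux * exposure_s * quantum_efficiency
        return min(electrons, full_well_e)  # the well cannot hold more than its capacity

    # A longer exposure stores more charge, until the well saturates.
    print(collected_charge(photon_flux=1e6, exposure_s=0.005))  # 3000.0
    print(collected_charge(photon_flux=1e6, exposure_s=0.050))  # 10000 (saturated)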
In addition to a pixel's exposure period, the size of a pixel's collection area also heavily influences how the pixel operates. For example, the pixel's size influences both its signal-to-noise ratio (SNR) and its charge saturation capacity. Saturation refers to a condition where the pixel has accumulated all of the charge it can store, which means that it can no longer sense any new photons (i.e., its potential well is full to capacity). Larger pixels are able to store more charge than smaller pixels and therefore have a higher saturation level.
Pixels stop accumulating charge once the exposure period elapses. The imaging device then reads out the signal. For example, CCDs read out the signal as a current and perform the current-to-voltage conversion off-chip. In contrast, CMOS imagers perform the charge-to-voltage conversion at the pixel itself. The resulting voltage is then transferred so that the electrical signal can be used to generate an image corresponding to the sensed photons. This transferring process is generally referred to as a readout of the pixel. During this readout period, the pixel is also reset back to a base charge level so that the pixel can again begin to accumulate charge once a new exposure period begins. As such, the imaging device includes various readout circuits that are connected to the pixels and that perform these readout operations.
Each pixel array also has a determined dynamic range. Dynamic range refers to the range of light that can be sensed by the pixels during their exposure periods. Dynamic range is usually defined as the ratio between the largest non-saturating signal that can be sensed by the pixels and the standard deviation of the noise under low-light conditions. In other words, dynamic range is the difference between the darkest dark and the lightest light that can be captured by the pixel array during a single exposure period.
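As a rough illustration of this definition, the following sketch expresses dynamic range in decibels as the ratio of the largest non-saturating signal to the noise floor; the electron counts are assumed, illustrative values.

    import math

    def dynamic_range_db(full_well_e, noise_floor_e):
        """Dynamic range, in dB: largest non-saturating signal over the noise floor."""
        return 20 * math.log10(full_well_e / noise_floor_e)

    # Assumed values: 10,000 e- full well and a 5 e- low-light noise floor -> ~66 dB.
    print(round(dynamic_range_db(10_000, 5), 1))  # 66.0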
As an example, consider an office that has a window to the outside. When the sun shines light through the window, the office will have both dark, shadowy areas and brightly lit areas. With traditional imaging techniques, the bright light will quickly saturate any pixels aimed at the brightly lit areas. Typically, an auto-exposure algorithm will set the exposure time so that these bright areas are properly exposed in the resulting image. With such an exposure period, however, pixels aimed at the dark areas do not collect an adequate number of photons, and the resulting image will simply show those areas as an underexposed region. Consequently, the image will contain washed-out white areas, overly dark areas, or both, in which detail cannot be discerned.
High dynamic range (HDR) imaging was developed in an effort to combat the above problem. As a simple and brief introduction, HDR imaging generates two separate images. One image is generated using a prolonged exposure time to capture enough photons for the dark, shadowy areas. The other image uses a very short exposure time to capture photons for the brightly lit areas. These two images are then blended/stitched together via signal/image processing to generate an image that shows objects in the bright areas (as opposed to simply a washed out white blur) as well as objects in the dark areas (as opposed to simply a dark blur), effectively increasing the dynamic range of the combined images.
While traditional HDR sensors provide accurate and highly dynamic images for static environments, serious problems arise when HDR sensors are used to generate images of environments that include moving objects. To illustrate, the prolonged exposure time and the time difference between the two captures result in imperfect matching between the images and in severe blur in the regions that contain moving objects. What is needed, therefore, is an image sensor and methodology that are able to generate HDR images with simultaneous exposure times for the short and long exposures, thereby enabling HDR images that minimize the effects of motion blur.
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.
Disclosed embodiments are related to methods and systems for improving high dynamic range (HDR) imaging. In some embodiments, a spatially multiplexed image sensor includes a first set of red, green, and blue (RGB) pixels and a second set of RGB pixels. Each red pixel in the second set of RGB pixels is positioned proximately to at least one red pixel in the first set of RGB pixels to thereby form a red effective pixel area (i.e., a pre-determined area on the image sensor devoted only to pixels that sense the same color of light). Similar positions are selected for both green and blue pixels to form green effective pixel areas and blue effective pixel areas. In this manner, pixels from multiple different pixel sets are located near, proximate, or adjacent to each other in a given effective pixel area and form a group of like-colored pixels. Because of their proximate positions, each pixel in the group captures light from essentially the same angular perspective.
In some embodiments, the pixels in the first set are triggered during a first exposure period. During this period, these pixels actively convert sensed light into an electrical signal, which is later extracted from the pixel using a readout circuit. The pixels in the second set also convert light into an electrical signal, which is read out by a different readout circuit than the first set's readout circuit. In some embodiments, the pixels in the second set are active during some of the same time as the pixels in the first set. Consequently, in some embodiments, there is a period of time in which the pixels from multiple sets are simultaneously sensing light and converting that light into different electrical signals. Additionally, in some embodiments, the readout for the second pixel set occurs concurrently with the readout for the first pixel set. In other embodiments, the readout of the second pixel set occurs sequentially with the readout of the first pixel set, for example in scenarios where readout circuitry bandwidth or communication bandwidth is limited. The embodiments use the first and second readouts to generate a combined digital image. By following the disclosed principles, the embodiments are able to reduce or eliminate the effects of motion blurring, regardless of whether those effects are for bright objects or for dim/dark objects.
In some embodiments, each effective pixel area has multiple sets of infrared (IR) pixels. For example, one effective pixel area may include pixels from two (or more) different sets, where each set includes IR pixels. As such, an IR pixel from one set will be proximate to an IR pixel from another set in each effective pixel area. Similar configurations are available for both monochrome pixels and CMYK pixels, or even any other type of pixel.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. Features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting in scope, embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Disclosed embodiments are directed to methods and systems that improve high dynamic range (HDR) imaging. Some embodiments are focused on a spatially multiplexed image sensor that includes at least two different sets of red, green, and blue (RGB) pixels. Each red pixel in the second set of RGB pixels is positioned proximately to at least one red pixel in the first set of RGB pixels to thereby form a red effective pixel area (i.e., a pre-determined area on the image sensor devoted only to pixels that sense the same color of light). Similar arrangements are made for each green pixel and each blue pixel in the first and second sets to form green effective pixel areas and blue effective pixel areas. In some implementations, each pixel set includes its own independently controllable exposure period and/or readout circuitry.
Using this pixel framework, some embodiments then expose the first set of RGB pixels to light during a first exposure period. As a result, these pixels generate an electrical signal by converting light into electricity. During a second exposure period, the second set of RGB pixels are also exposed to light and generate an electrical signal. In some embodiments, at least a portion of the second exposure period overlaps with at least a portion of the first exposure period such that both pixel sets are sensing light during the same period of time. Additionally, some embodiments cause the readouts for both pixel sets to overlap with one another. Thereafter, a combined digital image is generated by merging the images formed from each pixel set. As a result of using this improved sensor arrangement, the embodiments are able to reduce or eliminate the effects of motion blurring for a moving object, regardless of whether that object is brightly lit or dimly lit.
In this manner, significant advantages are realized both in terms of improvements to HDR technology and in terms of improvements to how a camera computer system operates. For example, the embodiments significantly reduce or eliminate the image artifacts associated with moving objects, which are quite pronounced in existing HDR technologies. As a result, these operations provide significantly improved HDR digital images. The disclosed embodiments also improve the technology by facilitating parallel processing when obtaining new and improved HDR images. Furthermore, the HDR image alignment algorithm is simplified because the exposures are captured through the same optics and during overlapping periods, so less alignment work is required. Also, the HDR algorithm is not required to detect or correct for moving objects, which further streamlines the algorithm. Therefore, the embodiments not only provide a better image, but they also generate these images more efficiently.
Having just described some of the features and technical benefits of the disclosed embodiments at a high level, attention will now be directed to
Image 205 shows an overexposed sun 220, a house 225, a shadow 230 (formed from the house 225 blocking out sunlight from the sun 220), and a ball 235 located within the shadow 230. To capture enough photons so that the objects in the shadow 230 (i.e., the ball 235) are visible, the traditional HDR sensor prolongs the exposure period as shown in image 205. Now, the ball 235 is clearly visible as a result of this prolonged exposure period. Additionally, because the exposure period was long, the overexposed sun 220 saturated the pixels, causing an indistinct washed-out blur to be generated (for clarity purposes, the sun 220 in image 205 should be considered a washed-out blur because it is a bright object that was overexposed as a result of the long exposure period). In contrast, image 210 was obtained using a very short exposure period. Here, the sun 240 is clearly and distinctively visible, but the house 245, the shadow 250, and the ball 255 are underexposed.
By selectively choosing which portions of the images 205 and 210 are to be merged together, the HDR sensor is able to provide an image with a high dynamic range as shown by image 215. For instance, as shown in the merged image 215, the sun 260, the house 265, the shadow 270, and the ball 275 are all clearly and distinctively identifiable, particularly the sun 260 and the ball 275 which, if traditional non-HDR imaging techniques were used, may result in a washed-out white blur or a blacked-out blur, respectively. In some situations, however, the images may simply be less clear because some auto-exposure algorithms do not allow for both over and underexposed regions. Accordingly,
In contrast to
Image 305 includes a sun 320, a house 325, a shadow 330, and a moving ball 335. Because the exposure period was prolonged for image 305, it is apparent that the ball 335 moved from one location to another. Consequently, image 305 includes blurring effects associated with the moving ball 335.
Image 310 includes a sun 340, a house 345, a shadow 350, and a blurred ball 355 that is moving (as represented by the arrow). Image 310, however, was generated using a short exposure period. As a result, the sun 340 is clearly and distinctively visible, but the house 345, the shadow 350, and the ball 355 are not clear because not enough photons were collected to distinctively identify those elements. Furthermore, because the exposure period was so short, the ball 355 was captured at a specific point in time, thus there are reduced or even no blurring effects in image 310.
When image 305 is combined with image 310 to form image 315, the sun 360 is clearly and distinctively defined as well as the house 365 and the shadow 370. The ball 375, however, has some blurring artifacts as a result of the merging operations. These blurring artifacts are more clearly visible in
As shown,
In graph 500, the x-axis portrays the image sensor's/camera's field of view (i.e., a 1D pixel array response). The y-axis illustrates the photo signal (e.g., charge) that is collected by the pixels in the image sensor. As shown, the ball moves across the camera's field of view. For instance, the ball 505 moves from position A and eventually reaches position H as shown by ball 510 (e.g., from A to B, to C, to D, to E, to F, to G, and finally to H). The plot 515 corresponds to a long exposure image (e.g., image 305 from
The blurring artifacts from
In this regard, multiple different pixel sets are co-located and co-aligned such that they collect photons from essentially the same perspective or angular reference. As will be discussed later, any number of addressable pixel sets may be used (e.g., 2, 3, 4, or more than 4). Furthermore, in some instances, the corresponding pixels (e.g., a large pixel and a small pixel of the same color) are positioned adjacent to one another (e.g., a larger red pixel is positioned adjacent to a smaller red pixel). In other embodiments, the corresponding pixels are not positioned adjacently, but are instead simply positioned on the same sensor proximate to one another, rather than being directly adjacent to one another. For example, a small red pixel belonging to an overall red effective pixel area may be positioned proximate to a larger red pixel from that red pixel area, but not directly adjacent to it. Instead, the small red pixel may be positioned adjacent to a pixel of a second and different color (e.g., a larger blue pixel belonging to a blue effective pixel area) while only being positioned proximate to the larger red pixel. In such an embodiment, the positioning of the R2 and B2 pixels on the sensor could be switched, for example. In view of the foregoing description, it will be appreciated that the term ‘proximate,’ as used for the described effective pixel areas, includes adjacent and non-adjacent positioning of the corresponding pixels on the sensor. Because each pixel set has its own wiring, the exposure periods, readout periods, and reset periods for each pixel set are separately controllable. This configuration produces an image sensor capable of providing highly dynamic and/or variable exposure periods (e.g., shorter or longer exposure periods on demand).
As a reference, the remaining figures use a particular numbering scheme to differentiate pixel sets. Specifically, pixels with a number “1” indication (e.g., any pixels labeled as “R1,” “G1,” and “B1”) belong to a first set of addressable pixels. Any pixels with a number “2” indication (e.g., “R2,” “G2,” and “B2”) belong to the second set of addressable pixels, and so on.
Attention will now be directed to
As shown, the red effective pixel area 605 includes a red pixel R1 from set 1 and a red pixel R2 from set 2. As discussed, this red effective pixel area 605 includes pixels that detect only red light. The green effective pixel area 610 includes a green pixel G1 from set 1 and a green pixel G2 from set 2. The blue effective pixel area 615 includes a blue pixel B1 from set 1 and a blue pixel B2 from set 2. Accordingly, each red pixel in the second set of RGB pixels is positioned proximately to at least one red pixel from another set of RGB pixels (e.g., set 1) within the same red effective pixel area. Similarly, each green pixel in the second set is positioned proximately to at least one green pixel from another set within the same green effective pixel area, and each blue pixel in the second set is positioned proximately to at least one blue pixel from another set within the same blue effective pixel area.
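The following sketch is one non-authoritative way to represent such a two-set layout in software; the dictionary shape and labels are assumptions used purely to illustrate how like-colored pixels group into effective pixel areas while remaining separately addressable by set.

    # Labels mirror the numbering scheme used above: "R1"/"R2" are the red pixels
    # from set 1 and set 2 sharing one red effective pixel area, and so on.
    EFFECTIVE_PIXEL_AREAS = {
        "red":   ["R1", "R2"],   # e.g., red effective pixel area 605
        "green": ["G1", "G2"],   # e.g., green effective pixel area 610
        "blue":  ["B1", "B2"],   # e.g., blue effective pixel area 615
    }

    def pixels_in_set(set_number):
        """Collect the like-numbered pixels that form one addressable set."""
        suffix = str(set_number)
        return [pixel for area in EFFECTIVE_PIXEL_AREAS.values()
                for pixel in area if pixel.endswith(suffix)]

    print(pixels_in_set(1))  # ['R1', 'G1', 'B1']
    print(pixels_in_set(2))  # ['R2', 'G2', 'B2']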
It will be appreciated that the sizes, shapes, orientations, and/or locations of the pixels within a given effective pixel area may be different. In configuration 600, each pixel in the second set has the same size, shape, and orientation (i.e., horizontal rectangle) as each pixel in the first set, and all of the pixels are symmetric in shape. This is not a requirement, however.
For instance, one, some, or all of the pixels in a particular effective pixel area may be shaped as circles, triangles, squares, rectangles, pentagons, hexagons, or any other polygon. Similarly, one or more pixels in a particular effective pixel area may have one shape while one or more other pixels in that same effective pixel area may have a different shape.
Additionally, one, some, or all of the pixels in a particular effective pixel area may be symmetrically shaped. Alternatively, one or more pixels within the same effective pixel area may be asymmetrically shaped. Indeed, the same effective pixel area may include one or more symmetrically shaped pixels and one or more asymmetrically shaped pixels.
Additionally, the physical sizes of the pixels may vary. As will be discussed in more detail below, one or more pixels in a particular effective pixel area may have one size while one or more other pixels in that same effective pixel area may have a different size. According to the disclosed embodiments, the preferred pixel size ranges anywhere from 0.81 square microns (e.g., a 0.9 μm by 0.9 μm pixel) up to and including 400 square microns in area (e.g., a 20 μm by 20 μm pixel).
Therefore, it will be appreciated that each pixel in an effective pixel area may be uniquely sized, shaped, oriented, and/or located. Accordingly, examples of some of the different configurations for each pixel in an effective pixel area will now be presented in connection with
For example, the size of a first pixel in a particular effective pixel area may be 1/16, ⅛, 3/16, ¼, 5/16, ⅜, 7/16, ½, 9/16, ⅝, 11/16, ¾, 13/16, ⅞, 15/16 of, or the same size as, another pixel in the same effective pixel area. Any other proportional value may be used as well. Relatedly, the sizes may be expressed as proportions of the effective pixel area. For instance, the size of a first pixel in a particular effective pixel area may be 5%, 10%, 15%, 20%, 25%, 30%, 35%, 40%, 45%, 50%, 55% (and so on) of the effective pixel area. Correspondingly, the size of a second pixel in the same effective pixel area may be 95%, 90%, 85%, 80%, 75%, 70%, 65%, 60%, 55%, 50%, 45% (and so on) of that effective pixel area. It will be appreciated that any size, shape, orientation, and/or location discussed in connection with one configuration (e.g., configuration 700) may be applied to any of the other configurations mentioned herein.
Configuration 710 shows an arrangement that is somewhat similar to configuration 600. Now, however, the pixels in configuration 710 are oriented differently (e.g., the pixels in the effective pixel area 715 are vertical rectangles).
A similar arrangement is shown in configuration 810. Here, however, the pixels in the second set are on the left side of the pixels in the first set (as shown by the effective pixel area 815) as opposed to being on the right side (as shown by the effective pixel area 805). In some embodiments, the pixels in the second set are located on the upper left or upper right regions as opposed to the lower left or lower right. Accordingly, as shown by the configurations in
A non-limiting example will be helpful. Using configuration 800 from
Configuration 900 shows that the pixels in sets 2 and 3 have the same shape, size, and orientation within a given effective pixel area while the pixel in set 1 has a different size, shape, and orientation in that same effective pixel area. Here, the pixels in sets 2 and 3 are shaped as squares and are located on the bottom portion of the rectangular pixel of set 1.
Configuration 905 shows that the pixels in sets 2 and 3 are again the same size, shape, and orientation, but now they are rectangles as opposed to squares. Furthermore, the pixels in set 2 are located in between the pixels of sets 1 and 3 such that the pixels of set 3 are not immediately adjacent to the pixels in set 1. As a result, one or more pixels in an effective pixel area may not be immediately adjacent to one or more other pixels in that effective pixel area, yet they may all still be included within the same effective pixel area and they are all still proximate to each other.
Configuration 910 shows that the pixels in sets 1, 2, and 3 are all rectangles, but none of the pixels are the same size. Furthermore, the pixels in set 2 are positioned in between the pixels of sets 1 and 3 such that the pixels in set 3 are not adjacent to the pixels in set 1 in a particular effective pixel area. Recall, the proportional sizing of the pixels in sets 1, 2, and 3 may vary in the manner described earlier but now tailored to situations involving three or more pixels. As a brief example, a pixel in set 1 may occupy 70% of a particular effective pixel area, a pixel in set 2 may occupy 10% of that effective pixel area, and a pixel in set 3 may occupy 20% of that effective pixel area. Relatedly, a pixel in set 2 may have half the surface area of a pixel in set 1, and a pixel in set 3 may have half the surface area of the pixel in set 2 (such that the pixel in set 3 is one-fourth the size of the pixel in set 1). Of course, other dimensions may be used as well.
Configuration 915 shows that the pixels in sets 1, 2, and 3 are all sized, shaped, and oriented (e.g., a vertical rectangular orientation) similarly to each other. Furthermore, the pixels in set 2 are located in between the pixels of sets 1 and 3 for a given effective pixel area.
Configuration 920 shows that the pixels in sets 1, 2, and 3 are all sized and oriented differently. Specifically (and for each effective pixel area), the pixels in set 1 are large horizontal rectangles, the pixels in set 2 are medium sized horizontal rectangles, and the pixels in set 3 are small vertical rectangles. Additionally, the pixels in each set are located proximately to each other.
Configuration 925 shows that the pixels in sets 2 and 3 are shaped, sized, and oriented in a similar manner, but the pixels in set 1 are sized and oriented differently. Furthermore, the pixels in each set are located proximately to each other. Accordingly,
It will be appreciated that the inclusion of multiple pixel sets within each effective pixel area (e.g., an R1 pixel and an R2 pixel in the same effective pixel area) does not decrease the overall spatial resolution of the sensor. It does, however, decrease the effective pixel size/area of each individual pixel. For instance, suppose a 1M (1K×1K) image sensor is provided. Here, the sensor resolution would include 1K×1K pixels, regardless of whether the pixels were exposed in different manners (i.e., constituting a multiple exposure HDR mode, which is described below). However, if only one set of pixels was used to capture an image and then another set was used to capture a different image, the effective pixel size would be reduced by a factor of two, which will impact the camera's SNR, full well capacity, and/or low light sensitivity.
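A back-of-the-envelope sketch, with assumed electron counts, illustrates this impact: halving a pixel's collection area roughly halves the collected signal, which lowers the shot-noise-limited SNR by about the square root of two.

    import math

    def shot_noise_limited_snr(signal_electrons):
        """SNR when shot noise dominates: signal / sqrt(signal) = sqrt(signal)."""
        return signal_electrons / math.sqrt(signal_electrons)

    full_area_signal = 8_000                  # electrons from a full-size pixel (assumed)
    half_area_signal = full_area_signal / 2   # half the collection area -> roughly half the photons

    print(round(shot_noise_limited_snr(full_area_signal), 1))  # 89.4
    print(round(shot_noise_limited_snr(half_area_signal), 1))  # 63.2 (about sqrt(2) lower)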
To address this, the embodiments include a mechanism for selectively altering HDR modes. These modes include a first mode where all of the pixels in an effective pixel area operate in unison/sync to thereby perform traditional HDR imaging (as shown in
Attention will now be directed to
In contrast to the sequential exposure and readout periods shown in
The exposure period 1010 begins at point “A” and ends at point “D.” The readout period 1015 then begins at point “D” and ends at point “E.” In this example 1000, the entire exposure period 1020 overlaps with the exposure period 1010 in that the exposure period 1020 begins at point “B” and ends at point “C,” both of which are included between points “A” and “D.” In some embodiments, however, only a portion of the exposure time 1020 overlaps with the exposure time 1010. By overlapping at least some of the exposure times for the different pixel sets, the blurring effects (also called “ghosting”) will be significantly reduced or even completely eliminated via image processing, which is described in more detail later.
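One simple, non-authoritative way to reason about this overlap is to treat each exposure period as a time interval; the timestamps below are illustrative assumptions standing in for points A through D.

    def exposures_overlap(start_a, end_a, start_b, end_b):
        """True when two exposure periods share at least some time."""
        return max(start_a, start_b) < min(end_a, end_b)

    # Illustrative timestamps in milliseconds: the long exposure 1010 runs from
    # point A to point D, and the short exposure 1020 from point B to point C.
    long_exposure = (0.0, 33.0)    # A = 0 ms, D = 33 ms (assumed values)
    short_exposure = (10.0, 11.0)  # B = 10 ms, C = 11 ms (assumed values)

    print(exposures_overlap(*long_exposure, *short_exposure))  # True -> simultaneous sensing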
As described earlier, the image sensor is able to dynamically adjust the exposure times such that they may not be the same for each exposure instance. Preferred exposure times for the long exposure period (corresponding to exposure period 1010) typically range anywhere from 10 microseconds up to and including 200 milliseconds. Preferred exposure times for the short exposure period (corresponding to exposure period 1020) typically range anywhere from 5 microseconds up to and including 200 microseconds. In some embodiments, the exposure period for one pixel set ranges anywhere from 2 to 20 times as long as the exposure period for a different pixel set.
In some embodiments, the exposure period is a function of a pixel's collection area (i.e., its surface area). Using configuration 800 from
In another embodiment, the relationship between the exposure periods for different pixel sets may also be determined by a function based on collection area. For instance, the collection area (call it "A1") of a first set of pixels multiplied by the exposure period ("E1") for that first set of pixels (i.e., A1*E1) may be some multiplier of the collection area ("A2") of a second set of pixels multiplied by the exposure period ("E2") for that second set of pixels (i.e., (A1*E1)*x=(A2*E2)).
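A small sketch can make this relationship concrete; the areas, exposure time, and multiplier below are assumed values used only to show how E2 follows from the relation (A1*E1)*x=(A2*E2).

    def balanced_exposure_e2(a1_um2, e1_ms, a2_um2, x):
        """Solve (A1 * E1) * x = (A2 * E2) for E2.

        a1_um2, a2_um2: collection areas of the two pixel sets in square microns.
        e1_ms: exposure period of the first pixel set in milliseconds.
        x: the chosen multiplier (an assumption for this example).
        """
        return (a1_um2 * e1_ms * x) / a2_um2

    # Assumed example: a 4 um^2 pixel exposed for 20 ms, balanced against a
    # 16 um^2 pixel with x = 1, needs only a 5 ms exposure.
    print(balanced_exposure_e2(a1_um2=4.0, e1_ms=20.0, a2_um2=16.0, x=1.0))  # 5.0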
In
Attention will now be directed to
As shown, image 1105 includes an overexposed sun 1120, a house 1125, a shadow 1130, and a moving ball 1135. Because the exposure period was prolonged, the sun 1120 will appear as a white, washed-out blur. In contrast, the non-moving objects in the shadow 1130 will be clearly and distinctively defined. This is achieved because the pixels aimed at that dark area were exposed for a sufficient amount of time and captured enough photons. However, the moving objects (e.g., the moving ball 1135) will be blurred/smeared. Image 1105 was captured using a first pixel set (e.g., pixels R1, G1, and B1 in
Image 1110 is a short exposure image. Here, the sun 1140 is clearly and distinctively defined because the pixels were exposed only for a very short period of time. In contrast, the house 1145, the shadow 1150, and the ball 1155 may not be distinctively shown because the pixels aimed in those directions were not exposed for a long time. Image 1110 was captured using a second pixel set (e.g., pixels R2, G2, and B2 in
Subsequently, signal/image processing is performed to stitch/merge portions of image 1105 with portions of image 1110. Because the camera orientation and lens distortion are identical for the two images, the image alignment and merging steps are substantially simplified. As a result, the operations for merging the two images together are significantly improved and the camera computer system is able to provide a better HDR image. For example, the camera computer system is able to determine that the relevant portions of image 1105 are the house 1125, the shadow 1130, and the moving ball 1135. Because the sun 1120 will be a white, washed-out blur, the camera computer system can selectively isolate various portions of the image 1105 so that those portions may or may not be included in the merged image 1115 (or those portions may not heavily influence the merged image 1115). Similarly, the camera computer system is able to determine that the sun 1140 is the most relevant portion in image 1110 as opposed to the house 1145, the shadow 1150, and/or the ball 1155.
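For illustration, the following simplified sketch performs one possible exposure-normalized, weighted merge of a long-exposure readout and a short-exposure readout; it is not the merging algorithm of any particular embodiment, and the saturation threshold and weighting rule are assumptions chosen for the example.

    def merge_readouts(long_values, short_values, long_exp_ms, short_exp_ms,
                       saturation=4095):
        """Merge a long-exposure readout and a short-exposure readout, per pixel.

        Each raw value is divided by its exposure time so both readouts land on a
        common scale; saturated long-exposure samples are given zero weight so the
        short exposure dominates in bright regions. The threshold and weights are
        illustrative assumptions only.
        """
        merged = []
        for long_v, short_v in zip(long_values, short_values):
            long_w = 0.0 if long_v >= saturation else 1.0
            short_w = 1.0
            merged.append((long_w * (long_v / long_exp_ms) +
                           short_w * (short_v / short_exp_ms)) / (long_w + short_w))
        return merged

    # First pixel: the long exposure saturated (e.g., the sun), so only the short
    # exposure contributes. Second pixel: both exposures agree on the same value.
    print(merge_readouts([4095, 200], [400, 2], long_exp_ms=20.0, short_exp_ms=0.2))
    # [2000.0, 10.0]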
In some embodiments, the camera computer system also identifies when an object is moving. For instance, the camera computer system is able to analyze image 1105 and identify that the moving ball 1135 is not stationary and that there are blurring effects in image 1105. When the camera computer system identifies such situations, then the camera computer system may determine that objects in image 1110 should be weighted more heavily. For instance, the camera computer system is able to determine that the position, shape, and/or other visible characteristics of ball 1155 should be weighted more heavily in its image processing. By merging the information associated with the moving ball 1135 with the information associated with ball 1155, the camera computer system is able to produce the merged image 1115 which has little to no blurring artifacts because of the overlapping exposure periods. For instance, the image 1115 shows a clear and distinct sun 1160, house 1165, and shadow 1170. Image 1115 also shows a clear and distinct ball 1175 which does not have any blurring artifacts, unlike the ball 375 from
In particular,
In this manner, the blurring artifacts from
Attention will now be directed to
In contrast to the different exposure periods shown in
The plot 1415 corresponds to the exposure of a first set of pixels while the plot 1420 corresponds to the exposure of a second set of pixels. If the pixels in the first set are smaller (i.e., less surface area) than the pixels in the second set, then the pixels in the first set (corresponding to plot 1415) will likely collect fewer photons (and hence less charge) than the pixels in the second set (corresponding to plot 1420), as generally shown in
In some embodiments, the collection area (i.e., the surface area of the pixel) ratio between multiple pixel sets is adjusted to optimize for the short and long exposure times. Instead of prolonging an exposure time, a larger pixel may simply be used. As such, larger pixels may be used to shorten the exposure time in cases where a long exposure would traditionally be used. At the same time, a smaller pixel will take a longer time to accumulate charge, which may also help increase the dynamic range of the sensor. Accordingly, designing the image sensor in these various ways may also help further reduce the effects of motion blurring.
In
Although not shown in
As discussed, each pixel set is independently addressable and has its own readout circuitry, as shown in
Therefore, on the design side, additional pixel/readout circuits are used because each pixel set is now separately addressable. Accordingly, the camera computer system now supports additional circuitry, including transistors, capacitors, and analog-to-digital converters (ADCs), just to name a few. To provide the separate exposure, readout, and reset periods, the transfer gates, the row/column select gates, and the reset gates are separately routed or wired. This allows the first pixel set to have a separate exposure time from the second pixel set. If three or more pixel sets were provided, then three or more pixel/readout circuits would be provided in a corresponding manner. It should be noted that some camera systems include pipelining functionality in that they can read out a frame concurrently with the exposure of a set of pixels. Accordingly, some of the disclosed embodiments also support this functionality.
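To illustrate the idea of separately routed control for each addressable pixel set, the following sketch models each set's control signals as an independent record; the field names are placeholders and do not correspond to any actual sensor register map or driver API.

    from dataclasses import dataclass

    @dataclass
    class PixelSetControl:
        """Separately routed control for one addressable pixel set (illustrative).

        A real sensor would drive these through its own timing generator and
        register map; the field names here are placeholders, not an actual API.
        """
        set_id: int
        exposure_us: float
        transfer_gate: bool = False
        row_select: bool = False
        reset_gate: bool = False

    # Two independently controllable sets: one long exposure, one short exposure.
    set_1 = PixelSetControl(set_id=1, exposure_us=2_000.0)
    set_2 = PixelSetControl(set_id=2, exposure_us=200.0)
    print(set_1.exposure_us / set_2.exposure_us)  # 10.0 -> separately chosen exposures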
The following discussion now refers to a number of methods and method acts that may be performed. Although the method acts may be discussed in a certain order or illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed prior to the act being performed.
Attention will now be directed to
As described earlier, the pixel framework now includes multiple sets of RGB pixels arranged within each effective pixel area. Initially, a first set is exposed to light during a first exposure period (act 1605). In some embodiments, this light first passes through one or more optical components of a camera (e.g., a lens). Using
Next, a first readout is obtained (act 1610). This readout obtains electrical signals that are generated by the first set of RGB pixels in response to the light received and detected by that first set during the first exposure period. This operation may be performed by the readout circuit 1500 shown in
The method also includes an act (act 1615) of exposing the second set of RGB pixels to light that is received through the one or more optical components of the camera during a second exposure period. Afterwards, a second readout is obtained (act 1620). This readout obtains electrical signals that are generated by the second set of RGB pixels in response to the light received and detected by the second set of RGB pixels during the second exposure period.
Thereafter, there is an act (act 1625) of generating a combined digital image based on the first readout and the second readout. As described earlier, this combined digital image is generated by selectively choosing various portions from the first readout and merging those portions with various portions from the second readout. Some portions may be more influential or weighted more than other portions. If more than two readouts were performed (e.g., in scenarios where there are three or more pixel sets within each effective pixel area), then the combined digital image will be generated using content from three separate images. An example of this image is shown by image 1115 in
In some embodiments, at least a portion of the second exposure period occurs concurrently with at least a portion of the first exposure period (act 1630 in
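The overall flow of acts 1605 through 1630 can be sketched as follows; every callback name (expose_set, readout_set) and the simple per-pixel combination step are hypothetical placeholders, not an actual camera API or the claimed merging algorithm.

    class FakeSensor:
        """Stand-in used only so the sketch below runs; not a real device driver."""
        def expose_set(self, set_id, exposure_ms):
            pass  # a real sensor would open the exposure window for that pixel set
        def readout_set(self, set_id):
            return [100, 200] if set_id == 1 else [10, 20]

    def capture_hdr_frame(sensor, long_exp_ms=20.0, short_exp_ms=2.0):
        # Acts 1605 and 1615: expose the first and second RGB pixel sets; the
        # sensor is assumed to let the two exposure periods overlap (act 1630).
        sensor.expose_set(1, exposure_ms=long_exp_ms)
        sensor.expose_set(2, exposure_ms=short_exp_ms)

        # Acts 1610 and 1620: obtain the first and second readouts.
        first = sensor.readout_set(1)
        second = sensor.readout_set(2)

        # Act 1625: generate a combined digital image (here, a simple per-pixel
        # exposure-normalized average stands in for the merging step).
        return [(a / long_exp_ms + b / short_exp_ms) / 2.0
                for a, b in zip(first, second)]

    print(capture_hdr_frame(FakeSensor()))  # [5.0, 10.0]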
While the foregoing illustrations and examples have focused on scenarios in which only a RGB sensor is used, it will be appreciated that other types of light sensors may also be used. For example, instead of a red, green, blue pixel configuration, the embodiments may comprise a first set of monochrome pixels and a second set of monochrome pixels. Each monochrome pixel in the second set is positioned proximately to at least one monochrome pixel in the first set. In this manner, each monochrome effective pixel area may be comprised of pixels from the first and second pixel sets. Similarly, for embodiments in which the monochrome pixel sets include multiple sets of a single color, any number of monochrome pixel sets may be included in each effective pixel area. For example, pixels from 2, 3, 4, or more different sets (of the same color) may be included in each effective pixel area.
Relatedly, the image sensor may be comprised of different sets of infrared (IR) pixels. For instance, a particular effective pixel area may include any number of pixels from any number of different pixel sets. These pixels are located proximately to one another to form the effective pixel area. As such, a first set of IR pixels may sense IR light during a first exposure period, and a second set of IR pixels may sense IR light during a second, but overlapping, exposure period. A similar configuration may be made for CMYK pixels.
Even further, some embodiments may include combinations of the above. For example, in one effective pixel area, there may be a first red pixel from a first pixel set, a second red pixel from a second pixel set, a first IR pixel (or monochrome pixel) from a third pixel set, and a second IR pixel (or monochrome pixel) from a fourth pixel set. Similar configurations may be made for the blue and green pixels. In this manner, each effective pixel area may include pixels from four different sets, namely, 2 visible light pixels (each from a different set), and 2 IR or monochrome pixels (each from a different set).
The scope of this disclosure also includes any combinations of the foregoing pixel sets. Furthermore, it will be appreciated that the foregoing embodiments can be implemented to help improve the manner in which HDR imaging is performed, particularly for scenarios in which HDR imaging is used to capture images of environments that include moving objects. By following the disclosed principles, the quality of HDR images can be significantly improved by at least helping to reduce and/or eliminate blurring artifacts.
Having just described the various features and functionalities of some of the disclosed embodiments, the focus will now be directed to
In its most basic configuration, the computer system 1700 includes various different components. For example,
The storage 1735 may be physical system memory, which may be volatile, non-volatile, or some combination of the two. The term “memory” may also be used herein to refer to non-volatile mass storage such as physical storage media. If the computer system 1700 is distributed, the processing, memory, and/or storage capability may be distributed as well. As used herein, the term “executable module,” “executable component,” or even “component” can refer to software objects, routines, or methods that may be executed on the computer system 1700. The different components, modules, engines, and services described herein may be implemented as objects or processors that execute on the computer system 1700 (e.g., as separate threads).
The disclosed embodiments may comprise or utilize a special-purpose or general-purpose computer including computer hardware, such as, for example, one or more processors (such as the hardware processing unit 1705) and system memory (such as storage 1735), as discussed in greater detail below. Embodiments also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general-purpose or special-purpose computer system. Computer-readable media that store computer-executable instructions in the form of data are physical computer storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example and not limitation, the current embodiments can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media.
Computer storage media are hardware storage devices, such as RAM, ROM, EEPROM, CD-ROM, solid state drives (SSDs) that are based on RAM, Flash memory, phase-change memory (PCM), or other types of memory, or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code means in the form of computer-executable instructions, data, or data structures and that can be accessed by a general-purpose or special-purpose computer.
The computer system 1700 may also be connected (via a wired or wireless connection) to external sensors (e.g., one or more remote cameras, accelerometers, gyroscopes, acoustic sensors, magnetometers, etc.). Further, the computer system 1700 may also be connected through one or more wired or wireless networks 1740 to remote system(s) that are configured to perform any of the processing described with regard to computer system 1700.
A “network,” like the network 1740 shown in
Upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a network interface card or “NIC”) and then eventually transferred to computer system RAM and/or to less volatile computer storage media at a computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable (or computer-interpretable) instructions comprise, for example, instructions that cause a general-purpose computer, special-purpose computer, or special-purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the embodiments may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The embodiments may also be practiced in distributed system environments where local and remote computer systems that are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network each perform tasks (e.g., cloud computing, cloud services and the like). In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Additionally or alternatively, the functionality described herein can be performed, at least in part, by one or more hardware logic components (e.g., the hardware processing unit 1705). For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Program-Specific or Application-Specific Integrated Circuits (ASICs), Program-Specific Standard Products (ASSPs), System-On-A-Chip Systems (SOCs), Complex Programmable Logic Devices (CPLDs), Central Processing Units (CPUs), and other types of programmable hardware.
The present invention may be embodied in other specific forms without departing from its spirit or characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application is a continuation of U.S. patent application Ser. No. 15/943,140 filed on Apr. 2, 2018, entitled “MULTIPLEXED EXPOSURE SENSOR FOR HDR IMAGING,” which issued as U.S. Pat. No. ______ on ______, and which application is expressly incorporated herein by reference in its entirety.
Related application data: Parent application Ser. No. 15/943,140, Apr. 2018, US; Child application Ser. No. 17/356,326, US.