The present invention relates to an image sensor and a method of capturing an image as may be employed, e.g., in a camera, namely a still-picture camera or a video camera.
Image sensors nowadays have a very limited dynamic range, so that many typical scenes cannot be fully imaged. Therefore, as high a dynamic range as possible per shot would be desirable. Previous techniques for achieving a high dynamic range (HDR) exhibit marked image interference when shooting moving scenes. High-resolution shots exhibiting correct motional blurring involve a lot of effort.
There are various possibilities of expanding the dynamic range for image sensors, i.e., for HDR. The following group of possibilities provides computational combination of images following a regular shot:
Other possibilities of achieving a higher dynamic range start with the sensor design. The following options present themselves:
Further possible approaches to extending the dynamic range provide an array of pixels having different levels of sensitivity in each case [15]:
Splitting the light beam via a beam splitter, for example, so as to shoot the same scene from the same perspective with several cameras, may be exploited to cover a higher dynamic range. This even enables shooting without any artifacts. A system comprising three cameras is described in [17], for example. However, a large outlay for mechanical alignment and optical components is involved.
In the field of adaptive systems there is a proposition according to which an LC display is mounted in front of a camera [13]. Starting from an image, the brightness may then be adapted in specific image areas for the further images. Skillful reduction of the brightness in bright image areas may then create an exposure of the scene which exhibits correct motional blurring. Some of the above-mentioned possibilities of expanding the dynamic range are not able to produce a high-quality HDR image of a moving scene. Artifacts will arise, since each image point is incorporated in the shot at a different time or with a different effective exposure duration. Software correction comprising estimating and interpolating the movement in the scene is possible; however, the result will invariably be inferior to a real shot.
Systems having image points of different sensitivities, as were also described above, may use different pixels on the image sensor for each of the possible cases, namely “bright” and “dark”. This reduces spatial resolution. Additional electronics in each pixel furthermore leads to reduced sensitivity since in these areas, no light-sensitive surface can be realized.
Systems which circumvent both disadvantages may use additional mechanics. As was mentioned above, it is possible, for example, to provide beam splitters in connection with utilization of several cameras [17] or to use additional optical reducers in front of each pixel [13]. However, said solutions are either extremely expensive or also lead to a reduction in the resolution.
Moreover, U.S. Pat. No. 4,040,076 describes a technique known as “skimming gate”. This technique involves initially reading out some of the accumulated charge of the pixels so as to achieve increased dynamics which, however, may use additional circuitry.
According to an embodiment, an image sensor may have: a multitude of pixel sensors, the image sensor being configured to capture an image and is configured such that during capture of the image a first pixel sensor in each one of a first number of non-overlapping first accumulation intervals which succeed each other in an essentially uninterrupted manner and together yield an exposure interval detects one value in each case so as to achieve a number of values which, if the first number is larger than 1, are subjected to a summation so as to achieve a pixel value for the first pixel sensor, and a second pixel sensor in each of a second number of non-overlapping second accumulation intervals which succeed each other in an essentially uninterrupted manner and together yield the exposure interval, detects a value so as to achieve a number of values which, if the second number is larger than 1, are subjected to a summation so as to achieve a pixel value for the second pixel sensor, a subdivision of the exposure interval into the first accumulation intervals differing from a subdivision of the exposure interval into the second accumulation intervals, wherein the multitude of pixel sensors include pixel sensors of a first color sensitivity spectrum and pixel sensors of a second color sensitivity spectrum, the first pixel sensor belonging to the pixel sensors of the first color sensitivity spectrum, and the second pixel sensor belonging to the pixel sensors of the second color sensitivity spectrum, the image sensor being configured such that the subdivision of the exposure interval into accumulation intervals for the pixel sensors of the first color sensitivity spectrum and the pixel sensors of the second color sensitivity spectrum is identical, but for the pixel sensors of the first color sensitivity spectrum is different from that for the pixel sensors of the second color sensitivity spectrum.
According to another embodiment, a camera may have an image sensor, which image sensor may have: a multitude of pixel sensors, the image sensor being configured to capture an image and is configured such that during capture of the image a first pixel sensor in each one of a first number of non-overlapping first accumulation intervals which succeed each other in an essentially uninterrupted manner and together yield an exposure interval detects one value in each case so as to achieve a number of values which, if the first number is larger than 1, are subjected to a summation so as to achieve a pixel value for the first pixel sensor, and a second pixel sensor in each of a second number of non-overlapping second accumulation intervals which succeed each other in an essentially uninterrupted manner and together yield the exposure interval, detects a value so as to achieve a number of values which, if the second number is larger than 1, are subjected to a summation so as to achieve a pixel value for the second pixel sensor, a subdivision of the exposure interval into the first accumulation intervals differing from a subdivision of the exposure interval into the second accumulation intervals, wherein the multitude of pixel sensors include pixel sensors of a first color sensitivity spectrum and pixel sensors of a second color sensitivity spectrum, the first pixel sensor belonging to the pixel sensors of the first color sensitivity spectrum, and the second pixel sensor belonging to the pixel sensors of the second color sensitivity spectrum, the image sensor being configured such that the subdivision of the exposure interval into accumulation intervals for the pixel sensors of the first color sensitivity spectrum and the pixel sensors of the second color sensitivity spectrum is identical, but for the pixel sensors of the first color sensitivity spectrum is different from that for the pixel sensors of the second color sensitivity spectrum.
According to another embodiment, a method of capturing an image with a multitude of pixel sensors, the method may have the following steps in capturing the image: controlling a first pixel sensor, so that in each one of a first number of non-overlapping first accumulation intervals which succeed each other in an essentially uninterrupted manner and together yield an exposure interval, said first pixel sensor detects one value in each case so as to achieve a number of values while—if the first number is larger than 1—summing the values so as to achieve a pixel value for the first pixel sensor, and controlling a second pixel sensor, so that in each of a second number of non-overlapping second accumulation intervals which succeed each other in an essentially uninterrupted manner and together yield the exposure interval, said second pixel sensor detects a value so as to achieve a number of values while—if the second number is larger than 1—summing the values so as to achieve a pixel value for the second pixel sensor, a subdivision of the exposure interval into the first accumulation intervals differing from a subdivision of the exposure interval into the second accumulation intervals, the multitude of pixel sensors including pixel sensors of a first color sensitivity spectrum and pixel sensors of a second color sensitivity spectrum, the first pixel sensor belonging to the pixel sensors of the first color sensitivity spectrum, and the second pixel sensor belonging to the pixel sensors of the second color sensitivity spectrum, the subdivision of the exposure interval into accumulation intervals for the pixel sensors of the first color sensitivity spectrum and the pixel sensors of the second color sensitivity spectrum being identical to one another, but being different for the pixel sensors of the first color sensitivity spectrum from that for the pixel sensors of the second color sensitivity spectrum.
Another embodiment may have a computer program having a program code for performing the method of capturing an image with a multitude of pixel sensors, which method may have the following steps in capturing the image: controlling a first pixel sensor, so that in each one of a first number of non-overlapping first accumulation intervals which succeed each other in an essentially uninterrupted manner and together yield an exposure interval, said first pixel sensor detects one value in each case so as to achieve a number of values while, if the first number is larger than 1, summing the values so as to achieve a pixel value for the first pixel sensor, and controlling a second pixel sensor, so that in each of a second number of non-overlapping second accumulation intervals which succeed each other in an essentially uninterrupted manner and together yield the exposure interval, said second pixel sensor detects a value so as to achieve a number of values while, if the second number is larger than 1, summing the values so as to achieve a pixel value for the second pixel sensor, a subdivision of the exposure interval into the first accumulation intervals differing from a subdivision of the exposure interval into the second accumulation intervals, the multitude of pixel sensors including pixel sensors of a first color sensitivity spectrum and pixel sensors of a second color sensitivity spectrum, the first pixel sensor belonging to the pixel sensors of the first color sensitivity spectrum, and the second pixel sensor belonging to the pixel sensors of the second color sensitivity spectrum, the subdivision of the exposure interval into accumulation intervals for the pixel sensors of the first color sensitivity spectrum and the pixel sensors of the second color sensitivity spectrum being identical to one another, but being different for the pixel sensors of the first color sensitivity spectrum from that for the pixel sensors of the second color sensitivity spectrum, when the program runs on a computer.
A core idea of the present invention consists in that a better compromise may be achieved between the dynamic range, the spatial resolution, the implementation outlay and the image quality if—although each pixel effectively carries out exposure over the entire exposure interval—different subdivisions of said exposure interval into accumulation intervals are performed for different pixel sensors or pixels. In the case of more than one accumulation interval per exposure interval, the values detected in the accumulation intervals are summed in order to obtain the respective pixel value. Since the exposure effectively continues to take place for all pixels over the entire exposure interval, no impairment of the image quality arises, or no artifacts arise in image movements. All pixels undergo the same image blur on account of the movement. The additional hardware outlay compared with commercially available pixel sensors, such as CMOS sensors, for example, is either entirely non-existent or can be kept very small, depending on the implementation. Moreover, a reduction in the spatial resolution is not necessary since the pixels, in principle, contribute equally to the image capturing. In this manner, pixels which accumulate charges more slowly in response to the light to be absorbed because they are less sensitive to the light or because a smaller amount of light impinges on them may be controlled with a finer subdivision, and pixels for which the opposite is true may be controlled with a coarser subdivision, thereby increasing the dynamic range overall while maintaining the spatial resolution and the image quality and while requiring only little implementation outlay.
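The principle set forth above may be sketched in software. The following is a purely illustrative simulation, not the patented circuit; the full-well capacity, flux and time values are assumed for the sake of the example. It shows how subdividing the exposure interval into several accumulation intervals, each ended by a readout/reset and summed afterwards, avoids saturation while the effective exposure time stays identical for every pixel.

```python
# Illustrative sketch (assumed parameters): a pixel accumulates charge over
# n_intervals equal subintervals of the exposure interval; at the end of each
# subinterval the value is read out, the accumulator is reset, and the
# readout values are summed to yield the pixel value.

def capture_pixel(photon_flux, exposure_time, n_intervals, full_well=1000.0):
    """Sum of the readout values of n_intervals accumulation intervals."""
    sub_time = exposure_time / n_intervals
    total = 0.0
    for _ in range(n_intervals):
        # Charge of one accumulation interval, clipped at the full-well capacity.
        charge = min(photon_flux * sub_time, full_well)
        total += charge  # readout/reset: value added to the running sum
    return total

# A bright pixel that would saturate with a single accumulation interval:
flux, t_exp = 150.0, 10.0                             # 1500 units of light in total
single = capture_pixel(flux, t_exp, n_intervals=1)    # clips at the full well
split = capture_pixel(flux, t_exp, n_intervals=2)     # two readouts, no clipping
```

With one accumulation interval the pixel clips at 1000; with two, the full 1500 units are recorded, while both variants span the same exposure interval and therefore exhibit the same motional blurring.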
In accordance with an embodiment, the exposure interval subdivision is performed in dependence on the level of brightness of the image at the different pixel sensors, such that the brighter the image at the respective pixel sensor, the larger the number of accumulation intervals. The dynamic range thus increases even further, since brightly illuminated pixels are less likely to go into saturation, since the exposure interval is subdivided into the accumulation intervals. The subdivisions of the illumination intervals of the pixels or pixel sensors in dependence on the image may be determined individually for each pixel in accordance with a first embodiment. The accumulation interval subdivision is selected to be finer for pixel sensors or pixels in whose positions the image is brighter, and are selected to be less fine for the other pixel sensors, i.e., are selected to exhibit fewer accumulation intervals per exposure interval. The exposure interval subdivider, which is responsible for subdividing the exposure interval into the accumulation intervals, may determine the brightness at the respective pixel sensor from the shot of the preceding image, such as from the pixel value of the preceding image for the respective pixel sensor. Another possibility is for the exposure interval subdivider to currently observe the accumulated amount of light of the pixel sensors during the exposure interval and to end a current accumulation interval and to start a new one when the current accumulated amount of light of a respective pixel sensor exceeds a predetermined amount. The observation may be performed continually or intermittently, such as periodically at intervals that are equal in length and smaller than the exposure time period, and may include, for example, non-destructive readout of an accumulator of the respective pixel sensor.
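The second variant just mentioned, in which the exposure interval subdivider observes the accumulated amount of light and ends the current accumulation interval when a predetermined amount is exceeded, may be sketched as follows. The threshold value and the number of periodic observations are assumptions chosen only for illustration.

```python
# Hedged sketch: the accumulator is observed periodically (modeling a
# non-destructive readout); when the accumulated amount exceeds a
# predetermined threshold, the current accumulation interval is ended
# (readout/reset) and a new one is started.

def adaptive_accumulation(photon_flux, exposure_time, threshold=800.0, n_checks=10):
    """Return the summed pixel value and the number of accumulation
    intervals into which the exposure interval was effectively subdivided."""
    dt = exposure_time / n_checks
    accumulator, total, intervals = 0.0, 0.0, 1
    for _ in range(n_checks):
        accumulator += photon_flux * dt
        if accumulator > threshold:   # predetermined amount exceeded:
            total += accumulator      # end the current accumulation interval,
            accumulator = 0.0         # read out and reset,
            intervals += 1            # start a new accumulation interval
    total += accumulator              # final readout at the end of the exposure
    return total, intervals

dark_val, dark_n = adaptive_accumulation(photon_flux=20.0, exposure_time=10.0)
bright_val, bright_n = adaptive_accumulation(photon_flux=200.0, exposure_time=10.0)
```

The brighter the image at the respective pixel sensor, the more accumulation intervals result, in line with the embodiment described above; the dark pixel here keeps a single, uninterrupted accumulation interval.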
Instead of setting the exposure interval subdivision into the accumulation intervals for each pixel individually, provision may be made for the exposure interval subdivision into the accumulation intervals to be performed for different disjoint real subsets of the pixel sensors of the image sensor, said subsets corresponding to different color sensitivity spectra, for example. In this case it is also possible to use sensors in addition to the pixel sensors so as to perform image-dependent exposure interval subdivision. Alternatively, representative pixel sensors of the image sensor itself may be used. On the basis of the information thus obtained about the image or the scene, a color spectrum of the image is detected, and the exposure interval subdivision into the accumulation intervals is performed, depending thereon, for the individual pixel sensor groups, such as the individual color components of the image sensor.
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
3a to 3c show diagrams wherein the dynamic range at a color temperature T=2700 K for the red, green and blue color channels is represented together with a “safe range” for correct exposure of all of the color channels, specifically once for normal exposure with continuous exposure of the pixels, once for subdividing half of the exposure intervals into accumulation intervals for each primary color, and once for continuous exposure for blue pixels at a subdivision of half of the exposure intervals into accumulation intervals for red and green pixels;
Before several embodiments of the present application will be described below with reference to the figures, it shall be noted that identical elements which occur in several of said figures are provided with identical reference numerals and that repeated descriptions of said elements are avoided as much as possible, but that the descriptions of said elements with regard to one figure shall also apply to the other figures as long as no contradiction results from the specific descriptions of the respective figure.
In addition, it shall be noted that in the following, the description will initially relate to embodiments of the present application, according to which the exposure interval subdivision into accumulation (sub)intervals is performed, in a manner that is individual for each color, for different colors of an image sensor even though, as will be subsequently described, the present invention is not limited to this type of granularity of the exposure interval subdivision, but the exposure interval subdivision may also be determined with local granularity, e.g., it may be determined individually for each pixel or, for other local pixel groups, in dependence on the image. Illustration of the advantages of the present application with regard to the embodiments comprising exposure interval subdivision per color of an image sensor may also be readily transferred to the embodiments following same.
In order to make the advantages of image-dependent exposure interval subdivision into accumulation intervals easier to understand, the problems existing in color image sensors in connection with white balancing will initially be addressed briefly.
Digital cameras are used in a wide range of applications, and in many cases, scene illumination may vary widely. However, digital image sensors are fixed with regard to their spectral sensitivities.
For example, if white light of a specific color temperature T impinges upon a sensor, one will see, e.g., a different output in each of the color channels. The normalized output of a typical image sensor is shown in
Firstly, the color temperature of the illumination of the scene needs to be known in order to perform a correction. In the field of consumer photography, many algorithms are employed for automatically estimating the illumination color temperature. In high-end recording scenarios, such as moving pictures, the color temperature is a controlled parameter known to the camera operator.
Secondly, the data should be adapted. Different color spaces may be used for applying multiplicative correction.
However, the problem of unbalanced color response is more serious. As will be shown in the following, the safe range for correct exposure is very much smaller than the camera dynamic range. Even though there are elaborate algorithms for improving underexposed color images with the aid of correctly exposed grey-scale images [5], said methods are complex with regard to the computational expenditure and involve many exposures. If a specific image region is overexposed, even elaborate error elimination techniques can only attempt to guess the missing information. However, this, too, is complex in terms of computation, and there is no guarantee for success.
Rather, it would be more important for the exposure to be correct initially. The embodiments described below achieve this goal. In particular, it is possible to address the limited dynamic range—as is provoked, by way of example, specifically by unbalanced colors—during detection. A few possibilities were described in the introduction to the description of the present application. However, many of said possibilities result in artifacts if there is a scene movement. The embodiments described below, however, allow digital sensitivity adaptation for pixels and/or color channels without impairing the motional blurring.
The problem of dynamic range reduction due to unbalanced color sensitivity will be explained in some more detail below.
The dynamic range of image sensors is limited. In particular, the dynamic range is limited from above, specifically by clipping, and from below, specifically when the signal is swallowed up by the noise. For example, reference shall be made to
As may be seen, each color channel exhibits a different maximum intensity. The vertical dotted lines on the right show the maximum intensity for each color channel. Above said intensity, no image information can be detected, and for a correctly exposed image, the scene intensities have to remain below it.
The lower limit of the dynamic range is defined by the noise floor. The dashed horizontal line shows the standard deviation σ of the noise in the dark. The lower limit of the dynamic range is at a signal/noise ratio of 1. The dotted vertical lines on the left show the minimum intensities. Any information below said threshold will not be visible in the image but will be swallowed up by the noise.
The resulting dynamic range limits are summarized in
In the field of imaging one is interested in producing images with all three color channels at the same time. A safe range for exposure will then be that intensity range for which all of the color channels provide a valid image, i.e., an image wherein all of the pixels are correctly exposed, i.e., wherein the intensity is within the limits explained above. If there is a mismatch between the color channels, the exposure will have to be restricted to a dynamic range wherein all of the color channels produce valid images. This “safety range” is shown on the right-hand side of
The above examples are represented for a source of white light having a correlated color temperature T=2700 K. For other light sources, a different ratio of the output signals of the color channels would result.
An image which is normally captured at these color temperatures, i.e., with a continuous exposure time which is the same for all colors, shows a significant color cast. A typical white balancing operation might compensate for this by multiplying the pixel values. This multiplication corresponds to a vertical shift in the color channels in
The full dynamic range of a camera might be maintained if all of the color channels responded to white light with the same sensitivity. An image sensor might be specifically designed to provide a balanced output for a specific color temperature. Balancing for typical daylight recording conditions at T=5600 K is common.
In the field of analog film and photography, white balancing is sometimes achieved with optical filters. For example, a scene illuminated with a tungsten filament, or even the light source itself, may be filtered with a color conversion filter. The camera will then see a different white balance. Said filters are still in use nowadays for high-end digital imaging. However, optical filters are a sensitive and expensive part of camera equipment. Additionally, filters reduce the amount of light for all of the color channels, so that the overall sensitivity is also reduced.
In order to produce balanced exposure, it would also be possible, of course, to individually set the exposure time periods of the color channels, i.e., to use different exposure intervals for the individual colors. Blurring effects, however, would then be different for the different colors, which again represents an image deterioration.
For the reasons set forth above, the following considerations result in embodiments of the present invention. In order to avoid different image properties, or different blurring in the individual pixel colors, the effective exposure time period should be the same for all colors. However, since the different color pixels, or the pixels of different colors, go into saturation at different speeds, namely in dependence on their sensitivity and the hue of the scene being captured, the exposure interval is subdivided differently for the different colors of the image sensor, e.g. into different numbers of accumulation intervals, in accordance with an embodiment of the present invention, at the ends of which readout values are read out in a respective uninterrupted readout/reset process and are finally summed to yield the pixel value. Thus, the image properties remain the same since the effective exposure time period is the same for all color pixels. However, each color may be exposed in an optimum manner, specifically to the effect that no overexposure occurs.
The decision regarding the exposure interval subdivision per color may—but need not—be made as a function of the image and/or scene, so that the dynamic range expansion may be achieved independently of the scene and/or of the image and its illumination and/or color cast. However, an improvement may also be obtained with a fixed setting of the exposure interval subdivision. For example, differences in sensitivity of the individual color pixels may be compensated for by different exposure interval subdivisions such that the dynamic range wherein all of the color pixels of a simultaneous image capturing are correctly illuminated is enlarged overall.
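A fixed, color-dependent setting as just described may be sketched as follows. The sensitivity figures and interval counts below are assumed values chosen only to illustrate the compensation effect; they are not taken from the embodiments. Every color shares the same exposure interval, so the motional blurring is identical, but the more sensitive color is subdivided more finely.

```python
# Illustrative sketch (assumed sensitivities and interval counts): a fixed
# per-color exposure interval subdivision compensates for differences in
# sensitivity while all colors are exposed over the same exposure interval.

FULL_WELL = 1000.0   # assumed saturation limit per accumulation interval
T_EXP = 10.0         # shared exposure interval for all colors

SENSITIVITY = {"red": 0.8, "green": 1.6, "blue": 0.6}   # assumed values
N_INTERVALS = {"red": 1, "green": 2, "blue": 1}         # finer for sensitive green

def pixel_value(color, scene_intensity):
    """Summed pixel value for one color pixel under the fixed subdivision."""
    sub_time = T_EXP / N_INTERVALS[color]
    charge_per_sub = min(SENSITIVITY[color] * scene_intensity * sub_time, FULL_WELL)
    return charge_per_sub * N_INTERVALS[color]
```

Under white light of intensity 100, a continuously exposed green pixel would clip at the full-well capacity of 1000; with two accumulation intervals it records the full 1600 units, so all three colors remain within their valid range simultaneously and the safe range for exposure is enlarged.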
To illustrate this, please refer to
A more pronounced dynamic range gain, however, results in the case of
The dynamic gain that has just been described may even be increased if the exposure interval subdivision is performed as a function of the image and/or scene.
The dynamic gain that has just been described may be increased even further if, in addition to the dependence on the image and/or scene, the granularity of the setting of the exposure interval subdivision is made dependent on the location, i.e., if the disjoint sets of pixels (these being the units in which the exposure interval subdivision may be adjusted) are separated not only in accordance with their color association but also in accordance with the lateral location within the surface of the pixel sensors of the image sensor. Specifically, if an exposure interval subdivision across the image is locally varied for pixels of the same sensitivity spectrum and/or the same color, depending on whether or not the respective part of the image sensor is brightly illuminated, the image-dependent exposure interval subdivision may even compensate for large image contrasts in that the dynamic range of respective pixels is shifted to where the amount of light is currently found at the corresponding location of the image sensor (cf.
Now that the advantages of embodiments of the present invention have been set forth and explained, embodiments of the present invention will be described in more detail below.
The image sensor 10 is configured to capture an image specifically such that, during capturing of the image, each pixel sensor 12 effectively performs exposure over a shared exposure interval, but different exposure interval subdivisions into accumulation subintervals are used among the pixel sensors 12. To illustrate this in more detail, the pixel sensors 12 are indicated as being numbered, by way of example, in
By way of example,
In other words,
It is only by way of example that the representation of
The image sensor 10 may be configured such that the exposure interval subdivision into one, two or more accumulation subintervals is fixedly set for all pixel sensors 12 and is set to differ at least for two real subsets of pixel sensors. As was described above, a different exposure interval subdivision may be employed, e.g., for the more light-sensitive pixel sensors 12 of a first color sensitivity spectrum, such as the green pixels, than for pixel sensors 12 of a different color sensitivity spectrum, such as the red and/or blue pixels. In this case, for example, the exposure interval subdivision may be selected to be finer for those pixel sensors for which a reduction in sensitivity is desired, a finer exposure interval subdivision leading to a larger number of accumulation subintervals. As was explained above with reference to
Instead of a presetting, it is also possible for the image sensor 10 to comprise an exposure interval subdivider 26 configured to perform, or set, the subdivisions of the exposure interval 18 into the accumulation subintervals. The exposure interval subdivider 26 may comprise, e.g., a user interface which allows a user to control or at least influence the exposure interval subdivision. Preferably, the exposure interval subdivider is configured to be able to change the fineness of the exposure interval subdivision of different pixels relative to one another, such as the ratio of the number of accumulation subintervals per exposure interval 18, for example. It would be possible, for example, for the exposure interval subdivider 26 to comprise an operating element for a user on which the user may input a color temperature used for illuminating a scene. For very low color temperatures, for example, provision may be made for the exposure interval subdivision to be performed and/or set to be finer for the red and green pixel sensors than for the blue pixel sensors, and in the case of a high color temperature, the exposure interval subdivision might be set to be finer for the blue and green pixel sensors than for the red pixel sensors.
Alternatively or additionally to providing a user influence on the exposure interval subdivision, provision may be made for the exposure interval subdivider 26 to be configured to perform the exposure interval subdivision for the pixel sensors 12 in dependence on the image or scene. For example, the exposure interval subdivider 26 might set the ratio of the exposure interval subdivision fineness among the differently colored pixel sensors automatically in dependence on a color cast, or hue, of the image to be captured or of the scene to be captured in which the image is to be captured. An embodiment will be explained later on this score. The exposure interval subdivider might obtain information about a scene color cast from dedicated color sensors or from a shot of a preceding image.
Moreover, the image sensor 10 might be configured such that the exposure interval subdivider is able to differently set, during a shot, exposure interval subdivisions of pixel sensors of equal color or color sensitivity spectrum which are arranged at laterally different positions. In particular, the exposure interval subdivider might therefore be configured to perform the subdivision of the exposure interval 18 into the accumulation subintervals in dependence on the brightness of the image at the positions corresponding to the pixel sensors 12, so that the number of accumulation subintervals increases as the brightness of the image at the corresponding position increases. The exposure interval subdivider 26, in turn, might predicate the brightness at the corresponding pixel positions from previous image pick-ups or, as will be explained below, it might determine and/or estimate it by observing the current accumulation state of the respective pixel sensors 12. Local granularity in which the exposure interval subdivider 26 performs the local exposure interval subdivision might be pixel-wise, superpixel-wise or, naturally, even coarser than single pixel or single superpixel granularity.
The above mentioned readout/reset operations in
According to the embodiment of
The final pixel value i for the current frame or the current shot is then obtained in the image sensor 10 by summing the individual readout values if there are several of them, so that the following is true, for example, for the pixel values of the pixel sensors 12 of the image sensor 10:

i = I_1 + I_2 + . . . + I_N,

where I_n is the readout value of the n-th accumulation interval and N is the number of accumulation intervals within an exposure interval. As can be seen, the summation may be missing if there is only one accumulation interval present, such as with pixel No. 2 in
The sum might be weighted. For example, the image sensor 10 might be configured such that the pixel values of pixel sensors 12 of a first color sensitivity spectrum and/or of a first color are weighted with a first factor a_color
The correction might ensure white balancing, i.e., it might balance out the inherent imbalance in the sensitivity of the differently colored pixel sensors when assuming a specific white light temperature. As has already been mentioned, however, there might also be other differences between the pixels, such as differences in the size of the light-sensitive surface area, for which a different factor a_group
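The summation and weighting just described may be sketched as follows. This is a minimal illustration only; the function and parameter names are assumptions, not part of the embodiments.

```python
def final_pixel_value(readouts, a_color=1.0, a_group=1.0):
    """Form the final pixel value i for one frame from the values
    I_1..I_N read out at the ends of the pixel's accumulation
    subintervals, then apply optional weights: a per-color factor
    a_color (e.g. for white balancing) and a per-group factor a_group
    (e.g. compensating different light-sensitive surface areas).

    readouts: list [I_1, ..., I_N]; with a single accumulation
        interval (N = 1) the sum degenerates to that one readout value.
    """
    i = sum(readouts)  # i = I_1 + I_2 + ... + I_N
    return a_color * a_group * i

# Pixel read out three times during the exposure interval:
assert final_pixel_value([40, 35, 25]) == 100
# Red pixel with a white-balance weight of 1.5:
assert final_pixel_value([40, 35, 25], a_color=1.5) == 150.0
```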
In the embodiment of
It is therefore possible, in the embodiment of
Two embodiments of an image sensor will be described below with reference to
In accordance with
The output of the readout unit 40 is adjoined by an adder 42, which exhibits a further output and a further input between which an intermediate storage (latch) 44 is connected. By means of this connection, the value read out by the readout unit 40 is added to the sum of the values of the same pixel which were previously read out in the same exposure interval. At the end of an exposure interval, the summed value of the readout values of the respective pixel sensor 12 is thus present at the output of the adder 42. A weighting unit 46 may optionally adjoin the output of the adder 42 and may perform, e.g., the above-mentioned color-dependent weighting of the pixel value, so that the pixel value of the pixel considered would be present in a weighted manner at the output of the optional weighting unit 46.
The image sensor 10″ further includes an exposure interval subdivider 26 which sets, for the pixel sensor considered in
Now that the architecture of the image sensor of
In the event that the exposure interval subdivision is individually set for each pixel, the exposure interval subdivider 26 may comprise one comparator per pixel sensor 12, for example.
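The per-pixel comparator role of the exposure interval subdivider 26 may be sketched as follows. This is an illustration only; the threshold value and all names are assumptions.

```python
def should_read_out_and_reset(accumulated, full_well, threshold=0.5):
    """Compare a pixel's current accumulation state against a threshold
    at a possible readout instant: the pixel is read out and reset only
    if it is close enough to saturation that it would likely overflow
    before the next readout opportunity.

    Triggering at, e.g., half the full-well capacity ensures the pixel
    cannot overflow during a following subinterval of equal length,
    since at most the same amount of charge would be added again.
    """
    return accumulated >= threshold * full_well

assert should_read_out_and_reset(0.6, 1.0) is True   # near saturation: reset
assert should_read_out_and_reset(0.3, 1.0) is False  # dark pixel: keep accumulating
```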
It shall be pointed out that the read-out values at the output of the readout unit 40 advantageously have a linear relationship with the amount of light impinging on the light-sensitive surface of the corresponding pixel sensor in the corresponding accumulation interval. It is possible that for linearization, a correction of the otherwise non-linear readout values is performed, such as by a linearizer (not shown) which is placed between the output of the readout unit 40 and the adder 42 and which applies, e.g., a corresponding linearization curve to the values and maps the latter to the linearized values in accordance with said curve. Alternatively, linearization may take place inherently in the readout process, such as in a digitization. An analog circuit might also be used. Additionally, a compensation of the dark current might be provided in the event of exposure times of different lengths, specifically even prior to the actual accumulation. Resetting of the accumulator may also be performed differently than by means of complete discharge, as was mentioned above. Rather, said resetting may be performed, in accordance with the skimming gate technique, such that during readout, only part of the accumulated charge is ever skimmed off and/or converted to voltage and another part remains within the accumulator.
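The linearization by means of a correction curve mentioned above may be sketched as follows. This is a minimal illustration only; the calibration points and all names are assumptions, and a real linearizer might use a hardware lookup table instead.

```python
import bisect

def linearize(raw, curve):
    """Map a raw, non-linear readout value to a linearized value before
    it enters the adder 42, via a calibration curve.

    curve: list of (raw_value, linear_value) pairs, sorted by raw_value.
    Values between calibration points are linearly interpolated.
    """
    xs = [p[0] for p in curve]
    ys = [p[1] for p in curve]
    if raw <= xs[0]:
        return ys[0]
    if raw >= xs[-1]:
        return ys[-1]
    k = bisect.bisect_right(xs, raw)          # first point above raw
    x0, x1 = xs[k - 1], xs[k]
    y0, y1 = ys[k - 1], ys[k]
    return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)

# A compressive sensor response is expanded back to a linear scale:
curve = [(0, 0), (100, 150), (180, 300)]
assert linearize(50, curve) == 75.0
assert linearize(140, curve) == 225.0
```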
Thus, the above embodiments show a possibility of adapting the dynamic range of an image sensor for individual pixels or pixel groups. In accordance with specific embodiments it is possible, for example, for the exposure interval subdivision into accumulation subintervals to be set more finely for red image points if the scene is illuminated with an incandescent lamp. Specifically, there will be markedly more red within the scene as a result, and the red channel will probably be the first to go into saturation. However, a finer subdivision of the exposure interval leads, as was described with reference to
The above embodiments may use a sensor which may reset individual image points and thus start their exposure, whereas other image points or pixel sensors continue exposure. A controller, which may be arranged within the image sensor or sensor chip or may be arranged externally, may decide, depending on the brightness determined from past images or on the current brightness, which image points are reset. Thus, the system might control itself. As was described above, the exposure interval subdivision might then be performed at each readout point in time in such a manner that each individual image point will not overflow in the next time segment.
Returning once again to the example of
Since, in accordance with the above embodiments, the exposure at each image point, or pixel, takes into account information about changes in intensity along the entire exposure time and/or exposure interval, the images produced exhibit no artifacts caused by the exposure interval subdivision and/or the sampling, and both bright and dark image areas exhibit the same amount of motional blurring. Thus, in the case of a sequence of image shots, the HDR sequences captured are also suitable for high-quality image shots.
In addition, by means of the above embodiments, the complete spatial resolution of the respective image sensor may be exploited. No pixels are reserved for brightness levels which might not occur at the current scene brightness. The minimum sensitivity of an image sensor need not be changed since no expensive circuits need to be accommodated in each of the pixels.
With regard to the above embodiments it shall further be pointed out that it is readily possible to implement the above image sensors with a single CMOS sensor and, optionally, with suitable optics. Additional mechanics and/or optics are not necessarily required.
In particular, in the case of
With regard to the above embodiments it shall further be pointed out that the decision about resetting an image point may be made either directly in the readout circuits of the sensor or following digitization of the image. The decision may be made on the basis of the intensity of the previous image, as was described above, or, as was also described above, an adaptation to the current intensity of each individual image point may be made. If a bright object moves in front of the image point during shooting, a short readout interval will not make sense before this point in time.
The above embodiments therefore offer the possibility of providing a camera which might have a very high image repetition rate (frame rate), specifically a camera with an extended dynamic range. In particular, there is the possibility, with the above embodiments, of obtaining cameras for shooting films with a large dynamic range, high resolution and very high image quality. Particularly with large-area projection, such as in the cinema, recording of the correct motional blurring is an important element, and the above embodiments allow achieving this goal.
Even though some aspects have been described within the context of a device, it is understood that said aspects also represent a description of the corresponding method, so that a block or a structural component of a device is also to be understood as a corresponding method step or as a feature of a method step. By analogy therewith, aspects that have been described in connection with or as a method step also represent a description of a corresponding block or detail or feature of a corresponding device.
Depending on specific implementation requirements, embodiments of the invention may be implemented in hardware or in software. Implementation may be effected while using a digital storage medium, for example a floppy disc, a DVD, a Blu-ray disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, a hard disc or any other magnetic or optical memory which has electronically readable control signals stored thereon which may cooperate, or actually do cooperate, with a programmable computer system such that the respective method is performed. This is why the digital storage medium may be computer-readable. Some embodiments in accordance with the invention thus comprise a data carrier which comprises electronically readable control signals that are capable of cooperating with a programmable computer system such that any of the methods described herein is performed.
Generally, embodiments of the present invention may be implemented as a computer program product having a program code, the program code being effective to perform any of the methods when the computer program product runs on a computer. The program code may also be stored on a machine-readable carrier, for example.
Other embodiments include the computer program for performing any of the methods described herein, said computer program being stored on a machine-readable carrier.
In other words, an embodiment of the inventive method thus is a computer program which has a program code for performing any of the methods described herein, when the computer program runs on a computer. A further embodiment of the inventive methods thus is a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program for performing any of the methods described herein is recorded.
A further embodiment of the inventive method thus is a data stream or a sequence of signals representing the computer program for performing any of the methods described herein. The data stream or the sequence of signals may be configured, for example, to be transferred via a data communication link, for example via the internet.
A further embodiment includes a processing means, for example a computer or a programmable logic device, configured or adapted to perform any of the methods described herein.
A further embodiment includes a computer on which the computer program for performing any of the methods described herein is installed.
In some embodiments, a programmable logic device (for example a field-programmable gate array, an FPGA) may be used for performing some or all of the functionalities of the methods described herein. In some embodiments, a field-programmable gate array may cooperate with a microprocessor to perform any of the methods described herein. Generally, the methods are performed, in some embodiments, by any hardware device. Said hardware device may be any universally applicable hardware such as a computer processor (CPU), or may be a hardware specific to the method, such as an ASIC.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
102010028746.6 | May 2010 | DE | national |
This application is a continuation of copending International Application No. PCT/EP2011/057143, filed May 4, 2011, which is incorporated herein by reference in its entirety, and additionally claims priority from German Application No. 102010028746.6-31, filed May 7, 2010, which is also incorporated herein by reference in its entirety.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2011/057143 | May 2011 | US |
Child | 13669163 | US |