Most lenses produce images that are brighter at the center than at the edges. This phenomenon is known as light fall-off or vignetting. Light fall-off is especially pronounced with wide-angle lenses, certain long telephoto lenses, and many lower-quality lenses. Such lower-quality lenses are often used in devices such as mobile phones, because employing higher-quality lenses would raise the costs of such devices to commercially infeasible levels.
Light fall-off can be mitigated through compensation techniques. Accordingly, effective fall-off compensation techniques are needed, particularly ones that do not substantially increase device cost, power consumption, or complexity.
Various embodiments may be generally directed to fall-off compensation techniques. For example, in one embodiment, a coefficient determination module determines a fall-off correction coefficient for a pixel of an image sensor, and a fall-off correction module corrects the pixel based on an intensity value of the pixel and the fall-off correction coefficient. The fall-off correction coefficient may be based on one or more stored coefficient values, where the one or more coefficient values correspond to a squared distance between the pixel and a center position of the image sensor. In this manner, improvements in computational efficiency may be achieved. Also, reductions in power consumption, implementation complexity, and area may be attained. Other embodiments may be described and claimed.
Various embodiments may comprise one or more elements. An element may comprise any structure arranged to perform certain operations. Each element may be implemented as hardware, software, or any combination thereof, as desired for a given set of design parameters or performance constraints. Although an embodiment may be described with a limited number of elements in a certain topology by way of example, the embodiment may include more or fewer elements in alternate topologies as desired for a given implementation. It is worthy to note that any reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
In particular, optics assembly 102 may include one or more optical devices (e.g., lenses, mirrors, etc.) to project an image within a field of view onto multiple sensor elements within image sensor 104.

For instance, image sensor 104 may include an array of sensor elements (not shown). These elements may be complementary metal oxide semiconductor (CMOS) sensors, charge coupled devices (CCDs), or other suitable sensor element types. These elements may generate analog intensity signals (e.g., voltages) corresponding to light incident upon the sensor. In addition, image sensor 104 may also include analog-to-digital converters (ADCs) that convert the analog intensity signals into digitally encoded intensity values. The embodiments, however, are not limited to this example.
Thus, image sensor 104 converts light received through optics assembly 102 into pixel values. Each of these pixel values represents a particular light intensity at the corresponding sensor element. Although these pixel values have been described as digital, they may alternatively be analog.
Image sensor 104 may have various adjustable settings. For instance, its sensor elements may have one or more gain settings that quantitatively control the conversion of light into electrical signals. In addition, ADCs of image sensor 104 may have one or more integration times, which control the duration over which sensor element output signals are accumulated. Such settings may be adapted based on environmental factors, such as ambient lighting. Further, optics assembly 102 and image sensor 104 may together have one or more settings. One such setting is the distance between one or more lenses of optics assembly 102 and a sensor plane of image sensor 104. Effective focal length is an example of such a distance.
In addition, coefficient determination module 108 determines fall-off coefficients for pixels within image sensor 104. In particular, coefficient determination module 108 may determine fall-off coefficients based on squared distances and one or more stored coefficient values. These stored values may be arranged in various ways, such as in one or more look-up tables (LUTs). Such LUT(s) may store multiple coefficient values, each having an address based on a squared distance from a center position of image sensor 104. Moreover, these squared distances may be separated by substantially equal intervals.
To reduce storage requirements and/or hardware complexity, such LUT(s) may have fewer entries than needed to cover every possible squared distance associated with image sensor 104. Accordingly, for a particular pixel, coefficient determination module 108 may access two LUT entries corresponding to the closest higher squared distance and the closest lower squared distance. From these two entries, coefficient determination module 108 may employ various interpolation techniques to produce a correction coefficient for the particular pixel.
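By way of illustration, the following Python sketch shows one way such an interpolation might be realized. It is a minimal sketch, not the claimed implementation; the names (coefficient_from_lut, interval) and the use of linear interpolation between the two bracketing entries are illustrative assumptions.

```python
def coefficient_from_lut(r2, lut, interval):
    """Interpolate a fall-off correction coefficient for squared distance r2.

    lut[k] is assumed to hold the coefficient for squared distance
    k * interval, i.e., entries spaced at equal squared-distance intervals.
    """
    k = r2 // interval                      # index of the closest lower entry
    if k >= len(lut) - 1:
        return lut[-1]                      # clamp at the table's last entry
    frac = (r2 - k * interval) / interval   # fractional position between entries
    return lut[k] + frac * (lut[k + 1] - lut[k])  # linear interpolation
```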
In addition, coefficient determination module 108 may scale correction coefficients based on various settings. One such setting is the distance (e.g., effective focal length) between optics assembly 102 and image sensor 104.
Fall-off correction module 110 then corrects each pixel based on the pixel's intensity value and the determined fall-off correction coefficient.
Accordingly, modules 108 and 110 may provide for effective fall-off correction. For instance, by basing coefficient determination on squared distances and stored coefficient values as described herein, computational efficiencies may be increased while implementation complexities may be decreased.
Apparatus 100 may be implemented in various devices, such as a handheld apparatus or an embedded system. Examples of such devices include mobile wireless phones, Voice over IP (VoIP) phones, personal computers (PCs), personal digital assistants (PDAs), and digital cameras. In addition, this apparatus may also be implemented in landline-based video phones employing standard public switched telephone network (PSTN) phone lines, integrated services digital network (ISDN) phone lines, and/or packet networks (e.g., local area networks (LANs), the Internet, etc.).
The description now turns to a quantitative discussion of fall-off correction features. As described above, light fall-off is a phenomenon in which images are brighter at their center than at their edges. Light fall-off may be compensated with a gain factor having an inverse relationship to the fall-off amount. Fall-off may be characterized by the fall-off ratios of the measured median pixel values in each color plane, taken relative to the maximum measured value of the respective color or image plane. Equation (1), below, expresses the fall-off ratio at each sampling point (i, j) in a color plane c as xc(i, j).
xc(i, j) = Qc(i, j) / Qcmax  (1)
In Equation (1), Qc(i,j) is the median pixel value measured at sampling point (i, j) of color plane c and Qcmax is the maximum median pixel value measured in the same color or image plane.
The compensation factor for each pixel may be computed by using the corresponding fall-off ratio obtained above. Equation (2), below, expresses a fall-off compensation factor, Sc(i,j), at a sampling point, (i, j).
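Sc(i, j) = (1/xc(i, j))^w  (2)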
In Equation (2), w is a shaping factor that controls the extent of fall-off compensation and avoids over-boosting the image noise while approaching the image boundary.
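As a minimal sketch, assuming the power-of-inverse-ratio form of Equation (2), the per-sampling-point factors for one color plane might be computed as follows; the value w = 0.75 is purely illustrative.

```python
import numpy as np

def compensation_factors(median_pixels, w=0.75):
    """median_pixels: 2-D array of median values Qc(i, j) for one color plane.

    Returns Sc(i, j) = (1 / xc(i, j)) ** w, where xc = Qc / max(Qc);
    a shaping factor w < 1 tempers the gain near the image boundary.
    """
    x = median_pixels / median_pixels.max()   # fall-off ratios xc, Equation (1)
    return (1.0 / x) ** w                     # compensation factors, Equation (2)
```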
In addition to being expressed with respect to individual sampling points, fall-off ratios may be expressed for an entire color or image plane. More particularly, fall-off ratio may be expressed as a function of the radial distance from the center of a lens. Radial distance from the lens' center to a sampling point at pixel (i, j) may be calculated relative to a location (ic, jc), which is the location of the pixel at the center of the sensor array. This calculation is expressed below in Equation (3).
r(i, j) = √((i − ic)² + (j − jc)²)  (3)
Correction coefficient curves often follow the form of cos⁴θ, in which θ is the angle between the lens' optical axis and a line joining a point on the sensor array to the lens center. A relationship exists between r and θ. This relationship may be expressed for a range of r from zero to D/2, where D is the diagonal length of the sensor. Equation (4), below, provides the relationship of θ to r.
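θ(r) = tan⁻¹((2r·tan(θv/2))/D)  (4)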
In Equation (4), θv represents the angle of view for the image sensor and lens arrangement. An exemplary value of θv is 60 degrees. However, other values may be employed. For a range of θ from about −45 degrees to about 45 degrees, there is an approximately linear mapping between θ and r.
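The following sketch generates an ideal cos⁴ correction curve from this relationship. It is illustrative only and assumes the tan-based mapping of Equation (4); correction_curve and its parameters are not taken from any particular implementation.

```python
import math

def correction_curve(D, theta_v_deg=60.0):
    """Ideal cos^4 correction coefficients for radial distances 0..D/2.

    D is the sensor diagonal in pixels; theta_v_deg is the angle of view.
    """
    half_view = math.radians(theta_v_deg) / 2.0
    curve = []
    for r in range(D // 2 + 1):
        theta = math.atan((2.0 * r * math.tan(half_view)) / D)  # Equation (4)
        curve.append(1.0 / math.cos(theta) ** 4)  # inverse of the cos^4 fall-off
    return curve
```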
As expressed above in Equation (3), determining r involves calculating a square root. Unfortunately, this calculation is computationally expensive in both hardware and software. Therefore, coefficient determination module 108 may advantageously provide techniques that base the determination of compensation coefficients on squared distances (i.e., on r²). Based on Equation (3) above, squared distance is expressed below in Equation (5).
r²(i, j) = (i − ic)² + (j − jc)²  (5)
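In code, the per-pixel computation then reduces to integer additions and multiplications; a minimal sketch:

```python
def squared_distance(i, j, ic, jc):
    """Equation (5): integer adds and multiplies only; no square root."""
    return (i - ic) ** 2 + (j - jc) ** 2
```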
Fall-off correction implementations may employ look-up tables (LUTs) to access correction coefficients for a particular pixel. For instance, correction coefficients may be stored as discrete points along a correction coefficient curve, indexed by radial distance.
One fall-off correction approach stores every discrete point of this curve (i.e., a point for each occurring radial distance) in an LUT. This requires the LUT to have N entries, where N is the maximum radial distance.
This can require a large amount of storage. For instance, a Quad Super Extended Graphics Array (QSXGA) image has 2560 by 2048 pixels (constituting approximately 5.2 megapixels) and an aspect ratio of 5:4. Thus, an LUT for QSXGA images would require N to be approximately 1640 (the radial distance from the center of the array to a corner). This magnitude of LUT entries can be problematic. For example, in a hardware (e.g., integrated circuit) implementation, excessive on-die resources may need to be utilized. Similarly, in software implementations, such an LUT may impose excessive memory allocation requirements.
To reduce on-die resource usage and/or memory requirements, a lower number of LUT entries may be used in combination with an interpolation scheme. More particularly, a correction coefficient curve may be sub-sampled at a constant rate, and linear interpolation may be performed between two consecutive sub-sampled points. A drawback of this approach is that substantial interpolation inaccuracies may occur in regions of the curve having a high gradient.
Coefficient determination module 108 may reduce such interpolation error by increasing the sampling frequency as the gradient increases. This may involve transforming the coefficient curve so that it is a function of r².
Although linear (equal-interval) sampling is applied to the transformed curve in the r² domain, this corresponds to non-linear sampling of the original curve in the r domain. Because equal steps in r² map to progressively smaller steps in r as r grows, sampling density increases toward the sensor edges, where the curve's gradient is highest.
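A brief sketch of this table construction follows; it assumes the correction_curve() helper above and illustrative names. Note that the square root appears only at table-build time, never in the per-pixel path.

```python
import math

def build_r2_lut(curve, num_entries):
    """Sub-sample a correction curve at equal intervals of squared distance.

    curve[r] holds the coefficient at radial distance r. Equal steps in
    r**2 land progressively closer together in r, so the samples become
    denser toward the sensor edges.
    """
    max_r2 = (len(curve) - 1) ** 2
    interval = max(1, max_r2 // (num_entries - 1))   # equal spacing in r**2
    lut = []
    for k in range(num_entries):
        r = math.isqrt(k * interval)                 # offline square root
        lut.append(curve[min(r, len(curve) - 1)])
    return lut, interval
```

For example, build_r2_lut(correction_curve(3278), 256) would produce a 256-entry table and the squared-distance interval suitable for coefficient_from_lut() above.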
Pixel buffer unit 602 receives a plurality of pixel values 630 that may correspond to an image, field, or frame. These pixel values may be received from a pixel source, such as image sensor 104. Accordingly, pixel values 630 may be received in a signal stream, such as signal stream 122. Upon receipt, pixel buffer unit 602 stores these values for fall-off correction processing. Accordingly, pixel buffer unit 602 may include a storage medium, such as memory. Examples of storage media are provided below.
Pixel buffer unit 602 may output the pixel values along with their corresponding positions. For instance, pixel buffer unit 602 may output a pixel value 634 together with corresponding pixel coordinates 632a and 632b.
As described above, squared distance determination module 604 determines squared distances between pixels and an image sensor center position.
Pixel coordinates 632a and 632b are received from pixel buffer unit 602. Center coordinates 624a and 624b may be stored by implementation 600, for example, in memory. Such coordinate information may be predetermined. Alternatively, such coordinate information may be received from an image sensor. For example, pixel and center coordinates may be received from image sensor 104 in sensor information 124. However, the embodiments are not limited in this context.
Upon receipt of squared distance value 636, coefficient generation module 606 generates or determines a fall-off correction coefficient for the pixel value 634. As described above, this may involve one or more stored coefficient values as well as interpolation techniques. Accordingly, coefficient generation module 606 outputs a correction coefficient 638.
Scaling module 608 receives correction coefficient 638 and may scale it based on sensor configuration information 626. This information may include, for example, a distance, such as an effective focal length, between an optics assembly and a sensor plane of an image sensor. Configuration information 626 may be received in various ways. For instance, it may be received from an image sensor, such as in sensor information 124.
When scaling according to effective focal length, scaling module 608 may increase fall-off coefficient 638 when the effective focal length increases. Alternatively, scaling module 608 may decrease fall-off coefficient 638 when the effective focal length decreases. Such scaling may be performed through the use of a multiplicative scaling coefficient. Such coefficients may be selected from a mapping of focal lengths to scaling coefficients. However, the embodiments are not limited in this context. Indeed, in some embodiments, scaling may be omitted altogether.
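One possible sketch of such a mapping follows; the table values and the nearest-neighbor selection are hypothetical, and a real mapping would be calibrated for a specific lens and sensor combination.

```python
# Hypothetical focal-length-to-scaling-coefficient mapping (values invented
# for illustration; calibrate per lens/sensor in practice).
SCALE_BY_FOCAL_LENGTH_MM = {4.0: 1.00, 4.5: 1.05, 5.0: 1.10}

def scale_coefficient(coeff, focal_length_mm):
    """Scale a correction coefficient using the nearest tabulated focal length."""
    nearest = min(SCALE_BY_FOCAL_LENGTH_MM,
                  key=lambda f: abs(f - focal_length_mm))
    return coeff * SCALE_BY_FOCAL_LENGTH_MM[nearest]
```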
The resulting (scaled or unscaled) correction coefficient may then be multiplied with the corresponding pixel value to produce a corrected pixel value.
Coefficient generation module 606 may be implemented in various ways. Exemplary implementations are described below.
In one exemplary implementation, a received squared distance value is split into a coarse value 721 and a residual value 722.

Coarse value 721 is used for table look-up, while residual value 722 is used for interpolation. Accordingly, coarse value 721 may address entries in an LUT (e.g., LUT 704), and residual value 722 may be used to interpolate between adjacent entries.
Thus, combining node 712 produces a correction coefficient 732, which is expressed below in Equation (6), where m denotes coarse value 721, d denotes residual value 722, Δ denotes the sub-sampling interval, and LUT(·) denotes a stored coefficient value.

c(m, d) = LUT(m) + (d/Δ)·(LUT(m + 1) − LUT(m))  (6)
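A fixed-point reading of this arrangement is sketched below, assuming integer LUT entries and a power-of-two sub-sampling interval so that the coarse/residual split reduces to shifts and masks; RESIDUAL_BITS and the bit widths are illustrative.

```python
RESIDUAL_BITS = 6   # assumed sub-sampling interval of 2**6 squared-distance units

def interpolate(lut, r2):
    """Fixed-point realization of Equation (6) with integer LUT entries."""
    coarse = r2 >> RESIDUAL_BITS                 # coarse value 721: LUT index
    residual = r2 & ((1 << RESIDUAL_BITS) - 1)   # residual value 722
    if coarse >= len(lut) - 1:
        return lut[-1]                           # clamp at the table's end
    delta = lut[coarse + 1] - lut[coarse]
    return lut[coarse] + ((delta * residual) >> RESIDUAL_BITS)  # combining node 712
```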
In addition, the techniques described herein may reduce power consumption. This advantageously increases battery life for devices such as cameras, portable phones, and personal digital assistants (PDAs). Also, for hardware implementations, complexity and required area are reduced.
More particularly, the techniques described herein may provide advantages over grid-based implementations, in which compensation coefficients for grid points are stored in an LUT and correction factors for individual points are calculated using bi-cubic or bi-linear interpolation algorithms. Such algorithms require further set(s) of LUTs and much larger hardware and/or control logic to arrive at final correction coefficients.
In contrast, the techniques described herein may employ smaller LUT(s) and less interpolation hardware/control logic, because they use linear interpolation rather than bi-cubic or bi-linear interpolation. Moreover, the techniques described herein may eliminate the costly hardware and/or control logic otherwise needed to evaluate square roots when obtaining the actual radial distance from the center location. Further, LUT sizes may be reduced by using coarse values, while accuracy is maintained through interpolation that employs the residual values.
Operations for the above embodiments may be further described with reference to the following figures and accompanying examples. Some of the figures may include a logic flow. Although such figures presented herein may include a particular logic flow, it can be appreciated that the logic flow merely provides an example of how the general functionality as described herein can be implemented. Further, the given logic flow does not necessarily have to be executed in the order presented unless otherwise indicated. Also, the flows may include additional operations as well as omit certain described operations. In addition, the given logic flow may be implemented by a hardware element, a software element executed by a processor, or any combination thereof. The embodiments are not limited in this context.
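In one exemplary logic flow, coefficient values are stored at a block 802 (for example, in one or more LUTs). Each of these coefficient values corresponds to a squared distance from a center position of an image sensor.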
At a block 804, a squared distance is determined between a pixel of the image sensor and the center position of the image sensor. Based on the determined squared distance, one or more of the stored coefficient values are accessed at a block 806. This may comprise accessing two stored coefficient values. These two values may correspond to adjacent squared distances.
These accessed coefficient value(s) may be used at a block 808 to determine a fall-off correction coefficient for the pixel. When two stored coefficient values corresponding to adjacent squared distances are accessed at block 806, this determination may comprise interpolating between the two coefficient values.
At a block 810, the determined fall-off correction coefficient may be adjusted or scaled. This may be based on various settings, such as an optical focal length associated with the image sensor.
At a block 812, an intensity value corresponding to the pixel is received. This intensity value is corrected at a block 814 by multiplying it with the determined fall-off correction coefficient.
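Tying the flow together, the following sketch reuses the illustrative helpers defined above; as before, the names are assumptions rather than a definitive implementation.

```python
def correct_pixel(i, j, ic, jc, intensity, lut, interval, focal_length_mm):
    """End-to-end sketch of blocks 804 through 814."""
    r2 = squared_distance(i, j, ic, jc)                 # block 804
    coeff = coefficient_from_lut(r2, lut, interval)     # blocks 806 and 808
    coeff = scale_coefficient(coeff, focal_length_mm)   # block 810
    return intensity * coeff                            # blocks 812 and 814
```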
As shown, a system 900 may include a device 902 that communicates with one or more remote devices 906. Device 902 may include image processing module 106, as well as a memory 908, a user interface 910, a communications interface 912, and a power supply 914.
Memory 908 may store information in the form of data. For instance, memory 908 may contain LUTs, such as LUT 704 and/or LUT 714. Also, memory 908 may store image data (such as pixels and position information managed by pixel buffer unit 602) as well as operational data. Examples of operational data include center position coordinates and sensor configuration information (e.g., effective focal length). Memory 908 may also store one or more images (with or without fall-off correction). However, the embodiments are not limited in this context.
Alternatively or additionally, memory 908 may store control logic, instructions, and/or software components. These software components include instructions that can be executed by a processor. Such instructions may provide functionality of one or more elements in system 900.
Memory 908 may be implemented using any machine-readable or computer-readable media capable of storing data, including both volatile and non-volatile memory. For example, memory 908 may include read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, or any other type of media suitable for storing information. It is worthy to note that some portion or all of memory 908 may be included in other elements of system 900. For instance, some or all of memory 908 may be included on the same integrated circuit or chip as image processing module 106. Alternatively, some portion or all of memory 908 may be disposed on an integrated circuit or other medium, for example a hard disk drive, which is external. The embodiments are not limited in this context.
User interface 910 facilitates user interaction with device 902. This interaction may involve the input of information from a user and/or the output of information to a user. Accordingly, user interface 910 may include one or more devices, such as a keypad, a touch screen, a microphone, and/or an audio speaker. In addition, user interface 910 may include a display to output information and/or render images/video processed by device 902. Exemplary displays include liquid crystal displays (LCDs), plasma displays, and video displays.
Communications interface 912 provides for the exchange of information with other devices across communications media, such as a network. This information may include image and/or video signals transmitted by device 902. Also, this information may include transmissions received from remote devices, such as requests for image/video transmissions and commands directing the operation of device 902.
Communications interface 912 may provide for wireless or wired communications. For wireless communications, communications interface 912 may include components, such as a transceiver, an antenna, and control logic to perform operations according to one or more communications protocols. Thus, communications interface 912 may communicate across wireless networks according to various protocols. For example, device 902 and device(s) 906 may operate in accordance with various wireless local area network (WLAN) protocols, such as the IEEE 802.11 series of protocols, including IEEE 802.11a, 802.11b, 802.11e, 802.11g, 802.11n, and so forth. In another example, these devices may operate in accordance with various wireless metropolitan area network (WMAN) mobile broadband wireless access (MBWA) protocols, such as a protocol from the IEEE 802.16 or 802.20 series of protocols. In another example, these devices may operate in accordance with various wireless personal area network (WPAN) protocols, such as IEEE 802.15 and Bluetooth. Also, these devices may operate according to Worldwide Interoperability for Microwave Access (WiMAX) protocols, such as ones specified by IEEE 802.16.
Also, these devices may employ wireless cellular protocols in accordance with one or more standards. These cellular standards may comprise, for example, Code Division Multiple Access (CDMA), CDMA 2000, Wideband Code-Division Multiple Access (W-CDMA), and Enhanced General Packet Radio Service (EGPRS), among other standards. The embodiments, however, are not limited in this context.
For wired communications, communications interface 912 may include components, such as a transceiver and control logic to perform operations according to one or more communications protocols. Examples of such communications protocols include Ethernet (e.g., IEEE 802.3) protocols, integrated services digital network (ISDN) protocols, public switched telephone network (PSTN) protocols, and various cable protocols.
In addition, communications interface 912 may include input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and so forth. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth.
Power supply 914 provides operational power to elements of device 902. Accordingly, power supply 914 may include an interface to an external power source, such as an alternating current (AC) source. Additionally or alternatively, power supply 914 may include a battery. Such a battery may be removable and/or rechargeable. However, the embodiments are not limited to this example.
Numerous specific details have been set forth herein to provide a thorough understanding of the embodiments. It will be understood by those skilled in the art, however, that the embodiments may be practiced without these specific details. In other instances, well-known operations, components and circuits have not been described in detail so as not to obscure the embodiments. It can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments.
Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.
Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. These terms are not intended as synonyms for each other. For example, some embodiments may be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
Some embodiments may be implemented, for example, using a machine-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
Unless specifically stated otherwise, it may be appreciated that terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (e.g., electronic) within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. The embodiments are not limited in this context.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.