The field of the invention pertains to image processing generally and, more specifically, to a method and apparatus for increasing the frame rate of a time-of-flight measurement.
Many existing computing systems include one or more traditional image-capturing cameras as integrated peripheral devices. A current trend is to enhance computing system imaging capability by integrating depth capturing into its imaging components. Depth capturing may be used, for example, to perform various intelligent object recognition functions such as facial recognition (e.g., for secure system unlock) or hand gesture recognition (e.g., for touchless user interface functions).
One depth information capturing approach, referred to as “time-of-flight” imaging, emits light from a system onto an object and measures, for each of multiple pixels of an image sensor, the time between the emission of the light and the reception of its reflected image upon the sensor. The image produced by the time-of-flight pixels corresponds to a three-dimensional profile of the object as characterized by a unique depth measurement (z) at each of the different (x,y) pixel locations.
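For illustration only, the basic arithmetic can be sketched as follows; the constant and function names are assumptions for the sketch and are not drawn from the embodiments. Because the emitted light travels to the object and back, the depth at a pixel is half the round trip traversed at the speed of light:

```python
# Minimal sketch of the basic time-of-flight relationship (illustrative only).

C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Depth z (meters) implied by a measured round-trip time."""
    return C * t_seconds / 2.0

# Example: a 10 ns round trip corresponds to roughly 1.5 m of depth.
print(depth_from_round_trip(10e-9))  # ~1.499 m
```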
As many computing systems with imaging capability are mobile in nature (e.g., laptop computers, tablet computers, smartphones, etc.), the integration of a light source (“illuminator”) into the system to achieve time-of-flight operation presents a number of design challenges such as cost challenges, packaging challenges and/or power consumption challenges.
An apparatus is described that includes a pixel array having time-of-flight pixels. The apparatus also includes clocking circuitry coupled to the time-of-flight pixels. The clocking circuitry comprises a multiplexer between a multi-phase clock generator and the pixel array to multiplex differently phased clock signals to a same time-of-flight pixel. The apparatus also includes an image signal processor to perform distance calculations from streams of signals generated by the pixels at a first rate that is greater than a second rate at which any particular one of the pixels is able to generate signals sufficient to perform a single distance calculation.
An apparatus is described having first means for generating multiple, differently phased clock signals for a time-of-flight distance measurement. The apparatus also includes second means for routing each of the differently phased clock signals to different time-of-flight pixels. The apparatus also includes third means for performing time-of-flight measurements from charge signals from the pixels at a rate that is greater than a rate at which any individual one of the time-of-flight pixels generates charge signals sufficient for a time-of-flight distance measurement.
The following description and accompanying drawings are used to illustrate embodiments of the invention. In the drawings:
The set of waveforms observed in
For example, at the end of cycle 1 the Z pixel generates a first signal that is proportional to the charge collected during the existence of the pulse observed in the I+ signal; at the end of cycle 2 the Z pixel generates a second signal that is proportional to the charge collected during the existence of the pulse observed in the Q+ signal; at the end of cycle 3 the Z pixel generates a third signal that is proportional to the charge collected during the existence of the pulse observed in the I− signal; and, at the end of cycle 4 the Z pixel generates a fourth signal that is proportional to the charge collected during the existence of the pair of half pulses observed in the Q− signal.
The first, second, third and fourth response signals generated by the Z pixel are then processed to determine the distance from the pixel to the object in front of the camera. The process then repeats for a next set of four clock cycles to determine a next distance value. As such, note that four clock cycles are consumed for each distance calculation. The consumption of four clock cycles per distance calculation essentially corresponds to a low frame rate (as frames of distance images can only be generated once every four clock cycles).
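Although the specification leaves the distance mathematics to the art, one common formulation, presented here as an illustrative assumption rather than as the method of the embodiments, recovers the modulation phase from the four cycle responses and scales it to a distance:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_quadrature(i_pos, q_pos, i_neg, q_neg, f_mod):
    """Common continuous-wave estimate from the four cycle responses.

    i_pos..q_neg: the first through fourth pixel signals (0/90/180/270 degrees).
    f_mod: modulation frequency in Hz (the 20 MHz figure below is assumed).
    """
    phase = math.atan2(q_pos - q_neg, i_pos - i_neg) % (2 * math.pi)
    # One full phase wrap corresponds to c / (2 * f_mod) of unambiguous range.
    return C * phase / (4 * math.pi * f_mod)

# Example: samples implying a 90-degree phase shift at 20 MHz -> ~1.87 m.
print(distance_from_quadrature(0.5, 1.0, 0.5, 0.0, 20e6))
```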
Note, however, that an image signal processor 202 or other functional unit (hereinafter ISP) that processes the digitized signals from the pixels to compute a distance from them is shown. The mathematical operations performed by the ISP 202 to determine a distance from the four pixel signals are well understood in the art and are not discussed here. It is pertinent to note, though, that the ISP 202 can, in various embodiments, receive the digitized signals from the four pixels simultaneously rather than serially. This is distinct from the prior art approach of
The ISP 202 (or other functional unit) can be implemented entirely in dedicated hardware having specialized logic circuits specifically designed to perform the distance calculations from the pixel values, can be implemented entirely in programmable hardware (e.g., a processor) that executes program code written to perform the distance calculations, or can be implemented in some other type of circuitry that combines, or sits between, these two architectural extremes.
A possible issue with the approach of
The enhancement of spatial resolution is achieved by multiplexing the different I+, Q+, I− and Q− signals into a single pixel such that on each new clock cycle a different quadrature clock is directed to the pixel. As observed in
For example, as seen in
Specifically, in the example of
With one of the four pixels completing reception of its four clock signals every clock cycle, per-pixel distance measurements are achieved with the same 4× speed-up in frame rate achieved in the embodiment of
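The rotation can be pictured with a short sketch; the schedule and names below are illustrative assumptions. Each of the four pixels in a block receives a different quadrature clock each cycle, offset by one position per pixel, so that in steady state one pixel finishes a full I+/Q+/I−/Q− set on every cycle:

```python
# Illustrative sketch of the multiplexing schedule (names assumed).

PHASES = ("I+", "Q+", "I-", "Q-")

def phase_for(pixel: int, cycle: int) -> str:
    """Quadrature clock routed to `pixel` (0-3) on `cycle`."""
    return PHASES[(cycle + pixel) % 4]

for cycle in range(8):
    routed = [phase_for(p, cycle) for p in range(4)]
    # In steady state the pixel receiving "Q-" this cycle has just collected
    # all four phases, so one pixel completes a measurement every cycle.
    done = routed.index("Q-")
    print(f"cycle {cycle}: {routed} -> pixel {done} completes a set")
```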
An image signal processor 302 or other functional unit that processes the output(s) from the four pixels is then able to generate a new distance measurement every clock cycle. In prior art approaches the pixel response signals are typically streamed out in phase with one another across all Z pixels (all Z pixels complete a set of four charge responses at the same time). By contrast, in the approach of
As such, the ISP 302 understands the different relative phases of the different pixel streams in order to perform distance calculations at the correct moments in time. Specifically, in various embodiments the ISP 302 is configured to perform distance calculations at different times for different pixel signal streams. As discussed at length above, the ability to perform a distance calculation for a particular pixel stream, e.g., immediately after a distance calculation has just been performed for another pixel stream, corresponds to an increase in the frame rate of the overall image sensor (i.e., different pixels contribute to different frames in a frame sequence).
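The bookkeeping such an ISP performs can be sketched as follows; the structure, names and 20 MHz modulation frequency are assumptions for illustration. The ISP tracks which quadrature sample each arriving value represents and computes a distance the moment any one stream holds a full set, which in steady state yields one result per clock cycle:

```python
import math

C, F_MOD = 299_792_458.0, 20e6          # assumed 20 MHz modulation frequency
PHASES = ("I+", "Q+", "I-", "Q-")
latest = [dict() for _ in range(4)]     # per-pixel {phase: sample}

def on_sample(pixel: int, cycle: int, value: float):
    phase = PHASES[(cycle + pixel) % 4]  # staggered schedule, as sketched above
    latest[pixel][phase] = value
    # A set completes when the Q- sample lands and the other three phases from
    # the immediately preceding cycles are already on hand.
    if phase == "Q-" and len(latest[pixel]) == 4:
        s = latest[pixel]
        ang = math.atan2(s["Q+"] - s["Q-"], s["I+"] - s["I-"]) % (2 * math.pi)
        return pixel, C * ang / (4 * math.pi * F_MOD)
    return None

# After a three-cycle warm-up, exactly one pixel reports per cycle.
for cycle in range(8):
    for pixel in range(4):
        result = on_sample(pixel, cycle, 1.0)
        if result is not None:
            print(f"cycle {cycle}: distance ready for pixel {result[0]}")
```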
With respect to either of the approaches of
Again, this is in contrast to the approach of
Additionally, like the approach of
As observed in
As observed in
Timing and control circuitry 504 is responsible for generating the clock signals and resultant control signals that control the overall operation of the image sensor (e.g., controlling the scrolling of row encoder outputs in a rolling shutter mode). The clock generation circuitry, the multiplexers that provide clock signals to the pixels and the counters of
An ISP 504 or other functional unit as described above may be integrated into the image sensor, or may be part of, e.g., the host side of a computing system having a camera that includes the image sensor. In embodiments where the ISP 504 is included in the image sensor, the timing and control circuitry would include circuitry that enables the ISP to perform, e.g., distance calculations from different pixel streams that are understood to be providing signals in different phase relationships, so as to effect the higher frame rates described at length above.
It is pertinent to point out that the use of four quadrature clock signals to support distance calculations is only exemplary, and other embodiments may use a different number of clocks. For example, three clocks may be used if the environment that the camera will be used in can be tightly controlled. Other embodiments may use more than four clocks, e.g., if the extra resolution/performance is needed and the costs are justified. As such, those of ordinary skill will recognize that other embodiments may use the teachings provided herein and apply them to time-of-flight systems that use other than four clocks. Notably, this may change the number of pixels that together are used as a cohesive unit to effect higher frame rates (e.g., a block of eight pixels may be used in a system that uses eight clocks).
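For a different number of clocks, one common generalization, offered here as an assumption (sign conventions vary with how the modulation is defined), treats the N samples as one period of a sampled sinusoid and takes the phase of its fundamental:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_n_phase(samples, f_mod):
    """Phase-of-fundamental distance estimate from N equally spaced samples.

    Reduces to the four-quadrature case when len(samples) == 4.
    """
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    phase = math.atan2(im, re) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Three-, four- and eight-clock variants all use the same formula:
print(distance_n_phase([0.5, 1.0, 0.5, 0.0], 20e6))  # ~1.87 m, as before
print(distance_n_phase([0.2, 1.0, 0.3], 20e6))       # three-clock example
```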
The connector 701 is affixed to a planar board 702 that may be implemented as a multi-layered structure of alternating conductive and insulating layers, where the conductive layers are patterned to form electronic traces that support the internal electrical connections of the system 700. Through the connector 701, commands are received from the larger host system, such as configuration commands that write/read configuration information to/from configuration registers within the camera system 700.
An RGBZ image sensor 710 and light source driver 703 are mounted to the planar board 702 beneath a receiving lens. The RGBZ image sensor 710 includes a pixel array having different kinds of pixels, some of which are sensitive to visible light (specifically, a subset of R pixels that are sensitive to visible red light, a subset of G pixels that are sensitive to visible green light and a subset of B pixels that are sensitive to visible blue light) and others of which are sensitive to IR light.
The RGB pixels are used to support traditional “2D” visible image capture (traditional picture taking) functions. The IR sensitive pixels are used to support 3D depth profile imaging using time-of-flight techniques. Although a basic embodiment includes RGB pixels for the visible image capture, other embodiments may use different colored pixel schemes (e.g., Cyan, Magenta and Yellow). The image sensor 710 may also include ADC circuitry for digitizing the signals from the pixel array and timing and control circuitry for generating clocking and control signals for the pixel array and the ADC circuitry.
The planar board 702 may likewise include signal traces to carry digital information provided by the ADC circuitry to the connector 701 for processing by a higher end component of the host computing system, such as an image signal processing pipeline (e.g., that is integrated on an applications processor).
A camera lens module 704 is integrated above the integrated RGBZ image sensor and light source driver 703. The camera lens module 704 contains a system of one or more lenses to focus received light through an aperture of the integrated image sensor and light source driver 703. As the camera lens module's reception of visible light may interfere with the reception of IR light by the image sensor's time-of-flight pixels, and, conversely, as the camera lens module's reception of IR light may interfere with the reception of visible light by the image sensor's RGB pixels, either or both of the image sensor's pixel array and the lens module 704 may contain a system of filters arranged to substantially block IR light that is to be received by the RGB pixels and substantially block visible light that is to be received by the time-of-flight pixels.
An illuminator 705 composed of a light source array 707 beneath an aperture 706 is also mounted on the planar board 702. The light source array 707 may be implemented on a semiconductor chip that is mounted to the planar board 702. The light source driver that is integrated in the same package 703 with the RGBZ image sensor is coupled to the light source array 707 to cause it to emit light with a particular intensity and modulated waveform.
In an embodiment, the integrated system 700 of
An applications processor or multi-core processor 850 may include one or more general purpose processing cores 815 within its CPU 401, one or more graphics processing units 816, a main memory controller 817, an I/O control function 818 and one or more image signal processor pipelines 819. The general purpose processing cores 815 typically execute the operating system and application software of the computing system. The graphics processing units 816 typically execute graphics intensive functions to, e.g., generate graphics information that is presented on the display 803. The main memory controller 817 interfaces with the system memory 802. The image signal processing pipelines 819 receive image information from the camera and process the raw image information for downstream uses. The power management control unit 812 generally controls the power consumption of the system 800.
Each of the touchscreen display 803, the communication interfaces 804-807, the GPS interface 808, the sensors 809, the camera 810, and the speaker/microphone codecs 813, 814 can be viewed as various forms of I/O (input and/or output) relative to the overall computing system including, where appropriate, an integrated peripheral device as well (e.g., the one or more cameras 810). Depending on implementation, various ones of these I/O components may be integrated on the applications processor/multi-core processor 850 or may be located off the die or outside the package of the applications processor/multi-core processor 850.
In an embodiment, one or more cameras 810 include an integrated traditional visible image capture and time-of-flight depth measurement system having an RGBZ image sensor with enhanced frame rate output as described at length above. Application software, operating system software, device driver software and/or firmware executing on a general purpose CPU core (or other functional block having an instruction execution pipeline to execute program code) of an applications processor or other processor may direct commands to, and receive image data from, the camera system.
In the case of commands, the commands may include entrance into or exit from any of the 2D, 3D or 2D/3D system states discussed above. Additionally, commands may be directed to configuration space of the image sensor and light source to implement configuration settings consistent with the teachings above. For example, the commands may set an enhanced frame rate mode of the image sensor.
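What such a command might look like from the host side can be sketched as follows; the register offset, mode value and write callback are hypothetical, since the specification does not define a register map:

```python
# Hypothetical host-side sketch of setting an enhanced frame rate mode; the
# register names and addresses are invented for illustration only.

REG_MODE = 0x10            # hypothetical configuration register offset
MODE_ENHANCED_RATE = 0x1   # hypothetical mode value

def set_enhanced_frame_rate(write_register, enable: bool):
    """`write_register(offset, value)` is the platform's register-write hook."""
    write_register(REG_MODE, MODE_ENHANCED_RATE if enable else 0x0)

# Example with a stub transport that just logs the write:
set_enhanced_frame_rate(lambda off, val: print(f"write {val:#x} -> {off:#x}"), True)
```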
Embodiments of the invention may include various processes as set forth above. The processes may be embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor to perform certain processes. Alternatively, these processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of programmed computer components and custom hardware components.
Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, FLASH memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media, or other types of media/machine-readable medium suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.