The field of invention pertains generally to the computing sciences, and, more specifically, to an image sensor having multiple output ports.
Image data captured by an image sensor is typically provided to an image signal processing pipeline, which then performs various computations on the image data to generate, e.g., data for display. The image signal processing pipeline is typically implemented as a multi-stage pipeline (in software, hardware, or both) that concurrently processes different blocks of image data from the image sensor. For example, while a first block is being processed by a demosaicing stage, another block may be processed by a noise reduction stage. After the image signal processing pipeline processes data from an image sensor, the processed data may be forwarded to a display or, e.g., to system memory (e.g., by way of a direct-memory-access (DMA) transfer).
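To make the pipelining concrete, the following minimal C sketch models a two-stage pipeline in which block i is demosaiced while block i-1 undergoes noise reduction. The stage functions and block layout are invented stand-ins for illustration, not an actual image signal processor implementation.

```c
#include <stddef.h>

#define NBLOCKS 8

typedef struct { int data; } block_t;

/* Stand-in stage bodies; real ISP stages are far more involved. */
static void demosaic(block_t *b)     { b->data += 1; }
static void noise_reduce(block_t *b) { b->data *= 2; }

/* Software-pipelined loop: on iteration i, block i enters demosaicing while
 * block i-1 (already demosaiced) moves through noise reduction. A hardware
 * pipeline would run the two stages truly in parallel. */
void run_pipeline(block_t blocks[NBLOCKS])
{
    for (size_t i = 0; i <= NBLOCKS; i++) {
        if (i < NBLOCKS) demosaic(&blocks[i]);
        if (i > 0)       noise_reduce(&blocks[i - 1]);
    }
}
```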
Here, each image signal processing pipeline 107_1, 107_2 is dedicated to a particular camera and image sensor. That is, for example, image signal processing pipeline 107_1 is dedicated to the processing of image data generated by image sensor 110, and, image signal processing pipeline 107_2 is dedicated to the processing of image data generated by image sensor 111.
An apparatus is described that includes an image sensor having a first output port and a second output port. The first output port is to transmit a first image stream concurrently with a second image stream transmitted from the second output port.
An apparatus is described that includes means for performing a method performed by an image sensor. The apparatus includes means for accepting configuration information for a first output port of an image sensor for a first image type. The apparatus includes means for accepting configuration information for a second output port of the image sensor for a second image type, the first image type being different than the second image type. The apparatus includes means for generating a plurality of analog signals from a pixel array. The apparatus includes means for converting the analog signals into digital pixel values. The apparatus includes means for transmitting some of the digital pixel values from the first output port. The apparatus includes means for transmitting others of the digital pixel values from the second output port.
The following description and accompanying drawings are used to illustrate embodiments of the invention.
A current trend is to enhance computing system imaging capability by integrating depth capturing into its imaging components. Depth capturing may be used, for example, to perform various intelligent object recognition functions such as facial recognition (e.g., for secure system unlock) or hand gesture recognition (e.g., for touchless user interface functions).
According to one depth information capturing approach, referred to as “time-of-flight” imaging, the computing system emits infra-red (IR) light onto an object and measures, for each of multiple pixels of an image sensor, the time between the emission of the light and the reception of its reflected image upon the sensor. The image produced by the time-of-flight pixels corresponds to a three-dimensional profile of the object as characterized by a unique depth measurement (Z) at each of the different (x,y) pixel locations.
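The depth arithmetic itself is simple: since the emitted light travels to the object and back, Z = c·t/2 for a measured round-trip time t. A minimal sketch (the example timing value is invented for illustration):

```c
#include <stdio.h>

int main(void)
{
    const double c = 299792458.0;      /* speed of light, m/s           */
    double t_round_trip = 6.67e-9;     /* assumed measurement: ~6.67 ns */
    double z = c * t_round_trip / 2.0; /* divide by 2: out and back     */
    printf("Z = %.3f m\n", z);         /* ~1.000 m                      */
    return 0;
}
```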
An “RGBZ” image sensor is an appealing solution for achieving both traditional image capture and time-of-flight depth profiling from within the same camera package. An RGBZ image sensor is an image sensor that includes different kinds of pixels, some of which are sensitive to visible light (the RGB pixels) and others of which are used to measure depth information (the time-of-flight pixels).
In a common implementation, time-of-flight pixels are designed to be sensitive to IR light because, as mentioned above, IR light is used for the time-of-flight measurement; this way, the measurement light does not disturb users and does not interfere with the traditional imaging functions of the RGB pixels. The time-of-flight pixels additionally have special associated clocking and/or timing circuitry to measure the time at which light has been received at the pixel. Because the time-of-flight pixels are sensitive to IR light, however, they may conceivably also be used (e.g., in a second mode) as just IR pixels and not time-of-flight pixels (i.e., IR information is captured but a time-of-flight measurement is not made).
An RGBZ image sensor therefore naturally generates two kinds of video data streams (an RGB visible image stream and a depth information (Z) stream), each having its own set of streaming characteristics such as frame size, frame structure and frame rate. RGBZ sensors are currently being designed to “fit” into the platform 100 of FIG. 1.
A problem is that the image signal processing pipeline that is dedicated to the RGBZ sensor is itself required to multiplex its processing of the different data stream types that the RGBZ sensor generates. An image signal processor is a fairly complex system (typically implemented as a multi-stage pipeline in hardware or software or both), and, as such, its multiplexing between the two data streams requires a time-consuming and performance-degrading switching back and forth between an RGB stream state and a Z stream state.
Here, it is worthwhile to note that in a typical implementation the density of visible light (RGB) pixels across the surface area of the sensor is greater than the density of time-of-flight pixels. As such, if a nominal window of visible (RGB) pixels and a nominal window of time-of-flight pixels are read out from the sensor, the window of visible data typically contains more data than the window of depth data. If the two image data types are streamed over the same link at the same clocking rate, the visible RGB stream will naturally have larger frames and a slower frame rate while the depth Z stream will naturally have smaller frames and possibly a faster frame rate.
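As a worked illustration of this asymmetry (all numbers here are assumptions chosen for the example, not figures from the description), consider a 1920x1080 RGB window and a 480x270 depth window sharing one serial link:

```c
#include <stdio.h>

int main(void)
{
    const double link_bps   = 1.5e9;          /* assumed link rate, bits/s */
    const double bits_px    = 10.0;           /* assumed pixel depth       */
    const double rgb_pixels = 1920.0 * 1080.0;
    const double z_pixels   = 480.0 * 270.0;

    /* Time to move one frame of each type across the shared link. */
    double t_rgb = rgb_pixels * bits_px / link_bps; /* ~13.8 ms */
    double t_z   = z_pixels   * bits_px / link_bps; /* ~0.86 ms */

    printf("RGB frame: %.2f ms -> at most %.0f fps\n", t_rgb * 1e3, 1.0 / t_rgb);
    printf("Z frame:   %.2f ms -> at most %.0f fps\n", t_z * 1e3, 1.0 / t_z);
    return 0;
}
```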
With respect to the multiplexing of the two different streams on the same link as observed in
With the arrangement depicted in
Although not specifically drawn in
Although an implementation where the first stream is an RGB video stream and the second stream is a depth Z stream is one possibility, it is believed that a number of other possible use cases may fit this general scenario. Some examples include: 1) the first stream is an RGB video stream and the second stream is a subset (e.g., smaller window or lower resolution image) of the first stream; 2) the first stream is an RGB video stream and the second stream is an IR image stream; 3) the first stream is an RGB video stream and the second stream is a phase focusing stream (e.g., where the second stream is generated from a subset of the pixel array's pixels that detect information used to determine what direction the lens of an auto-focusing camera should be moved); 4) the first stream is a spatial subset of the image captured at one exposure time and the second stream is a spatial subset of the image captured at a second exposure time (e.g., for single-frame HDR captures).
Additionally, it is pertinent to point out that there may even be circumstances where the frames of the two streams are the same size or at least comparable in size. For example, the first stream may be composed of the upper or left half of an RGB video image while the second stream is composed of the lower or right half of the same image. Here, for example, different lines or rows from the image sensor are essentially multiplexed to different output ports, as in the sketch below. Such cases may arise, e.g., when the timing or physical properties of the image sensor cause it to generate an RGB image stream whose overall data rate is greater than what a single image signal processor can handle or has been configured to handle.
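A minimal sketch of that line-level multiplexing follows. The out_port_t handle and port_write() hook are hypothetical names, and a real sensor would perform this routing in hardware rather than in C:

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Hypothetical output-port handle and transmit hook, stubbed for illustration. */
typedef struct { int id; } out_port_t;

static void port_write(out_port_t *p, const uint16_t *line, size_t n)
{
    (void)line;
    printf("port %d: line of %zu pixels\n", p->id, n);
}

/* Route the upper half of each frame to port 1 and the lower half to port 2,
 * one line at a time. */
void split_frame(const uint16_t *frame, size_t width, size_t height,
                 out_port_t *p1, out_port_t *p2)
{
    for (size_t row = 0; row < height; row++) {
        out_port_t *dst = (row < height / 2) ? p1 : p2;
        port_write(dst, frame + row * width, width);
    }
}
```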
It is also pertinent to point out that, for simplicity, the examples of
As observed in
Which configuration as between
Both of the approaches of
As with the comparison between the configurations of
In various embodiments a single sensor may support the operation of any one or more of the configurations described above in
Another comment regarding implementations having more than two output ports concerns the fact that any combination of the approaches outlined above with respect to
Where alternative implementations can exist having a single stream of information from a pixel array 501 that is used to feed more than one output port (such as when the second stream is a subset of the first stream, or the first and second streams alternately transmit different frame sections of the same stream), such implementations will be noted.
Each of the embodiments of
Each of the embodiments of
As will be described in more detail below, the timing at which the pixel array 501 generates the different types of image signals, the timing at which the ADC circuitry 502 converts the different image signals into digital data, and the timing and framing structure with which the digital data is transmitted from its corresponding output port may vary from embodiment to embodiment, and are apt to be at least partially a function of the characteristics of the image data streams that the output ports 513_1, 513_2 have been configured to provide. The timing and control circuitry 503 may also generate synchronization signals, such as blank fields, frame valid signals or other types of output signals, that the receiving side uses to comprehend the framing structure according to which the digital pixels are being formatted.
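One common style of such synchronization can be pictured with a short sketch. The frame-valid/line-valid signal names and the emit() hook below are assumptions for illustration, not the specific signaling of any embodiment:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical hook that drives one clock cycle of an output port. */
static void emit(bool frame_valid, bool line_valid, uint16_t pixel)
{
    printf("FV=%d LV=%d px=%u\n", frame_valid, line_valid, pixel);
}

/* Transmit a frame with frame-valid asserted throughout, line-valid pulsed
 * per line, and blank cycles between lines; the receiver recovers the
 * framing structure from these flags. */
static void send_frame(const uint16_t *px, unsigned w, unsigned h, unsigned hblank)
{
    for (unsigned row = 0; row < h; row++) {
        for (unsigned col = 0; col < w; col++)
            emit(true, true, px[row * w + col]);
        for (unsigned b = 0; b < hblank; b++)
            emit(true, false, 0);              /* horizontal blanking */
    }
    emit(false, false, 0);                      /* end of frame        */
}
```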
Each of the embodiments of
After analog-to-digital conversion is performed, digital pixels for both types of images are multiplexed to the correct output port. For example, if visible images are being streamed on the first port 513_1a and depth images are being streamed on the second port 513_2a, digital RGB pixels from the ADC circuit 502a are multiplexed to the first port 513_1a and digital Z pixels from the ADC circuit 502a are multiplexed to the second port 513_2a. The multiplexing of the different image types into the ADC circuit 502a and the alternating of the ADC cells between converting RGB signals and converting Z signals cause the design of
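The per-pixel routing can be sketched as follows; the type tag, sample layout and port_write() stub are invented for illustration and do not depict the actual multiplexer circuitry:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef enum { PIX_RGB, PIX_Z } pix_type_t;
typedef struct { pix_type_t type; uint16_t value; } sample_t;

static void port_write(int port, uint16_t v) { printf("port%d <- %u\n", port, v); }

/* After shared ADC conversion, steer each digital sample to the port that
 * matches its pixel type: RGB to port 1, Z to port 2. */
static void demux_samples(const sample_t *s, size_t n)
{
    for (size_t i = 0; i < n; i++)
        port_write(s[i].type == PIX_RGB ? 1 : 2, s[i].value);
}
```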
It is pertinent to point out that as the image sensor architectures of
It is pertinent to point out that the register control space for a particular output port may accept configuration information for, and the timing and control circuitry may be designed to support in response, a number of different image sensing techniques and formats. Some examples include: setting a frame size; setting a frame rate; setting a specific exposure time (which establishes how long pixels are enabled to sense incident light); setting a specific window position (which defines the center of the set of pixels actually used for image generation); setting a specific window size (which establishes the perimeter of pixels on the surface of the sensor within which the image is taken); setting a snapshot/still frame mode (which corresponds to the taking of a single picture) versus a streaming mode (a continuous stream of images); setting a preview capture mode (typically a lower resolution mode, often with interleaved frames at different focus positions of a camera lens, to, e.g., permit a user to quickly determine a proper amount of “zoom-in” or “zoom-out” prior to taking a picture); setting a skipping mode (which reduces the resolution of an image by reading out pixels from only, e.g., every other row within the pixel array); setting a binning mode (which reduces the resolution of an image by combining the read-outs of more than one pixel into a single pixel value); and setting a pixel depth (the number of bits used to digitally represent a pixel's value). The extent to which a setting of one of these parameters for one port might affect the available settings for another port is a matter of design choice, depending on how sophisticated/complicated the timing and control circuitry is desired to be.
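One way to picture such per-port register control space is as one configuration block per output port. The field names and widths below are invented for illustration and are not an actual register map:

```c
#include <stdint.h>

typedef enum { MODE_STREAM, MODE_SNAPSHOT, MODE_PREVIEW } capture_mode_t;

typedef struct {
    uint16_t       frame_width, frame_height; /* frame size                     */
    uint16_t       frame_rate_fps;            /* frame rate                     */
    uint32_t       exposure_us;               /* exposure time                  */
    uint16_t       window_x, window_y;        /* window position (center)       */
    uint16_t       window_w, window_h;        /* window size (perimeter)        */
    capture_mode_t mode;                      /* snapshot vs. streaming/preview */
    uint8_t        skip_factor;               /* skipping: read every Nth row   */
    uint8_t        bin_factor;                /* binning: pixels per read-out   */
    uint8_t        bits_per_pixel;            /* pixel depth                    */
} port_config_t;

/* e.g., full-rate RGB video on the first port: */
static const port_config_t port1_cfg = {
    .frame_width = 1920, .frame_height = 1080, .frame_rate_fps = 30,
    .exposure_us = 10000, .mode = MODE_STREAM,
    .window_x = 960, .window_y = 540, .window_w = 1920, .window_h = 1080,
    .skip_factor = 1, .bin_factor = 1, .bits_per_pixel = 10,
};
```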
It is also pertinent to point out that although the visible image pixels discussed above have been described as RGB pixels (red, green, blue), other embodiments may use different colored pixel schemes (e.g., Cyan, Magenta and Yellow, or panchromatic) in various spatial arrangements.
An applications processor or multi-core processor 750 may include one or more general purpose processing cores 715 within its CPU, one or more graphical processing units 716, a main memory controller 717, an I/O control function 718 and an appropriate number of image signal processor pipelines 719. The general purpose processing cores 715 typically execute the operating system and application software of the computing system. The graphics processing units 716 typically execute graphics-intensive functions to, e.g., generate graphics information that is presented on the display 703. The memory control function 717 interfaces with the system memory 702. The image signal processing pipelines 719 receive image information from the camera and process the raw image information for downstream uses. The power management control unit 712 generally controls the power consumption of the system 700.
Each of the touchscreen display 703, the communication interfaces 704-707, the GPS interface 708, the sensors 709, the camera 710, and the speaker/microphone codecs 713, 714 can be viewed as various forms of I/O (input and/or output) relative to the overall computing system, including, where appropriate, an integrated peripheral device (e.g., the one or more cameras 710). Depending on implementation, various ones of these I/O components may be integrated on the applications processor/multi-core processor 750 or may be located off the die or outside the package of the applications processor/multi-core processor 750.
As observed in
Both the image signal processing pipelines 766-769 and the image sensor output ports may be configured with appropriate register space (e.g., within the applications processor for the image signal processing pipelines and within the image sensor for the output ports) by software or firmware, including operating system and/or device driver software and/or firmware.
As such, embodiments of the invention may include various processes as set forth above. The processes may be embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor to perform certain processes. Alternatively, these processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of programmable computer components and custom hardware components.
Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, FLASH memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other types of media/machine-readable media suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
This application is a continuation of U.S. application Ser. No. 15/476,165, filed Mar. 31, 2017, which is a continuation of U.S. application Ser. No. 14/580,025, titled “IMAGE SENSOR HAVING MULTIPLE OUTPUT PORTS”, filed Dec. 22, 2014, the contents of each of which are incorporated by reference in their entirety.