This application relates to Time-of-Flight (ToF) sensors, methods, and non-transitory computer-readable media with phase filtering of a depth signal.
Time-of-flight (ToF) is a technique used in reconstructing three-dimensional (3D) images. The ToF technique includes calculating the distance between a light source and an object by measuring the time for light to travel from the light source to the object and return to a light-detection sensor, where the light source and the light-detection sensor are located in the same device.
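For reference, the round-trip relationship underlying this technique can be expressed as follows; this is the standard formulation and is not reproduced from the original text:

$$d = \frac{c \cdot t}{2}$$

where $d$ is the distance between the light source and the object, $c$ is the speed of light, and $t$ is the measured round-trip travel time. In indirect (phase-based) ToF, with which the phase filtering described below is concerned, the distance is instead derived from the phase shift $\varphi$ of a modulated light signal as $d = \frac{c}{4\pi f_{\text{mod}}}\,\varphi$, where $f_{\text{mod}}$ is the modulation frequency.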
Conventionally, an infrared light-emitting diode (LED) is used as the light source to ensure high immunity with respect to ambient light. The information obtained from the light that is reflected by the object may be used to calculate a distance between the object and the light-detection sensor, and the distance may be used to reconstruct the 3D images. The reconstructed 3D images may then be used in gesture and motion detection. Gesture and motion detection is used in various applications, including automotive, drone, and robotics applications, which require that the information used to calculate the distance between the object and the light-detection sensor be obtained more accurately and more quickly in order to decrease the amount of time necessary to reconstruct the 3D images.
Image sensing devices typically include an image sensor, an array of pixel circuits, signal processing circuitry, and associated control circuitry. Within the image sensor itself, charge is collected in a photoelectric conversion device of the pixel circuit as a result of impinging light. Subsequently, the charge in a given pixel circuit is read out as an analog signal, and the analog signal is converted to digital form by an analog-to-digital converter (ADC).
However, there are many noise sources that affect an output of the ToF sensor. For example, these noise sources include photon shot noise, kTC noise in the circuit, system noise and fixed-pattern noise arising from the pixel and circuit design, and quantization noise in the ADC. All of these noise sources in the pixel data contribute to depth noise.
Due to depth aliasing and other problems, existing filtering methods are performed in the raw pixel data domain. However, the raw pixel data domain typically includes one or more frames of pixel values and, hence, requires a dedicated amount of frame memory to store the raw pixel data for filtering. Accordingly, there exists a need for noise filtering methods for a ToF sensor that do not suffer from these deficiencies.
As described in greater detail below, phase filtering methods are performed directly on a depth signal, which typically does not require frame memory in the case of spatial filtering. If temporal filtering is implemented, the amount of frame memory required will still be less than the amount of frame memory needed for filtering of the raw pixel data. Additionally, the phase filtering methods of the present disclosure solve the issue of distance aliasing during the filtering process. Further, the phase filtering methods of the present disclosure utilize both distance information and pixel strength.
Various aspects of the present disclosure relate to ToF sensors, methods, and non-transitory computer-readable media. In one aspect of the present disclosure, a ToF sensor includes an array of pixels and processing circuitry. At least one pixel of the array of pixels is configured to generate a depth signal. The processing circuitry is configured to determine a phase value from the depth signal and perform phase filtering on the phase value.
Another aspect of the present disclosure is a method for filtering noise. The method includes determining, with processing circuitry, a phase value from a depth signal that is generated by one pixel from an array of pixels. The method also includes performing, with the processing circuitry, phase filtering on the phase value.
In yet another aspect of the present disclosure, a non-transitory computer-readable medium comprises instructions that, when executed by an electronic processor, cause the electronic processor to perform a set of operations. The set of operations includes determining a phase value from a depth signal that is generated by one pixel from an array of pixels. The set of operations also includes performing phase filtering on the phase value.
This disclosure may be embodied in various forms, including hardware or circuits controlled by computer-implemented methods, computer program products, computer systems and networks, user interfaces, and application programming interfaces; as well as hardware-implemented methods, signal processing circuits, image sensor circuits, application specific integrated circuits, field programmable gate arrays, digital signal processors, and other suitable forms. The foregoing summary is intended solely to give a general idea of various aspects of the present disclosure, and does not limit the scope of the present disclosure.
These and other more detailed and specific features of various embodiments are more fully disclosed in the following description, reference being had to the accompanying drawings, in which:
In the following description, numerous details are set forth, such as flowcharts, equations, and circuit configurations. It will be readily apparent to one skilled in the art that these specific details are exemplary and are not intended to limit the scope of this application.
In this manner, the present disclosure provides improvements in the technical field of time-of-flight sensors, as well as in the related technical fields of image sensing and image processing.
The vertical signal line 113 conducts the analog signal for a particular column to a column circuit 130. In the example of
The column circuit 130 is at least partially controlled by a horizontal driving circuit 140 (for example, a column scanning circuit). Each of the vertical driving circuit 120, the column circuit 130, and the horizontal driving circuit 140 receives one or more clock signals from a controller 150. The controller 150 controls the timing and operation of various image sensor components.
In some examples, the controller 150 controls the column circuit 130 to convert analog signals from the array 110 to digital signals. The controller 150 may also control the column circuit 130 to output the digital signals via signal lines 160 to an output circuit for additional signal processing, storage, transmission, or the like. In some examples, the controller 150 includes an electronic processor (for example, one or more microprocessors, one or more digital signal processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other suitable processing devices) and a memory.
Additionally, the column circuit 130 may perform various signal processing methods, and in particular, the phase filtering methods as described in greater detail below. For example, one or more of the image processing circuits 132 may be controlled by the electronic processor of the controller 150 to perform the phase filtering methods as described below and output the processed signals as the digital signals via the signal lines 160 to an output circuit for additional signal processing, storage, transmission, or the like. In some examples, the electronic processor of the controller 150 controls the memory of the controller 150 to store digital signals before or after performing the phase filtering methods as described below. In some examples, the memory of the controller 150 is a non-transitory computer-readable medium that includes computer readable code stored thereon for performing the various signal processing methods. Examples of a non-transitory computer-readable medium are described in greater detail below.
In some examples, the one or more image processing circuits 132 may interface to memory via the signal lines 160. The memory, e.g., static random access memory (SRAM) or dynamic random access memory (DRAM), may be implemented on the same piece of semiconductor as the image sensor 100, or it may be implemented in a separate memory chip that is connected to the image sensor via the signal lines 160. In the case of implementation on a separate memory chip, the memory chip may be physically stacked with the image sensor, and the electrical connection between the two chips may be made by through-silicon vias (TSVs) or other connection methods. The memory is generally referred to as the "frame buffer," which stores the digital signals for all pixels of a frame before the phase filtering methods described below are performed.
Alternatively, in some examples, image processing circuits (for example, one or more microprocessors, one or more digital signal processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other suitable processing devices) that include SRAM or DRAM (which collectively form a "frame buffer") and that are external to the image sensor 100 may receive the digital signals via a bus connected to the signal lines 160 and perform the phase filtering methods as described herein. Additionally or alternatively, the image processing circuits that are external to the image sensor 100 may retrieve the digital signals from the memory of the controller 150 that stores the digital signals and perform the phase filtering methods as described herein.
In a ToF sensor, the phase value of a given pixel may be an outlier within a configuration of multiple pixels due to noise, received signal strength, and other factors. In ToF processing, the outlier phase values are detected and the corresponding pixels are declared to be invalid pixels in a given pixel configuration.
Noise filtering is a form of weighted or unweighted pixel averaging, either by linear or non-linear methods. Ideally, invalid pixels are not included in the pixel averaging to avoid distorting the valid pixels.
Specifically, as illustrated in
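Expression (1) itself is not reproduced in this text. Based on the surrounding description, it is presumably a straight weighted average of the phase values over the window of pixels; the notation below is assumed:

$$\text{output} = \frac{\sum_{i} W_i\,\theta_i}{\sum_{i} W_i} \qquad (1)$$

where $\theta_i$ is the phase value of each pixel in the window of pixels 400 and $W_i$ is the corresponding filter weight.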
Generally, the weights can be chosen by many linear or non-linear digital filter design methods, considering factors such as the effect of the pixel noise, the degree of blurring of the data, simplicity of implementation, and others. For ease of understanding, it may be assumed that all weights W=1 in the above expression (1). In this example, the average output 412 of the center pixel 402 with respect to the neighboring pixels 404-410 is then a phase value of 181.5 degrees when the actual phase value of the center pixel 402 is 3 degrees. The correct average output value should be close to 360 degrees (or 0 degrees). The comparative filtering method in the above expression (1) (i.e., straight averaging) does not work for phase values of pixels that are close to 360 degrees or 0 degrees because it does not take into account the wrap-around property of the phase values.
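A minimal sketch of this failure mode, using hypothetical neighbor phase values (not taken from the figure) chosen near the 0/360-degree wrap:

```python
# Hypothetical phase values in degrees near the 0/360-degree wrap.
# The neighbor values are illustrative only, not taken from the figure.
neighbors = [359.0, 357.0, 3.0, 7.0]

# Straight (unweighted) averaging, as in expression (1) with all W = 1:
naive_average = sum(neighbors) / len(neighbors)
print(naive_average)  # 181.5 -- far from the correct value near 0/360 degrees
```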
At block 504, the validity of the center pixel 402 is examined. If it is not valid, then the filtering embodiment skips the pixel and moves the window of pixels 400 to the next pixel location. Assuming the center pixel 402 is valid, processing circuitry (e.g., the electronic processor of the controller 150, the one or more image processing circuits 132, processing circuitry external to the image sensor 100, other suitable processing circuitry, or a combination thereof) may calculate an offset 506 to shift the phase value of the center pixel 402 to be centered at 180 degrees. The offset 506 is equal to 180 degrees minus the actual phase value of the center pixel 402. In the example of
At block 508, assuming the neighboring pixels 404-410 are valid, the processing circuitry applies a modulo shift to each of the phase values of the neighboring pixels 404-410 using the offset 506. In the example of
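In other words, each neighboring phase value is presumably shifted as follows; the notation is assumed:

$$\theta'_i = (\theta_i + \Delta) \bmod 360^\circ, \qquad \Delta = 180^\circ - \theta_c$$

where $\theta_c$ is the phase value of the center pixel 402 and $\Delta$ is the offset 506.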
At block 510, the processing circuitry calculates a weighted mean 512 based on the modulo-shifted values with expression (2) below.
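Expression (2) itself is not reproduced in this text. Consistent with the description, it is presumably the weighted mean of the modulo-shifted phase values of the valid pixels; the notation is assumed:

$$\text{mean} = \frac{\sum_{i} W_i\,\theta'_i}{\sum_{i} W_i} \qquad (2)$$

where $\theta'_i$ are the modulo-shifted phase values and $W_i$ are the filter weights.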
As illustrated in
The description in the previous two paragraphs assumes the pixels 404-410 are valid. In general, blocks 508 and 510 include in the calculation only those pixels that are valid. The invalid neighboring pixels are ignored.
At block 514, the processing circuitry shifts back the weighted mean 512 using the offset 506 to generate an average output phase value 516 (i.e., a “filtered phase value”) with respect to the center pixel 402. As illustrated in
After generating the average output phase value 516 with respect to the center pixel 402, the processing circuitry selects a new center pixel with four neighboring pixels and repeats blocks 504, 508, 510, and 514 for the new center pixel. The processing circuitry also performs blocks 504, 508, 510, and 514 for each center pixel in the frame 502 that has four neighboring pixels, assuming all of the pixels are valid. The weighted average processing in
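The following is a minimal sketch of the first example method 500, assuming phase values in degrees; the function name, signature, and values are illustrative, and only valid neighbors are assumed to be passed in:

```python
def phase_filter_500(center, neighbors, weights=None):
    """Sketch of blocks 504-514: re-center, modulo shift, average, shift back."""
    if weights is None:
        weights = [1.0] * len(neighbors)

    # Blocks 504/506: offset that re-centers the center pixel at 180 degrees.
    offset = 180.0 - center

    # Block 508: modulo shift each (valid) neighboring phase value.
    shifted = [(p + offset) % 360.0 for p in neighbors]

    # Block 510: weighted mean of the shifted values, per expression (2).
    mean = sum(w * p for w, p in zip(weights, shifted)) / sum(weights)

    # Block 514: shift back to obtain the filtered phase value.
    return (mean - offset) % 360.0

# With the hypothetical values used earlier, the filtered output stays near
# the 0/360-degree wrap instead of collapsing toward 180 degrees.
print(phase_filter_500(3.0, [359.0, 357.0, 3.0, 7.0]))  # ~1.5
```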
At block 604, the validity of the center pixel 402 is examined. If it is not valid, then the value from a valid neighboring pixel among the pixels 404, 406, 408, and 410 is chosen and copied to the center pixel. The filtering then continues as described in the following, using the new value for the center pixel. Specifically, the processing circuitry may calculate an offset 606 to shift the phase value of the valid neighboring pixel to be centered at 180 degrees. The offset 606 is simply 180 degrees minus the actual phase value of the valid neighboring pixel. In the example of
At block 608, assuming the neighboring pixels 404-410 are valid, the processing circuitry modulo shifts the phase values of the neighboring pixels 404-410 using the offset 606. In the example of
At block 610, the processing circuitry calculates a weighted mean 612 based on the modulo-shifted values with expression (2) described above. As illustrated in
At block 614, the processing circuitry shifts back the weighted mean 612 using the offset 606 to generate an average output phase value 616 with respect to the center pixel 402. As illustrated in
After generating the average output phase value 616 with respect to the center pixel 402, the processing circuitry selects a new center pixel with four neighboring pixels and repeats blocks 604, 608, 610, and 614 for the new center pixel. The processing circuitry also performs blocks 604, 608, 610, and 614 for each center pixel in the frame 602 that has four neighboring pixels, assuming all of the center pixels are invalid.
The description for
Additionally, in the third example method 700, the processing circuitry calculates a weighted mean of circular angles by calculating the arctangent of weighted averages of the horizontal and vertical components of the phase value at each pixel in the pixels 400 (at block 704). For example, the weighted mean may be determined by expression (3) below.
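Expression (3) itself is not reproduced in this text. A weighted circular mean consistent with the description would be the following; the notation is assumed:

$$\text{mean} = \arctan\!\left(\frac{\sum_{i} W_i \sin\theta_i}{\sum_{i} W_i \cos\theta_i}\right) \qquad (3)$$

evaluated as a four-quadrant arctangent (e.g., atan2) of the weighted vertical and horizontal components of the phase values.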
The third example method 700 does not have the phase wrap-around problem described above because the phase values are not directly averaged together. Additionally, in the third example method 700, only the valid pixels are included. Further, the kernel size of the selected area is not limited to an N×N kernel. The kernel size may be the four closest neighboring pixels or another suitable number of closest neighboring pixels.
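A minimal sketch of this circular averaging, assuming degrees and that only valid pixels are passed in; the names are illustrative:

```python
import math

def circular_weighted_mean(phases_deg, weights):
    """Weighted circular mean via the arctangent of averaged horizontal
    (cosine) and vertical (sine) components, per expression (3)."""
    x = sum(w * math.cos(math.radians(p)) for w, p in zip(weights, phases_deg))
    y = sum(w * math.sin(math.radians(p)) for w, p in zip(weights, phases_deg))
    return math.degrees(math.atan2(y, x)) % 360.0

# Values straddling the wrap average correctly without any explicit shift.
print(circular_weighted_mean([359.0, 357.0, 3.0, 7.0], [1.0] * 4))  # ~1.5
```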
As illustrated in
As illustrated in
When the processing circuitry determines that most valid pixels are in the first quadrant and the fourth quadrant, then the processing circuitry selectively shifts the phase value of every pixel by 180 degrees (at block 904).
After shifting the phase value of every pixel by 180 degrees, the processing circuitry calculates the weighted average similarly to the first example method 500, as described above in expression (2) and in
First, the first example method 500 uses the center pixel to determine whether a shift is necessary. The fourth example method 900 uses the entire neighborhood of pixels to determine whether a shift is necessary.
Second, the first example method 500 shifts by an offset that depends on the phase value of the center pixel. The fourth example method 900 may always shift by 180 degrees. In some examples, the fourth example method 900 may shift by a different constant amount that may be dependent on the center pixel.
Third, the first example method 500 does not check the number of valid pixels in the aliasing quadrants. The fourth example method 900 always checks the number of valid pixels in the aliasing quadrants.
The fourth example method 900 does not have the phase wrap-around problem described above because the phase values are always shifted by a specific amount. Further, the kernel size of the selected area is not limited to an N×N kernel. The kernel size may be the four closest neighboring pixels or another suitable number of closest neighboring pixels.
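A minimal sketch of the fourth example method 900, assuming degrees, a constant 180-degree shift, and that only valid pixels are passed in; the names and the majority test are illustrative:

```python
def phase_filter_900(phases_deg, weights):
    """Shift all phases by 180 degrees when most valid pixels fall in the
    aliasing quadrants (first and fourth), average per expression (2),
    then shift back."""
    in_aliasing = sum(1 for p in phases_deg if p < 90.0 or p >= 270.0)

    # Block 904: apply the constant shift only when most valid pixels
    # sit near the 0/360-degree wrap.
    offset = 180.0 if in_aliasing > len(phases_deg) / 2 else 0.0

    shifted = [(p + offset) % 360.0 for p in phases_deg]
    mean = sum(w * p for w, p in zip(weights, shifted)) / sum(weights)
    return (mean - offset) % 360.0

print(phase_filter_900([359.0, 357.0, 3.0, 7.0], [1.0] * 4))  # ~1.5
```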
As illustrated in
As illustrated in
Lastly, as illustrated in
The methods 500-700 and 900 described above are spatial filters that use one or more spatially neighboring pixels to filter a center pixel. However, the methods 500-700 and 900 are not limited to spatial filtering. For example, pixels in different frames may be used as the neighboring pixels to temporally filter a center pixel in a current frame (i.e., frame n).
Specifically, as illustrated in
Specifically, as illustrated in
As illustrated in
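As a minimal sketch of the temporal variant, the "neighbors" may simply be the phase values of the same pixel in previous frames (frame n-1, frame n-2, and so on), re-using the re-centering scheme of the first example method 500; the function name and values are hypothetical:

```python
def temporal_phase_filter(current, history, weights=None):
    """Temporally filter one pixel using its own phase values (degrees)
    from prior frames as the 'neighbors', per the method-500 scheme."""
    if weights is None:
        weights = [1.0] * len(history)
    offset = 180.0 - current                       # re-center at 180 degrees
    shifted = [(p + offset) % 360.0 for p in history]
    mean = sum(w * p for w, p in zip(weights, shifted)) / sum(weights)
    return (mean - offset) % 360.0                 # shift back

# Hypothetical per-frame phase values for one pixel near the wrap:
print(temporal_phase_filter(2.0, [358.0, 1.0, 359.0]))  # ~359.3, near 0/360
```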
With regard to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain examples, and should in no way be construed so as to limit the claims.
Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many examples and applications other than the examples provided would be apparent upon reading the above description. The scope should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which the claims are entitled. It is anticipated and intended that future developments will occur in the technologies discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the application is capable of modification and variation.
All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the technologies described herein unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it may be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.