This disclosure relates to distance sensing and, more particularly, to using image sensors to implement distance sensing for systems such as vehicles and robotic equipment.
Distance sensing is generally known in the art. Distance sensors (or proximity sensors) generally work by outputting a signal of some kind (e.g., a laser, an infrared (IR) LED, or ultrasonic waves) and then reading how the signal has changed on its return. That change may be in the intensity of the returned signal, the time it takes the signal to return, and so on. Techniques have been developed for sensors such as ultrasonic sensors, IR LED sensors, laser distance sensors (LIDAR), and LED time-of-flight distance sensors.
Embodiments provide novel approaches to implementing distance sensing and measuring using image sensors. In various embodiments, image sensors are configured for detecting object distance based on images captured by the image sensors. In those embodiments, parameters within the images are analyzed to determine a distance of the object relative to the image sensors. In some implementations, techniques for detecting object distance in accordance with the present disclosure are deployed within a vehicle. In those implementations, the distance sensing in accordance with the present disclosure can be used to aid various driving scenarios, such as different levels of autonomous self-driving by the vehicle. In some implementations, the distance sensing in accordance with the present disclosure can be employed in robotic equipment, such as unmanned underwater devices, to aid distance sensing of certain underwater objects of interest. In some implementations, the distance sensing in accordance with the present disclosure can be employed in monitoring or surveillance for detecting or measuring object distance relative to a reference point. Other implementations of the distance sensing in accordance with the present disclosure are contemplated.
The accompanying drawings, referred to herein and constituting a part hereof, illustrate embodiments of the disclosure. The drawings together with the description serve to explain the principles of the invention.
In the appended figures, similar components and/or features can have the same reference label. Further, various components of the same type can be distinguished by following the reference label by a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
In the following description, numerous specific details are provided for a thorough understanding of the present invention. However, it should be appreciated by those of skill in the art that the present invention may be realized without one or more of these details. In other examples, features and techniques known in the art will not be described for purposes of brevity.
Conventional distance sensing typically involves generating a signal, transmitting the signal towards an object, receiving a returned signal, and comparing certain aspects of the returned signal with those of the original signal to determine a distance of the object. For example, a typical LIDAR sensor emits pulsed light waves into the surrounding environment. These pulses bounce off surrounding objects and return to the LIDAR sensor. The LIDAR sensor uses the time it took for each pulse to return to the sensor to calculate the distance the pulse traveled.
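For illustration only, the time-of-flight calculation described above can be expressed as a short sketch; the variable and function names below are illustrative assumptions and are not tied to any particular sensor's interface.

```python
# Illustrative sketch only: round-trip time-of-flight distance estimate,
# assuming the elapsed time between pulse emission and return is known.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    # The pulse travels to the object and back, so the path is divided by two.
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# Example: a pulse returning after roughly 200 nanoseconds corresponds to ~30 m.
print(tof_distance_m(200e-9))  # ≈ 29.98
```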
However, one problem with this conventional approach is that it does not scale well to various scenarios such as autonomous driving. Taking the LIDAR sensor as an example, while it may work well in a situation where a limited number of LIDAR sensors are in its surrounding environment, it may not work as well when a large number of LIDAR sensors are sending signals to more or less the same object(s) in the environment. This can typically happen on a road, where LIDAR sensors of multiple vehicles are transmitting signals towards the same object(s) in the environment at more or less the same time. This can result in signal interference, where a signal is intercepted by a LIDAR sensor that did not transmit that signal. In short, conventional distance sensing techniques do not work all that well for achieving distance sensing in a complex environment such as a busy highway.
Image sensors, such as cameras, are generally considered in the art to be less suitable for distance sensing than the other conventional sensors mentioned above, such as LIDAR sensors. This is mainly because analyzing an image is much more complicated and time consuming than analyzing a received signal. This is more acute in a real-time situation such as autonomous driving, where processing and measuring are required to be almost instant. For example, if there is a truck ahead on the road, the vehicle should know that information well before the vehicle comes close to the truck. It is difficult to analyze an image captured by the camera onboard the vehicle and determine that there is a truck ahead in a very small amount of time (no more than a few seconds) using image analysis techniques without large processing power.
However, unlike the conventional distance sensing techniques, distance sensing using image sensors is much less prone to signal interference. Images of objects can be captured just as human eyes may perceive the objects. An insight provided by the inventors of the present disclosure is that image sensors may be used for distance sensing if the processing of captured images of objects in a scene can be reduced to a manageable degree such that the image-based object distance measuring can be completed in less than a few seconds.
Conventionally, after the image 100 is taken by an image sensor, image analysis is employed to identify object 105 from image 100, and the analysis is performed based on the single captured image, which, as mentioned above, can be time and processing intensive. In this embodiment, since multiple images of the object are captured by image sensors 101 and 103 at more or less the same time, image parameters such as an area or a dimension of object 105 in the multiple images can be measured and compared for determining distance R1. For example, let's assume image sensor 101 has a focal length f1, and image sensor 103 has a focal length f2. Let's also assume object 105 has a width H in the real scene as shown in
h1/h2=f1*(R1+dR)/(f2*R1) (formula 1)
In formula 1 above, dR is a distance between image sensors 101 and 103, which is predetermined and preset. Both f1 and f2 are known. Thus, the object distance R1 can be solved according to formula 1 by measuring the image widths h1 and h2 of object 105 in the images captured by image sensors 101 and 103.
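For illustration only, the following sketch shows one way formula 1 could be rearranged and solved for R1 once h1 and h2 have been measured; the function name and argument conventions are assumptions made for this example rather than part of the disclosure.

```python
def object_distance_from_widths(h1: float, h2: float,
                                f1: float, f2: float, dR: float) -> float:
    """Solve formula 1, h1/h2 = f1*(R1+dR)/(f2*R1), for R1.

    h1, h2: image widths of the object measured in the two captured images.
    f1, f2: focal lengths of image sensors 101 and 103.
    dR: known, preset separation between the two image sensors.
    """
    # Rearranging formula 1 gives: R1*(h1*f2 - h2*f1) = h2*f1*dR
    denominator = h1 * f2 - h2 * f1
    if denominator <= 0:
        raise ValueError("measured widths do not yield a positive distance")
    return h2 * f1 * dR / denominator
```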
Measuring a width of object 105 in an image, such as h1 or h2, can be achieved in a timely fashion. One way of measuring such a dimension in an image is by counting pixels that have similar pixel values, such as color. As shown in the image 100, the object 105 is a white truck and the road is black. Thus, the number of white pixels representing object 105 in the image 100 can be counted. Although there are other white objects in image 100, these objects are apart from the white object (truck) 105, so they would not be counted as part of the white object (truck) 105 so long as only adjacent white pixels are counted. Since image sensors 101 and 103 are arranged relatively proximate to each other, the white object (truck) 105 appears at more or less the same position in image 100 for both image sensors 101 and 103. Thus, the same or similar white pixels representing object 105 can be counted in the image 100 captured by each of image sensors 101 and 103 for determining h1 and h2.
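For illustration only, the adjacent-pixel counting described above could be sketched as follows, assuming an 8-bit grayscale image, a seed pixel known to lie on the object, and an illustrative brightness threshold; none of these names or values are mandated by the present disclosure.

```python
from collections import deque

def object_pixel_width(gray_image, seed_row, seed_col, white_threshold=200):
    """Estimate the pixel width of a bright (e.g., white) object.

    gray_image: 2D array of 8-bit grayscale values.
    seed_row, seed_col: a pixel known to lie on the object.
    white_threshold: pixels at or above this value count as "white".
    Only pixels connected to the seed are counted, so separate white
    regions elsewhere in the image are ignored.
    """
    rows, cols = len(gray_image), len(gray_image[0])
    visited = set()
    queue = deque([(seed_row, seed_col)])
    min_col, max_col = seed_col, seed_col
    while queue:
        r, c = queue.popleft()
        if (r, c) in visited or not (0 <= r < rows and 0 <= c < cols):
            continue
        if gray_image[r][c] < white_threshold:
            continue
        visited.add((r, c))
        min_col, max_col = min(min_col, c), max(max_col, c)
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    # Horizontal extent of the connected white region, in pixels.
    return max_col - min_col + 1
```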
In one implementation, the image dimensions of object 105 may be measured with the help of an image outline frame. For example, assume the area of object 105 in the real scene is A. The image area a1 captured by sensor 101 is thus a1˜f1^2*A/R1^2, and the image area a2 captured by sensor 103 is a2˜f2^2*A/R2^2, where R2=R1+dR. According to these relationships, the following is true:
a1/a2=f1^2*(R1+dR)^2/(f2*R1)^2 (formula 2)
In formula 2, dR, f1 and f2 are known as explained above. Thus, the object distance R1 can be solved according to formula 2 by measuring the image areas a1 and a2 of object 105 in images captured by image sensors 101 and 103.
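For illustration only, a corresponding sketch for the area-based variant is shown below; taking the square root of formula 2 reduces it to the same form as formula 1, and the helper solves it for R1 under that assumption. The function name is illustrative.

```python
import math

def object_distance_from_areas(a1: float, a2: float,
                               f1: float, f2: float, dR: float) -> float:
    """Solve formula 2, a1/a2 = f1^2*(R1+dR)^2/(f2*R1)^2, for R1.

    a1, a2: image areas of the object measured in the two captured images.
    f1, f2: focal lengths of image sensors 101 and 103; dR: sensor separation.
    """
    # The square root of formula 2 has the same form as formula 1,
    # with sqrt(a1/a2) taking the place of h1/h2.
    ratio = math.sqrt(a1 / a2)
    denominator = ratio * f2 - f1
    if denominator <= 0:
        raise ValueError("measured areas do not yield a positive distance")
    return f1 * dR / denominator
```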
b1/b2=Φ1*(R1+dR)^2/(Φ2*R1^2) (formula 3)
Thus, since dR, Φ1, and Φ2 are known as explained above, the object distance R1 can be solved according to formula 3 by measuring the brightnesses b1 and b2 of point target 205 in the images captured by image sensors 101 and 103.
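For illustration only, formula 3 could be solved for R1 as sketched below, assuming b1, b2, Φ1, and Φ2 have already been obtained; the parameter names (phi1, phi2) are illustrative stand-ins for the sensor quantities in formula 3.

```python
import math

def object_distance_from_brightness(b1: float, b2: float,
                                    phi1: float, phi2: float, dR: float) -> float:
    """Solve formula 3, b1/b2 = phi1*(R1+dR)^2/(phi2*R1^2), for R1.

    b1, b2: measured brightnesses of the point target in the two images.
    phi1, phi2: the known sensor parameters appearing in formula 3.
    dR: known separation between the two image sensors.
    """
    # From formula 3, sqrt(b1*phi2 / (b2*phi1)) equals (R1 + dR) / R1.
    k = math.sqrt((b1 * phi2) / (b2 * phi1))
    if k <= 1.0:
        raise ValueError("brightness ratio does not yield a positive distance")
    return dR / (k - 1.0)
```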
This method of sensing object distance using image sensors can be useful if the target object is a shining or illuminated object, for example a traffic light. Measuring brightness of a traffic light in an image can be achieved by finding the pixels that have high illuminance values compared to surrounding pixels. The illuminance values of these pixels across images captured by image sensors 101 and 103 can be compared as shown in
In some embodiments, method 600 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 600 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 600.
At 602, images of an object can be captured by multiple image sensors simultaneously or near simultaneously.
At 604, image parameters regarding the images captured at 602 can be extracted. Example image parameters may include pixel color values, pixel illuminance values and/or any other image parameters.
At 606, the image parameters extracted at 604 can be analyzed for determining a distance of an object in the images relative to the image sensors. For example, pixels having similar color and/or illuminance values can be counted to determine an area in the images representing the object. As explained above, different area values for the object in different images can be compared for determining the distance of the object relative to the image sensors.
At 608, a distance of the object can be determined based on the image parameters analyzed at 606. Example determinations of the object distance can be found at formula 1 through formula 3 described and illustrated herein.
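For illustration only, the following sketch strings operations 602 through 608 together using the width-based variant (formula 1); the sensor objects, their capture API, and the measurement helper are hypothetical stand-ins, and object_distance_from_widths refers to the earlier sketch rather than to any required implementation.

```python
# Illustrative sketch of operations 602-608 using the width-based variant
# (formula 1). The capture and measurement helpers are hypothetical stand-ins
# for real sensor I/O and the pixel-counting step described earlier.

def estimate_object_distance(sensor_a, sensor_b, dR, f1, f2):
    # 602: capture images from both sensors at (nearly) the same time.
    image_a = sensor_a.capture()   # hypothetical capture API
    image_b = sensor_b.capture()

    # 604: extract image parameters (here, the object's pixel width).
    h1 = measure_object_width(image_a)   # e.g., adjacent-pixel counting
    h2 = measure_object_width(image_b)

    # 606-608: analyze the parameters and solve formula 1 for the distance.
    return object_distance_from_widths(h1, h2, f1, f2, dR)
```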
Computing System Example for Implementing the Conjugate Image Method in Accordance with the Present Disclosure
Any suitable computing system can be used for performing the operations described herein. For example,
The memory 914 can include any suitable non-transitory computer-readable medium. The computer-readable medium can include any electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions or other program code. Non-limiting examples of a computer-readable medium include a magnetic disk, memory chip, ROM, RAM, an ASIC, a configured processor, optical storage, magnetic tape or other magnetic storage, or any other medium from which a computer processor can read instructions. The instructions may include processor-specific instructions generated by a compiler and/or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, and ActionScript.
The system 900 can also include a bus 916. The bus 916 can communicatively couple one or more components of the system 900. The system 900 can also include a number of external or internal devices such as input or output devices. For example, the system 900 is shown with an input/output (“I/O”) interface 919 that can receive input from one or more input devices 920 or provide output to one or more output devices 922. The one or more input devices 920 and one or more output devices 922 can be communicatively coupled to the I/O interface 919. The communicative coupling can be implemented via any suitable manner (e.g., a connection via a printed circuit board, connection via a cable, communication via wireless transmissions, etc.). Non-limiting examples of input devices 920 include a touch screen (e.g., one or more cameras for imaging a touch area or pressure sensors for detecting pressure changes caused by a touch), a mouse, a keyboard, or any other device that can be used to generate input events in response to physical actions by a user of a computing device. Non-limiting examples of output devices 922 include an LCD screen, an external monitor, a speaker, or any other device that can be used to display or otherwise present outputs generated by a computing device.
The system 900 can execute program code that configures the processor 912 to perform one or more of the operations described above with respect to
The system 900 can also include at least one network interface device 924. The network interface device 924 can include any device or group of devices suitable for establishing a wired or wireless data connection to one or more data networks 929. Non-limiting examples of the network interface device 924 include an Ethernet network adapter, a modem, and/or the like. The system 900 can transmit messages as electronic or optical signals via the network interface device 924.
The system 900 can also include image capturing device(s) 930, such as a camera or other imaging device that is capable of capturing a photographic image. The image capturing device(s) 930 can be configured to capture still images and/or video. The image capturing device(s) 930 may utilize a charge coupled device (“CCD”) or a complementary metal oxide semiconductor (“CMOS”) image sensor to capture images. Settings for the image capturing device(s) 930 may be implemented as hardware or software buttons. In some examples, the system 900 can include a regular color camera configured for capturing RGB color images and an NIR camera configured for capturing NIR images. The regular color camera and the NIR camera can be configured so that the fields of view of the two cameras are substantially the same. In addition, the two cameras may have a matching resolution and capture images synchronously.
Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.
Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs. Suitable computing devices include multi-purpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied—for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude the inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.
While this disclosure contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
A recitation of “a”, “an” or “the” is intended to mean “one or more” unless specifically indicated to the contrary. Ranges may be expressed herein as from “about” one specified value, and/or to “about” another specified value. The term “about” is used herein to mean approximately, in the region of, roughly, or around. When the term “about” is used in conjunction with a numerical range, it modifies that range by extending the boundaries above and below the numerical values set forth. In general, the term “about” is used herein to modify a numerical value above and below the stated value by a variance of 10%. When such a range is expressed, another embodiment includes from the one specified value and/or to the other specified value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the specified value forms another embodiment. It will be further understood that the endpoints of each of the ranges are included within the range.
All patents, patent applications, publications, and descriptions mentioned here are incorporated by reference in their entirety for all purposes. None is admitted to be prior art.
This application claims the benefit of priority from U.S. Provisional Patent Application No. 63/131,301, titled “DUAL DISTANCED SENSING METHOD FOR PASSIVE RANGE FINDING”, filed Dec. 28, 2020, which is hereby incorporated by reference in its entirety.