The present disclosure relates to a method and apparatus for controlling a distance measurement apparatus.
It is important for a self-propelled system such as a self-guided vehicle and a self-propelled robot to avoid a collision with another vehicle, a person, or other objects. For that purpose, a system that carries out sensing of an external environment with a camera or a distance measurement apparatus has been used.
As for distance measurement, there have been proposed a variety of devices each of which measures the distance to one or more objects present in a space. For example, Japanese Unexamined Patent Application Publication No. 2018-124271, Japanese Unexamined Patent Application Publication No. 2009-217680, and Japanese Unexamined Patent Application Publication No. 2018-049014 disclose systems each of which measures the distance to an object with a TOF (time-of-flight) technology.
Japanese Unexamined Patent Application Publication No. 2018-124271 discloses a system that measures the distance to an object by detecting reflected light from the object. While changing the direction of a light beam in each of a plurality of frame periods, this system causes one or more light receiving elements of an image sensor to sequentially detect the reflected light. Such an operation successfully shortens the time required to acquire distance information on the entire target scene.
Japanese Unexamined Patent Application Publication No. 2009-217680 discloses a method for detecting a traverse object that moves in a direction different from the direction of movement of an own vehicle. It is disclosed, for example, that an improvement in signal-to-noise ratio is achieved by increasing the intensity of an optical pulse emitted from a light source or the number of emissions thereof.
In order to obtain detailed distance information on a distant physical object, Japanese Unexamined Patent Application Publication No. 2018-049014 discloses providing, separately from a first distance measurement apparatus, a second distance measurement apparatus that emits a light beam to a distant physical object.
One non-limiting and exemplary embodiment provides a technology for more efficiently acquiring distance information on one or more physical objects that are present in a scene.
In one general aspect, the techniques disclosed here feature a method for controlling a distance measurement apparatus including a light emitting device capable of changing a direction of emission of a light beam and a light receiving device that detects a reflected light beam produced by the emission of the light beam. The method includes acquiring data representing a plurality of images acquired at different points in time by an image sensor that acquires an image of a scene to be subjected to distance measurement, determining, on the basis of the data representing the plurality of images, a degree of priority of distance measurement of one or more physical objects included in the plurality of images, and executing distance measurement of the one or more physical objects by causing the light emitting device to emit the light beam in a direction corresponding to the degree of priority and in an order corresponding to the degree of priority and causing the light receiving device to detect the reflected light beam.
It should be noted that general or specific aspects of the present disclosure may be implemented as a system, an apparatus, a method, an integrated circuit, a computer program, a storage medium such as a computer-readable storage disk, or any selective combination thereof. The computer-readable storage medium may include a nonvolatile storage medium such as a CD-ROM (compact disc-read-only memory). The apparatus may be constituted by one or more apparatuses. In a case where the apparatus is constituted by two or more apparatuses, the two or more apparatuses may be placed within one piece of equipment, or may be placed separately in each of two or more separate pieces of equipment. The term “apparatus” as used herein or in the claims may not only mean one apparatus but also mean a system composed of a plurality of apparatuses.
An aspect of the present disclosure makes it possible to more efficiently acquire distance information on one or more physical objects that are present in a scene.
Additional benefits and advantages of an aspect of the present disclosure will become apparent from the specification and drawings. The benefits and/or advantages may be individually obtained by the various aspects and features of the specification and drawings, which need not all be provided in order to obtain one or more of such benefits and/or advantages.
In the present disclosure, all or some of the circuits, units, apparatuses, members, or sections or all or some of the functional blocks in the block diagrams may be implemented as one or more of electronic circuits including, but not limited to, a semiconductor device, a semiconductor integrated circuit (IC), or an LSI (large scale integration). The LSI or IC can be integrated into one chip, or also can be a combination of multiple chips. For example, functional blocks other than a memory may be integrated into one chip. The name used here is LSI or IC, but it may also be called system LSI, VLSI (very large scale integration), or ULSI (ultra large scale integration) depending on the degree of integration. A Field Programmable Gate Array (FPGA) that can be programmed after manufacturing an LSI or a reconfigurable logic device that allows reconfiguration of the connection or setup of circuit cells inside the LSI can be used for the same purpose.
Further, it is also possible that all or some of the functions or operations of the circuits, units, apparatuses, members, or sections are implemented by executing software. In such a case, the software is stored on one or more non-transitory storage media such as a ROM, an optical disk, or a hard disk drive, and when the software is executed by a processor, the software causes the processor together with peripheral devices to execute the functions specified in the software. A system or device may include such one or more non-transitory storage media on which the software is stored and a processor together with necessary hardware devices such as an interface.
In order to measure distances to a plurality of objects scattered about over a wide range in a scene, a conventional distance measurement apparatus uses a method for illuminating the scene thoroughly with a light beam, for example, by raster scanning. With such a method, even an area where no object is present is illuminated with the light beam, and the light beam is emitted in a predetermined order. Therefore, even in the presence of a dangerous or important object in the scene, it is impossible to preferentially illuminate the object with the light beam. In order to emit the light beam preferentially in a particular direction regardless of the scanning order, it is necessary to, as disclosed, for example, in Japanese Unexamined Patent Application Publication No. 2018-049014, add a distance measurement apparatus that performs distance measurement preferentially in a certain direction.
Embodiments of the present disclosure provide technologies that make it possible to efficiently acquire distance information on an object without adding a distance measurement apparatus. The following gives a brief overview of the embodiments of the present disclosure.
A control method according to an exemplary embodiment of the present disclosure is a method for controlling a distance measurement apparatus including a light emitting device capable of changing a direction of emission of a light beam and a light receiving device that detects a reflected light beam produced by the emission of the light beam. The method includes acquiring data representing a plurality of images acquired at different points in time by an image sensor that acquires an image of a scene to be subjected to distance measurement, determining, on the basis of the data representing the plurality of images, a degree of priority of distance measurement of one or more physical objects included in the plurality of images, and executing distance measurement of the one or more physical objects by causing the light emitting device to emit the light beam in a direction corresponding to the degree of priority and in an order corresponding to the degree of priority and causing the light receiving device to detect the reflected light beam.
According to the foregoing method, a degree of priority of distance measurement of one or more physical objects included in the plurality of images is determined on the basis of the data representing the plurality of images, and distance measurement of the one or more physical objects is executed by causing the light emitting device to emit the light beam in a direction corresponding to the degree of priority and in an order corresponding to the degree of priority and causing the light receiving device to detect the reflected light beam. Such control makes it possible to efficiently execute distance measurement of a particular physical object having a high degree of priority.
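For illustration only, one possible shape of this control flow is sketched below in Python. All device interfaces and helper names here are hypothetical placeholders that do not appear in the disclosure; the sketch merely shows images being acquired, degrees of priority being determined, and the beam being steered toward physical objects in descending order of priority.

    # Hypothetical sketch of one priority-driven measurement cycle.
    def run_cycle(capture, detect, prioritize, steer, emit, read_tof):
        frames = [capture(), capture()]        # images at different points in time
        objects = detect(frames)               # physical objects in the images
        ranked = sorted(objects, key=prioritize, reverse=True)
        distances = {}
        for obj in ranked:                     # highest degree of priority first
            steer(obj["direction"])            # set the direction of beam emission
            emit()                             # emit the light beam
            distances[obj["id"]] = read_tof()  # detect the reflected light beam
        return distances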
The distance measurement apparatus may be mounted on board a movable body. The method may include acquiring, from the movable body, data representing a movement of the movable body. The degree of priority may be determined on the basis of the data representing the plurality of images and the data representing the movement of the movable body.
The foregoing method makes it possible to determine the degree of priority of the physical object according to a state of movement of the movable body. The movable body may be a vehicle such as an automobile or a two-wheeler. The data representing the movement of the movable body may contain, for example, information such as the velocity, rate of acceleration, or rate of angular acceleration of the movable body. The degree of priority of the physical object can be more appropriately determined by using not only the data representing the plurality of images but also the data representing the movement of the movable body. For example, on the basis of the velocity or rate of acceleration of the own vehicle and a motion vector of a physical object computed from the plurality of images, the degree of risk of the physical object can be estimated. Flexible control such as setting a high degree of priority for a physical object having a high degree of risk is possible.
Determining the degree of priority may include generating a motion vector of the one or more physical objects on the basis of the plurality of images, generating, on the basis of the data representing the movement of the movable body, a motion vector of a stationary object that is generated due to the movement of the movable body, and determining the degree of priority on the basis of a relative velocity vector that is a difference between the motion vector of the physical object and the motion vector of the stationary object.
According to the foregoing method, for example, as the relative velocity vector becomes greater, the degree of risk of the physical object becomes higher, so that the degree of priority can be made higher. As a result, a dangerous physical object can be intensively and efficiently subjected to distance measurement.
The method may further include, after having executed the distance measurement, outputting, to the movable body, data containing information identifying the physical object and information indicating a distance to the physical object. This allows the movable body to perform an action of, for example, avoiding the physical object.
The degree of priority may be determined on the basis of a magnitude of a time change in the relative velocity vector. The time change in the relative velocity vector represents the rate of acceleration of the physical object. A physical object having a higher rate of acceleration can be determined to be more dangerous and have higher priority. The degree of priority may be determined on the basis of a magnitude of the relative velocity vector.
Acquiring the data representing the plurality of images may include acquiring data representing first, second and third images consecutively acquired by the image sensor. Determining the degree of priority may include generating a first motion vector of the physical object on the basis of the first image and the second image, generating a second motion vector of the physical object on the basis of the second image and the third image, generating, on the basis of the data representing the movement of the movable body, a motion vector of a stationary object that is generated due to the movement of the movable body, generating a first relative velocity vector that is a difference between the first motion vector and the motion vector of the stationary object, generating a second relative velocity vector that is a difference between the second motion vector and the motion vector of the stationary object, and determining the degree of priority on the basis of a difference between the first relative velocity vector and the second relative velocity vector. Such an action makes it possible to determine the degree of priority as appropriate according to a time change in motion vector.
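As an illustrative sketch of this three-image computation (in Python; the helper names are hypothetical, and the final combination of relative speed and its time change into a single degree of priority is an assumption not specified above):

    import numpy as np

    def priority_from_three_frames(p0, p1, p2, ego_vector):
        # p0, p1, p2: pixel positions of the same feature point in the
        # first, second, and third images; ego_vector: apparent motion
        # vector of a stationary object caused by own-vehicle movement.
        m1 = np.asarray(p1, float) - np.asarray(p0, float)  # first motion vector
        m2 = np.asarray(p2, float) - np.asarray(p1, float)  # second motion vector
        r1 = m1 - ego_vector   # first relative velocity vector
        r2 = m2 - ego_vector   # second relative velocity vector
        accel = r2 - r1        # time change of the relative velocity vector
        # Assumed rule: priority grows with relative speed and its change.
        return float(np.linalg.norm(r2) + np.linalg.norm(accel))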
The method may further include repeating more than once a cycle including acquiring the data representing the images, determining the degree of priority of distance measurement of the physical object, and executing the distance measurement of the physical object. A plurality of the cycles may be repeated at regular short time intervals (e.g. approximately a few microseconds to a few seconds). By repeating determination of the degree of priority and distance measurement, distance measurement of a physical object having a high degree of risk or degree of importance can be appropriately executed even in a traffic environment that changes very rapidly with the passage of time.
For a physical object on which the distance measurement was executed in a cycle, the distance measurement may be continued in a next cycle without determining the degree of priority. In general, it is preferable that distance measurement of a physical object determined to have high priority be continued in the next and subsequent cycles. The foregoing method makes it possible to track the object by skipping determination of the degree of priority and continuing the distance measurement.
The method may further include determining a duration of illumination with the light beam according to the degree of priority. For example, a physical object having a higher degree of priority may be illuminated with the light beam for a longer time. In a case where an indirect TOF method is used as the distance measurement method, the measurable range of distances can be made larger as the duration of illumination with the light beam and the period of exposure of the light receiving device are made longer. For this reason, by lengthening the duration of illumination with the light beam of a physical object having a high degree of priority, the measurable range of distances to the physical object can be extended.
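As a rough numerical sketch of this relationship (assuming the two-exposure indirect TOF scheme detailed later, in which the delay Td can be resolved only up to the pulse duration T0; the function names and the priority-to-range mapping are illustrative assumptions):

    C = 3.0e8  # velocity of light (m/s)

    def pulse_duration_for_range(max_range_m):
        # With the two-exposure scheme, Td <= T0 must hold, so a desired
        # maximum distance of C * T0 / 2 requires T0 = 2 * max_range / C.
        return 2.0 * max_range_m / C

    def illumination_duration(priority, base_range_m=50.0, step_m=25.0):
        # Assumed mapping: each priority level extends the range by step_m.
        return pulse_duration_for_range(base_range_m + priority * step_m)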
The method may further include determining a number of occurrences of the emission of the light beam and detection of the reflected light beam according to the degree of priority. For example, the number of occurrences may be increased for a physical object having a higher degree of priority. Accuracy of distance measurement can be increased by increasing the number of occurrences. For example, errors in distance measurement can be reduced by averaging results of more than one occurrence of distance measurement.
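A trivial sketch of that averaging; with N independent measurements, random error shrinks roughly in proportion to 1/sqrt(N):

    import numpy as np

    def averaged_distance(measurements):
        # measurements: distances from repeated emission/detection cycles.
        return float(np.mean(measurements))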
The light receiving device may include the image sensor. Alternatively, the image sensor may be a device that is independent of the light receiving device.
The image sensor may be configured to acquire the images by using light emitted by the light emitting device. In that case, the light emitting device may be configured to emit, separately from the light beam, flash light that illuminates a wide range.
A control apparatus according to another embodiment of the present disclosure controls a distance measurement apparatus including a light emitting device capable of changing a direction of emission of a light beam and a light receiving device that detects a reflected light beam produced by the emission of the light beam. The control apparatus includes a processor and a storage medium having stored thereon a computer program that is executed by the processor. The computer program causes the processor to execute operations including acquiring data representing a plurality of images acquired at different points in time by an image sensor that acquires an image of a scene to be subjected to distance measurement, determining, on the basis of the data representing the plurality of images, a degree of priority of distance measurement of one or more physical objects included in the plurality of images, and executing distance measurement of the one or more physical objects by causing the light emitting device to emit the light beam in a direction corresponding to the degree of priority and in an order corresponding to the degree of priority and causing the light receiving device to detect the reflected light beam.
A system according to still another embodiment of the present disclosure includes the control apparatus, the light emitting device, and the light receiving device.
A computer program according to still another embodiment of the present disclosure is executed by a processor that controls a distance measurement apparatus including a light emitting device capable of changing a direction of emission of a light beam and a light receiving device that detects a reflected light beam produced by the emission of the light beam. The computer program causes the processor to execute operations including acquiring data representing a plurality of images acquired at different points in time by an image sensor that acquires an image of a scene to be subjected to distance measurement, determining, on the basis of the data representing the plurality of images, a degree of priority of distance measurement of one or more physical objects included in the plurality of images, and executing distance measurement of the one or more physical objects by causing the light emitting device to emit the light beam in a direction corresponding to the degree of priority and in an order corresponding to the degree of priority and causing the light receiving device to detect the reflected light beam.
The following describes an exemplary embodiment of the present disclosure. It should be noted that the embodiment to be described below illustrates general or specific examples. The numerical values, shapes, constituent elements, placement and topology of constituent elements, steps, orders of steps, or other features that are shown in the following embodiment are merely examples and are not intended to limit the present disclosure. Further, those of the constituent elements in the following embodiment which are not recited in an independent claim representing the most generic concept are described as optional constituent elements. Further, the drawings are schematic views and are not necessarily strict illustrations. Furthermore, in the drawings, substantially the same components are given the same reference signs, and a repeated description may be omitted or simplified.
A configuration and operation of a distance measurement system according to exemplary Embodiment 1 of the present disclosure are described.
The distance measurement system 10 includes an imaging apparatus 100, a distance measurement apparatus 200, and a processing apparatus 300. The imaging apparatus 100 acquires a two-dimensional image by imaging a scene. The distance measurement apparatus 200 emits light, detects reflected light produced by the light thus emitted being reflected by a physical object, and thereby measures the distance to the physical object. The processing apparatus 300 acquires image information acquired by the imaging apparatus 100, distance information acquired by the distance measurement apparatus 200, and movement information and movement plan information that are sent from the control apparatus 400 of the movable body. The processing apparatus 300 generates, on the basis of those pieces of information thus acquired, information regarding the surrounding environment and outputs, to the control apparatus 400, the information regarding the surrounding environment. The information regarding the surrounding environment is hereinafter referred to as “surrounding information”.
The imaging apparatus 100 includes an optical system 110 and an image sensor 120. The optical system 110 includes one or more lenses and forms an image on a photosensitive surface of the image sensor 120. The image sensor 120 is a sensor, such as a CMOS (complementary metal-oxide semiconductor) sensor or a CCD (charge-coupled device) sensor, that generates and outputs two-dimensional image data.
The imaging apparatus 100 acquires a luminance image of a scene in the same direction as the distance measurement apparatus 200. The luminance image may be a color image or a black-and-white image. The imaging apparatus 100 may image a scene by means of outside light or may image a scene by illuminating the scene with light from a light source. The light emitted from the light source may be diffused light, or the whole scene may be imaged by sequentially illuminating the scene with a light beam. The imaging apparatus 100 is not limited to a visible-light camera but may be an infrared camera.
The imaging apparatus 100 performs continuous imaging and generates moving image data in accordance with instructions from the processing apparatus 300.
The distance measurement apparatus 200 includes a light emitting device 210, a light receiving device 220, a control circuit 230, and a processing circuit 240. The light emitting device 210 can emit a light beam in any direction within a predetermined range. The light receiving device 220 receives a reflected light beam produced by the light beam emitted by the light emitting device 210 being reflected by a physical object in a scene. The light receiving device 220 includes an image sensor or one or more photodetectors that detect the reflected light beam. The control circuit 230 controls the timing and direction of emission of the light beam that is emitted from the light emitting device 210 and the timing of exposure of the light receiving device 220. The processing circuit 240 calculates, on the basis of a signal outputted from the light receiving device 220, a distance to an object illuminated with the light beam. The distance can be measured by measuring or calculating the time from emission to reception of the light beam. It should be noted that the control circuit 230 and the processing circuit 240 may be implemented by one integrated circuit.
The light emitting device 210 is a beam scanner capable of changing the direction of emission of the light beam under control of the control circuit 230. The light emitting device 210 can sequentially illuminate some areas within a distance measurement target scene with the light beam. The wavelength of the light beam that is emitted from the light emitting device 210 is not limited to particular wavelengths, but may for example be any wavelength that falls within a visible to infrared range.
A light source capable of changing the direction of emission of light by means of a structure different from that of a light emitting device having a movable mirror may be used. For example, as disclosed in Japanese Unexamined Patent Application Publication No. 2018-124271, a light emitting device including a reflective waveguide may be used. Alternatively, a light emitting device that changes the direction of light of the whole array by adjusting the phase of light outputted from each antenna of an antenna array may be used.
Light L0 emitted from a light source such as a laser element is inputted to the plurality of phase shifters 20 of the phase shifter array 20A via the optical divider 30. Light having passed through the plurality of phase shifters 20 of the phase shifter array 20A is inputted to each of the plurality of optical waveguide elements 80 of the optical waveguide array 80A with its phase shifted by certain amounts in the Y direction. Light inputted to each of the plurality of optical waveguide elements 80 of the optical waveguide array 80A is emitted as a light beam L2 from a light exit surface 80s parallel to an X-Y plane in a direction intersecting the light exit surface 80s.
Light inputted to the optical waveguide layer 15 propagates along the X direction through the optical waveguide layer 15 while being reflected by the first mirror 11 and the second mirror 12.
Applying the driving voltage to the electrodes 13 and 14 causes the refractive index of the optical waveguide layer 15 to change, so that the direction of light that is emitted outward from the optical waveguide element 80 changes. According to changes in the driving voltage, the direction of the light beam L2, which is emitted from the optical waveguide array 80A, changes. Specifically, the direction of emission of the light beam L2 changes along the X direction.
Applying the driving voltage to the pair of electrodes 23 and 24 causes the total reflection waveguide 21 to be heated by the heater 22. This results in a change in the refractive index of the total reflection waveguide 21, so that there is a shift in the phase of light that is emitted from an end of the total reflection waveguide 21. Changing the phase difference between light beams outputted from two adjacent phase shifters 20 of the plurality of phase shifters 20 changes the direction of emission of the light beam L2 along the Y direction.
The foregoing configuration allows the light emitting device 210 to two-dimensionally change the direction of emission of the light beam L2. Details such as the principle of operation and method of operation of such a light emitting device 210 are disclosed in Japanese Unexamined Patent Application Publication No. 2018-124271, the entire contents of which are hereby incorporated by reference.
Next, an example configuration of the image sensor of the light receiving device 220 is described. The image sensor includes a plurality of light receiving elements two-dimensionally arrayed along a photosensitive surface. The image sensor may be provided with an optical component facing the photosensitive surface of the image sensor. The optical component may include, for example, at least one lens. The optical component may include another optical element such as a prism or a mirror. The optical component may be designed so that light having diffused from one point on an object in a scene converges at one point on the photosensitive surface of the image sensor.
The image sensor may for example be a CCD (charge-coupled device) sensor, a CMOS (complementary metal-oxide semiconductor) sensor, or an infrared array sensor. Each of the light receiving elements includes a photoelectric conversion element such as a photodiode and one or more charge accumulators. Electric charge produced by photoelectric conversion is accumulated in the charge accumulators during an exposure period. The electric charge accumulated in the charge accumulator is outputted after the end of the exposure period. In this way, each of the light receiving elements outputs an electric signal corresponding to the amount of light received during the exposure period. This electric signal may be referred to as “detection signal”. The image sensor may be a monochrome imaging element, or may be a color imaging element. For example, a color imaging element having an R/G/B, R/G/B/IR or R/G/B/W filter may be used. The image sensor may have detection sensitivity not only to a visible wavelength range but also to a range of wavelengths such as ultraviolet, near-infrared, mid-infrared, or far-infrared wavelengths. The image sensor may be a sensor including a SPAD (single-photon avalanche diode). The image sensor may include an electronic shutter of a mode by which all pixels are exposed en bloc, i.e. a global shutter mechanism. The electronic shutter may be of a rolling-shutter mode by which an exposure is made for each row or of an area shutter mode by which only some areas adjusted to a range of illumination with a light beam are exposed.
With reference to the timing of emission of light from the light emitting device 210, the image sensor receives reflected light in each of a plurality of exposure periods differing in timing of start and end from one another and outputs, for each exposure period, a signal indicating the amount of light received.
The control circuit 230 determines the direction and timing of emission of light by the light emitting device 210 and outputs a control signal to the light emitting device 210 to instruct the light emitting device 210 to emit light. Furthermore, the control circuit 230 determines the timing of exposure of the light receiving device 220 and outputs a control signal to the light receiving device 220 to instruct the light receiving device 220 to make an exposure and output a signal.
The processing circuit 240 acquires signals, outputted from the light receiving device 220, that indicate electric charge accumulated during a plurality of different exposure periods and, on the basis of those signals, calculates a distance to a physical object. The processing circuit 240 calculates, on the basis of ratios of electric charge accumulated separately in each of the plurality of exposure periods, the time from emission of the light beam from the light emitting device 210 to reception of the reflected light beam by the light receiving device 220 and calculates a distance from the time thus calculated. Such a distance measurement method is referred to as “indirect TOF method”.
Let it be assumed here that Cfd1 is the integral capacitance of electric charge that is accumulated in the light receiving element during the first exposure period, Cfd2 is the integral capacitance of electric charge that is accumulated in the light receiving element during the second exposure period, Iph is a photoelectric current, N is a charge transfer clock number, Q1 and Q2 are the amounts of electric charge accumulated during the first and second exposure periods, respectively, T0 is the duration of the emitted light pulse, and Td is the time from emission of the light pulse to reception of the reflected light. The output voltage of the light receiving element in the first exposure period is expressed by Vout1 as follows:
Vout1=Q1/Cfd1=N×Iph×(T0−Td)/Cfd1
The output voltage of the light receiving element in the second exposure period is expressed by Vout2 as follows:
Vout2=Q2/Cfd2=N×Iph×Td/Cfd2
From these two equations, Td is expressed by the following formula:
Td={Vout2/(Vout1+Vout2)}×T0
Assuming that C is the velocity of light (≈3×10^8 m/s), the distance L between the device and the object is expressed by the following formula:
L=(1/2)×C×Td=(1/2)×C×{Vout2/(Vout1+Vout2)}×T0
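As a numerical check of these formulas, the following Python sketch applies them with arbitrarily chosen values:

    C = 3.0e8  # velocity of light (m/s)

    def indirect_tof_distance(vout1, vout2, t0):
        td = vout2 / (vout1 + vout2) * t0  # delay of the reflected pulse
        return 0.5 * C * td                # L = (1/2) x C x Td

    # With T0 = 100 ns, Vout1 = 3.0, and Vout2 = 1.0:
    # Td = (1.0 / 4.0) x 100 ns = 25 ns, hence L = 3.75 m.
    print(indirect_tof_distance(3.0, 1.0, 100e-9))  # prints 3.75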
The image sensor, which in actuality outputs electric charge accumulated during an exposure period, may be unable, in terms of time, to make two consecutive exposures. In such a case, for example, the emission of the light beam may be repeated, with the first exposure period applied to one emission and the second exposure period applied to the next.
It should be noted that in actual distance measurement, the image sensor may receive not only light emitted from the light source and reflected by an object but also background light, i.e. extraneous light such as sunlight or surrounding illumination. Accordingly, in general, an exposure period is provided so that accumulated charge generated by background light falling on the image sensor with no light beam being emitted can be measured in the exposure period. By subtracting, from the amount of electric charge that is measured when a reflection of a light beam is received, the amount of electric charge measured in the background exposure period, the amount of electric charge in a case where only the reflection of the light beam is received can be obtained. For simplicity, the present embodiment omits a description of an operation concerning background light.
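A minimal sketch of that subtraction; the corrected outputs can then be substituted for Vout1 and Vout2 in the distance formulas above:

    def background_corrected(vout, vout_background):
        # Charge attributable to the reflected light beam alone, clamped at
        # zero in case noise makes the background reading the larger one.
        return max(vout - vout_background, 0.0)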
Although, in this example, indirect TOF distance measurement is performed, direct TOF distance measurement may alternatively be performed. In a case where direct TOF distance measurement is performed, the light receiving device 220 includes a sensor including light receiving elements equipped with timer counters and two-dimensionally arranged along a photosensitive surface. The timer counters start measuring time at the start of an exposure and finish measuring time at a point in time where the light receiving elements have received reflected light. In this way, the timer counters measure time separately for each of the light receiving elements to directly measure the time of flight of the light. The processing circuit 240 calculates a distance from the time of flight thus measured of the light.
Although, in the present embodiment, the imaging apparatus 100 and the distance measurement apparatus 200 are separate apparatuses, the functions of the two apparatuses may be integrated into one apparatus. For example, it is possible to use the light receiving device 220 of the distance measurement apparatus 200 instead of the imaging apparatus 100 to acquire a luminance image. The light receiving device 220 may acquire a luminance image without light being emitted from the light emitting device 210, or may acquire a luminance image formed by light emitted from the light emitting device 210. In a case where the light emitting device 210 emits the light beam, a luminance image of the whole scene may be generated by storing luminance images of parts of the scene sequentially acquired through a plurality of emissions of the light beam and integrating those luminance images. Alternatively, a luminance image of the whole scene may be generated by making a continuous exposure while sequentially emitting the light beam. The light emitting device 210 may emit, separately from the light beam, light that diffuses over a wide range, whereby the light receiving device 220 may acquire a luminance image.
The processing apparatus 300 is a computer connected to the imaging apparatus 100, the distance measurement apparatus 200, and the control apparatus 400. The processing apparatus 300 includes a first storage device 320, a second storage device 330, a third storage device 350, an image processing module 310, a risk calculation module 340, an own-vehicle movement processing module 360, and a surrounding information generation module 370. The image processing module 310, the risk calculation module 340, the own-vehicle movement processing module 360, and the surrounding information generation module 370 may be implemented by one or more processors. By executing a computer program stored on a storage medium, a processor of the processing apparatus 300 may function as the image processing module 310, the risk calculation module 340, the own-vehicle movement processing module 360, and the surrounding information generation module 370.
The image processing module 310 processes an image outputted by the imaging apparatus 100. The first storage device 320 has stored therein data such as an image acquired by the imaging apparatus 100 and a processing result generated by the processing apparatus 300, with the image and the processing result being associated with each other. The processing result contains, for example, information such as the degree of risk of an object in a scene. The second storage device 330 has stored therein a predetermined conversion table or function that is used in a process that is executed by the risk calculation module 340. The risk calculation module 340 calculates the degree of risk of an object in a scene with reference to the conversion table or function stored in the second storage device 330. The risk calculation module 340 calculates the degree of risk of an object on the basis of the relative velocity vector and acceleration vector of the object. The own-vehicle movement processing module 360 generates, on the basis of an image processing result and a risk calculation result stored in the first storage device 320 and movement information and movement plan information acquired from the movable body and with reference to data stored in the third storage device 350, information regarding the movement and processing of the movable body. The surrounding information generation module 370 generates surrounding information on the basis of an image processing result stored in the first storage device 320, a risk calculation result, and information regarding the movement and processing of the movable body.
The image processing module 310 includes a preprocessing module 311, a relative velocity vector module 312, and a recognition processing module 313. The preprocessing module 311 performs an initial signal process on image data generated by the imaging apparatus 100. The relative velocity vector module 312 calculates the motion vector of a physical object in a scene on the basis of an image acquired by the imaging apparatus 100. The relative velocity vector module 312 further generates the relative velocity vector of the physical object from the motion vector thus calculated and an apparent motion vector based on own-vehicle movement. The recognition processing module 313 recognizes one or more physical objects from an image processed by the preprocessing module 311.
The following describes a configuration of the processing apparatus 300 in more detail.
The preprocessing module 311 performs signal processes such as noise reduction, edge extraction, and signal enhancement on a series of image data generated by the imaging apparatus 100. These signal processes are referred to as “preprocessing”.
The relative velocity vector module 312 calculates the respective motion vectors of one or more physical objects in a scene on the basis of a series of image data subjected to preprocessing. The relative velocity vector module 312 calculates a motion vector for each physical object in a scene on the basis of a plurality of images acquired at different points in time within a certain period of time, i.e. a plurality of frames of a moving image at different timings. The relative velocity vector module 312 acquires the movement vector based on the movement of the movable body that was generated by the own-vehicle movement processing module 360. The movement vector based on the movable body is the apparent movement vector of a stationary object that is generated due to the movement of the movable body. The relative velocity vector module 312 generates a relative velocity vector from the difference between the motion vector calculated for each physical object in a scene and the apparent movement vector based on the movement of the own vehicle. The relative velocity vector may be generated, for example, for each feature point such as a point of inflection on an edge of each physical object.
The recognition processing module 313 recognizes one or more physical objects from each frame of image processed by the preprocessing module 311. This recognition processing may include a process of extracting a movable object such as a vehicle, a person, or a bicycle or a stationary object in a scene, for example, from an image and outputting an area of the image as a rectangular area. As a method of recognition, any method such as machine learning or pattern matching may be used. An algorithm for the recognition processing is not limited to a particular one, but any algorithm may be used. For example, in a case where learning and recognition of a physical object by machine learning are performed, a previously-trained learned model is stored on a storage medium. Applying the learned model to each frame of image data inputted makes it possible to extract a physical object such as a vehicle, a person, or a bicycle.
The storage device 320 has stored therein a variety of data generated by the imaging apparatus 100, the distance measurement apparatus 200, and the processing apparatus 300. For example, the storage device 320 has stored therein the following data:
Image data generated by the imaging apparatus 100.
Preprocessed image data, data on relative velocity vectors, and data representing results of recognition of physical objects, generated by the image processing module 310.
Data representing a degree of risk for each physical object, calculated by the risk calculation module 340.
Distance data for each physical object generated by the distance measurement apparatus 200.
The storage device 330 has stored therein a predetermined correspondence table or function for risk calculation and parameters thereof.
The risk calculation module 340 estimates, according to a relative velocity vector for each edge feature point calculated by the relative velocity vector module 312, the predicted relative position of a physical object including an edge feature point. The predicted relative position is a position where the physical object will be present after a predetermined period of time. The predetermined period of time may for example be set to be equal in length of time to an inter-frame spacing. The risk calculation module 340 determines, on the basis of the correspondence table of predicted relative position and degree of risk stored in the storage device 330 and the magnitude of the relative velocity vector, a degree of risk corresponding to the predicted relative position thus calculated. Meanwhile, the risk calculation module 340 calculates the acceleration vector of own-vehicle movement on the basis of a plan of movement of the own vehicle generated by the own-vehicle movement processing module 360. In a case where the absolute value of the acceleration vector is greater than a predetermined magnitude, the risk calculation module 340 calculates a degree of risk entailed in the turning and acceleration/deceleration of the own vehicle. The risk calculation module 340 obtains an orthogonal component and a straight-forward component of the acceleration vector. In a case where the absolute value of the orthogonal component is greater than a predetermined threshold, the risk calculation module 340 refers to the correspondence table stored in the storage device 330 and determines a degree of risk corresponding to the orthogonal component.
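By way of illustration, the lookup of a degree of risk from a predicted relative position could be sketched as follows; the table format (modeled here as distance thresholds) and all helper names are assumptions, not part of the embodiment.

    import numpy as np

    def degree_of_risk(position, relative_velocity, dt, own_position, risk_table):
        # Predicted relative position after dt, assuming the relative
        # velocity stays constant over one inter-frame spacing.
        predicted = (np.asarray(position, float)
                     + np.asarray(relative_velocity, float) * dt)
        distance = np.linalg.norm(predicted - np.asarray(own_position, float))
        # risk_table: stand-in for the correspondence table in the storage
        # device 330, here a list of (distance threshold, risk) pairs
        # sorted by ascending threshold.
        for threshold, risk in risk_table:
            if distance <= threshold:
                return risk
        return 0.0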
The storage device 350 has stored therein a correspondence table showing a relationship between position of physical object in image and magnitude of apparent motion vector.
The own-vehicle movement processing module 360 acquires, from the control apparatus 400 of the movable body mounted with the distance measurement system 10, movement information on the movement of the movable body made between a preceding frame f0 and a current frame f1 and movement plan information. The movement information contains information on the velocity or rate of acceleration of the movable body. The movement plan information contains information indicating a future movement of the movable body, e.g. information such as forward movement, a right turn, a left turn, acceleration, or deceleration. The own-vehicle movement processing module 360 generates, with reference to the data stored in the storage device 350 and from the movement information thus acquired, an apparent motion vector that is generated by the movement of the movable body. Further, the own-vehicle movement processing module 360 generates, from the movement plan information thus acquired, the acceleration vector of the own vehicle in a next frame f2. The own-vehicle movement processing module 360 outputs, to the risk calculation module 340, the apparent motion vector thus generated and the acceleration vector thus generated of the own vehicle.
The control apparatus 400 acquires movement information and movement plan information from a self-guided vehicle system, a navigation system, or other various on-board sensors mounted on board the own vehicle. The other on-board sensors may include a steering sensor, a velocity sensor, an acceleration sensor, a GPS receiver, and a driver monitoring sensor. The movement plan information is, for example, information that indicates a next movement of the own vehicle that is determined by the self-guided vehicle system. Another example of the movement plan information is information that indicates a next movement of the own vehicle predicted on the basis of a scheduled traveling route acquired from the navigation system and information from the other on-board sensors.
Next, an operation of the distance measurement system 10 is described in more detail.
The processing apparatus 300 determines whether an end signal has been inputted from input means, e.g. the control apparatus 400. In a case where the end signal has been inputted, the operation ends. In a case where no end signal has been inputted, the operation proceeds to step S1200.
The processing apparatus 300 instructs the imaging apparatus 100 to take a two-dimensional image of a scene. The imaging apparatus 100 generates two-dimensional image data and outputs it to the storage device 320 of the processing apparatus 300.
The preprocessing module 311 of the processing apparatus 300 performs preprocessing of the two-dimensional image acquired by the imaging apparatus 100 and stored in the storage device 320 in step S1200. The preprocessing includes, for example, a filter noise reduction process, an edge extraction process, and an edge enhancement process. The preprocessing may be a process other than these processes. The preprocessing module 311 stores a result of the preprocessing in the storage device 320.
The relative velocity vector module 312 of the processing apparatus 300 generates a relative velocity vector using a most recent frame f1 of two-dimensional image processed in step S1300 and an immediately preceding frame f0 of two-dimensional image processed in step S1300. The relative velocity vector module 312 performs matching between a feature point set in the most recent frame f1 of image and stored in the storage device 320 and a feature point set in the immediately preceding frame f0 of image and stored in the storage device 320. For the feature points thus matched, a vector connecting the position of the feature point in the frame f0 with the position of the feature point in the frame f1 is extracted as a motion vector. The relative velocity vector module 312 calculates a relative velocity vector by subtracting, from the motion vector, a vector based on own-vehicle movement calculated by the own-vehicle movement processing module 360. The relative velocity vector thus calculated is associated with the feature point in the frame f1 used for the calculation of the relative velocity vector, and is stored in the storage device 320 in such a form as to describe the coordinates of the initial and terminal points of the vector. A method for calculating a relative velocity vector will be described in detail later.
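One possible realization of this step is sketched below using OpenCV feature tracking; the library choice is an assumption, not part of the embodiment, and for brevity a single apparent vector is subtracted here, whereas the embodiment generates one apparent vector per feature point initial position, as described later.

    import cv2
    import numpy as np

    def relative_velocity_vectors(gray_f0, gray_f1, ego_vector):
        # Track feature points from the immediately preceding frame f0 to
        # the most recent frame f1 (both grayscale images).
        pts0 = cv2.goodFeaturesToTrack(gray_f0, maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)
        if pts0 is None:
            return []
        pts1, status, _err = cv2.calcOpticalFlowPyrLK(gray_f0, gray_f1, pts0, None)
        vectors = []
        for p0, p1, ok in zip(pts0.reshape(-1, 2), pts1.reshape(-1, 2),
                              status.ravel()):
            if ok:                        # matching succeeded for this point
                motion = p1 - p0          # motion vector from f0 to f1
                vectors.append((p1, motion - ego_vector))
        return vectors  # (feature point in f1, relative velocity vector)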
The relative velocity vector module 312 conducts clustering of a plurality of relative velocity vectors calculated in step S1400. This clustering is based on the directions and magnitudes of the vectors. For example, the relative velocity vector module 312 conducts the clustering on the basis of the differences between the initial and terminal points of the vectors in an x-axis direction and the differences between the initial and terminal points of the vectors in a y-axis direction. The relative velocity vector module 312 assigns a number to an extracted cluster and associates the cluster with the current frame f1.
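For illustration, the clustering by vector direction and magnitude could be sketched as follows; DBSCAN and its parameters are an arbitrary choice, as the embodiment does not prescribe a particular clustering algorithm.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_vectors(vectors):
        # vectors: (feature point, relative velocity vector) pairs; the
        # clustering key is the (dx, dy) components of each vector.
        components = np.array([v for _, v in vectors])
        labels = DBSCAN(eps=2.0, min_samples=3).fit_predict(components)
        clusters = {}
        for label, item in zip(labels, vectors):
            if label != -1:               # -1 marks unclustered outliers
                clusters.setdefault(int(label), []).append(item)
        return clusters                   # cluster number -> member vectors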
The risk calculation module 340 of the processing apparatus 300 calculates a predicted relative position in the next frame f2 on the basis of a relative velocity vector stored in the storage device 320. The risk calculation module 340 calculates a degree of risk using a relative velocity vector in the same cluster whose predicted relative position is nearest to the position of the own vehicle. According to the predicted relative position, the risk calculation module 340 calculates a degree of risk with reference to the storage device 330. Meanwhile, the risk calculation module 340 generates an acceleration vector on the basis of a plan of movement of the own vehicle inputted from the control apparatus 400 of the movable body and calculates a degree of risk according to the acceleration vector. The risk calculation module 340 calculates an overall degree of risk of the cluster by integrating the degree of risk calculated on the basis of the predicted relative position and the degree of risk calculated on the basis of the acceleration vector.
The control circuit 230 of the distance measurement apparatus 200 refers to the storage device 320 and determines the presence or absence of a distance measurement target according to a degree of risk for each cluster. For example, in a case where there is a cluster whose degree of risk is higher than a threshold, the presence of a distance measurement target is determined. In the absence of a distance measurement target, the operation returns to step S1100. In the presence of one or more distance measurement targets, the operation proceeds to step S1650. For clusters associated with the current frame f1, distance measurement of a cluster, i.e. a physical object, having a relative velocity vector with a high degree of risk is preferentially performed. For example, the processing apparatus 300 determines, as a distance measurement target, a range of positions in a next frame as predicted from the relative velocity vector of each cluster to be subjected to distance measurement. As distance measurement targets, for example, a given number of clusters may be determined in descending order of degree of risk. Alternatively, a plurality of clusters may be determined in descending order of degree of risk until the proportion of a total of ranges of predicted positions of clusters to a two-dimensional space serving as a range of imaging of the light receiving device 220 exceeds a certain value.
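A minimal sketch of this target selection (the threshold and the maximum number of targets are assumptions):

    def select_targets(cluster_risks, threshold=0.5, max_targets=5):
        # cluster_risks: mapping of cluster number -> overall degree of risk.
        candidates = [(risk, cid) for cid, risk in cluster_risks.items()
                      if risk > threshold]
        candidates.sort(reverse=True)      # descending order of risk
        return [cid for _, cid in candidates[:max_targets]]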
The control circuit 230 determines whether distance measurement has been completed for all clusters to be subjected to distance measurement. In a case where distance measurement has not been completed for all clusters to be subjected to distance measurement, the operation proceeds to step S1700. In a case where distance measurement has been completed for all clusters to be subjected to distance measurement, the operation proceeds to step S1800.
The control circuit 230 executes distance measurement for one of the clusters determined as distance measurement targets in step S1600 that is yet to be subjected to distance measurement. For example, of the clusters determined as distance measurement targets and yet to be subjected to distance measurement, a cluster, i.e. a physical object, having the highest degree of risk may be determined as a distance measurement target. The control circuit 230 sets the direction of emission of the light beam so that a range corresponding to the cluster is illuminated. For example, a direction toward a predicted relative position corresponding to a feature point in the cluster may be set as the direction of emission of the light beam. The control circuit 230 sets the timing of emission of the light beam from the light emitting device 210 and the timing of exposure of the light receiving device 220 and outputs the respective control signals to the light emitting device 210 and the light receiving device 220. Upon receiving the control signal, the light emitting device 210 emits the light beam in a direction indicated by the control signal. Upon receiving the control signal, the light receiving device 220 starts an exposure and detects reflected light from the physical object. Each light receiving element of the image sensor of the light receiving device 220 outputs, to the processing circuit 240, a signal indicating electric charge accumulated within each exposure period. The processing circuit 240 calculates a distance by the aforementioned method for a pixel, included in the range of illumination with the light beam, in which electric charge was accumulated during an exposure period.
The processing circuit 240 outputs the distance thus calculated to the storage device 320 in association with a cluster number.
The surrounding information generation module 370 of the processing apparatus 300 refers to the storage device 320 and integrates, for each cluster, a result of image recognition by the recognition processing module 313 and a distance stored for each cluster. A method for integrating data will be described in detail later.
The surrounding information generation module 370 converts the data integrated in step S1800 into output data and outputs the output data to the control apparatus 400 of the movable body. The output data will be described in detail later. This output data is referred to as “surrounding information”. After the data output, the operation returns to step S1100.
By repeating the operation from step S1100 to step S1900, the distance measurement system 10 repeatedly generates information on the surrounding environment that is used for the movable body to move.
The control apparatus 400 of the movable body executes control of the movable body on the basis of the surrounding information outputted by the distance measurement system 10. An example of the control of the movable body is automatically controlling mechanisms such as an engine, a motor, a steering system, a brake, and an accelerator of the movable body. The control of the movable body may be providing, to a driver who drives the movable body, information needed for driving or may be alerting the driver. The information may be provided to the driver by an output device, such as a head-up display or a speaker, mounted on board the movable body.
Next, the calculation of a relative velocity vector in step S1400 is described in detail.
The own-vehicle movement processing module 360 of the processing apparatus 300 acquires, from the control apparatus 400 of the movable body, information on the movement of the movable body during a period from the time of acquisition of the immediately preceding frame f0 to the time of acquisition of the current frame f1. The information on the movement may contain, for example, the travel speed of the vehicle and information on the direction and distance of movement during the period from the timing of the immediately preceding frame f0 to the timing of the current frame f1. Furthermore, the own-vehicle movement processing module 360 acquires, from the control apparatus 400, information indicating a plan of movement of the movable body during a period from the timing of the current frame f1 to the timing of the next frame f2, e.g. a control signal to an actuator. The control signal to the actuator may for example be a signal that gives an instruction to perform an action such as acceleration, deceleration, a right turn, or a left turn.
The relative velocity vector module 312 of the processing apparatus 300 refers to the storage device 320 and determines whether a matching process has been completed for all feature points in the immediately preceding frame f0 of image and all feature points in the current frame f1 of image. In a case where the matching process has been completed for all feature points, the operation proceeds to step S1450. In a case where the matching process has not been completed for all feature points, the operation proceeds to step S1403.
The relative velocity vector module 312 selects, from among the feature points extracted in the immediately preceding frame f0 of image and stored in the storage device 320 and the feature points extracted in the current frame f1 of image and stored in the storage device 320, a point yet to be subjected to the matching process. The selection is preferentially carried out for the feature points in the immediately preceding frame f0.
The relative velocity vector module 312 performs matching between the feature point selected in step S1403 and a feature point in a frame different from the image in which the feature point is included. The relative velocity vector module 312 determines whether in the period of time from the immediately preceding frame f0 to the current frame f1, a physical object having the feature point or the position on a physical object that corresponds to the feature point has gone out of sight of the imaging apparatus 100, i.e. the angle of view of the image sensor. In a case where the feature point selected in step S1403 is a feature point in the immediately preceding frame f0 of image and there is no corresponding feature point among the feature points in the current frame f1 of image, the determination is “yes” in step S1404. That is, in a case where there is no feature point in the current frame f1 of image that corresponds to a feature point in the immediately preceding frame f0 of image, it is determined that a position corresponding to the feature point has gone out of sight of the imaging apparatus 100 in the period of time from the immediately preceding frame f0 to the current frame f1. In that case, the operation returns to step S1402. On the other hand, in a case where the feature point selected in step S1403 is not a feature point in the immediately preceding frame f0 of image or in a case where the feature point selected is a feature point in the immediately preceding frame f0 of image and there is a corresponding feature point in the current frame f1 of image, the operation proceeds to step S1405.
The relative velocity vector module 312 performs matching between the feature point selected in step S1403 and a feature point in a frame different from the image in which the feature point is included. The relative velocity vector module 312 determines whether in the period of time from the immediately preceding frame f0 to the current frame f1, a physical object having the feature point or the position on a physical object that corresponds to the feature point has come into sight of the imaging apparatus 100 or has come to occupy a discernibly large area. In a case where the feature point selected in step S1403 is a feature point in the current frame f1 of image and there is no corresponding feature point in the immediately preceding frame f0 of image, the determination is “yes” in step S1405. That is, in a case where there is no feature point in the immediately preceding frame f0 of image that corresponds to a feature point in the current frame f1 of image, it is determined that the feature point is a feature point of a physical object having first appeared in sight of the imaging apparatus 100 in the current frame f1. In that case, the operation returns to step S1402. On the other hand, in the case of successful matching between a feature point in the current frame f1 of image and a feature point in the immediately preceding frame f0 of image, the operation proceeds to step S1406.
The relative velocity vector module 312 generates a motion vector for a feature point selected in step S1403 and identified as a specific feature point included in the same physical object in both the current and immediately preceding frames f1 and f0 of image. The motion vector is a vector connecting the position of the feature point in the immediately preceding frame f0 of image with the position of the corresponding feature point in the current frame f1 of image.
The matching process may be performed by a template matching method typified, for example, by the sum of squared differences (SSD) or the sum of absolute differences (SAD). In the present embodiment, a figure of an edge including a feature point serves as a template image, and the portion of an image that differs least from this template image is extracted. Matching may involve the use of a method other than these.
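By way of illustration, the following is a minimal sketch of SSD-based template matching over a single-channel image; the function name and the brute-force scan are assumptions made for illustration rather than a description of the implementation. Replacing the squared difference with an absolute difference yields the SAD variant.

```python
import numpy as np

def match_template_ssd(image, template):
    # Scan every window of the image and return the top-left (row, col)
    # of the window that minimizes the sum of squared differences (SSD)
    # to the template, together with that minimum SSD value.
    ih, iw = image.shape
    th, tw = template.shape
    tpl = template.astype(np.float64)
    best_pos, best_ssd = None, float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw].astype(np.float64)
            ssd = float(np.sum((window - tpl) ** 2))
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos, best_ssd
```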
The relative velocity vector module 312 generates a motion vector based on own-vehicle movement. The motion vector based on own-vehicle movement represents the relative, i.e. apparent, movement of a stationary object as seen from the own vehicle. The relative velocity vector module 312 generates a motion vector based on own-vehicle movement at the initial point of each motion vector generated in step S1406. The motion vector based on own-vehicle movement is generated on the basis of the information, acquired in step S1401, on the direction and distance of movement during the period from the timing of the immediately preceding frame f0 to the timing of the current frame f1 and of the correspondence relationship, stored in the storage device 350, between the coordinates of the vanishing point of a motion vector based on own-vehicle movement, the distance from the vanishing point, and the magnitude of the vector, as shown in
The relative velocity vector module 312 generates a relative velocity vector that is the difference between the motion vector of each feature point generated in step S1406 and an apparent motion vector based on own-vehicle movement generated in step S1407. The relative velocity vector module 312 stores, in the storage device 320, the coordinates of the initial and terminal points of the relative velocity vector thus generated. As shown in
By repeating the operation from step S1402 to step S1408, the processing apparatus 300 generates relative velocity vectors for all feature points in the frames.
The relative velocity vector module 312 of the processing apparatus 300 determines the velocity of the own vehicle from the distance of movement, acquired in step S1401, during the period from the timing of the immediately preceding frame f0 to the timing of the current frame f1 and the time interval between the frames.
The relative velocity vector module 312 refers to the storage device 350 and acquires the coordinates of a vanishing point in an image. The relative velocity vector module 312 regards the initial point of each motion vector generated in step S1406 as the initial point of an apparent motion vector based on own-vehicle movement. In a case where the movable body mounted with the distance measurement system 10 travels substantially in the direction of the vanishing point, the direction from the vanishing point toward the initial point of the motion vector is regarded as the direction of the apparent motion vector based on own-vehicle movement.
The relative velocity vector module 312 refers to the storage device 350 and sets the magnitude of the vector according to the distance from the vanishing point to the initial point of the motion vector. Then, the relative velocity vector module 312 adds a correction according to the velocity of the movable body calculated in step S1471 and determines the magnitude of the vector. Through the foregoing process, a motion vector based on own-vehicle movement is determined.
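The generation of a motion vector based on own-vehicle movement and the subsequent subtraction may be summarized by the sketch below. The lookup callable `magnitude_lut`, the linear velocity correction, and the two-dimensional image coordinates are assumptions made for illustration; in the embodiment, the correspondence relationship is read from the storage device 350.

```python
import numpy as np

def ego_motion_vector(initial_pt, vanishing_pt, magnitude_lut, velocity_correction):
    # Apparent motion vector, at `initial_pt`, of a stationary object as
    # seen from the own vehicle travelling toward the vanishing point.
    p = np.asarray(initial_pt, dtype=float)
    vp = np.asarray(vanishing_pt, dtype=float)
    radial = p - vp                      # direction away from the vanishing point
    dist = float(np.linalg.norm(radial))
    if dist == 0.0:
        return np.zeros(2)               # the vanishing point itself does not move
    # Base magnitude from the stored correspondence between distance from the
    # vanishing point and vector magnitude, corrected for the own-vehicle
    # velocity determined in step S1471 (linear correction assumed here).
    return (radial / dist) * magnitude_lut(dist) * velocity_correction

def relative_velocity_vector(motion_vec, ego_vec):
    # Relative velocity vector: observed motion minus apparent ego motion.
    return np.asarray(motion_vec, dtype=float) - np.asarray(ego_vec, dtype=float)
```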
Next, an operation that is performed by the risk calculation module 340 of the processing apparatus 300 is described in detail.
The risk calculation module 340 refers to the storage device 320 and determines whether the calculation of a degree of risk has been completed for all clusters generated and associated with the current frame f1 in step S1450. In a case where the calculation of a degree of risk has been completed for all clusters, the process proceeds to step S1600. In a case where the calculation of a degree of risk has not been completed for all clusters, the process proceeds to step S1502.
The risk calculation module 340 selects, from among the clusters associated with the current frame f1, a cluster for which the calculation of a degree of risk has not been completed. The risk calculation module 340 refers to the storage device 320 and selects, as the relative velocity vector of the cluster, from among the relative velocity vectors associated with the feature points included in the cluster thus selected, a vector having a terminal point whose coordinates are nearest to the own-vehicle position.
The risk calculation module 340 resolves the vector selected in step S1502 into the following two components. One of the two components is an own-vehicle direction vector component, i.e. a vector component pointing toward the position of the own vehicle or the imaging apparatus 100. For example, in an image of a scene generated by the imaging apparatus 100, this vector component is a component pointing toward the middle of the lower side of the image. The other of the two components is a vector component orthogonal to a direction toward the own vehicle. The terminal point of a vector twice as great in magnitude as the own-vehicle direction vector component is calculated as a relative position that the feature point in the frame f2 following the current frame f1 may assume with respect to the own vehicle. Furthermore, the risk calculation module 340 determines, with reference to the storage device 330, a degree of risk corresponding to the relative position, obtained from the relative velocity vector, that the feature point may assume with respect to the own vehicle.
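A minimal sketch of this resolution and extrapolation follows; anchoring the doubled component at the vector's initial point and the two-dimensional image coordinates are assumptions made for illustration.

```python
import numpy as np

def predict_relative_position(start, end, own_pos):
    # `start`/`end`: initial and terminal points of a relative velocity
    # vector; `own_pos`: image position representing the own vehicle,
    # e.g. the middle of the lower side of the image.
    start = np.asarray(start, dtype=float)
    end = np.asarray(end, dtype=float)
    own = np.asarray(own_pos, dtype=float)
    v = end - start                                   # relative velocity vector
    u = (own - start) / np.linalg.norm(own - start)   # unit vector toward the own vehicle
    v_own = np.dot(v, u) * u                          # own-vehicle-direction component
    v_orth = v - v_own                                # orthogonal component
    # Terminal point of a vector twice as great in magnitude as the
    # own-vehicle-direction component: predicted relative position at frame f2.
    return start + 2.0 * v_own, v_own, v_orth
```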
The risk calculation module 340 calculates, on the basis of the movement plan information acquired in step S1401, a degree of risk according to rate of acceleration. The risk calculation module 340 refers to the storage device 320 and generates an acceleration vector from the difference between a relative velocity vector from the immediately preceding frame f0 to the current frame f1 and a relative velocity vector from the current frame f1 to the next frame f2. The risk calculation module 340 determines a degree of risk according to acceleration vector with reference to the correspondence table of acceleration vector and degree of risk stored in the storage device 330.
The risk calculation module 340 integrates the degree of risk according to predicted position calculated in step S1503 and the degree of risk according to rate of acceleration calculated in step S1504. The risk calculation module 340 calculates an overall degree of risk by multiplying the degree of risk according to predicted position by the degree of risk according to rate of acceleration. After step S1505, the process returns to step S1501.
By repeating the operation from step S1501 to step S1505, an overall degree of risk is calculated for all clusters.
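In code form, steps S1504 and S1505 reduce to a vector difference and a product, as in the sketch below; the table lookup that maps the acceleration vector to a degree of risk is abstracted away as a value computed elsewhere.

```python
import numpy as np

def acceleration_vector(rel_vec_f0_f1, rel_vec_f1_f2):
    # Step S1504: acceleration as the frame-to-frame change of the
    # relative velocity vector.
    return np.asarray(rel_vec_f1_f2, dtype=float) - np.asarray(rel_vec_f0_f1, dtype=float)

def overall_risk(risk_by_predicted_position, risk_by_acceleration):
    # Step S1505: the two degrees of risk are integrated by multiplication.
    return risk_by_predicted_position * risk_by_acceleration
```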
Next, a more detailed example of a method for calculating a degree of risk according to rate of acceleration in step S1504 is described.
The risk calculation module 340 calculates the acceleration vector of the own vehicle on the basis of the movement plan information acquired in step S1401.
The risk calculation module 340 resolves the acceleration vector of the own vehicle obtained in step S1541 into a component acting in the direction of forward movement of the own vehicle and a component acting in an orthogonal direction. The component acting in the direction of forward movement is a component acting in a vertical direction in the drawings, and the component acting in an orthogonal direction is a component acting in a horizontal direction in the drawings. In each of the examples shown in
The risk calculation module 340 determines whether the absolute value of the orthogonal-direction component into which the acceleration vector was resolved in step S1542 exceeds a predetermined value Th1. In a case where the magnitude of the component acting in an orthogonal direction exceeds Th1, the process proceeds to step S1544. In a case where the magnitude of the component acting in an orthogonal direction does not exceed Th1, the process proceeds to step S1545.
The risk calculation module 340 refers to the storage device 320 and calculates, for the relative velocity vector in the frame f1, the magnitude of a component acting in the same direction as the orthogonal component of the acceleration vector extracted in step S1542. The risk calculation module 340 refers to the storage device 330 and determines a degree of risk from the orthogonal component of the acceleration vector.
The risk calculation module 340 determines whether the absolute value of the forward-movement component into which the acceleration vector was resolved in step S1542 falls below a predetermined value Th2. In a case where the magnitude of the component acting in the direction of forward movement is less than Th2, the process proceeds to step S1505. In a case where the magnitude of the component acting in the direction of forward movement is greater than or equal to Th2, the process proceeds to step S1546. A state where the magnitude of the component acting in the direction of forward movement is less than a certain value indicates that there is no rapid acceleration or deceleration. A state where the magnitude of the component acting in the direction of forward movement is greater than or equal to a certain value indicates that there is a certain degree of rapid acceleration or deceleration. In this example, a degree of risk according to rate of acceleration is not calculated in the absence of rapid acceleration or deceleration.
The risk calculation module 340 refers to the storage device 320 and calculates, for the relative velocity vector in the current frame f1, the magnitude of a component acting in a direction toward the own vehicle.
The risk calculation module 340 determines whether the forward-movement component into which the acceleration vector was resolved in step S1542 is less than or equal to a predetermined value −Th2. In a case where the component acting in the direction of forward movement is less than or equal to −Th2, the process proceeds to step S1548. In a case where the component acting in the direction of forward movement is greater than −Th2, the process proceeds to step S1549. Note here that Th2 is a positive value. Accordingly, a state where the component of the acceleration vector acting in the direction of forward movement is less than or equal to −Th2 indicates that there is a certain degree of rapid deceleration.
The risk calculation module 340 refers to the storage device 320 and, for a relative velocity vector associated with the frame f1, multiplies the magnitude, calculated in step S1546, of a component acting toward the own vehicle by a coefficient of deceleration. The coefficient of deceleration is a value less than 1, and may be set as a value that is in inverse proportion to the absolute value of the rate of acceleration of forward movement calculated in step S1542. The risk calculation module 340 refers to the storage device 330 and determines a degree of risk from the straight-forward component of the acceleration vector.
The risk calculation module 340 refers to the storage device 320 and, for a relative velocity vector associated with the frame f1, multiplies the magnitude, calculated in step S1546, of a component acting toward the own vehicle by a coefficient of acceleration. The coefficient of acceleration is a value greater than 1, and may be set as a value that is in proportion to the absolute value of the rate of acceleration of forward movement calculated in step S1542. The risk calculation module 340 refers to the storage device 330 and determines a degree of risk from the straight-forward component of the acceleration vector.
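The branch structure of steps S1543 through S1549 may be summarized as follows. The scalar arguments, the `risk_lut` lookup callable, and the coefficient values are illustrative assumptions; in the embodiment, the corresponding values are read from the storage devices 320 and 330.

```python
def acceleration_risk(a_fwd, a_orth, v_orth_mag, v_toward_own_mag,
                      risk_lut, th1, th2, decel_coeff, accel_coeff):
    # a_fwd, a_orth: forward-movement and orthogonal components of the
    #     own-vehicle acceleration vector (step S1542).
    # v_orth_mag: magnitude of the relative-velocity component acting in
    #     the same direction as a_orth (step S1544).
    # v_toward_own_mag: magnitude of the relative-velocity component
    #     acting toward the own vehicle (step S1546).
    # decel_coeff < 1 (e.g. inversely proportional to |a_fwd|);
    # accel_coeff > 1 (e.g. proportional to |a_fwd|).

    # S1543/S1544: a large orthogonal (lateral) acceleration, e.g. a turn.
    if abs(a_orth) > th1:
        return risk_lut(v_orth_mag)

    # S1545: no rapid acceleration or deceleration -> no risk calculated.
    if abs(a_fwd) < th2:
        return None

    # S1547: Th2 is positive, so a_fwd <= -th2 indicates rapid deceleration
    # (S1548); otherwise there is rapid acceleration (S1549).
    coeff = decel_coeff if a_fwd <= -th2 else accel_coeff
    return risk_lut(v_toward_own_mag * coeff)
```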
Next, a detailed example of an operation of step S1600 is described.
The control circuit 230 determines whether the number of clusters selected exceeds a predetermined value C1. In a case where the number of clusters selected as distance measurement targets exceeds C1, the operation proceeds to step S1650. In a case where the number of clusters selected as distance measurement targets is less than or equal to C1, the operation proceeds to step S1602.
The control circuit 230 refers to the storage device 320 and determines whether a determination of a distance measurement target has been completed for all relative velocity vectors of the frame. In a case where a determination of a distance measurement target has been completed for all relative velocity vectors of the frame, the operation proceeds to step S1606. In a case where a determination of a distance measurement target has not been completed for all relative velocity vectors of the frame, the operation proceeds to step S1603.
The control circuit 230 refers to the storage device 320 and extracts, from among the relative velocity vectors of the frame, vectors for which a determination of a distance measurement target has not been completed. In this example, a vector with the highest degree of risk is selected from among the vectors for which a determination of a distance measurement target has not been completed.
The control circuit 230 determines whether the degree of risk of the relative velocity vector selected in step S1603 falls below a predetermined standard Th4. In a case where the degree of risk of the vector falls below Th4, the operation proceeds to step S1650. In a case where the degree of risk of the vector is greater than or equal to Th4, the operation proceeds to step S1605.
The control circuit 230 determines, as a cluster to be subjected to distance measurement, the cluster including the vector selected in step S1603 and determines that a determination of a distance measurement target has been completed for all vectors included in that cluster. After step S1605, the operation proceeds to step S1601.
The control circuit 230 determines whether one or more clusters to be subjected to distance measurement have been extracted. In a case where no cluster to be subjected to distance measurement has been extracted, the operation returns to step S1100. In a case where one or more clusters to be subjected to distance measurement have been extracted, the operation proceeds to step S1650.
By repeating steps S1601 to S1606, the control circuit 230 selects all clusters to be subjected to distance measurement. Although, in the present embodiment, the control circuit 230 executes the operation of step S1600, the processing apparatus 300 may execute the operation of step S1600 in place of the control circuit 230.
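The selection loop of steps S1601 through S1606 can be sketched as follows; the record type and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RelVec:
    risk: float    # overall degree of risk associated with the vector
    cluster: int   # identifier of the cluster the vector belongs to

def select_measurement_clusters(vectors, c1, th4):
    selected = []                  # clusters determined as measurement targets
    undecided = list(vectors)      # vectors not yet determined
    while True:
        # S1601: stop once more than C1 clusters have been selected.
        if len(selected) > c1:
            return selected
        # S1602: stop when a determination has been completed for all vectors;
        # S1606: an empty result means the flow returns to step S1100.
        if not undecided:
            return selected
        # S1603: take the undecided vector with the highest degree of risk.
        best = max(undecided, key=lambda v: v.risk)
        # S1604: below the standard Th4, no further cluster qualifies.
        if best.risk < th4:
            return selected
        # S1605: determine the cluster as a target and mark all of its vectors.
        selected.append(best.cluster)
        undecided = [v for v in undecided if v.cluster != best.cluster]
```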
Next, a specific example of an operation of distance measurement in step S1700 is described.
The control circuit 230 selects, from among the clusters selected in step S1600, a cluster yet to be subjected to distance measurement.
The control circuit 230 refers to the storage device 320 and extracts a predetermined number of relative velocity vectors, e.g. not more than five relative velocity vectors, from among one or more relative velocity vectors corresponding to the cluster selected in step S1701. As a standard of extraction, for example, five relative velocity vectors that include a relative velocity vector with the highest degree of risk and whose terminal points are furthest away from one another may be selected.
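One way to realize this extraction standard is a farthest-point heuristic, sketched below under the assumption that each vector is given as a pair of a terminal-point coordinate and a degree of risk; the greedy strategy itself is an illustrative choice, not a limitation.

```python
import math

def extract_representative_vectors(vectors, n=5):
    # `vectors`: list of ((x, y) terminal point, risk) pairs.
    if not vectors:
        return []
    first = max(vectors, key=lambda v: v[1])   # the vector with the highest risk
    chosen = [first]
    remaining = [v for v in vectors if v is not first]

    def min_dist(v):
        # Distance from a candidate's terminal point to the nearest
        # terminal point already chosen.
        return min(math.hypot(v[0][0] - c[0][0], v[0][1] - c[0][1]) for c in chosen)

    while remaining and len(chosen) < n:
        # Greedily add the candidate farthest from the chosen set, so that
        # the terminal points end up spread as far apart as possible.
        best = max(remaining, key=min_dist)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```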
As shown in
The control circuit 230 outputs, to the light emitting device 210 and the light receiving device 220, control signals that control, for example, the direction of emission of the light beam determined in step S1703, the timing of emission, the timing of exposure of the light receiving device 220, and the timing of data readout. Upon receiving the control signal, the light emitting device 210 emits the light beam. Upon receiving the control signal, the light receiving device 220 performs exposure and data output. Upon receiving a signal indicating a result of detection by the light receiving device 220, the processing circuit 240 calculates a distance to the physical object by the aforementioned method.
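Underlying the distance calculation is the generic time-of-flight relation sketched below; the embodiment's exposure-based variant derives the round-trip time from the electric charge accumulated during the exposure periods rather than timing the pulse directly.

```python
C_LIGHT = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    # The light beam travels to the physical object and back, so the
    # distance is half the round-trip optical path length.
    return C_LIGHT * round_trip_time_s / 2.0
```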
Next, a specific example of a data integration process in step S1800 is described.
The surrounding information generation module 370 refers to the storage device 320 and extracts, from the data shown in
The surrounding information generation module 370 refers to the storage device 320 and extracts, from the data shown in
The surrounding information generation module 370 refers to the storage device 320 and extracts, from the data shown in
On the basis of information on the position and angle of view of the image sensor stored in advance in the storage device 350, the surrounding information generation module 370 converts, into data expressed in a coordinate system of the movable body mounted with the distance measurement system 10, coordinate data representing an area of the cluster extracted in step S1801 and the distance data determined in step S1803.
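A minimal sketch of such a conversion is given below for a pinhole-style camera; the parameter names, the yaw-only rotation, and the coordinate conventions are illustrative assumptions, the embodiment reading the position and angle of view of the image sensor from the storage device 350.

```python
import numpy as np

def image_to_body_frame(pixel_xy, distance_m, image_size, fov_deg,
                        cam_pos_body, cam_yaw_deg):
    w, h = image_size
    # Bearings of the pixel relative to the optical axis, from the angle of view.
    ax = (pixel_xy[0] / w - 0.5) * np.radians(fov_deg[0])   # horizontal
    ay = (0.5 - pixel_xy[1] / h) * np.radians(fov_deg[1])   # vertical (image y grows downward)
    # Unit ray in the camera frame (x right, y up, z forward), scaled by distance.
    d = np.array([np.tan(ax), np.tan(ay), 1.0])
    ray = distance_m * d / np.linalg.norm(d)
    # Rotate about the vertical axis by the camera yaw, then translate by the
    # camera position expressed in movable-body coordinates.
    yaw = np.radians(cam_yaw_deg)
    rot = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(yaw), 0.0, np.cos(yaw)]])
    return rot @ ray + np.asarray(cam_pos_body, dtype=float)
```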
As noted above, a distance measurement system 10 of the present embodiment includes an imaging apparatus 100, a distance measurement apparatus 200, and a processing apparatus 300. The distance measurement apparatus 200 includes a light emitting device 210 capable of changing a direction of emission of a light beam along a horizontal direction and a vertical direction, a light receiving device 220 including an image sensor, a control circuit 230, and a processing circuit 240. The processing apparatus 300 generates a motion vector of one or more physical objects in a scene from a plurality of two-dimensional luminance images acquired by the imaging apparatus 100 taking a series of consecutive shots. The processing apparatus 300 calculates a degree of risk of the physical object on the basis of the motion vector and own-vehicle movement information acquired from the movable body including the distance measurement system 10. The control circuit 230 selects, on the basis of the degree of risk calculated by the processing apparatus 300, a physical object to be subjected to distance measurement. By emitting the light beam in a direction toward the physical object thus selected, the distance measurement apparatus 200 measures a distance to the physical object. The processing apparatus 300 outputs, to the control apparatus 400 of the movable body, data containing information on a range of coordinates of the physical object and a distance to the physical object.
The foregoing configuration makes it possible to select, in a scene to be subjected to distance measurement by the distance measurement system 10, a physical object having a high degree of risk, such as a risk of collision, and measure the distance to the physical object. This makes it possible, with a few distance measurement actions, to acquire distance information that is effective in risk avoidance.
Although, in Embodiment 1, the distance measurement system 10 includes an imaging apparatus 100 that acquires a luminance image, a distance measurement apparatus 200 that performs distance measurement, and a processing apparatus 300 that calculates a degree of risk, the present disclosure is not limited to such a configuration. For example, the processing apparatus 300 may be a constituent element of a movable body including the distance measurement system 10. In that case, the distance measurement system 10 includes an imaging apparatus 100 and a distance measurement apparatus 200. The imaging apparatus 100 acquires an image and outputs it to the processing apparatus 300 of the movable body. The processing apparatus 300 calculates, on the basis of the image acquired from the imaging apparatus 100, a degree of risk of one or more physical objects in the image, identifies a physical object to be subjected to distance measurement, and outputs, to the distance measurement apparatus 200, information indicating a predicted position of the physical object. The control circuit 230 of the distance measurement apparatus 200 controls the light emitting device 210 and the light receiving device 220 on the basis of the information on the predicted position of the physical object acquired from the processing apparatus 300. The control circuit 230 outputs, to the light emitting device 210, a control signal that controls the direction and timing of emission of a light beam, and outputs, to the light receiving device 220, a control signal that controls the timing of exposure. The light emitting device 210 emits the light beam in a direction toward the physical object in accordance with the control signal. The light receiving device 220 makes exposures for each separate pixel in accordance with the control signal and outputs, to the processing circuit 240, a signal indicating electric charge accumulated during each exposure period. The processing circuit 240 generates distance information on the physical object by calculating distances for each separate pixel on the basis of the signal.
The functions of the processing apparatus 300 and the control circuit 230 and processing circuit 240 of the distance measurement apparatus 200 may be integrated into a processing apparatus (e.g. the aforementioned control apparatus 400) of the movable body. In that case, the distance measurement system 10 includes an imaging apparatus 100, a light emitting device 210, and a light receiving device 220. The imaging apparatus 100 acquires an image and outputs it to the processing apparatus of the movable body. The processing apparatus of the movable body calculates, on the basis of the image acquired from the imaging apparatus 100, a degree of risk of one or more physical objects in the image, identifies a physical object to be subjected to distance measurement, and controls the light emitting device 210 and the light receiving device 220 so that the physical object is subjected to distance measurement. The processing apparatus outputs, to the light emitting device 210, a control signal that controls the direction and timing of emission of a light beam, and outputs, to the light receiving device 220, a control signal that controls the timing of exposure. The light emitting device 210 emits the light beam in a direction toward the physical object in accordance with the control signal. The light receiving device 220 makes exposures for each separate pixel in accordance with the control signal and outputs, to the processing apparatus of the movable body, a signal indicating electric charge accumulated during each exposure period. The processing apparatus generates distance information on the physical object by calculating distances for each separate pixel on the basis of the signal.
In Embodiment 1, the operation from step S1100 to S1900 shown in
Case where the physical object has gone out of the angle of view of the imaging apparatus 100, or
Case where a measured distance to the physical object has exceeded a predetermined value.
The tracking may be refreshed every two or more predetermined frames. Alternatively, in a case where the orthogonal component of the rate of acceleration is greater than the threshold Th1 in step S1543 shown in
Embodiment 1 has been described with a focus on a case where the distance measurement system 10 is installed at the center front of the movable body. The following describes examples of the process of relative velocity vector calculation in step S1400 in a case where the distance measurement system 10 is installed at the right front of the movable body, a case where the distance measurement system 10 is installed on the right side of the movable body, and a case where the distance measurement system 10 is installed at the center rear of the movable body.
The following describes examples of the process for calculating a degree of risk according to rate of acceleration in step S1504 shown in
In each of these examples, the processing apparatus 300 calculates, on the basis of the movement plan information acquired in step S1401, a degree of risk according to rate of acceleration. The processing apparatus 300 refers to the storage device 320, obtains the difference between a vector representing the movement of the own vehicle during the period from the immediately preceding frame f0 to the current frame f1 and a vector representing the movement of the own vehicle during the period from the current frame f1 to the next frame f2, and generates an acceleration vector.
In the foregoing embodiment, the processing apparatus 300 obtains a relative velocity vector and a relative position with respect to a physical object on the basis of a plurality of images acquired at different times by the imaging apparatus 100. Furthermore, the processing apparatus 300 obtains the rate of acceleration of the movable body on the basis of a plan of movement of the movable body including the distance measurement system 10 and determines the degree of risk of a physical object on the basis of the rate of acceleration. The distance measurement apparatus measures distances to physical objects in priority order of decreasing degree of risk. In order to measure a distance for each physical object, the distance measurement apparatus 200 configures the settings so that the light emitting device 210 emits the light beam in a direction toward each physical object.
In the foregoing operation, the distance measurement apparatus 200 may determine the numbers of occurrences of emission of the light beam and exposure during the distance measurement operation according to how high the degree of risk is. Alternatively, the distance measurement apparatus 200 may determine the time length of emission of the light beam and the time length of exposure during the distance measurement operation according to how high the degree of risk is. Such an operation makes it possible to adjust the accuracy of distance measurement or the distance range on the basis of the degree of risk.
The control circuit 230 refers to the storage device 250 and determines, according to a degree of risk calculated by the processing apparatus 300, the time length of the light beam that the light emitting device 210 emits and the number of occurrences of emission. Furthermore, the control circuit 230 determines the time length of exposure of the light receiving device 220 and the number of occurrences of exposure according to the degree of risk. With this, the control circuit 230 controls the operation of distance measurement and adjusts the accuracy of distance measurement and the distance range.
In the aforementioned embodiment, as shown in
The control circuit 230 refers to the storage device 320 and extracts a degree of risk corresponding to the cluster selected in step S1701. The control circuit 230 refers to the storage device 250 and determines a distance range corresponding to the degree of risk, i.e. the time length for which to emit the light beam and the time length of a period of exposure of the light receiving device 220. For example, the settings are configured so that the higher the degree of risk is, the wider the distance range becomes, covering both shorter and longer distances. That is, the higher the degree of risk is, the longer the time length of emission of the light beam that is emitted from the light emitting device 210 and the time length of exposure of the light receiving device 220 become.
The control circuit 230 refers to the storage device 250 and determines, on the basis of the degree of risk extracted in step S1711, the distance measurement accuracy corresponding to the degree of risk, i.e. the number of occurrences of an operation of emission and exposure. For example, the settings are configured such that the distance measurement accuracy is increased as the degree of risk becomes higher. That is, the number of occurrences of an operation of emission and light reception is increased as the degree of risk becomes higher.
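The settings of steps S1711 and S1712 may be sketched as a mapping from the degree of risk to emission and exposure parameters; the linear scaling and the base values below are illustrative assumptions, the embodiment reading the correspondence from the storage device 250.

```python
def measurement_settings(risk, base_pulse_ns=10.0, base_repeats=4):
    # S1711: a higher degree of risk widens the distance range, i.e. a
    # longer emission and a matched, longer exposure.
    pulse_ns = base_pulse_ns * (1.0 + risk)
    exposure_ns = pulse_ns
    # S1712: a higher degree of risk raises the accuracy, i.e. more
    # repetitions of the combined emission/exposure operation.
    repeats = int(base_repeats * (1.0 + risk))
    return pulse_ns, exposure_ns, repeats
```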
The control circuit 230 outputs, to the light emitting device 210 and the light receiving device 220, control signals that control the direction of emission of the beam determined in step S1703, the timing and time length of emission determined in step S1711, the timing and time length of exposure of the light receiving device 220 determined in step S1711, and the number of occurrences of a combined operation of emission and exposure determined in step S1712 and performs distance measurement. The method of distance measurement is as mentioned above.
According to the present modification, a physical object having a higher degree of risk can be subjected to distance measurement over a wider range and with a higher degree of accuracy. For distance measurement over a wide range and with a high degree of accuracy, a longer measurement time is required. For distance measurement of a plurality of physical objects within a certain period of time, for example, the duration of distance measurement of a physical object having a high degree of risk may be made relatively long, and the duration of distance measurement of a physical object having a low degree of risk may be made relatively short. Such an operation makes it possible to appropriately adjust the duration of a distance measurement operation as a whole.
The technologies disclosed here are widely applicable to distance measurement apparatuses or systems. For example, the technologies disclosed here may be used as constituent elements of a lidar (light detection and ranging) system.
Foreign Application Priority Data:

Number | Date | Country | Kind
--- | --- | --- | ---
2020-067522 | Apr 2020 | JP | national

Related U.S. Application Data:

Relation | Number | Date | Country
--- | --- | --- | ---
Parent | PCT/JP2021/008435 | Mar 2021 | US
Child | 17931146 | | US