The present disclosure relates to a recognition technique for recognizing a moving object.
Recognition techniques for recognizing a moving object disposed in a scanning space scanned with irradiation light from a scanning device are widely known. A conceivable technique teaches a recognition technique that eliminates a situation in which a virtual image, caused by reflection of a laser beam serving as the irradiation light, is generated and erroneously recognized as a moving object.
According to an example, a target moving object movable in a scanning space scanned by irradiation light is recognized. Scanning data is acquired that includes a far side point group as a scanning point group at a position on a far side of a reflector in a scanning direction in which the reflector, disposed in a focus range on a high reflection side with respect to a reflection characteristic, is scanned. Recognition data is generated by excluding, from the far side point group for a recognition target, a part of the far side point group corresponding to a symmetrical point group in a symmetrical area with respect to the position on the far side of the reflector. The symmetrical point group corresponds to a scanning point group of a real image that causes a virtual image to appear at the position on the far side of the reflector.
The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description made with reference to the accompanying drawings. In the drawings:
In the recognition technique according to the conceivable technique, a pixel on a near side of a background range image, which is the scanning result obtained when no moving object is disposed, is extracted as the moving object to be recognized. However, when a reflector that causes a virtual image transmits the irradiation light, a moving object that is actually disposed on the far side as viewed from the scanning device may not be recognized. Such issues are unlikely to arise in the recognition technique that uses a scanning device installed on a train platform, as in the conceivable technique, but are expected to arise in a recognition technique that uses a scanning device mounted on a vehicle.
An object of the present disclosure is to provide a recognition system with high recognition accuracy. Another object of the present disclosure is to provide a recognition device with high recognition accuracy. Another object of the present disclosure is to provide a recognition method with high recognition accuracy. Yet another object of the present disclosure is to provide a recognition program with high recognition accuracy.
Hereinafter, a technical solution of the present embodiments for solving the difficulties will be described.
According to a first aspect of the present embodiments, a recognition system has a processor, and recognizes a target moving object capable of moving in a scanning space scanned by irradiation light from a scanning device in a host vehicle.
The processor is configured to:
acquire scanning data including a far side point group as a scanning point group at a position on a far side of a reflector in a scanning direction in which the scanning device scans the reflector disposed in a focus range on a high reflection side with respect to a reflection characteristic of the irradiation light; and
generate recognition data by excluding, from the far side point group for the target moving object as a recognition target in the scanning data, a part of the far side point group corresponding to a symmetrical point group disposed in a symmetrical area with respect to the position on the far side of the reflector.
The symmetrical point group corresponds to a scanning point group of a real image that causes a virtual image to appear at the position on the far side of the reflector.
According to a second aspect of the present embodiments, a recognition device has a processor, and recognizes a target moving object capable of moving in a scanning space scanned by irradiation light from a scanning device in a host vehicle.
The processor is configured to:
acquire scanning data including a far side point group as a scanning point group at a position on a far side of a reflector in a scanning direction in which the scanning device scans the reflector disposed in a focus range on a high reflection side with respect to a reflection characteristic of the irradiation light; and
generate recognition data by excluding, from the far side point group for the target moving object as a recognition target in the scanning data, a part of the far side point group corresponding to a symmetrical point group disposed in a symmetrical area with respect to the position on the far side of the reflector.
The symmetrical point group corresponds to a scanning point group of a real image that causes a virtual image to appear at the position on the far side of the reflector.
According to a third aspect of the present embodiments, a recognition method is executed by a processor for recognizing a target moving object capable of moving in a scanning space scanned by irradiation light from a scanning device in a host vehicle.
The recognition method includes:
acquiring scanning data including a far side point group as a scanning point group at a position on a far side of a reflector in a scanning direction in which the scanning device scans the reflector disposed in a focus range on a high reflection side with respect to a reflection characteristic of the irradiation light; and
generating recognition data by excluding, from the far side point group for the target moving object as a recognition target in the scanning data, a part of the far side point group corresponding to a symmetrical point group disposed in a symmetrical area with respect to the position on the far side of the reflector.
The symmetrical point group corresponds to a scanning point group of a real image that causes a virtual image to appear at the position on the far side of the reflector.
According to a fourth aspect of the present embodiments, a recognition program includes instructions stored in a storage medium and executed by a processor for recognizing a target moving object capable of moving in a scanning space scanned by irradiation light from a scanning device in a host vehicle.
The instructions include:
acquiring scanning data including a far side point group as a scanning point group at a position on a far side of a reflector in a scanning direction in which the scanning device scans the reflector disposed in a focus range on a high reflection side with respect to a reflection characteristic of the irradiation light; and
generating recognition data by excluding, from the far side point group for the target moving object as a recognition target in the scanning data, a part of the far side point group corresponding to a symmetrical point group disposed in a symmetrical area with respect to the position on the far side of the reflector.
The symmetrical point group corresponds to a scanning point group of a real image that causes a virtual image to appear at the position on the far side of the reflector.
According to a fifth aspect of the present embodiments, a recognition data generation method is executed by a processor for generating recognition data to recognize a target moving object capable of moving in a scanning space scanned by irradiation light from a scanning device in a host vehicle.
The recognition data generation method includes:
acquiring scanning data including a far side point group as a scanning point group at a position on a far side of a reflector in a scanning direction in which the scanning device scans the reflector disposed in a focus range on a high reflection side with respect to a reflection characteristic of the irradiation light; and
generating the recognition data by excluding, from the far side point group for the target moving object as a recognition target in the scanning data, a part of the far side point group corresponding to a symmetrical point group disposed in a symmetrical area with respect to the position on the far side of the reflector.
The symmetrical point group corresponds to a scanning point group of a real image that causes a virtual image to appear at the position on the far side of the reflector.
In this way, according to the first to fifth aspects, the scanning data is acquired that includes the far side point group as a scanning point group at a position on the far side of the reflector in the scanning direction in which the scanning device scans the reflector disposed in the focus range on the high reflection side with respect to the reflection characteristic of the irradiation light. The recognition data is then generated by excluding, from the far side point group for the target moving object as the recognition target in the scanning data, a part of the far side point group corresponding to a symmetrical point group disposed in a symmetrical area with respect to the position on the far side of the reflector, the symmetrical point group corresponding to a scanning point group of a real image that causes a virtual image to appear at the position on the far side of the reflector. According to this, the far side point group observed as a virtual image of the symmetrical point group at the far side position of the reflector can be excluded from the recognition target of the target moving object, while the far side point group observed as a real image at the far side position of the reflector can be properly recognized as the target moving object. Therefore, it is possible to suppress erroneous recognition caused by the occurrence of the virtual image and improve the recognition accuracy of the target moving object.
The following will describe embodiments of the present disclosure with reference to the drawings. It should be noted that the same reference symbols are assigned to corresponding components in the respective embodiments, and repeated descriptions may be omitted. When only a part of a configuration is described in an embodiment, the configurations of the other embodiments described above can be applied to the remaining part of the configuration. Further, not only the combinations of configurations explicitly shown in the description of the respective embodiments, but also the configurations of multiple embodiments can be partially combined even if not explicitly described, as long as there is no particular difficulty in the combination.
The recognition system 1 of the first embodiment shown in
In the host vehicle 2, an autonomous driving mode is executed such that the level of the autonomous driving mode is determined according to the degree of manual driving intervention by a passenger in a dynamic driving task. The autonomous driving mode may be achieved with an automated driving control, such as conditional driving automation, advanced driving automation, or full driving automation, in which the system in operation performs all driving tasks. The autonomous driving mode may be achieved with an advanced driving assistance control, such as driving assistance or partial driving automation, in which a passenger performs some or all of the driving tasks. The autonomous driving mode may be realized by either one of, a combination of, or switching between the automated driving control and the advanced driving assistance control.
The host vehicle 2 is equipped with a sensor system 4, a communication system 5, and an information presentation system 6 shown in
The external sensor 40 acquires, as the sensor information, information on the external environment that is the peripheral environment of the host vehicle 2. The external sensor 40 includes a scanning device 3 that acquires the sensor information by scanning a scanning space 30 in the external environment of the host vehicle 2 with irradiation light. Such a scanning device 3 is a three-dimensional LiDAR (i.e., Light Detection and Ranging/Laser Imaging Detection and Ranging) that scans the scanning space 30 using infrared laser light as the irradiation light. In addition to the scanning device 3, the external sensor 40 may include at least one other type of sensor that senses the external environment of the host vehicle 2, such as a camera, a sonar, or the like.
Here, the scanning device 3 generates the sensor information by scanning the scanning space 30 (see
The internal sensor 41 acquires information on the internal environment of the host vehicle 2 as the sensor information. The internal sensor 41 may include a physical quantity detection type sensor that detects a specific physical quantity of motion within the internal environment of the host vehicle 2. The physical quantity detection type internal sensor 41 is at least one type of, for example, a travelling speed sensor, an acceleration sensor, an inertial sensor, and the like. The internal sensor 41 may include a passenger detection type sensor that detects a specific state of a passenger in the internal environment of the host vehicle 2. The passenger detection type internal sensor 41 is at least one type of, for example, a driver status monitor (registered trademark), a biosensor, a seating sensor, an actuator sensor, an in-vehicle equipment sensor, and the like.
The communication system 5 acquires communication information usable for driving control of the host vehicle 2, including recognition control in the recognition system 1, via wireless communication. The communication system 5 may include a vehicle to everything (i.e., V2X) type system that transmits and receives a communication signal to and from a V2X system located outside the host vehicle 2. The communication system 5 of the V2X type may be at least one of a dedicated short range communications (i.e., DSRC) communication device, a cellular V2X (i.e., C-V2X) communication device, or the like. The communication system 5 may have a positioning type system that receives a positioning signal from an artificial satellite of a global navigation satellite system (i.e., GNSS) located outside the host vehicle 2. For example, the communication system 5 of the positioning type may be a GNSS receiver or the like. The communication system 5 may have a terminal communication type system that can transmit and receive a communication signal to and from a terminal located in the internal environment of the host vehicle 2. For example, the communication system 5 of the terminal communication type may be at least one of a Bluetooth (registered trademark) device, a Wi-Fi (registered trademark) device, an infrared communication device, or the like.
The information presentation system 6 presents notification information to the passenger of the host vehicle 2. The information presentation system 6 may be of a visual stimulation type that stimulates the passenger's vision through a display. The visual stimulus type information presentation system 6 is at least one type of, for example, a head-up display (i.e., HUD), a multi function display (i.e., MFD), a combination meter, a navigation unit, and the like. The information presentation system 6 may be of an auditory stimulation type that stimulates the auditory sense of the passenger by sound. The auditory stimulation type information presentation system 6 is, for example, at least one of a speaker, a buzzer, a vibration unit, and the like.
The recognition system 1 is connected to the sensor system 4, the communication system 5, and the information presentation system 6 via at least one of, for example, a local area network (i.e., LAN) line, a wire harness, an internal bus, or a wireless communication line. The recognition system 1 includes at least one dedicated computer.
The dedicated computer constituting the recognition system 1 may be a recognition control ECU (i.e., Electronic Control Unit) that controls object recognition in the scanning space 30 based on the scanning data Ds from the scanning device 3. Here, the recognition control ECU may have a function of integrating sensor information from multiple external sensors 40 including the scanning device 3. The dedicated computer constituting the recognition system 1 may be a driving control ECU that is responsible for driving control of the host vehicle 2. The dedicated computer constituting the recognition system 1 may be a navigation ECU that navigates the travel route of the host vehicle 2. The dedicated computer constituting the recognition system 1 may be a locator ECU that estimates a self-state amount including a self-position of the host vehicle 2. The dedicated computer constituting the recognition system 1 may be an HCU (i.e., Human Machine Interface Control Unit or HMI Control Unit) that controls information presentation by the information presentation system 6 in the host vehicle 2. The dedicated computer constituting the recognition system 1 may be a computer other than that of the host vehicle 2, constituting, for example, an external information center or a mobile terminal capable of communicating with the communication system 5.
The dedicated computer constituting the recognition system 1 includes at least one memory 10 and at least one processor 12. The memory 10 is at least one type of non-transitory tangible storage medium, such as a semiconductor memory, a magnetic medium, or an optical medium, that non-transitorily stores computer readable programs, data, and the like. For example, the processor 12 may include, as a core, at least one of a central processing unit (i.e., CPU), a graphics processing unit (i.e., GPU), a reduced instruction set computer (i.e., RISC) CPU, a data flow processor (i.e., DFP), a graph streaming processor (i.e., GSP), or the like.
In the recognition system 1, a memory 10 stores map information that can be used for driving control of a host vehicle 2. For example, the memory 10 acquires and stores the latest map information through communication with an external information center via the communication system 5 of the V2X type. In particular, the map information in the first embodiment is map data Dm (see
In the recognition system 1, the processor 12 executes a number of instructions included in a recognition program stored in the memory 10 in order to recognize a target moving object Ot in a scanning space 30 by the scanning device 3 of the host vehicle 2. As a result, the recognition system 1 constructs multiple functional blocks for recognizing the target moving object Ot in the scanning space 30. The plurality of functional blocks constructed in the recognition system 1 include a scanning block 100 and a recognition block 110 as shown in
The flow of a recognition method (hereinafter, referred to as a recognition flow) in which the recognition system 1 recognizes a target moving object Ot in the scanning space 30 by cooperation of these blocks 100 and 110 will be described below with reference to
In S100, the scanning block 100 acquires from the scanning device 3 the scanning data Ds for the entire scanning space 30 according to the viewing angle. In this case, particularly in the recognition flow of the first embodiment, the scanning data Ds is acquired so as to include at least the three-dimensional distance and/or three-dimensional position coordinates of the scanning point group as state values observed for each of the multiple pixels in the scanning device 3.
In S101, the scanning block 100 specifies, in the scanning data Ds, the overall scanning direction (i.e., the irradiation direction of the irradiation light) ψs in which the reflector Or, among the objects existing in the scanning space 30, is scanned by the scanning device 3 as shown in
In S101, the scanning block 100 reads out the map data Dm, to which identification information σi of an object existing in the scanning space 30 has been added, from the map storage area 10m of the memory 10 shown in
In the map data Dm as a read-out target in S101, the identification information σi is included as information for identifying the optical characteristics of an object in the scanning space 30 in association with the position coordinates of the object. In the first embodiment, the optical characteristics identified by the identification information σi include the above-described reflection characteristics as well as the transmission characteristics for the irradiation light. Therefore, the identification information σi is added to the map data Dm to indicate an object, such as a reflector Or, through which the transmission of irradiation light is permitted when at least one of the transmission characteristics, such as transmittance and transmission intensity, is equal to or higher than the transmission threshold or exceeds the transmission threshold. Furthermore, the identification information σi is added to the map data Dm so as to indicate an object, such as a reflector Or, for which the transmission of the irradiation light is restricted because at least one of the transmission characteristics is less than the transmission threshold value or equal to or smaller than the transmission threshold value.
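To make this threshold-based identification concrete, the following is a minimal sketch assuming a simplified map record layout and a normalized transmittance threshold; all names, fields, and values here are illustrative assumptions and not the actual format of the map data Dm or the identification information σi.

```python
# Minimal sketch of threshold-based identification (assumed layout).
from dataclasses import dataclass

TRANSMISSION_THRESHOLD = 0.5  # assumed normalized transmittance threshold

@dataclass
class MapObject:
    position: tuple       # (x, y, z) coordinates in the map frame
    reflectance: float    # reflection characteristic, 0.0 to 1.0
    transmittance: float  # transmission characteristic, 0.0 to 1.0

def transmission_permitted(obj: MapObject) -> bool:
    """Identification corresponding to sigma_i: transmission of the
    irradiation light is permitted at or above the threshold and
    restricted below it."""
    return obj.transmittance >= TRANSMISSION_THRESHOLD
```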
The scanning block 100 in S101 specifies the scanning direction ψs in three dimensions along which the scanning device 3 scans the reflector Or as shown in
In S101, the scanning block 100 determines whether or not the scanning direction ψs in which the reflector Or exists has been specified as shown in
In S102, the scanning block 100 specifies, in the scanning data Ds, a far side point group Pb as a scanning point group observed at a position on the far side of (i.e., farther away than) the reflector Or as viewed from the scanning device 3 in the entire scanning direction ψs in which the reflector Or is scanned, as shown in
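As a rough illustration of this step, the sketch below collects scan points lying beyond the reflector along the scanning direction ψs; the function name, angular tolerance, and data layout are assumptions for illustration only, not the patented processing.

```python
# Hypothetical sketch of specifying the far side point group Pb.
import numpy as np

def far_side_point_group(points: np.ndarray,
                         directions: np.ndarray,
                         reflector_dir: np.ndarray,
                         reflector_range: float,
                         angle_tol_rad: float = 0.01) -> np.ndarray:
    """points: (N, 3) scan points in the sensor frame.
    directions: (N, 3) unit irradiation directions, one per point.
    reflector_dir: unit vector of the scanning direction psi_s toward Or.
    reflector_range: measured distance from the scanning device to Or."""
    ranges = np.linalg.norm(points, axis=1)
    # Points whose irradiation direction matches psi_s within tolerance.
    cos_sim = np.clip(directions @ reflector_dir, -1.0, 1.0)
    along = np.arccos(cos_sim) < angle_tol_rad
    # Of those, points farther than the reflector lie on its far side.
    return points[along & (ranges > reflector_range)]
```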
In S102, the scanning block 100 determines whether or not the far side point group Pb has been specified as shown in
In S103, the recognition block 110 searches the scanning data Ds for a transmission restriction point group Pbl at a position on the far side of a reflector Or whose optical characteristic, represented by the identification information σi, restricts the transmission of the irradiation light, as shown in
In S103, the recognition block 110 determines whether or not the far side point group Pb specified in S102 is the transmission restriction point group Pbl as shown in
In S105, the recognition block 110 searches the scanning data Ds for a semi-transmission restriction point group Pbpl, i.e., a transmission permission point group Pbp that is disposed on the far side of the reflector Or whose optical characteristics, indicated by the identification information σi, permit the transmission of the irradiation light, and that is also disposed on the far side of a fixed object Ol whose optical characteristics, indicated by the identification information σi, restrict the transmission of the irradiation light, as shown in
In S105, the recognition block 110 determines whether or not the far side point group Pb specified in S102 is the semi-transmission restriction point group Pbpl as shown in
In S107, the recognition block 110 distinguishes, in the scanning data Ds, a virtual image point group Pbpv and a real image point group Pbpa within the transmission permission point group Pbp, which is disposed on the far side of the reflector Or whose optical characteristics, indicated by the identification information σi, permit the transmission of the irradiation light, as shown in
Therefore, the real image Ia of an object scanned via reflection on the reflector Or is observed as a virtual image Iv at a position on the far side of the reflector Or that is plane-symmetric to the real image Ia with respect to the reflector Or, as shown in
The recognition block 110 in S107 sets a three-dimensional far side area Ab in the scanning space 30 for distinguishing the point groups Pbpv and Pbpa with respect to a three-dimensional symmetrical area As in which the symmetrical point group Ps is searched for, as shown in
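Geometrically, the symmetrical area As is the mirror image of the far side area Ab across the plane of the reflector Or. The following sketch shows one way to compute such plane-symmetric positions, assuming the reflector plane is available from the map data; the plane parameterization is an illustrative assumption.

```python
# Hypothetical sketch: mirroring positions across the reflector plane.
import numpy as np

def mirror_across_plane(points: np.ndarray,
                        plane_point: np.ndarray,
                        plane_normal: np.ndarray) -> np.ndarray:
    """Reflect (N, 3) points across the plane of the reflector Or given
    by a point on the plane and its normal. A virtual image Iv observed
    at a far side position corresponds to a real image Ia at the
    plane-symmetric position computed here."""
    n = plane_normal / np.linalg.norm(plane_normal)
    dist = (points - plane_point) @ n        # signed distances to plane
    return points - 2.0 * np.outer(dist, n)  # plane-symmetric positions
```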
The recognition block 110 in S107 compares the voxels 300b and 300s having a plane symmetric relationship with each other. The comparison at this time is based on the similarity of the point group distribution between the scanning point groups in the far side voxel 300b, to which the transmission permission point group Pbp as the far side point group Pb belongs, and the symmetrical voxel 300s, in which the symmetrical point group Ps is searched for. The similarity between the far side voxel 300b and the symmetrical voxel 300s is calculated using, for example, at least one of the Iterative Closest Point (i.e., ICP) algorithm, the Mahalanobis distance, or the Signature of Histograms of Orientations (i.e., SHOT) feature amount.
In S107, if the similarity between the far side voxel 300b and the symmetrical voxel 300s falls within the allowable range, i.e., if it is determined that the voxels 300b and 300s are similar to each other within the allowable range, the transmission permission point group Pbp is determined to be the virtual image point group Pbpv for which the symmetrical point group Ps exists, as shown in
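As one concrete reading of this comparison, the sketch below scores the two voxels with a Mahalanobis distance between their point distributions, one of the metrics named above (ICP or the SHOT feature could be substituted); the allowable range value is an assumption.

```python
# Hypothetical sketch of the S107 voxel comparison.
import numpy as np

SIMILARITY_ALLOWABLE = 2.0  # assumed allowable range for the distance

def voxel_similarity(far_pts: np.ndarray, sym_pts: np.ndarray) -> float:
    """Mahalanobis distance from the far side voxel's centroid to the
    symmetrical voxel's point distribution; smaller is more similar."""
    mu = sym_pts.mean(axis=0)
    cov = np.cov(sym_pts.T) + 1e-6 * np.eye(3)  # regularized covariance
    d = far_pts.mean(axis=0) - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def is_virtual_image(far_pts: np.ndarray, sym_pts: np.ndarray) -> bool:
    # Similar within the allowable range -> treat the far side points as
    # the virtual image point group Pbpv of the symmetrical group Ps.
    return voxel_similarity(far_pts, sym_pts) <= SIMILARITY_ALLOWABLE
```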
The recognition block 110 in S107 may further perform data analysis on the transmission permission point group Pbp belonging to the far side voxel 300b determined to be similar to the symmetrical voxel 300s, thereby improving the accuracy of distinguishing the point groups Pbpv and Pbpa. In this data analysis, for example, after a clustering process based on the distance between points and the normal direction of each point, the speed and the traveling direction of each cluster are estimated by a tracking process using an extended Kalman filter. Based on the results of this data analysis, the recognition block 110 specifies, as the virtual image point group Pbpv, the transmission permission point group Pbp that moves in the opposite direction to the host vehicle 2 at a relative speed of interest (hereinafter simply referred to as the focus relative speed) depending on the traveling speed of the host vehicle 2, as shown in
Here, the focus relative speed, which serves as the criterion for distinguishing the point groups Pbpv and Pbpa, is defined as the speed difference between the traveling speed of the host vehicle 2 measured by the speed sensor serving as the internal sensor 41 and the estimated speed of the transmission permission point group Pbp obtained by the tracking process. The transmission permission point group Pbp whose focus relative speed of the movement in the opposite direction to the host vehicle 2 is less than a speed threshold, or equal to or less than the speed threshold, is determined to be the virtual image point group Pbpv.
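A minimal sketch of this motion criterion follows, assuming the cluster velocity has already been estimated by the tracking process and projected onto the host's traveling direction; the threshold value and sign convention are assumptions.

```python
# Hypothetical sketch of the focus relative speed criterion.
SPEED_THRESHOLD = 1.0  # assumed tolerance in m/s

def is_virtual_by_motion(host_speed: float, cluster_velocity: float) -> bool:
    """host_speed: traveling speed from the internal sensor 41 (m/s).
    cluster_velocity: tracked cluster speed along the host's traveling
    direction (m/s); negative means motion opposite to the host."""
    if cluster_velocity >= 0.0:
        return False  # not moving in the opposite direction to the host
    # Focus relative speed: difference between the host's traveling speed
    # and the cluster's estimated speed in the opposite direction.
    focus_relative_speed = abs(host_speed - (-cluster_velocity))
    return focus_relative_speed <= SPEED_THRESHOLD
```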
The recognition block 110 in S107 determines whether the far side point group Pb specified in S102 is the virtual image point group Pbpv or the real image point group Pbpa, as shown in
After any of S104, S106, S108, and S109 is executed, the recognition flow proceeds to S110. Even if a negative determination is made in S101 or S102, the recognition flow proceeds to S110. In S110, the recognition block 110 performs the recognition process after excluding, from the recognition target in the scanning data Ds, the far side point group Pb determined in the preceding steps to be outside the recognition target among the transmission restriction point group Pbl, the semi-transmission restriction point group Pbpl, and the virtual image point group Pbpv. In this case, the recognition process may use a machine learning model such as PointPillars, or a background subtraction method using the map data Dm.
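As a simple illustration of the exclusion itself, the sketch below masks out the excluded indices before the remaining points are handed to a recognizer; the function and argument names are illustrative assumptions.

```python
# Hypothetical sketch of S110's exclusion step.
import numpy as np

def exclude_from_recognition(points: np.ndarray,
                             excluded_idx: np.ndarray) -> np.ndarray:
    """Drop the rows listed in excluded_idx (indices of the Pbl, Pbpl,
    and Pbpv point groups) so only recognition targets remain."""
    keep = np.ones(len(points), dtype=bool)
    keep[excluded_idx] = False
    return points[keep]
```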
The recognition block 110 in S110 generates the recognition data Dr representing the results of this recognition process. At this time, if the real image point group Pbpa is determined to be the recognition target in the immediately preceding step S109, the recognition data Dr that recognizes the target moving object Ot represented by the real image point group Pbpa is generated. The recognition block 110 in S110 further stores the generated recognition data Dr in a recognition storage area 10r of the memory 10 shown in
The recognition block 110 in S110 may control the display of the generated or stored recognition data Dr by the information presentation system 6 in the host vehicle 2. In the displayed recognition data Dr, the far side point group Pb that is not the recognition target may be hidden, or may be displayed together with a warning label such as "virtual image". The recognition block 110 in S110 may also control the transmission of the generated or stored recognition data Dr from the host vehicle 2 to an external destination (e.g., an external information center or another vehicle) via the communication system 5. Then, the current execution of the recognition flow ends.
The operation effects of the first embodiment will be described below.
In the first embodiment, in the scanning direction ψs in which the scanning device 3 scans a reflector Or disposed in a focus range on the high reflection side with respect to the reflection characteristic of the irradiation light, the scanning data Ds is acquired that includes a far side point group Pb as a scanning point group at a position on the far side of the reflector Or. According to the first embodiment, the recognition data Dr is generated by excluding, from the far side point group Pb for the target moving object Ot as the recognition target in the scanning data Ds, the far side point group Pb that corresponds to the symmetrical point group Ps disposed in the symmetrical area As with respect to the far side position of the reflector Or, the symmetrical point group Ps being the scanning point group of the real image Ia that causes the virtual image Iv to appear at the far side position of the reflector Or. According to this, the far side point group Pb observed as a virtual image Iv of the symmetrical point group Ps at the far side position of the reflector Or can be excluded from the recognition target of the target moving object Ot, while the far side point group Pb observed as a real image Ia at the far side position of the reflector Or can be properly recognized as the target moving object Ot. Therefore, it is possible to suppress erroneous recognition caused by the occurrence of the virtual image Iv and improve the recognition accuracy of the target moving object Ot.
According to the first embodiment, the scanning data Ds including the far side point group Pb is acquired at a far side position of the reflector Or specified based on the identification information σi read from the memory 10 as information that is included in the map data Dm and identifies an object disposed in the scanning space 30. This makes it possible to appropriately narrow down, based on the identification information σi, the candidates for the far side point group Pb observed as a virtual image Iv of the symmetrical point group Ps caused by the reflector Or. Therefore, it is possible to increase the speed of the processing required to suppress erroneous recognition caused by the generation of the virtual image Iv as much as possible while improving the recognition accuracy of the target moving object Ot.
According to the first embodiment, the far side point group Pb corresponding to the symmetrical point group Ps disposed in the symmetrical area As with respect to the far side position of the reflector Or, whose optical characteristics, indicated by the identification information σi, permit the transmission of the irradiation light, is excluded from the recognition target. According to this, the far side point group Pb observed at the far side position of the reflector Or as a virtual image Iv of the symmetrical point group Ps can be excluded from the recognition target of the target moving object Ot in accordance with the search for the symmetrical point group Ps when the transmission of the irradiation light through the reflector Or is permitted. Therefore, it is possible to suppress erroneous recognition caused by the occurrence of the virtual image Iv and improve the recognition accuracy of the target moving object Ot.
According to the first embodiment, the far side point group Pb located at the far side position of the reflector Or, whose optical characteristics, indicated by the identification information σi, permit the transmission of the irradiation light, and also located at the far side position of the fixed object Ol, whose optical characteristics, indicated by the identification information σi, restrict the transmission of the irradiation light, is excluded from the recognition target. According to this, the far side point group Pb observed at the far side position of the reflector Or as a virtual image Iv of the symmetrical point group Ps can be excluded from the recognition target of the target moving object Ot depending on the identification of a fixed object Ol that restricts the transmission of the irradiation light, even if the reflector Or permits the transmission of the irradiation light. Therefore, it is possible to increase the speed of the processing required to suppress erroneous recognition caused by the generation of the virtual image Iv as much as possible while improving the recognition accuracy of the target moving object Ot.
According to the first embodiment, the far side point group Pb located on the far side of the reflector Or, whose optical characteristics, indicated by the identification information σi, restrict the transmission of the irradiation light, is excluded from the recognition target. According to this, the far side point group Pb observed at a far side position of the reflector Or as a virtual image Iv of the symmetrical point group Ps can be excluded from the recognition target of the target moving object Ot depending on the identification of the reflector Or that restricts the transmission of the irradiation light. Therefore, it is possible to increase the speed of the processing required to suppress erroneous recognition caused by the generation of the virtual image Iv as much as possible while improving the recognition accuracy of the target moving object Ot.
In the first embodiment, a far side voxel 300b to which the far side point group Pb belongs and a symmetrical voxel 300s in which the symmetrical point group Ps is searched for are defined as two of the multiple three-dimensional voxels 300 obtained by dividing the scanning space 30. According to the first embodiment, the far side point group Pb belonging to the far side voxel 300b whose point group distribution is similar to the symmetrical voxel 300s within the allowable range is excluded from the recognition target. This makes it possible to accurately specify the far side point group Pb at which the virtual image Iv of the symmetrical point group Ps caused by the reflector Or is observed, based on the similarity of the point group distribution within the local range of the voxels 300b and 300s. Therefore, it is possible to improve the reliability in suppressing erroneous recognition caused by the generation of the virtual image Iv, and ultimately the reliability in highly accurate recognition of the target moving object Ot.
According to the first embodiment, the far side point group Pb that moves in the opposite direction to the host vehicle 2 at a relative speed corresponding to the traveling speed of the host vehicle 2 is excluded from the recognition target. According to this, the far side point group Pb at which the virtual image Iv of the symmetrical point group Ps caused by the reflector Or is observed can be accurately specified based on the relative speed and the moving direction of the far side point group Pb with respect to the host vehicle 2. Therefore, it is possible to improve the reliability in suppressing erroneous recognition caused by the generation of the virtual image Iv, and ultimately the reliability in highly accurate recognition of the target moving object Ot.
A second embodiment is a modification example of the first embodiment.
In the second embodiment, the identification information σi added to the map data Dm further represents a space area Ax set among the positions on the far side of the reflector Or, as shown in
In S2107, the recognition block 110 searches the scanning data Ds for the far side point group Pb existing in the space area Ax represented by the identification information σi; in S2108, the far side point group Pb found in the space area Ax is determined to be outside the recognition target, as shown in
In this manner, according to the second embodiment, among the positions on the far side of the reflector Or, the far side point group Pb existing in the space area Ax represented by the identification information σi is excluded from the recognition target. According to this, the far side point group Pb, which is observed at a far side position of the reflector Or as a virtual image Iv of the symmetrical point group Ps, can be excluded from the recognition target of the target moving object Ot depending on the identification of its location in the space area Ax. Therefore, it is possible to suppress erroneous recognition caused by the occurrence of the virtual image Iv and improve the recognition accuracy of the target moving object Ot.
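A minimal sketch of this check follows, modeling the space area Ax as an axis-aligned box in the map frame; the box representation is an illustrative assumption, and the actual area shape carried by the identification information may differ.

```python
# Hypothetical sketch of the second embodiment's space area test.
import numpy as np

def in_space_area(points: np.ndarray,
                  ax_min: np.ndarray,
                  ax_max: np.ndarray) -> np.ndarray:
    """Boolean mask of the points lying inside the space area Ax, given
    its minimum and maximum corners; points flagged True are excluded
    from the recognition target."""
    return np.all((points >= ax_min) & (points <= ax_max), axis=1)
```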
A third embodiment is another modification of the first embodiment.
In the third embodiment, the scanning device 3 receives, for each of the multiple pixels, multiple reflection echoes Er of the irradiation light from the scanning space 30 together with their intensities, including the reflection echo Erm having the maximum intensity Rm, as shown in
Therefore, for each pixel in the third embodiment, maximum intensity scanning data Dsm is defined as scanning data Ds including a scanning point group whose state values are the three-dimensional distance and/or the three-dimensional position coordinates corresponding to the reflection echo Erm of the maximum intensity Rm among the received reflection echoes Er. In addition, for each pixel in the third embodiment, total intensity scanning data Dsa is defined as scanning data Ds including scanning point groups whose state values are the three-dimensional distance and/or the three-dimensional position coordinates corresponding to each of the reflection echoes Er received at all intensities.
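The relationship between the two data sets can be illustrated with the following sketch, assuming a simple per-pixel echo record of (intensity, position) pairs; the record layout is an assumption, not the scanning device's actual output format.

```python
# Hypothetical sketch: deriving Dsm and Dsa from per-pixel multi-echoes.
from typing import Dict, List, Tuple

Echo = Tuple[float, Tuple[float, float, float]]  # (intensity R, xyz)

def split_scanning_data(echoes_per_pixel: Dict[int, List[Echo]]):
    dsm = {}  # pixel -> position of the maximum intensity echo Erm
    dsa = {}  # pixel -> positions of all received echoes Er
    for pixel, echoes in echoes_per_pixel.items():
        if not echoes:
            continue
        dsm[pixel] = max(echoes, key=lambda e: e[0])[1]
        dsa[pixel] = [pos for _, pos in echoes]
    return dsm, dsa
```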
Under these definitions, in S3100, the scanning block 100 acquires the scanning data Ds so as to include both the maximum intensity scanning data Dsm and the total intensity scanning data Dsa, as shown in
Furthermore, in S3107, the recognition block 110 sets the transmission permission point group Pbp excluding the semi-transmission restriction point group Pbpl in the total intensity scanning data Dsa shown in
According to the third embodiment described above, the reflection echoes Er of the irradiation light from the scanning space 30 are received for each of the multiple pixels in the scanning device 3. In the maximum intensity scanning data Dsm, which includes the scanning point group corresponding to the reflection echo Erm with the maximum intensity Rm received at each pixel, if there is a symmetrical point group Ps with respect to the far side point group Pb, the total intensity scanning data Dsa is further used. Specifically, in the total intensity scanning data Dsa, which includes the scanning point groups corresponding to all intensities of the reflection echoes Er received at each pixel, the far side point group Pb belonging to the far side voxel 300b, whose point group distribution is similar to the symmetrical voxel 300s within the allowable range, is excluded from the recognition target.
According to this third embodiment, the number of point groups that fit into the far side voxel 300b as the far side point group Pb in which the virtual image Iv of the symmetrical point group Ps is observed due to the reflector Or can be increased in accordance with the number of reflection echoes of all intensities. Therefore, based on the similarity between the point group distributions of the voxels 300b and 300s, the accuracy of distinction of the far side point group Pb at which the virtual image Iv is observed can be improved. As a result, it is possible to ensure high reliability in suppressing erroneous recognition caused by the generation of a virtual image Iv, and therefore high reliability in highly accurate recognition of the target moving object Ot.
Furthermore, according to the third embodiment, the far side point group Pb located at a position on the far side of the objects Or, Ol having optical characteristics that restrict the transmission of the irradiation light can be searched for in a short time with high distinction accuracy based on the identification information σi in the maximum intensity scanning data Dsm, which has a smaller number of point groups than the total intensity scanning data Dsa. On the other hand, when the maximum intensity scanning data Dsm includes a symmetrical point group Ps with respect to a far side point group Pb located on the far side of a reflector Or having optical characteristics that permit the transmission of the irradiation light, the total intensity scanning data Dsa is further used. Specifically, the far side point group Pb belonging to the far side voxel 300b, whose point group distribution is similar to the symmetrical voxel 300s within the allowable range, can be searched for with high distinction accuracy in the total intensity scanning data Dsa, which has a larger number of point groups than the maximum intensity scanning data Dsm. As a result of the above, it is possible to increase the processing speed while ensuring high reliability in suppressing erroneous recognition caused by the generation of a virtual image Iv, and ultimately high reliability in highly accurate recognition of the target moving object Ot.
Although multiple embodiments have been described above, the present disclosure is not construed as being limited to those embodiments, and can be applied to various embodiments within a scope that does not depart from the spirit of the present disclosure.
In another modification, a dedicated computer constituting the recognition system 1 may include at least one of a digital circuit and an analog circuit, as a processor. The digital circuit is at least one type of, for example, an application specific integrated circuit (i.e., ASIC), a field programmable gate array (i.e., FPGA), a system on a chip (i.e., SOC), a programmable gate array (i.e., PGA), a complex programmable logic device (i.e., CPLD), and the like. Such a digital circuit may also include a memory in which a program is stored.
In a modification example, at least one of the set of S103 and S104 and the set of S105 and S106 may be omitted. In a modification example, S103 to S109, S2107, S2108, and S3107 may be executed independently for each scanning direction ψs of the multiple reflectors Or specified in S101. In a modification example, the identification of the reflector Or in S101 may be realized based on the past scanning data Ds instead of or in addition to being based on the map data Dm.
In a modification example, steps S2107 and S2108 of the second embodiment may be combined with the third embodiment as shown in
In a modification example, the host vehicle 2 to which the recognition system 1 is applied may be, for example, an autonomous robot capable of transporting luggage or collecting information by autonomous driving or remote driving. In addition to the above-described embodiments and modifications, the present disclosure may be implemented in forms of a processing circuit (for example, a processing ECU, and the like) or a semiconductor device (e.g., a semiconductor chip, and the like), as a control device mountable on a host vehicle 2 and including at least one processor 12 and at least one memory 10.
The present specification discloses a plurality of technical ideas listed below and a plurality of combinations thereof.
A recognition system has a processor with a memory storing computer program code for recognizing a target moving object capable of moving in a scanning space scanned by irradiation light from a scanning device in a host vehicle.
The processor is configured to execute:
acquiring scanning data including a far side point group as a scanning point group at a position on a far side of a reflector in a scanning direction in which the scanning device scans the reflector disposed in a focus range on a high reflection side with respect to a reflection characteristic of the irradiation light; and
generating recognition data by excluding, from the far side point group for the target moving object as a recognition target in the scanning data, a part of the far side point group corresponding to a symmetrical point group disposed in a symmetrical area with respect to the position on the far side of the reflector.
The symmetrical point group corresponds to a scanning point group of a real image that causes a virtual image to appear at the position on the far side of the reflector.
The recognition system according to the technical feature 1, further includes: a storage medium for storing map data including identification information for identifying an object present in the scanning space.
The acquiring of the scanning data includes: acquiring the scanning data including a far side point group disposed at the position on the far side of the reflector which is identified based on the identification information read out from the storage medium.
In the recognition system according to the technical feature 2, the generating of the recognition data includes: excluding, from the recognition target, the far side point group corresponding to a symmetrical point group which is disposed in a symmetrical area with respect to the position on the far side of the reflector whose optical characteristic, indicated by the identification information, permits a transmission of the irradiation light.
In the recognition system according to the technical feature 3, the generating of the recognition data includes: excluding, from the recognition target, a far side point group disposed at the position on the far side of the reflector whose optical characteristic, indicated by the identification information, permits a transmission of the irradiation light, and on the far side of a fixed object whose optical characteristic, indicated by the identification information, restricts a transmission of the irradiation light.
In the recognition system according to the technical feature 3, the generating of the recognition data includes: excluding, from the recognition target, the far side point group disposed on the far side of the reflector whose optical characteristic, indicated by the identification information, restricts a transmission of the irradiation light.
In the recognition system according to any one of the technical features 2 to 5, the generating of the recognition data includes: excluding, from the recognition target, the far side point group disposed in a space area, represented by the identification information, among the positions on the far side of the reflector.
In the recognition system according to the technical feature 3, the generating of the recognition data includes:
defining a far side voxel to which the far side point group belongs and a symmetrical voxel for which a symmetrical point group is searched, as a plurality of three-dimensional voxels prepared by dividing the scanning space; and
excluding, from the recognition target, the far side point group belonging to the far side voxel whose point group distribution is similar to the symmetrical voxel within an allowable range.
In the recognition system according to the technical feature 7, the acquiring of the scanning data includes:
acquiring, as the scanning data, maximum intensity scanning data including a scanning point group corresponding to a reflection echo of a maximum intensity among reflection echoes of the irradiation light received from the scanning space for each of a plurality of pixels in the scanning device, and total intensity scanning data including scanning point groups corresponding to the reflection echoes of all intensities received for each of the plurality of pixels.
The generating of the recognition data includes: excluding, from the recognition target, the far side point group belonging to the far side voxel whose point group distribution in the total intensity scanning data is similar to the symmetrical voxel within an allowable range when the symmetrical point group corresponding to the far side point group exists in the maximum intensity scanning data.
In the recognition system according to the technical feature 8, the generating of the recognition data includes:
searching, in the maximum intensity scanning data, for the far side point group disposed at the position on the far side of an object whose optical characteristic, indicated by the identification information, restricts a transmission of the irradiation light; and
searching, in the total intensity scanning data, for the far side point group belonging to the far side voxel whose point group distribution is similar to the symmetrical voxel within the allowable range, when the symmetrical point group corresponding to the far side point group exists in the maximum intensity scanning data.
In the recognition system according to the technical feature 3, the generating of the recognition data includes: excluding, from the recognition target, the far side point group that moves in an opposite direction to the host vehicle at a relative speed corresponding to a travelling speed of the host vehicle.
In the recognition system according to the technical feature 3, the generating of the recognition data includes: storing the recognition data in a storage medium in the host vehicle.
In the recognition system according to the technical feature 3, the generating of the recognition data includes: controlling a display of the recognition data in the host vehicle.
In the recognition system according to the technical feature 3, the generating of the recognition data includes: controlling a transmission of the recognition data from the host vehicle.
The above-mentioned technical features 1 to 13 may be realized in the form of a recognition device, a recognition method, a recognition program, and a recognition data generation method.
The controllers and methods described in the present disclosure may be implemented by a special purpose computer created by configuring a memory and a processor programmed to execute one or more particular functions embodied in computer programs. Alternatively, the controllers and methods described in the present disclosure may be implemented by a special purpose computer created by configuring a processor provided by one or more special purpose hardware logic circuits. Alternatively, the controllers and methods described in the present disclosure may be implemented by one or more special purpose computers created by configuring a combination of a memory and a processor programmed to execute one or more particular functions and a processor provided by one or more hardware logic circuits. The computer programs may be stored, as instructions being executed by a computer, in a tangible non-transitory computer-readable medium.
It is noted that a flowchart or the processing of the flowchart in the present application includes sections (also referred to as steps), each of which is represented, for instance, as S100. Further, each section can be divided into several sub-sections while several sections can be combined into a single section. Furthermore, each of thus configured sections can be also referred to as a device, module, or means.
While the present disclosure has been described with reference to embodiments thereof, it is to be understood that the disclosure is not limited to the embodiments and constructions. The present disclosure is intended to cover various modification and equivalent arrangements. In addition, while the various combinations and configurations, other combinations and configurations, including more, less or only a single element, are also within the spirit and scope of the present disclosure.
The present application is a continuation application of International Patent Application No. PCT/JP2023/011006 filed on Mar. 20, 2023, which designated the U.S. and claims the benefit of priority from Japanese Patent Applications No. 2022-082484 filed on May 19, 2022 and No. 2023-038950 filed on Mar. 13, 2023. The entire disclosures of all of the above applications are incorporated herein by reference.