The present disclosure relates generally to the automotive field. More particularly, the present disclosure relates to a system and method for ultrasonic sensor enhancement using a lidar point cloud.
In general, ultrasonic sensor (USS) readings are noisy and low-resolution. They are one-dimensional (1D) and cannot describe small structures in the environment. Since the readings from a USS are distances only, using them directly does not provide a map of the environment, and point positions cannot be obtained. The present background provides an illustrative automotive context in which the concepts and principles of the present disclosure may be implemented. It will be readily apparent to those of ordinary skill in the art that the concepts and principles of the present disclosure may be implemented in other contexts equally.
The present disclosure provides a system and method for USS reading enhancement using a lidar point cloud. This provides noise reduction and enables the generation of a two-dimensional (2D) environmental map. More specifically, the present disclosure provides a system and method for generating an enhanced environmental map using USSs, where the map is enhanced using a lidar point cloud. Using the lidar point cloud is advantageous because it is accurate and can thus provide accurate labels for training and the like.
In one illustrative embodiment, the present disclosure provides a method for ultrasonic sensor reading enhancement using a lidar point cloud, the method including: receiving an ultrasonic sensor temporal feature using an ultrasonic sensor; inputting the ultrasonic sensor temporal feature into an autoencoder system including instructions stored in a memory and executed by a processor; wherein the autoencoder system is trained using a prior inputted ultrasonic sensor temporal feature and a corresponding prior inputted lidar feature label received from a lidar system; and, using the trained autoencoder system, outputting an enhanced ultrasonic sensor environmental mapping. The ultrasonic sensor temporal feature includes a 1D environmental map with relatively more noise and the enhanced ultrasonic sensor environmental mapping includes a 2D environmental map with relatively less noise. The prior inputted ultrasonic sensor temporal feature is formed by performing ultrasonic sensor data feature extraction using inertial measurement unit data across N frames and a kinematic bicycle model to generate an ego vehicle trajectory, and, for each position in the ego vehicle trajectory, calculating a reflection point in an environment based on a yaw angle and each ultrasonic sensor reading, thereby providing one environmental mapping across the N frames for the ego vehicle trajectory. The data feature extraction further includes, for a trajectory cut based on an ultrasonic sensor's field of view, using the environmental mapping from one ultrasonic sensor, as well as a same mapping from the lidar system. The prior inputted lidar feature label is formed by performing lidar point cloud feature generation by filtering lidar points by height and by a field of view of an ultrasonic sensor. The lidar point cloud feature generation further includes finding closest lidar points to an ego vehicle by splitting the field of view of the ultrasonic sensor into angles centered at the ultrasonic sensor and, within each angle, selecting a constant number of lidar points that are closest to the ego vehicle, wherein a third dimension of the selected points is discarded, thereby providing the lidar feature with a total number of the selected points that matches the inputted ultrasonic sensor temporal feature. The method also includes, at a vehicle control system, receiving the outputted enhanced ultrasonic sensor environmental mapping and directing operation of a vehicle based on the outputted enhanced ultrasonic sensor environmental mapping.
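By way of illustration only, the following Python sketch shows one possible form of the ultrasonic sensor data feature extraction described above. It assumes that each inertial measurement unit frame supplies a speed and a steering angle, that the sensor boresight is offset from the vehicle yaw by a fixed angle, and that each echo returns along the boresight; all function names, parameters, and constants are hypothetical and are not taken from the disclosure.

import numpy as np

def ego_trajectory(imu_frames, wheelbase=2.7, dt=0.05):
    # Integrate a kinematic bicycle model across N frames of
    # (speed [m/s], steering angle [rad]) to produce (x, y, yaw) poses.
    poses = np.zeros((len(imu_frames), 3))
    x = y = yaw = 0.0
    for i, (v, delta) in enumerate(imu_frames):
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += (v / wheelbase) * np.tan(delta) * dt
        poses[i] = (x, y, yaw)
    return poses

def reflection_points(poses, uss_ranges, sensor_yaw_offset=0.0):
    # Project each 1D ultrasonic range reading into a 2D reflection point
    # based on the yaw angle at the corresponding trajectory position.
    pts = np.empty((len(poses), 2))
    for i, ((x, y, yaw), r) in enumerate(zip(poses, uss_ranges)):
        theta = yaw + sensor_yaw_offset
        pts[i] = (x + r * np.cos(theta), y + r * np.sin(theta))
    return pts

Accumulating the reflection points from all N frames in this manner yields the single environmental mapping across the ego vehicle trajectory referred to above.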
In another illustrative embodiment, the present disclosure provides a non-transitory computer-readable medium including instructions stored in a memory and executed by a processor to carry out steps for ultrasonic sensor reading enhancement using a lidar point cloud, the steps including: receiving an ultrasonic sensor temporal feature using an ultrasonic sensor; inputting the ultrasonic sensor temporal feature into an autoencoder system including instructions stored in a memory and executed by a processor; wherein the autoencoder system is trained using a prior inputted ultrasonic sensor temporal feature and a corresponding prior inputted lidar feature label received from a lidar system; and, using the trained autoencoder system, outputting an enhanced ultrasonic sensor environmental mapping. The ultrasonic sensor temporal feature includes a 1D environmental map with relatively more noise and the enhanced ultrasonic sensor environmental mapping includes a 2D environmental map with relatively less noise. The prior inputted ultrasonic sensor temporal feature is formed by performing ultrasonic sensor data feature extraction using inertial measurement unit data across N frames and a kinematic bicycle model to generate an ego vehicle trajectory, and, for each position in the ego vehicle trajectory, calculating a reflection point in an environment based on a yaw angle and each ultrasonic sensor reading, thereby providing one environmental mapping across the N frames for the ego vehicle trajectory. The data feature extraction further includes, for a trajectory cut based on an ultrasonic sensor's field of view, using the environmental mapping from one ultrasonic sensor, as well as a same mapping from the lidar system. The prior inputted lidar feature label is formed by performing lidar point cloud feature generation by filtering lidar points by height and by a field of view of an ultrasonic sensor. The lidar point cloud feature generation further includes finding closest lidar points to an ego vehicle by splitting the field of view of the ultrasonic sensor into angles centered at the ultrasonic sensor and, within each angle, selecting a constant number of lidar points that are closest to the ego vehicle, wherein a third dimension of the selected points is discarded, thereby providing the lidar feature with a total number of the selected points that matches the inputted ultrasonic sensor temporal feature.
In a further illustrative embodiment, the present disclosure provides a system for ultrasonic sensor reading enhancement using a lidar point cloud, the system including: an ultrasonic sensor operable for generating an ultrasonic sensor temporal feature; and an autoencoder system including instructions stored in a memory and executed by a processor, the autoencoder system operable for receiving the ultrasonic sensor temporal feature from the ultrasonic sensor and outputting an enhanced ultrasonic sensor environmental mapping; wherein the autoencoder system is trained using a prior inputted ultrasonic sensor temporal feature and a corresponding prior inputted lidar feature label generated by a lidar system. The ultrasonic sensor temporal feature includes a 1D environmental map with relatively more noise and the enhanced ultrasonic sensor environmental mapping includes a 2D environmental map with relatively less noise. The prior inputted ultrasonic sensor temporal feature is formed by performing ultrasonic sensor data feature extraction using inertial measurement unit data across N frames and a kinematic bicycle model to generate an ego vehicle trajectory, and, for each position in the ego vehicle trajectory, calculating a reflection point in an environment based on a yaw angle and each ultrasonic sensor reading, thereby providing one environmental mapping across the N frames for the ego vehicle trajectory. The data feature extraction further includes, for a trajectory cut based on an ultrasonic sensor's field of view, using the environmental mapping from one ultrasonic sensor, as well as a same mapping from the lidar system. The prior inputted lidar feature label is formed by performing lidar point cloud feature generation by filtering lidar points by height and by a field of view of an ultrasonic sensor. The lidar point cloud feature generation further includes finding closest lidar points to an ego vehicle by splitting the field of view of the ultrasonic sensor into angles centered at the ultrasonic sensor and, within each angle, selecting a constant number of lidar points that are closest to the ego vehicle, wherein a third dimension of the selected points is discarded, thereby providing the lidar feature with a total number of the selected points that matches the inputted ultrasonic sensor temporal feature. The system also includes a vehicle control system operable for receiving the outputted enhanced ultrasonic sensor environmental mapping and directing operation of a vehicle based on the outputted enhanced ultrasonic sensor environmental mapping.
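For illustration, the following Python sketch shows one possible form of the lidar point cloud feature generation described above, assuming the ultrasonic sensor's field of view is centered on the ego x-axis and that closeness is measured from the sensor position; the bin count, the number of points per bin, and the height thresholds are hypothetical values chosen for demonstration only.

import numpy as np

def lidar_feature(points, sensor_xy, fov_rad, n_bins=16, k=4, z_min=0.1, z_max=2.0):
    # points: (M, 3) lidar point cloud in the ego frame.
    # Filter points by height and by the ultrasonic sensor's field of view.
    pts = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    rel = pts[:, :2] - sensor_xy
    ang = np.arctan2(rel[:, 1], rel[:, 0])
    keep = np.abs(ang) <= fov_rad / 2
    pts, rel, ang = pts[keep], rel[keep], ang[keep]
    dist = np.hypot(rel[:, 0], rel[:, 1])
    # Split the field of view into angular bins centered at the sensor and,
    # within each bin, keep the k points closest to the ego vehicle.
    edges = np.linspace(-fov_rad / 2, fov_rad / 2, n_bins + 1)
    feature = np.zeros((n_bins, k, 2))
    for b in range(n_bins):
        mask = (ang >= edges[b]) & (ang < edges[b + 1])
        closest = np.argsort(dist[mask])[:k]
        sel = pts[mask][closest][:, :2]  # the third (height) dimension is discarded
        feature[b, :len(sel)] = sel
    return feature.reshape(-1, 2)  # fixed size: n_bins * k points

Because n_bins and k are fixed, the total number of selected points is constant, so the resulting lidar feature can be sized to match the inputted ultrasonic sensor temporal feature.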
The present disclosure is illustrated and described herein with reference to the various drawings.
Again, the present disclosure provides a system and method for USS reading enhancement using a lidar point cloud. This provides noise reduction and enables the generation of a 2D environmental map. More specifically, the present disclosure provides a system and method for generating an enhanced environmental map using USSs, where the map is enhanced using a lidar point cloud. Using the lidar point cloud is advantageous because it is accurate and can thus provide accurate labels for training and the like.
An autoencoder with supervised learning is used to learn the mapping between USS features 12 and lidar features 14.
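For illustration, the following is a minimal supervised autoencoder sketch in Python (PyTorch is assumed; the layer sizes, feature dimension, optimizer, and mean-squared-error loss are illustrative choices, not taken from the disclosure). The USS feature 12 is encoded to a latent representation and decoded to the shape of the lidar feature 14, which serves as the accurate training label.

import torch
import torch.nn as nn

class USSAutoencoder(nn.Module):
    # Maps a noisy USS temporal feature to an output shaped like the
    # corresponding lidar feature label.
    def __init__(self, feat_dim=128, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim))

    def forward(self, uss_feature):
        return self.decoder(self.encoder(uss_feature))

model = USSAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(uss_feature, lidar_label):
    # Supervised learning: the lidar feature provides the accurate label
    # against which the reconstruction is penalized.
    optimizer.zero_grad()
    loss = loss_fn(model(uss_feature), lidar_label)
    loss.backward()
    optimizer.step()
    return loss.item()

Once trained in this manner, the network requires only the ultrasonic sensor input, so the enhanced environmental mapping can be produced from the USS readings alone.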
It is to be recognized that, depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially. It should be noted that the algorithms of the present disclosure may be implemented on an embedded processing system running a real-time operating system (OS), which provides an assured degree of availability and low latency. As discussed below, processing in a cloud system may also be implemented if such availability and latency concerns are addressed.
Again, the cloud-based system 100 can provide any functionality through services, such as software-as-a-service (SaaS), platform-as-a-service, infrastructure-as-a-service, security-as-a-service, Virtual Network Functions (VNFs) in a Network Functions Virtualization (NFV) Infrastructure (NFVI), etc. to the locations 110, 120, and 130 and devices 140 and 150. Previously, the Information Technology (IT) deployment model included enterprise resources and applications stored within an enterprise network (i.e., physical devices), behind a firewall, accessible by employees on site or remote via Virtual Private Networks (VPNs), etc. The cloud-based system 100 is replacing the conventional deployment model. The cloud-based system 100 can be used to implement these services in the cloud without requiring the physical devices and management thereof by enterprise IT administrators.
Cloud computing systems and methods abstract away physical servers, storage, networking, etc., and instead offer these as on-demand and elastic resources. The National Institute of Standards and Technology (NIST) provides a concise and specific definition which states cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing differs from the classic client-server model by providing applications from a server that are executed and managed by a client's web browser or the like, with no installed client version of an application required. Centralization gives cloud service providers complete control over the versions of the browser-based and other applications provided to clients, which removes the need for version upgrades or license management on individual client computing devices. The phrase “software as a service” is sometimes used to describe application programs offered through cloud computing. A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is “the cloud.” The cloud-based system 100 is illustrated herein as one example embodiment of a cloud-based system, and those of ordinary skill in the art will recognize the systems and methods described herein are not necessarily limited thereby.
The processor 202 is a hardware device for executing software instructions. The processor 202 may be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the server 200, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the server 200 is in operation, the processor 202 is configured to execute software stored within the memory 210, to communicate data to and from the memory 210, and to generally control operations of the server 200 pursuant to the software instructions. The I/O interfaces 204 may be used to receive user input from and/or for providing system output to one or more devices or components.
The network interface 206 may be used to enable the server 200 to communicate on a network, such as the Internet 104.
The memory 210 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.), and combinations thereof. Moreover, the memory 210 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 210 may have a distributed architecture, where various components are situated remotely from one another but can be accessed by the processor 202. The software in memory 210 may include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. The software in the memory 210 includes a suitable operating system (O/S) 214 and one or more programs 216. The operating system 214 essentially controls the execution of other computer programs, such as the one or more programs 216, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The one or more programs 216 may be configured to implement the various processes, algorithms, methods, techniques, etc. described herein.
It will be appreciated that some embodiments described herein may include one or more generic or specialized processors (“one or more processors”) such as microprocessors; central processing units (CPUs); digital signal processors (DSPs); customized processors such as network processors (NPs) or network processing units (NPUs), graphics processing units (GPUs), or the like; field programmable gate arrays (FPGAs); and the like along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used. For some of the embodiments described herein, a corresponding device in hardware and optionally with software, firmware, and a combination thereof can be referred to as “circuitry configured or adapted to,” “logic configured or adapted to,” etc. perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. on digital and/or analog signals as described herein for the various embodiments.
Moreover, some embodiments may include a non-transitory computer-readable medium having computer-readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc. each of which may include a processor to perform functions as described and claimed herein. Examples of such computer-readable mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, and the like. When stored in the non-transitory computer-readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
The processor 302 is a hardware device for executing software instructions. The processor 302 can be any custom made or commercially available processor, a CPU, an auxiliary processor among several processors associated with the user device 300, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the user device 300 is in operation, the processor 302 is configured to execute software stored within the memory 310, to communicate data to and from the memory 310, and to generally control operations of the user device 300 pursuant to the software instructions. In an embodiment, the processor 302 may include a mobile-optimized processor, such as one optimized for power consumption and mobile applications. The I/O interfaces 304 can be used to receive user input from and/or for providing system output. User input can be provided via, for example, a keypad, a touch screen, a scroll ball, a scroll bar, buttons, a barcode scanner, and the like. System output can be provided via a display device such as a liquid crystal display (LCD), touch screen, and the like.
The radio 306 enables wireless communication to an external access device or network. Any number of suitable wireless data communication protocols, techniques, or methodologies can be supported by the radio 306, including any protocols for wireless communication. The data store 308 may be used to store data. The data store 308 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store 308 may incorporate electronic, magnetic, optical, and/or other types of storage media.
Again, the memory 310 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, etc.), and combinations thereof. Moreover, the memory 310 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 310 may have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processor 302. The software in memory 310 can include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions.
Although the present disclosure is illustrated and described herein with reference to illustrative embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following non-limiting claims for all purposes.
The present disclosure claims the benefit of priority of co-pending U.S. Provisional Patent Application No. 63/277,678, filed on Nov. 10, 2021, and entitled “SYSTEM AND METHOD FOR ULTRASONIC SENSOR ENHANCEMENT USING LIDAR POINT CLOUD,” the contents of which are incorporated in full by reference herein.