Light Detection and Ranging (LiDAR) is a remote sensing technology that emits laser light to illuminate objects and measure their distances. Specifically, a LiDAR device targets an object with a laser and then electronically measures the time for the reflected light to return to a receiver. LiDAR has been utilized in many different types of applications, such as making digital 3-D representations of areas on the earth's surface and the ocean bottom.
LiDAR sensors have been used in the intelligent transportation field because of their powerful detection and localization capabilities. For example, LiDAR sensors have been installed on autonomous vehicles (or self-driving vehicles) and used in conjunction with other sensors, such as digital video cameras and radar devices, to enable the autonomous vehicle to safely navigate along roads.
It has recently been recognized that LiDAR sensor systems could potentially be deployed as part of the roadside infrastructure, for example, incorporated into a traffic light system at intersections or otherwise positioned at roadside locations as a detection and data generating apparatus. An advantage of LiDAR sensor systems is that they can be used to collect three-dimensional traffic data without being affected by light conditions, and the detected traffic data can then be used by connected vehicles (CVs) and by other infrastructure systems to aid in preventing collisions and to protect non-motorized road users (such as pedestrians). The traffic data may also be used to evaluate the performance of autonomous vehicles, and for the general purpose of collecting traffic data for analysis. For example, roadside LiDAR sensor data at a traffic light can be used to identify when and where vehicle speeding is occurring, and it can provide a time-space diagram which shows how vehicles slow down, stop, speed up and go through the intersection during a light cycle. In addition, roadside LiDAR sensor data can be utilized to identify “near-crashes,” where vehicles come close to hitting one another (or close to colliding with a pedestrian or a bicyclist), and thus identify intersections or stretches of roads that are potentially dangerous.
A common misconception is that the application of a roadside LiDAR sensor system is similar to the application of an on-board vehicle LiDAR sensor, and that therefore the same processing procedures and/or algorithms utilized by on-board LiDAR systems could be applicable to roadside LiDAR sensor systems (possibly with minor modifications). However, on-board LiDAR sensors mainly focus on the surroundings of the vehicle, and the goal is to directly extract objects of interest from a constantly changing background. In contrast, roadside LiDAR sensors must detect and track all road users in a traffic scene against a static background. Thus, infrastructure-based, or roadside, LiDAR sensing systems have the capability to provide behavior-level multimodal trajectory data of all traffic users, such as presence, location, speed, and direction data of all road users gleaned from raw roadside LiDAR sensor data. In addition, low-cost sensors may be used to gather such real-time, all-traffic trajectories over extended distances. These trajectories can provide critical information for connected and autonomous vehicles, so that an autonomous vehicle traveling into the area covered by a roadside LiDAR sensor system becomes aware of potential upcoming collision risks and of the movement status of other road users while still at a distance from that area or zone. Thus, the tasks of obtaining and processing trajectory data are different for a roadside LiDAR sensor system than for an on-board vehicle LiDAR sensor system.
Accordingly, for infrastructure-based or roadside LiDAR sensor systems, it is important to detect target objects in the environment quickly and efficiently, because fast detection speeds provide the time needed to determine a post-detection response, for example, by an autonomous vehicle seeking to avoid a collision with other road users in the real world. Detection accuracy is also a critical factor in ensuring the reliability of a roadside LiDAR-based sensor system. Thus, roadside LiDAR sensor systems are required to exclude the static background points and finely partition the foreground points into distinct entities (clusters).
Intelligent Transportation Systems (ITS) encompass advanced applications that offer innovative services relating to different modes of transportation and enable road users to be better informed and to make safer, more coordinated, and 'smarter' use of transport networks. To obtain accurate traffic data to serve the ITS, many different types of sensors have been used, such as cameras, loop detectors, radar, Bluetooth sensors and the like. All of these sensors can provide basic and necessary data for ITS, but the data have limitations: traditional sensors installed on or along the road only provide traffic flow rates, spot speeds, average speeds, and occupancy, and such macro-level traffic data cannot fully meet the requirements of the ITS. In addition, advanced camera systems that can provide high-resolution micro traffic data (HRMTD) may be adversely affected by light conditions. Thus, LiDAR sensor systems are becoming more popular in transportation field applications.
In addition to supporting connected and autonomous vehicles, the all-traffic trajectory data generated by a roadside LiDAR system may be valuable for traffic study and performance evaluation, advanced traffic operations, and the like. For example, analysis of lane-based vehicle volume data can achieve an accuracy above 95%, and if there is no infrastructure occlusion, the accuracy of road volume detection can generally be above 98% for roadside LiDAR sensor systems. Other applications for collected trajectory data include providing conflict data resources for near-crash analysis, including collecting near-crash data (especially vehicle-to-pedestrian near-crash incidents) that may occur during low-light level situations such as during rainstorms and/or during the night hours when it is dark. In this regard, roadside LiDAR sensors deployed at fixed locations (e.g., road intersections and along road medians) provide a good way to record trajectories of all road users over the long term, regardless of illumination conditions. Traffic engineers can then study the historical trajectory data provided by the roadside LiDAR sensor system at multiple scales to define and extract near-crash events, identify traffic safety issues, and recommend countermeasures and/or solutions.
As mentioned above, LiDAR sensor systems offer an advantage over traffic cameras and/or video-based infrastructure systems in that accurate and complete LiDAR system sensor data can be generated even under bad or suboptimal lighting conditions that would adversely affect the quality of video recordings. For example, a LiDAR system sensor can generate accurate and complete vehicle data at night and/or under low-light conditions and during other conditions that would adversely affect the quality of the data generated by a video-based roadside infrastructure system. Furthermore, the analysis of infrastructure-based video data requires significantly more processing and computing power than what is needed to process LiDAR system sensor data. Roadside LiDAR systems also have an advantage over other sensing and detection technologies (such as inductive loop, microwave radar, and video camera technologies) in the ability to obtain trajectory-level data and provide improved performance in the accurate detection and tracking of pedestrians and vehicles.
When a roadside LiDAR sensor system generates all-road user trajectory data and other traffic performance measurement data, this data is spatially located within a LiDAR sensor local coordinate system having x-y-z coordinates (cartesian coordinates) with the LiDAR sensor at the center point. However, real-time data users, such as connected and autonomous vehicles, cannot easily use data that is represented by local x-y-z coordinates. In addition, such real-time local data is also difficult for traffic data analysts to interpret. Thus, the inventors recognized that there is a need for an improved data mapping method for roadside LiDAR sensor data which is accurate, inexpensive to implement and that provides data which is easy to use and/or interpret.
Features and advantages of some embodiments of the present disclosure, and the manner in which the same are accomplished, will become more readily apparent upon consideration of the following detailed description taken in conjunction with the accompanying drawings, which illustrate preferred and example embodiments and which are not necessarily drawn to scale, wherein:
Reference will now be made in detail to various novel embodiments, examples of which are illustrated in the accompanying drawings. The drawings and descriptions thereof are not intended to limit the invention to any particular embodiment(s). On the contrary, the descriptions provided herein are intended to cover alternatives, modifications, and equivalents thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments, but some or all of the embodiments may be practiced without some or all of the specific details. In other instances, well-known process operations have not been described in detail in order not to unnecessarily obscure novel aspects. In addition, terminology used in the Detailed Description is intended to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with certain examples. The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used.
In general, and for the purposes of introducing concepts of embodiments of the present disclosure, presented is an improved data mapping method for traffic data analysts and real-time road data users involving the use of roadside LiDAR sensor data and the use of geographic information system (GIS)-based software. Specifically, in some embodiments GIS-based software and roadside LiDAR sensor system data are both used to collect reference points. Examples of GIS-based software include Google Maps™, Bing Maps™, Google Earth™, and ArcGIS™, and in embodiments disclosed herein, data obtained from Google Earth™ software is utilized. It has been found that Google Earth™ software provides high precision longitude and latitude information for generating a base map without a user having to pay any additional fees. In addition, the operability of the Google Earth™ software has been found to be better and/or easier to use than Google Maps™ or Bing Maps™. However, these and other types of GIS-based software could be utilized. As explained herein, the improved data mapping process includes three main steps: reference points matching, transformation matrix calculations, and LiDAR data mapping. In an example process disclosed herein, the number and the distribution of the reference points are analyzed and then the best number of reference points and their distribution are identified, which data then can be used as recommendations for users for mapping a roadside LiDAR point cloud.
Referring again to
Referring again to
The roadside LiDAR sensing systems described above with reference to
Embodiments disclosed herein include data collection and preparation steps, wherein the geographic coordinates for reference objects are first collected using GIS-based software, such as Google Earth™ software.
In an example embodiment, a VLP-32C LiDAR sensor system (manufactured by the Velodyne Company as the “LiDAR Ultra Puck”) generates data for use in mapping reference points within a LiDAR point cloud. The VLP-32C sensor system uses thirty-two (32) laser beams paired with detectors to scan the surrounding environment, and the detection range of the LiDAR sensor is up to two-hundred meters (200 m) with a 360-degree horizontal field of view (FoV) and a 40-degree vertical field of view. The Velodyne VLP-32 LiDAR sensor system returns readings of objects in spherical coordinates, wherein a point is defined by a distance (D) and two angles (azimuth and polar angle).
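For illustration, the spherical-to-cartesian conversion implied above can be sketched in Python as follows. This is a minimal sketch: the function name is illustrative, and the axis convention shown (azimuth measured in the horizontal plane, elevation measured up from that plane) is one common convention; a particular sensor's datasheet may define the angles differently.

```python
import math

def spherical_to_cartesian(distance, azimuth_deg, elevation_deg):
    """Convert one LiDAR return, given as a distance (D) and two angles,
    into sensor-local x-y-z cartesian coordinates.

    Assumes azimuth is measured in the horizontal plane and the
    elevation angle is measured up from that plane.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.sin(az)
    y = distance * math.cos(el) * math.cos(az)
    z = distance * math.sin(el)
    return x, y, z
```

Under this convention, a return at azimuth 0° and elevation 0° lies entirely along the sensor's y-axis.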
In some implementations, the data generated by the Velodyne VLP-32 LiDAR sensor system during use may be processed by using “VeloView™” software to generate a LiDAR data point cloud and display the point cloud on a display screen of, for example, a laptop computer or desktop computer. Thus, in some embodiments the VeloView™ software may be utilized to generate data representing roadside features or objects that correspond to the reference points selected by a user using GIS-based software, for example, by using Google Earth™ software.
In an implementation, reference points are measured in two coordinate systems: the cartesian coordinate system and the World Geodetic System (WGS) 1984 coordinate system. WGS 1984 is the latest revision of the World Geodetic System and is a standard used in cartography, geodesy, and satellite navigation, including the Global Positioning System (GPS).
In the above equation, N is the curvature radius of the ellipsoidal ring; e is the first eccentricity of the ellipsoid; X is the x-coordinate value of the reference point in the ECEF coordinate system; Y is the y-coordinate value of the reference point in the ECEF coordinate system; Z is the z-coordinate value of the reference point in the ECEF coordinate system; ϕ is the latitude of the reference point; λ is the longitude of the reference point; and H is the elevation of the reference point.
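The geodetic-to-ECEF conversion of Equation (1) is a standard one, and can be sketched in Python as follows. This is a minimal sketch using the published WGS 84 ellipsoid constants; the function and constant names are illustrative and not taken from the original.

```python
import math

# WGS 84 ellipsoid constants (standard published values)
A = 6378137.0            # long (semi-major) radius a, in meters
E2 = 6.69437999014e-3    # first eccentricity squared, e^2

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert WGS 1984 geodetic coordinates (latitude phi, longitude
    lambda, elevation H) into ECEF X, Y, Z coordinates."""
    phi = math.radians(lat_deg)
    lam = math.radians(lon_deg)
    # Curvature radius N = a / W, with W = sqrt(1 - e^2 * sin^2(phi))
    w = math.sqrt(1.0 - E2 * math.sin(phi) ** 2)
    n = A / w
    x = (n + h) * math.cos(phi) * math.cos(lam)
    y = (n + h) * math.cos(phi) * math.sin(lam)
    z = (n * (1.0 - E2) + h) * math.sin(phi)
    return x, y, z
```

For example, a reference point on the equator at zero longitude and zero elevation maps to X = a, Y = 0, Z = 0.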
The influence of elevation on the results is discussed below for two different methods for collecting reference points. The curvature radius of the ellipsoidal ring N can be calculated based on Equation (2), below:
Wherein a represents the long radius of the ellipsoid, which is 6,378,137 m, and W represents the first auxiliary coefficient. W can be obtained from Equation (3) below.
W = √(1 − e² × sin² ϕ)  (3)
Where e represents the first eccentricity of the ellipsoid and ϕ represents the latitude of the point. Given that e = 0.00332 and sin² ϕ ∈ [0,1], the bounds of W are [0.999994, 1]. The values of W and N can then be rounded to:
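The stated bounds on W can be checked numerically, as in the short sketch below. The sketch uses the eccentricity value given in the text; the variable names are illustrative.

```python
import math

e = 0.00332  # first eccentricity value used in the analysis above
# W = sqrt(1 - e^2 * sin^2(phi)), and sin^2(phi) ranges over [0, 1]
w_min = math.sqrt(1.0 - e ** 2 * 1.0)   # sin^2(phi) = 1, at the poles
w_max = math.sqrt(1.0 - e ** 2 * 0.0)   # sin^2(phi) = 0, at the equator
print(round(w_min, 6), w_max)
```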
Accordingly, equation (1) can now be simplified to:
It should be noted that the elevations (H) in any city in the United States are much smaller than the curvature radius of the ellipsoidal ring, so elevation values have little effect on the calculation results of Equation (4). Assuming an elevation measurement error of σ meters, the related ECEF coordinate errors can be calculated by using Equation (5).
When σ = 1000 m, which means there is a 1000 m elevation measurement offset, the resulting relative error in the outputs is only 1.5 cm, which can be ignored. Based on this analysis, the tolerance for the elevation (H) measurement of reference points is high.
[Xi Yi Zi 1] = [xi yi zi 1] × T  (6)
There are four main transformation steps that make up the combined transformation matrix T: scaling, rotation, shear mapping, and translation. T is a square matrix of order four (4) and can be expressed as shown below:
According to the calculation rules of the matrix, the solution of the transformation matrix T can be obtained from Equation (7), which is shown below. Due to the order of T, at least four reference points are needed based on the equation.
When the number of reference points is greater than four, the least-squares method is applied to solve the overdetermined system. Least squares is a mathematical optimization technique for finding the function that best fits a dataset, and can be expressed as shown in Equation (8) below.
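The solution of the transformation matrix T can be sketched in Python with NumPy's least-squares routine, which covers both the exactly determined case (four reference points) and the overdetermined case (more than four). This is a sketch under the assumption that T acts on row vectors as in Equation (6); the function names are illustrative.

```python
import numpy as np

def solve_transformation_matrix(lidar_xyz, ecef_xyz):
    """Solve [X Y Z 1] = [x y z 1] * T for the 4x4 combined
    transformation matrix T.

    lidar_xyz: (n, 3) array of reference points in LiDAR cartesian coordinates
    ecef_xyz:  (n, 3) array of the same points in ECEF coordinates
    With n == 4 independent points the system is solved exactly; with
    n > 4 the least-squares solution of the overdetermined system is
    returned.
    """
    n = lidar_xyz.shape[0]
    src = np.hstack([np.asarray(lidar_xyz, float), np.ones((n, 1))])  # [x y z 1]
    dst = np.hstack([np.asarray(ecef_xyz, float), np.ones((n, 1))])   # [X Y Z 1]
    t, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return t  # 4x4 matrix T

def apply_transformation(lidar_xyz, t):
    """Map LiDAR cartesian points into ECEF coordinates using T."""
    n = lidar_xyz.shape[0]
    src = np.hstack([np.asarray(lidar_xyz, float), np.ones((n, 1))])
    return (src @ t)[:, :3]
```

Because T combines scaling, rotation, shear mapping, and translation into a single matrix, the same two calls can map an entire frame of cloud points at once.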
According to the characteristics of the earth, the data in the ECEF coordinate system can be converted into the WGS 1984 coordinate system by using Equations (9):
In the above equations, e is the second eccentricity of the ellipsoid; a is the long radius of the ellipsoid; and b is the short radius of the ellipsoid.
For roadside LiDAR sensing systems, Equation 8 and Equation 9 shown above are calculated with selected reference points and applied either to the LiDAR cartesian coordinates of all of the cloud points or, in some implementations, to the LiDAR cartesian coordinates associated with trajectory data. In both cases the x, y, and z values from the LiDAR sensor system are utilized to obtain the corresponding points in WGS 1984 geographic coordinates. The cloud points can then be used for GIS data analysis, and the trajectory data can be used to serve connected vehicles and intelligent transportation systems in real time.
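The ECEF-to-geodetic step can be sketched in Python using Bowring's closed-form approximation, a widely used method that works from the same ellipsoid parameters referenced above (the long radius a, the short radius b, and the second eccentricity). This is a sketch, not the exact form of Equations (9) in the original, and the names are illustrative.

```python
import math

A = 6378137.0                    # long (semi-major) radius a, in meters
B = 6356752.314245               # short (semi-minor) radius b, in meters
E2 = 1.0 - (B * B) / (A * A)     # first eccentricity squared
EP2 = (A * A) / (B * B) - 1.0    # second eccentricity squared

def ecef_to_geodetic(x, y, z):
    """Convert ECEF X, Y, Z into WGS 1984 latitude and longitude
    (decimal degrees) and elevation (meters), using Bowring's
    closed-form approximation."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)                 # distance from the earth's axis
    theta = math.atan2(z * A, p * B)     # auxiliary (parametric) angle
    lat = math.atan2(z + EP2 * B * math.sin(theta) ** 3,
                     p - E2 * A * math.cos(theta) ** 3)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    h = p / math.cos(lat) - n            # not valid at the exact poles
    return math.degrees(lat), math.degrees(lon), h
```

As a spot check, the ECEF point (a, 0, 0) maps back to latitude 0, longitude 0, elevation 0.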
Next, a sensitivity analysis was undertaken wherein the number and distribution of reference points were analyzed to determine the mapping accuracy of the disclosed methods.
As shown in
The influence of different distributions and numbers of the reference points on mapping accuracy may be analyzed in accordance with these points. In an implementation, some of the measured GPS points were used as reference points and others were used to verify the accuracy of the reference points obtained according to the processes disclosed herein, by calculating the offset between measured GPS locations and the calculated GPS locations. In particular, the offset between the measured GPS point A (LatA, LonA) and the calculated GPS point B (LatB, LonB) can be calculated based on the Great Circle Distance Equation (10) shown below.
ΔD = R × arccos(sin(LatA) × sin(LatB) + cos(LatA) × cos(LatB) × cos(LonA − LonB))  (10)
Where R is the average radius of the earth, which equals 6,371,004 m, and ΔD is the offset between the two points, in meters.
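Equation (10) can be sketched in Python as follows. The sketch assumes the latitudes and longitudes are supplied in decimal degrees and converts them to radians before applying the trigonometric terms; the function name is illustrative.

```python
import math

R_EARTH = 6371004.0  # average radius of the earth, in meters

def great_circle_offset(lat_a, lon_a, lat_b, lon_b):
    """Offset (in meters) between a measured GPS point A and a
    calculated GPS point B, both given in decimal degrees, computed
    with the great circle distance formula of Equation (10)."""
    la, oa = math.radians(lat_a), math.radians(lon_a)
    lb, ob = math.radians(lat_b), math.radians(lon_b)
    cos_d = (math.sin(la) * math.sin(lb)
             + math.cos(la) * math.cos(lb) * math.cos(oa - ob))
    # Guard against floating-point values slightly outside [-1, 1]
    cos_d = max(-1.0, min(1.0, cos_d))
    return R_EARTH * math.acos(cos_d)
```

For instance, two points one degree of latitude apart on the same meridian are offset by R × π/180, roughly 111 km.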
In an embodiment, four to eight GPS points were randomly selected and used as reference points, and unselected points were used as validation points.
Where y is the average offset and x is the number of reference points.
Based on the regression function, when the number of the reference points is greater than or equal to 13, the offset between the measured GPS locations and the calculated GPS locations is minimal (it equals 0.138 m).
The distribution of the reference points is shown in
In an example, ten groups of data were analyzed for each scenario listed above, and in each group, 13 reference points were selected.
Thus, when selecting reference points, the suggested reference point locations are those distributed around the roadside LiDAR sensor rather than all along the same direction from the sensor. For example, if all of the reference points are located east of the LiDAR sensor, then the calculated WGS 1984 coordinates will exhibit larger errors or offsets from the actual coordinates. Thus, for best results the reference points should be distributed around the LiDAR sensor in various directions rather than to one side. In addition, the distances of the reference points from the LiDAR sensor should vary and should be more than twenty-five meters (25 m).
In summary, after reference points are selected using GIS-related software and generated by the LiDAR sensor, three main steps are performed: reference points matching, transformation matrix calculation, and data mapping. As explained above, in an example following the data mapping step, a sensitivity analysis was used to determine the best number and distribution for the collection of reference points. Data was thus collected in the physical world and then used to verify the proposed method, and the results showed a 0.138 m offset between measured GPS location points and the calculated location points. In addition, another data mapping method was selected for comparison, and based on that result the average offset for one frame of data was calculated as being 2.21 m (wherein the total number of LiDAR points is 37,832) so the method disclosed herein is superior.
LiDAR sensor system for the roadway section, and receives 1206 via an input device, selection by a user of a plurality of reference objects defined by the geographic coordinates data and by the LiDAR cartesian coordinates data. The computer processor then calculates 1208 transition matrixes for transforming the LiDAR cartesian coordinates data into geographic coordinates data, and next transforms 1210 the LiDAR cartesian coordinates data into LiDAR geographic coordinates data using the transition matrixes. In some embodiments, the computer processor then transmits 1212 the LiDAR geographic coordinate data to a user computer for analysis and/or displays the LiDAR geographic coordinate data of the roadway section on a display screen for analysis by a user.
In some embodiments of the method 1200 for mapping roadside LiDAR sensor data, the step of converting the LiDAR cartesian coordinates data into LiDAR geographic coordinates data may include the computer processor utilizing the transition matrixes to first convert the LiDAR cartesian coordinate data into LiDAR Earth-Centered, Earth-Fixed (ECEF) coordinate data, and then to convert the LiDAR ECEF coordinate data into geographic coordinate data. In addition, the geographic coordinates data may be WGS 1984 coordinates data, the roadside LiDAR sensor data may be trajectory data, and the selected reference objects may include roadside features having fixed locations, such as at least one of a traffic sign, a utility pole, a corner of a building, a fire hydrant, a light pole, a traffic light pole, a start of a median, and a boulder. The GIS-based software used in the process may be Google Earth™ software, and the detection range of the LiDAR sensor system may be up to two hundred meters (200 m) with a three hundred and sixty-degree (360°) horizontal field of view (FoV) and a forty-degree (40°) vertical field of view.
The roadside LiDAR data computer 1300 may constitute one or more processors, which may be special-purpose processor(s), that operate to execute processor-executable steps contained in non-transitory program instructions described herein, such that the roadside LiDAR data computer 1300 provides desired functionality.
Communication device 1304 may be used to facilitate communication with, for example, electronic devices such as roadside LiDAR sensors, traffic lights, transmitters and/or remote server computers and the like devices. The communication device 1304 may, for example, have capabilities for engaging in data communication (such as traffic data communications) over the Internet, over different types of computer-to-computer data networks, and/or may have wireless communications capability. Any such data communication may be in digital form and/or in analog form.
Input device 1306 may comprise one or more of any type of peripheral device typically used to input data into a computer. For example, the input device 1306 may include a keyboard, a computer mouse and/or a touchpad or touchscreen. Output device 1308 may comprise, for example, a display screen (which may be a touchscreen) and/or a printer and the like.
Storage device 1310 may include any appropriate information storage device, storage component, and/or non-transitory computer-readable medium, including combinations of magnetic storage devices (e.g., magnetic tape and hard disk drives), optical storage devices such as CDs and/or DVDs, and/or semiconductor memory devices such as Random Access Memory (RAM) devices and Read Only Memory (ROM) devices, as well as flash memory devices. Any one or more of the listed storage devices may be referred to as a “memory”, “storage” or a “storage medium.”
The term “computer-readable medium” as used herein refers to any non-transitory storage medium that participates in providing data (for example, computer executable instructions or processor executable instructions) that may be read by a computer, a processor, an electronic controller and/or a like device. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media may include, for example, optical or magnetic disks and other persistent memory. Volatile media may include dynamic random-access memory (DRAM). Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, a solid state drive (SSD), any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
Various forms of computer readable media may be involved in providing sequences of computer processor-executable instructions to a processor. For example, sequences of instruction (i) may be delivered from RAM to a processor, (ii) may be wirelessly transmitted, and/or (iii) may be formatted according to numerous formats, standards or protocols, such as Transmission Control Protocol, Internet Protocol (TCP/IP), Wi-Fi, Bluetooth, TDMA, CDMA, and 3G.
Referring again to
The storage device 1310 may also include one or more roadside LiDAR sensor data database(s) 1318 which may store, for example, traffic trajectory data and the like, and which may also include computer executable instructions for controlling the roadside LiDAR data computer 1300 to process raw LiDAR sensor data and/or information for tracking vehicles and the like. The storage device 1310 may also include one or more other database(s) 1320 and/or have connectivity to other databases (not shown) which may be required for operating the roadside LiDAR data computer 1300.
Application programs and/or computer readable instructions run by the roadside LiDAR data processing computer 1300, as described above, may be combined in some embodiments, as convenient, into one, two or more application programs. Moreover, the storage device 1310 may store other programs or applications, such as one or more operating systems, device drivers, database management software, web hosting software, and the like.
Accordingly, the processes disclosed herein solve the technical problem of how to provide an improved data mapping method for roadside LiDAR sensor data that is accurate and inexpensive to implement, and that provides location data (i.e., traffic data) that can easily be used by traffic data analysts and real-time data users. These goals are achieved by having a computer processor convert roadside LiDAR sensor system data for a road segment from LiDAR cartesian coordinates data to geographic coordinates data (WGS 1984) for real-time applications or for further data analysis. As explained herein, in order to convert the LiDAR cartesian coordinate data to data in the WGS 1984 format, the LiDAR cartesian coordinate data must first be converted to Earth-Centered, Earth-Fixed (ECEF) coordinate system data, and then from the ECEF coordinate system data to WGS 1984 format data. Thus, embodiments disclosed herein complete the coordinate conversions by utilizing transition matrixes, which are calculated by utilizing the reference points' WGS 1984 coordinates and cartesian coordinates.
As used herein, the term “computer” should be understood to encompass a single computer or two or more computers in communication with each other.
As used herein, the term “processor” should be understood to encompass a single processor or two or more processors in communication with each other.
As used herein, the term “memory” should be understood to encompass a single memory or storage device or two or more memories or storage devices.
As used herein, a “server” includes a computer device or system that responds to numerous requests for service from other devices.
As used herein, the term “module” refers broadly to software, hardware, or firmware (or any combination thereof) components. Modules are typically functional components that can generate useful data or other output using specified input(s). A module may or may not be self-contained. An application program (sometimes called an “application” or an “app” or “App”) may include one or more modules, or a module can include one or more application programs.
The above descriptions and illustrations of processes herein should not be considered to imply a fixed order for performing the process steps. Rather, the process steps may be performed in any order that is practicable, including simultaneous performance of at least some steps and/or omission of steps.
Although the present disclosure has been described in connection with specific example embodiments, it should be understood that various changes, substitutions, and alterations apparent to those skilled in the art can be made to the disclosed embodiments without departing from the spirit and scope of the disclosure.