The present invention relates to a non-invasive automated system and method for measuring a vehicle's weight, dimensions, noise, speed, license plate number, vehicle type, and/or the vehicle's Department of Transportation number in the case of a commercial vehicle, simultaneously and without interfering with traffic. Moreover, the same system determines the dynamics of the monitored bridge in real time.
Trucks are routinely weighed at weigh stations to determine if the trucks are overweight and liable to cause damage to the roadways. However, conventional systems require stopping the trucks at weigh stations, a time-consuming and expensive procedure. What is needed is a less invasive method for measuring truck weight distributions. The present invention satisfies this need.
Example methods, devices and systems according to embodiments described herein include, but are not limited to, the following:
1. A vehicle monitoring system for determining one or more identifying characteristics of one or more vehicles traversing a bridge, comprising:
2. The system of example 1, wherein:
3. The system of example 2, wherein:
4. The system of example 1, wherein:
5. The system of example 1, wherein:
6. The system of example 1, further comprising one or more targets attached to the bridge, wherein:
7. The system of example 6, wherein:
8. The system of example 1, wherein:
9. The system of example 8, wherein the at least one sensor measuring the training displacements comprises:
10. The system of example 1, wherein:
11. The system of example 10, wherein the computer system:
13. The system of example 1, wherein:
14. The system of example 1,
15. The system of example 14, wherein:
16. The system of example 15, wherein the computer system determines at least one of the number of contact points, the speed, the distance of the markers, and the separation of the point loads using a machine learning algorithm or computer vision analysing the video images outputted from the traffic camera.
17. The system of example 15, wherein at least one of the sensor devices comprises a rangefinder determining the distance of the markers and the separation of the point loads.
18. The system of example 1, wherein the sensor devices automatically capture the signals and the computer automatically determines the weight distribution from the signals once the system is activated.
19. The system of example 1, further comprising:
20. An internet of things (IoT) system comprising the system of example 1,
21. The system of example 1,
22. A system deducing the weight of vehicles traversing a bridge by combining the time-series displacement of point target(s) on the bridge with the location of each vehicle on the bridge at each instant. It may use a super-high-resolution camera system and laser system to measure the time-series displacement of visual target(s), and use traffic camera(s) and microphone(s) to detect the location of vehicles.
23. A system measuring vehicles' gross and axle weights.
24. A system applying open-source and custom-trained AI and deep learning algorithms to the image of the vehicle traversing the bridge to determine vehicle size, dimensions, and visual characteristics. In parallel, a second vehicle classification is accomplished by applying artificial intelligence (AI) and deep learning algorithms to the deflection time-series of the bridge caused by the vehicle's passage.
25. A system that classifies a vehicle based on its noise pattern using open-source and custom-trained AI and deep learning algorithms.
26. In one or more examples, the computer implements training of a neural network algorithm having inputs comprising the locations of the vehicle on the bridge and outputs comprising the deflection(s) of the visual target(s).
Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
In the following description of the preferred embodiment, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration a specific embodiment in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
In various examples, the vehicle characteristic is deduced from the displacement of one or more targets mounted on the bridge. A variety of sensors 102 may be used to deduce the target displacement. In one or more examples, multiple sensor types are aggregated in the system. Each type of sensor may be selected based on its suitability for deducing the target(s)' displacement in a specific frequency range. Moreover, different sensor types may be used to expand the working frequency range of the target's time-series displacement. These sensors could be invasive as well as non-invasive devices.
A computer system is used to determine the identifying characteristic from the measurement data outputted from the sensors. In various examples, the computer system comprises a distributed computer system comprising a client computer 104 attached to the sensor or edge devices and a cloud or server 106 for determining the weight distribution from the sensor data. There are different ways of assigning a deflection pattern to a vehicle as if it were the only vehicle traversing the bridge. Monitoring visual targets at different locations on the bridge and combining the information with the geolocation of vehicles on the bridge is one method. A second method is to use an algorithm based on neural networks.
Various example sensor and computation modalities are described below.
In one or more examples, the computer 210 calculating the displacement of the targets comprises multiple processing cores capable of parallel processing (e.g., a graphics processing unit, GPU). The multiple processing cores (e.g., GPU) may use parallel processing to find the location of the target(s) in the image frame and calculate their displacements as a function of time.
The system of
In one or more examples, the sensor comprises the electro-optic device (for measuring deflections of the target) described in [1].
The time-series deflection of the bridge can be measured using a rangefinder aiming a beam at a target on the bridge and recording the beam reflected back from the target. Using the rangefinder, it is possible to determine the time-series displacement of the target (e.g., a stain on the bridge, as illustrated in
Example rangefinder beams emitted from the rangefinder include, but are not limited to, electromagnetic beams (microwave, radar, etc.), optical beams (e.g., a laser beam), or acoustic beams (e.g., sonar, ultrasound, etc.).
In another example, a series of acoustic and/or ultrasonic sensors may be used to detect the position of each vehicle on the bridge and to track the vehicles using an acoustic trilateration method. The acoustic pattern and noise level of each vehicle can be extracted and differentiated from the ambient noise. Each vehicle traversing the bridge emits an acoustic signature detected by the array of synchronized acoustic sensors. By comparing the signals received by the acoustic sensors, the relative phase delays to multiple microphones can be calculated. In this way, time-of-flight trilateration can be performed, enabling the geolocation of each source of sound and especially the geolocation of vehicles traversing the bridge. Then, typical existing and demonstrated tracking algorithms can be applied to predict future positioning of the vehicle based on the history of the individual positions being reported.
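As an illustration of the trilateration step, the following sketch simulates a small microphone array and recovers a sound source's position from its time differences of arrival using a grid search. The microphone layout, grid bounds, and step size are illustrative assumptions, not the system's actual configuration.

```python
import math

# Hypothetical microphone positions on the bridge deck (meters).
MICS = [(0.0, 0.0), (30.0, 0.0), (60.0, 0.0), (30.0, 8.0)]
SPEED_OF_SOUND = 343.0  # m/s

def arrival_times(src):
    """Time of flight from a sound source to each microphone."""
    return [math.dist(src, m) / SPEED_OF_SOUND for m in MICS]

def locate(tdoas, step=0.5):
    """Grid-search the deck for the source whose time differences of
    arrival (relative to microphone 0) best match the measured ones."""
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 60.0:
        y = 0.0
        while y <= 8.0:
            t = arrival_times((x, y))
            cand = [ti - t[0] for ti in t]
            err = sum((a - b) ** 2 for a, b in zip(cand, tdoas))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

# Simulate a vehicle at (20, 4) and recover its position from the delays.
true_times = arrival_times((20.0, 4.0))
measured_tdoas = [t - true_times[0] for t in true_times]
print(locate(measured_tdoas))
```

In practice, the relative delays would come from cross-correlating the synchronized microphone signals rather than from simulation.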
In one or more examples, an acoustic signature and intensity can be associated with each vehicle. Large-scale deployment of the technology can enable updates to and use of a database to identify vehicles by acoustic signature.
In one or more examples, the microphone comprises the microphone described in [2].
(iii) Traffic Camera
One or more autonomous-and-self-contained traffic camera systems can be used to stream live views of the traffic, detect and track each vehicle on a bridge, and know each vehicle's location at each instant, as shown in
In various examples, the data generated by each of the on-premise devices is further processed remotely on the computer system linked to the on premise (on site) sensors.
In a typical example, deflection measurements (time-series displacement measurements) are made on site using the on premise sensors (e.g., a camera or a rangefinder). The sensors may each comprise local or embedded computers or processors configured to perform the processing of the data to obtain the deflection measurements. Moreover, a time tag (using Coordinated Universal Time (UTC) accessible through the internet) is applied to each data recording at the sensor on the site. Due to the time tag on each recording, the data from different sensors can be matched even if different sensors send their data to the remote computer for aggregation and processing at different times. However, in other examples, all sensors can be physically synchronized on site.
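The time-tag matching described above can be sketched as a nearest-timestamp pairing of two sensor streams. The record format, example values, and tolerance below are illustrative assumptions:

```python
from bisect import bisect_left

# Illustrative records: (UTC epoch seconds, reading).
camera = [(100.00, "frame-1"), (100.04, "frame-2"), (100.07, "frame-3")]
rangefinder = [(100.01, 1.2), (100.05, 1.4), (100.08, 1.1)]

def match(stream_a, stream_b, tolerance=0.02):
    """Pair each record in stream_a with the nearest-in-time record in
    stream_b, keeping only pairs whose time tags agree within tolerance."""
    times = [t for t, _ in stream_b]
    pairs = []
    for t, value in stream_a:
        i = bisect_left(times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(candidates, key=lambda j: abs(times[j] - t))
        if abs(times[j] - t) <= tolerance:
            pairs.append((value, stream_b[j][1]))
    return pairs

print(match(camera, rangefinder))
```

Because each record carries its own UTC tag, the streams can be merged this way regardless of when each sensor uploads its data.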
The deflection measurement data is then sent to an edge device where the data is collected and further processed.
In various examples, the edge device is a device that provides an entry point to the core of a cloud or a remote computing system. In some examples, edge devices perform processing of the data from the sensor prior to it being sent to the cloud core. For example, live video streams of traffic captured by the traffic camera can be processed in an edge device using vehicle detection and tracking algorithms. In one or more examples, the edge devices host a processor capable of parallel computing configured for open-source object detection, classification, and tracking, e.g., using different versions of YOLO.
In yet further examples, the processors in the edge device can be configured to implement and train neural networks (e.g., convolutional neural networks) using custom-made and pre-existing training data, e.g., to fine tune the previous classification and detection of vehicles, to detect vehicle size and other characteristics such as, but not limited to, the type of vehicle, color, model year, plate number, number of axles, and axle separations. Open-source data and custom-made training data can be combined to detect objects and differentiate pedestrians, bicycles, motorcycles, passenger cars, utility cars and trucks, small trucks, large trucks, trailers, truck-tractors, etc.
IoT edge devices can also be used to detect and track the position of each vehicle on the bridge using acoustic trilateration methods, e.g., by receiving the noise pattern from the on premise acoustic sensor and extracting and differentiating the noise pattern of each vehicle from the ambient noise.
In one or more examples, the edge devices are physical devices. In other examples, the edge devices are virtual machines in the cloud. In one or more examples, the edge device performing object detection in the traffic camera video images comprises multiple processing cores capable of parallel processing (e.g., a graphics processing unit, GPU). Thus the multiple processing cores (e.g., GPU) may use parallel processing to identify the objects (e.g., vehicles) in the traffic video footage.
The hub is a bidirectional communications interface that allows communication between the on premise devices, the edge devices and the back end (e.g., core) of the IoT cloud computer. The hub may connect, manage and monitor the IoT devices, for example, managing workflow, data transfer, request-reply, synchronization information, device authentication (per device security keys).
The hub transfers the data from on premise and edge devices to the core of the cloud or remote computing system. Data may be aggregated in the core or the hub based on the UTC time codes time tagged on each data packet sent by devices. In one or more examples, the time tags are synchronized to each other using the most accurate and reliable time references available on the internet or a global positioning system (GPS).
The cloud computing system may comprise on-demand availability of computer resources, including data storage (cloud storage) and computing power, which does not require direct management by the user. In various examples, the cloud may comprise hardware computers and/or server computers, or virtual computing systems or machines, for example. Virtual machines (VMs) may function as virtual computer systems with their own CPUs, memory, network interfaces, and everything else that a physical machine has. Using the cloud, the virtual machines can be provisioned in a few minutes and used immediately. Similarly, we can deprovision them immediately, allowing efficient control of resources. In one or more examples, the deflection (e.g., time-series displacement of the bridge measured using a sensor and an onsite computer with GPU) can be streamed in real time to the cloud computing system so that the weight distribution can be calculated from the displacement data using a web application. In one or more examples, this computation can be performed in real time by provisioning multiple computing services online, but physically located across the country.
The computer system may send the data to a web application for viewing by a user. The web app may show all information (e.g., weight distribution, identifying characteristics) in real time to subscriber end users. In one embodiment, motorists could weigh their vehicle without stopping by subscribing to the system.
In one or more examples, extraction of segments of the visual target's displacement time-series (corresponding to each of one or more vehicles crossing the bridge) is achieved by:
In one or more examples, neural networks are used to associate a time-series deflection pattern to individual vehicles and then deduce the vehicle's weight. Typically the time-series deflection of the point target(s) (e.g., a stain as illustrated in
In other words,
In one or more examples, each vehicle-axle is detected. In this case, the pixel location of each axle is an input to the neural network and each vehicle axle is considered an external point-load to the bridge. This approach allows measurement of the vehicle axle weight in addition to the vehicle's gross weight.
Weight and bias and other network characteristics are calculated using training data. In one or more examples, the data collection rate equals the traffic camera's frame rate. If the traffic camera streams video images at a rate of 30 frames per second, then the data collection rate is 108,000 sets of data after an hour and 2,592,000 sets of data after one day. Network parameters can be defined by assigning an average weight to the vehicles traversing a traffic lane, since the number of random vehicles traversing the bridge is very large over the time training data is collected. The calculation may be simplified by using different schemes to group neighboring pixels or regions and associate the groups to the input nodes of neural network.
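As a sanity check, the data collection rate figures above follow directly from the frame rate:

```python
# Data sets accumulate at the traffic camera's frame rate.
frames_per_second = 30
per_hour = frames_per_second * 60 * 60   # sets of training data per hour
per_day = per_hour * 24                  # sets of training data per day
print(per_hour, per_day)  # → 108000 2592000
```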
The size of each pixel inputted to the neural network can be selected in a variety of ways. In one or more examples, each pixel inputted to the neural network is a traffic lane on the bridge. In one or more examples, the weights and biases of each layer in the neural network are configured and trained with the assumption that a length of the same lane of the bridge receives the same amount of traffic over time (i.e., the weight at each location along one lane is on average the same). This is achieved by using sufficiently long training times (in some examples, the training time may be a plurality of hours) so that on average, the traffic and weight experienced by all locations along the length of the lane is the same. Discrepancies in traffic between lanes can be identified by camera and accounted for using weighting factors.
The weights and biases for each neural network can also be determined/calibrated by training the neural network using a known vehicle (of known weight) traveling on every lane of the bridge.
Once the neural network is trained, the neural network can be reversed to deduce vehicle relative loads for each recorded point target displacement. If this method leads to multiple solutions corresponding to different input configurations, the solution corresponding to the location of vehicles on the traffic video image may be selected. With this method, it is unnecessary to calibrate the system to measure vehicle relative weight.
In one or more examples, while the system is operating, the neural network may continuously update and train itself so that the system becomes more accurate and reliable as time goes by.
Translating relative weight measurement to weight measurement requires calibration. System calibration can be performed as soon as a known-weight vehicle traverses the bridge. In one or more examples, the system is calibrated each time a truck having a known weight traverses the bridge. Such a known-weight truck may be a truck that has been pulled over for weight inspection and whose weight (measured on a scale) has been entered into the system (e.g., via an IoT or cloud system).
In one or more examples, the system utilizing the neural network detects a truck-tractor (without trailer) traversing the bridge, and deduces the vehicle model and characteristics of the truck tractor using AI and deep learning algorithms applied to the traffic video-images. The neural network system then uses the vehicle estimated weight to self-calibrate.
In one or more examples, the system utilizing the neural network at one station is connected to similar systems monitoring other bridges at other stations. If a first one of the stations is adequately calibrated and a vehicle that traversed it then drives on another bridge at a second station, the system monitoring the second station could use the vehicle weight from the first station to self-calibrate.
In one or more examples, the system is connected to nearby weigh stations and uses vehicle weights obtained at the weigh station and traversing the bridge to self-calibrate.
In one or more examples, the system is calibrated correctly to work on a specific type of bridge and therefore will be calibrated to work on similar bridges.
Block 900 represents collecting a set of images of vehicles on a bridge.
Block 902 represents, for each of the images, identifying which pixels in the image contain a vehicle, to obtain a distribution of pixels representing the positioning of the vehicles.
Block 904 represents collecting a set of deflections (time series of displacements of a target) of the bridge caused by the vehicles traversing the bridge, for each of the images.
Block 906 represents creating a training set comprising the distributions of pixels and the deflections.
Block 908 represents training the neural network by associating the distributions of pixels with the deflection obtained for that distribution, to obtain a trained neural network, so that the neural network is trained to output the deflection in response to the distribution of pixels at the inputs to the neural network.
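A minimal stand-in for the training described in Blocks 900-908, assuming a toy one-layer linear "network", synthetic per-pixel vehicle occupancy as inputs, and an invented hidden per-pixel deflection contribution (all dimensions and values are illustrative, not the system's):

```python
import random

N_PIXELS = 4
TRUE_INFLUENCE = [0.5, 1.0, 1.5, 0.8]  # hidden per-pixel deflection contribution

# Build a synthetic training set: (pixel occupancy, resulting deflection).
random.seed(0)
samples = []
for _ in range(200):
    pixels = [random.randint(0, 1) for _ in range(N_PIXELS)]
    deflection = sum(w * p for w, p in zip(TRUE_INFLUENCE, pixels))
    samples.append((pixels, deflection))

# Train the linear model by stochastic gradient descent on squared error,
# associating each pixel distribution with its deflection.
weights = [0.0] * N_PIXELS
for _ in range(500):
    for pixels, target in samples:
        pred = sum(w * p for w, p in zip(weights, pixels))
        err = pred - target
        weights = [w - 0.05 * err * p for w, p in zip(weights, pixels)]

print([round(w, 2) for w in weights])
```

After training, the learned weights recover the hidden per-pixel influence, which is the sense in which the network maps pixel distributions to deflections.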
Block 1000 represents collecting a set of deflections of a bridge in response to vehicles traversing the bridge; Block 1002 represents inputting the set of deflections as inputs to the trained neural network so that the neural network outputs a distribution of pixels identifying locations of the vehicles on the bridge and associating the locations with a magnitude of the deflections so as to determine the load distribution (resulting from the passage of the one or more of the vehicles).
Block 1004 represents outputting a comparative weight of the vehicle (e.g., as compared to other vehicles on the bridge, or a comparative weight at each of the pixels (e.g., pixel X corresponds to a location experiencing more than weight than pixel Y because the deflection associated with pixel X is larger than the deflection associated with pixel Y).
Block 1004 represents optionally calibrating the load distribution using a known weight of a vehicle so that the weight of each of the vehicles can be determined using the calibration.
e. Diverse Vehicle Classification Method Using Only the Time-Series Deflection Data of Visual Targets
The vehicle classification method involves applying feature detection/pattern recognition algorithms to the time-series displacements of the visual targets. One approach consists of knowing the characteristic time-series deflection of different types of vehicles when traversing a bridge. Knowing the displacement pattern caused by a particular vehicle type when traversing a bridge, cross-correlation can be employed to detect the passage of those types of vehicles.
As a specific vehicle passes over the bridge, the cross-correlation between the characteristic deflection time series of that vehicle type and the monitored time series of the visual target reaches a maximum peak.
In this example, the deflection times series of the bridge associated with the vehicle's passage is fitted to a mathematical function modeling the deflection of the bridge. By choosing the best coefficients for the mathematical model to fit the recorded data, the vehicle's characteristics, such as its axle weights, separations, and speed, can be determined.
In this model, vehicle loads are transferred to the pavement on the bridge through the vehicle's points of contact with the pavement. Each tire-pavement interaction can be represented as a point load.
Bridge deflection caused by live loads can be modelled at different levels of complexity. Some advanced numerical models perform a dynamic analysis of the bridge.
In one example, a static model of the bridge can be used to describe the relationship between the bridge's deflection and the static applied loads. In other examples, as more data is collected and a better understanding of the deflections is obtained, the model can be refined, optimized, updated and/or improved to account for both bridge and vehicle dynamics. For example, if the bridge's deflection caused by a single point load is known, a superposition method can be used to describe the deflection of a bridge caused by an ensemble of point loads representing a truck or an ensemble of trucks.
There are a number of factors that affect the shape of the deflection pattern caused by a truck's passage on a bridge, including the bridge length, truck speed, point loads of the truck, point-load separation distances, and location of the bridge where the deflection is measured.
Most highway bypasses are stiff beam bridges, which can be modeled as beams simply supported at both ends (as shown in
The same static model can be used to calculate the deflection time-series caused by a single point load traversing the bridge. To do so, a function of time is used to adjust the point load location. In this case, the location of P, expressed with the help of a and b, becomes a function of time. As a result, we can calculate the time-series deflection of the bridge y(t) at the location x caused by the passage of the point-load P.
Assuming P moves at speed ν, a(t) and b(t) are functions of ν.
where, t0 is the time at which the point load enters the bridge, and t1 is the time at which it exits.
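For reference, the standard Euler-Bernoulli result for a simply supported beam gives this relationship explicitly. The expressions below are the textbook static formula, offered as a reconstruction consistent with the symbols P, a, b, x, ν, t0, and t1 used above (L is the span and EI the flexural rigidity):

```latex
% Static deflection at position x due to a point load P located a distance a
% from the left support (b = L - a), for 0 <= x <= a:
y(x) = \frac{P\,b\,x}{6\,L\,EI}\left(L^{2} - b^{2} - x^{2}\right), \qquad 0 \le x \le a
% For the moving load, the load position becomes a function of time:
a(t) = \nu\,(t - t_{0}), \qquad b(t) = L - \nu\,(t - t_{0}), \qquad t_{0} \le t \le t_{1} = t_{0} + \frac{L}{\nu}
```

Substituting a(t) and b(t) into y(x) yields the time-series deflection y(t) at a fixed observation point x.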
This is the deflection model of a beam simply supported at both ends. Alternatively, we could use a more advanced displacement bridge model based on a numerical structural analysis of the bridge using software such as Etabs or SolidWorks. In other examples, a dynamic model of the response of the bridge can be used. Thus, other response models of the bridge can be used or optimized.
In one or more examples, the traffic camera is synchronized with the bridge deflection measurements. An object detection and tracking algorithm is applied to the traffic video in real time. The traffic camera also records the location of each vehicle on the bridge during each recording with a time stamp that is synchronized with the bridge's deflection measurements. Therefore, every time a bridge displacement is recorded, the exact location and lane of the vehicles responsible is known.
More specifically, synchronization of traffic cameras and bridge deflection measurements can be implemented by associating a time tag with each recording, whether it is a bridge displacement measurement or a traffic video-image recording. Traffic video is processed in real-time on an IoT edge device to detect and track objects. Real-time processing is possible on a GPU-enabled device.
As an initial guess, this information is used to fit a vehicle's deflection pattern to its mathematical model using known curve fitting methods.
A curve fitting technique is used to decompose the measurement data (the deflection pattern) into a series of deflections caused by single point loads. Each single point load deflection corresponds to a deflection of a truck axle. The intensity of each peak is proportional to the weight of each axle.
Curve fitting provides a more accurate measure of the vehicle's speed than the initial guess. As a result, the time between different deflections can be used to determine the vehicle's axle distance separations. The data obtained in
In this case, we were only monitoring the deflection of the bridge at one location. Nevertheless, simultaneously monitoring bridge displacement at multiple locations would allow us to infer the weight of the car on each of its wheels.
When multiple vehicles drive on the bridge simultaneously, determining the deflection pattern associated with one vehicle is a typical linear inverse problem. This requires simultaneously solving a set of linear equations. The problem is formally overdetermined and has a unique solution if there are more knowns than unknowns.
The knowns are the total number of observation data points associated with the passage of a vehicle. It is the number of times we record the deflection of the bridge while the vehicle is on it, multiplied by the number of visual targets we observe. Unknowns are the deflection patterns caused by each vehicle traversing the bridge as if it were the only one on the bridge.
The traffic camera also records the location of each vehicle on the bridge during each recording with a time stamp that is synchronized with the bridge's deflection measurements. Therefore, every time we record a bridge displacement, we know the exact location and lane of the vehicles responsible.
Using this information, a deflection pattern can be associated with each vehicle if we consider a single-lane bridge. However, the presence of multiple measurements along the length of the bridge will facilitate the solution, since it will increase the number of knowns.
The deflection of the bridge caused by both trucks on the bridge is given by:
K is proportional to the stiffness of the bridge; T1(t−t1) represents the ensemble of moving point loads characterizing a vehicle entering the bridge at time t1 and driving in lane 1, and T2(t−t2) represents the ensemble of moving point loads characterizing a vehicle entering the bridge at time t2 and driving in lane 2.
Let's suppose we calibrate our measurement with a known-weight truck T. When no other vehicles are on the bridge, T drives on it in the first lane. We measure the deflection on the first lane, D1(t)|T=First Lane, and on the second lane, D2(t)|T=First Lane.
We can assume a11 = 1. Consequently, the first equation, D1(t)|T=First Lane = K·T(t−t1), determines K.
Then we can use the second equation, D2(t)|T=First Lane = K·a21·T(t−t1), to determine a21.
When T drives in the second lane, we measure the deflection on the first lane, D1(t)|T=Second Lane, and on the second lane, D2(t)|T=Second Lane.
The first equation, D1(t)|T=Second Lane = K·a12·T(t−t2), determines a12.
The second equation, D2(t)|T=Second Lane = K·a22·T(t−t2), determines a22.
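A numeric sketch of this calibration, using invented peak deflection values and treating each measurement at its peak (all numbers are illustrative, not from a real bridge):

```python
# Peak of the known truck's load function T(t); illustrative value.
T_peak = 10.0

# Simulated peak deflections measured at the two lane targets:
# first with the known truck alone in lane 1, then alone in lane 2.
D1_lane1, D2_lane1 = 20.0, 6.0
D1_lane2, D2_lane2 = 5.0, 18.0

a11 = 1.0                        # normalization choice, as in the text
K = D1_lane1 / (a11 * T_peak)    # first equation fixes K
a21 = D2_lane1 / (K * T_peak)    # second equation gives a21
a12 = D1_lane2 / (K * T_peak)    # truck in lane 2: first equation gives a12
a22 = D2_lane2 / (K * T_peak)    # second equation gives a22
print(K, a21, a12, a22)
```

With K and the influence coefficients a_ij in hand, the two-lane deflection equations can be inverted for unknown traffic.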
Other methods are described in [1].
The further away the vehicle is from the target, the lower the measurement signal-to-noise ratio and the lower the measurement precision. In one or more examples, extending the single lane problem to multiple lanes may be achieved by monitoring each lane separately. In practice, this would involve monitoring visual targets beneath each bridge lane.
More specifically, an example curve fitting process may comprise:
Block 2000 represents obtaining a deflection of the bridge caused by the vehicle traversing the bridge.
Block 2002 represents identifying a peak in the deflection above a threshold value indicating that the stress on the bridge exceeds an acceptable value.
Block 2100 represents obtaining the deflection associated with one of the vehicles traversing the bridge, the deflection obtained by observation of a marker on the bridge.
Block 2102 represents obtaining a number of contact points of the point loads in the vehicle traversing the bridge (number of axles).
Block 2104 represents obtaining a distance of the marker from supports on the bridge and a separation of the point loads.
Block 2104 represents obtaining a plurality of curves representing a response of the bridge to each of the point loads.
Block 2104 represents obtaining an estimate of the speed of the vehicle.
Block 2106 represents curve fitting the deflection as a function of time by summing each of the curves, using a temporal distance between the curves set by the separation divided by the speed, and each of the curves having a spread and maximum peak scaled by the distance of the marker to supports on the bridge; and
Block 2108 represents using the curve fitting to identify each of the point loads in the deflection and determine which of the point loads causes the most stress on the bridge.
Block 2200 represents using traffic cameras to estimate the locations and number of point loads on the bridge at any time.
Block 2202 represents using this information as an initial guess to start the curve fitting process of Block 2204.
Block 2204 represents extracting the segment associated with the time-series displacement at which the vehicle was on the bridge, from the time series displacement of each visual target
Block 2204 represents beginning the curve fit using Blocks 2202 and 2204.
Block 2206 represents that, when the curve fit converges, it gives accurate weight, speed, and separation distance for each load traversing the bridge.
Blocks 2200-2206 can be reiterated/repeated each time a vehicle of interest finishes traversing the bridge.
Based on the superposition method, we model the deflection time series of the bridge caused by the passage of a vehicle as the sum of the deflections caused by its individual axles:
The displacement time series can be fitted using various methods, and various methods can be used to determine N.
Then, the fitting coefficients are Ai, ti, and ΔT (when N is known).
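A minimal sketch of this superposition fit. A Gaussian pulse stands in for the single-axle response (the real response would come from the beam model), the amplitudes Ai are held fixed for brevity, and t1 and ΔT are fitted by a coarse grid search; all values are illustrative assumptions:

```python
import math

def axle_pulse(t, t_peak, width=0.4):
    """Single-axle deflection pulse; a Gaussian stands in for the beam response."""
    return math.exp(-((t - t_peak) / width) ** 2)

def model(t, amps, t1, dT):
    """Superposition: total deflection is the sum of N shifted axle pulses."""
    return sum(A * axle_pulse(t, t1 + i * dT) for i, A in enumerate(amps))

# Synthetic measurement from a 3-axle vehicle entering at t1 with axle spacing dT.
true = dict(amps=[1.0, 2.0, 2.0], t1=1.0, dT=0.8)
ts = [i * 0.05 for i in range(100)]
data = [model(t, **true) for t in ts]

# Coarse grid search over t1 and dT, minimizing the squared residual.
best, best_err = None, float("inf")
for t1 in [0.8 + 0.05 * i for i in range(10)]:
    for dT in [0.6 + 0.05 * i for i in range(10)]:
        err = sum((model(t, true["amps"], t1, dT) - d) ** 2
                  for t, d in zip(ts, data))
        if err < best_err:
            best, best_err = (t1, dT), err
print(round(best[0], 2), round(best[1], 2))  # → 1.0 0.8
```

In a full implementation the amplitudes Ai would be fitted as well (they are linear in the model, so they can be solved by least squares at each grid point), recovering the axle weights along with the entry time and axle spacing.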
Calibrating the system determines the stiffness coefficient k. The length of the bridge is known or can be measured easily. That is:
Multiple methods could be used. Here, we use a multi-peak detection method based on Gaussian fitting to locate significant peaks in the time series. The time, the width, and the height of each peak are returned as shown.
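A simple stand-in for this multi-peak detection, assuming plain local-maximum detection with a crude half-maximum width estimate rather than full Gaussian fitting; the synthetic trace and threshold are illustrative:

```python
import math

def find_peaks(ts, ys, threshold):
    """Locate significant peaks in a deflection time series: local maxima
    above threshold, reported with their time, height, and approximate
    full width at half maximum."""
    peaks = []
    for i in range(1, len(ys) - 1):
        if ys[i] > threshold and ys[i] >= ys[i - 1] and ys[i] > ys[i + 1]:
            half = ys[i] / 2
            lo = i
            while lo > 0 and ys[lo] > half:
                lo -= 1
            hi = i
            while hi < len(ys) - 1 and ys[hi] > half:
                hi += 1
            peaks.append({"time": ts[i], "height": ys[i],
                          "width": ts[hi] - ts[lo]})
    return peaks

# Synthetic deflection trace with two Gaussian peaks (illustrative values).
ts = [i * 0.1 for i in range(200)]
ys = [3.0 * math.exp(-((t - 5) / 0.5) ** 2) +
      1.5 * math.exp(-((t - 12) / 0.8) ** 2) for t in ts]
print([(round(p["time"], 1), round(p["height"], 2)) for p in find_peaks(ts, ys, 0.5)])
```

Each detected peak's time, width, and height can then seed the segment-extraction and curve fitting steps described above.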
We can determine the maximum peak deflection that a truck is allowed to induce on the bridge using the peak deflection of a reference truck with the maximum weights and the shortest axle separations moving at the highest speed. Whenever a peak deflection exceeds this limit, the truck causes an excessive amount of stress on the pavement, violating the bridge formulas (see appendix for more detail).
A deflection pattern associated with the passage of a truck extracted from the deflection time-series shown above.
Based on each peak's location, width, and height, we can develop an algorithm to extract segments of the time series corresponding to deflection patterns larger than a threshold automatically and in real-time. The following deflection is extracted from the deflection time-series data.
The red graph shows the measurement, and the blue curve shows the deflection caused by the same truck as predicted by the theory.
Even though the measurement matched the theory well, it also included an oscillation (this oscillation could be incorporated into the theory). Bridge vibration is responsible for the oscillation; it is one of the bridge's structural characteristics. Using the power spectral density of the displacement time series, we can determine the resonance frequencies of the bridge, corresponding to the bridge's various vibration modes. A change in those frequencies may be an indication of structural changes. An early warning system for structural health could be one of its applications.
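The resonance-frequency estimation can be sketched with Welch's power-spectral-density estimate; the segment length and the 5% peak-height floor below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch, find_peaks

def bridge_resonances(d, fs, n_modes=3):
    """Estimate up to n_modes resonance frequencies of the bridge from its
    displacement time series via the power spectral density (Welch's method)."""
    f, pxx = welch(d, fs=fs, nperseg=min(len(d), 2048))
    idx, _ = find_peaks(pxx, height=pxx.max() * 0.05)   # significant lines only
    strongest = idx[np.argsort(pxx[idx])[::-1][:n_modes]]
    return np.sort(f[strongest])
```

Tracking the returned frequencies over time is the basis of the early-warning application: a persistent shift suggests a change in the structure.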
In another example, our bridge deflection measurement directly reveals the maximum stress caused by trucks traversing the bridge. The maximum peak deflection of a vehicle corresponds to the maximum stress it induces on the bridge.
Simply observing the deflection of the bridge as a function of time, similar to the curves attached here, makes it possible to identify trucks violating the bridge formula weights without calculating their weight, dimensions, or speed.
Suppose we know the maximum peak deflection that an 18-wheeler with a gross weight of 80,000 lbs., the maximum axle weights, and the smallest axle separations allowed induces on the monitored bridge as it traverses at the maximum speed. This number then corresponds to the maximum peak deflection allowed.
Any vehicle traversing the bridge and inducing deflection that exceeds that number violates the bridge formula. To begin, we can require trucks to move on the right lane to mitigate the case where multiple trucks move on multiple lanes. However, the system will eventually assign a deflection pattern to each truck as though it were the only vehicle on the bridge (even though multiple trucks move simultaneously on different lanes).
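The screening rule described above reduces to a single comparison per extracted deflection pattern; the reference value is whatever peak deflection the fully loaded reference truck was measured to induce (assumed known from calibration):

```python
def flag_bridge_formula_violations(peak_deflections, reference_max):
    """Return the indices of vehicles whose peak deflection exceeds the
    maximum allowed (that of the reference 80,000 lb truck at top speed)."""
    return [i for i, d in enumerate(peak_deflections) if d > reference_max]
```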
Monitoring the intensity of deflections can also be used to estimate changes in the stiffness of the bridge.
Stiffness is the coefficient that determines the displacement of the bridge under a load. Stiffness is a structural characteristic of a bridge, and a change in stiffness can indicate structural changes. An early warning system for structural health could be one of its applications.
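Stiffness monitoring can be sketched under the linear model implied above (deflection = weight / stiffness): estimate k from each calibrated passage with a known weight and measured peak deflection, and warn when the median drifts from the baseline. The 10% drift tolerance is an illustrative assumption:

```python
import numpy as np

def stiffness_estimates(known_weights, peak_deflections):
    """k = W / d_peak for each calibrated vehicle passage."""
    return np.asarray(known_weights, float) / np.asarray(peak_deflections, float)

def stiffness_warning(known_weights, peak_deflections, baseline_k, tol=0.10):
    """True if the median stiffness estimate drifts more than tol (fractional)
    from the baseline value -- a possible indication of structural change."""
    k_med = np.median(stiffness_estimates(known_weights, peak_deflections))
    return abs(k_med - baseline_k) / baseline_k > tol
```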
Block 2400 represents obtaining bridge resonances by monitoring displacement of the bridge as a function of time (e.g., when trucks pass).
Block 2402 represents monitoring the health of the bridge by monitoring changes in the displacement.
The present invention is not limited by the type of bridge used. Example bridges include, but are not limited to overpasses, freeway bypasses (e.g., as illustrated in
In one embodiment, the computer 2602 operates by the hardware processor 2604A performing instructions defined by the computer program 2610 (e.g., a vehicle classification application) under control of an operating system 2608. The computer program 2610 and/or the operating system 2608 may be stored in the memory 2606 and may interface with the user and/or other devices to accept input and commands and, based on such input and commands and the instructions defined by the computer program 2610 and operating system 2608, to provide output and results.
Output/results may be presented on the display 2622 or provided to another device for presentation or further processing or action. In one embodiment, the display 2622 comprises a liquid crystal display (LCD) having a plurality of separately addressable liquid crystals. Alternatively, the display 2622 may comprise a light emitting diode (LED) display having clusters of red, green and blue diodes driven together to form full-color pixels. Each liquid crystal or pixel of the display 2622 changes to an opaque or translucent state to form a part of the image on the display in response to the data or information generated by the processor 2604 from the application of the instructions of the computer program 2610 and/or operating system 2608 to the input and commands. The image may be provided through a graphical user interface (GUI) module 2618. Although the GUI module 2618 is depicted as a separate module, the instructions performing the GUI functions can be resident or distributed in the operating system 2608, the computer program 2610, or implemented with special purpose memory and processors.
In one or more embodiments, the display 2622 is integrated with/into the computer 2602 and comprises a multi-touch device having a touch sensing surface (e.g., track pod or touch screen) with the ability to recognize the presence of two or more points of contact with the surface. Examples of multi-touch devices include mobile devices (e.g., IPHONE, NEXUS S, DROID devices, etc.), tablet computers (e.g., IPAD, HP TOUCHPAD, SURFACE Devices, etc.), portable/handheld game/music/video player/console devices (e.g., IPOD TOUCH, MP3 players, NINTENDO SWITCH, PLAYSTATION PORTABLE, etc.), touch tables, and walls (e.g., where an image is projected through acrylic and/or glass, and the image is then backlit with LEDs).
Some or all of the operations performed by the computer 2602 according to the computer program 2610 instructions may be implemented in a special purpose processor 2604B. In this embodiment, some or all of the computer program 2610 instructions may be implemented via firmware instructions stored in a read only memory (ROM), a programmable read only memory (PROM) or flash memory within the special purpose processor 2604B or in memory 2606. The special purpose processor 2604B may also be hardwired through circuit design to perform some or all of the operations to implement the present invention. Further, the special purpose processor 2604B may be a hybrid processor, which includes dedicated circuitry for performing a subset of functions, and other circuits for performing more general functions such as responding to computer program 2610 instructions. In one embodiment, the special purpose processor 2604B is an application specific integrated circuit (ASIC) or field programmable gate array (FPGA) or graphics processing unit (GPU), or multi core processor for parallel processing.
The computer 2602 may also implement a compiler 2612 that allows an application or computer program 2610 written in a programming language such as C, C++, Assembly, SQL, PYTHON, PROLOG, MATLAB, RUBY, RAILS, HASKELL, or other language to be translated into processor 2604 readable code. Alternatively, the compiler 2612 may be an interpreter that executes instructions/source code directly, translates source code into an intermediate representation that is executed, or that executes stored precompiled code. Such source code may be written in a variety of programming languages such as JAVA, JAVASCRIPT, PERL, BASIC, etc. After completion, the application or computer program 2610 accesses and manipulates data accepted from I/O devices and stored in the memory 2606 of the computer 2602 using the relationships and logic that were generated using the compiler 2612.
Example open-source neural network libraries that can be used for implementing the neural networks include, but are not limited to, TensorFlow, OpenNN, Keras, Caffe, and PyTorch.
The computer 2602 also optionally comprises an external communication device such as a modem, satellite link, Ethernet card, or other device for accepting input from, and providing output to, other computers 2602.
In one embodiment, instructions implementing the operating system 2608, the computer program 2610, and the compiler 2612 are tangibly embodied in a non-transitory computer-readable medium, e.g., data storage device 2620, which could include one or more fixed or removable data storage devices, such as a zip drive, floppy disc drive 2624, hard drive, CD-ROM drive, tape drive, etc. Further, the operating system 2608 and the computer program 2610 are comprised of computer program 2610 instructions which, when accessed, read and executed by the computer 2602, cause the computer 2602 to perform the steps necessary to implement and/or use the present invention or to load the program of instructions into a memory 2606, thus creating a special purpose data structure causing the computer 2602 to operate as a specially programmed computer executing the method steps described herein. Computer program 2610 and/or operating instructions may also be tangibly embodied in memory 2606 and/or data communications devices, thereby making a computer program product or article of manufacture according to the invention. As such, the terms “article of manufacture,” “program storage device,” and “computer program product,” as used herein, are intended to encompass a computer program accessible from any computer readable device or media.
Of course, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals, and other devices, may be used with the computer 2602.
A network 2704 such as the Internet connects clients 2702 to server computers 2706. Network 2704 may utilize ethernet, coaxial cable, wireless communications, radio frequency (RF), etc. to connect and provide the communication between clients 2702 and servers 2706. Further, in a cloud-based computing system, resources (e.g., storage, processors, applications, memory, infrastructure, etc.) in clients 2702 and server computers 2706 may be shared by clients 2702, server computers 2706, and users across one or more networks. Resources may be shared by multiple users and can be dynamically reallocated per demand. In this regard, cloud computing may be referred to as a model for enabling access to a shared pool of configurable computing resources.
Clients 2702 may execute a client application or web browser and communicate with server computers 2706 executing web servers 2710. Such a web browser is typically a program such as MICROSOFT INTERNET EXPLORER/EDGE, MOZILLA FIREFOX, OPERA, APPLE SAFARI, GOOGLE CHROME, etc. Further, the software executing on clients 2702 may be downloaded from server computer 2706 to client computers 2702 and installed as a plug-in or ACTIVEX control of a web browser. Accordingly, clients 2702 may utilize ACTIVEX components/component object model (COM) or distributed COM (DCOM) components to provide a user interface on a display of client 2702. The web server 2710 is typically a program such as MICROSOFT'S INTERNET INFORMATION SERVER.
Web server 2710 may host an Active Server Page (ASP) or Internet Server Application Programming Interface (ISAPI) application 2712, which may be executing scripts. The scripts invoke objects that execute business logic (referred to as business objects). The business objects then manipulate data in database 2716 through a database management system (DBMS) 2714. Alternatively, database 2716 may be part of, or connected directly to, client 2702 instead of communicating/obtaining the information from database 2716 across network 2704. When a developer encapsulates the business functionality into objects, the system may be referred to as a component object model (COM) system. Accordingly, the scripts executing on web server 2710 (and/or application 2712) invoke COM objects that implement the business logic. Further, server 2706 may utilize MICROSOFT'S TRANSACTION SERVER (MTS) to access required data stored in database 2716 via an interface such as ADO (Active Data Objects), OLE DB (Object Linking and Embedding DataBase), or ODBC (Open DataBase Connectivity).
Generally, these components 2700-2716 all comprise logic and/or data that is embodied in/or retrievable from device, medium, signal, or carrier, e.g., a data storage device, a data communications device, a remote computer or device coupled to the computer via a network or via another data communications device, etc. Moreover, this logic and/or data, when read, executed, and/or interpreted, results in the steps necessary to implement and/or use the present invention being performed.
Although the terms “user computer”, “client computer”, and/or “server computer” are referred to herein, it is understood that such computers 2702 and 2706 may be interchangeable and may further include thin client devices with limited or full processing capabilities, portable devices such as cell phones, notebook computers, pocket computers, multi-touch devices, and/or any other devices with suitable processing, communication, and input/output capability.
In one or more examples, the one or more processors, memories, and/or computer executable instructions are specially designed, configured or programmed for performing machine learning or neural networks. The computer program instructions may include an object detection, identification, or computer vision module, or may apply a machine learning model (e.g., for analyzing data or training data input from a data store to perform the neural network processing described herein). In one or more examples, the processors may comprise a logical circuit for performing object detection, or for applying a machine learning model for analyzing data or training data input from a memory/data store or other device (e.g., an image from a camera). The data store/memory may include a database.
In some examples, the machine learning logical circuit may be a machine learning model, such as a convolutional neural network, a logistic regression, a decision tree, or other machine learning model.
Of course, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals, and other devices, may be used with computers 2702 and 2706. Embodiments of the invention are implemented as a vehicle tracking application on a client 2702 or server computer 2706. Further, as described above, the client 2702 or server computer 2706 may comprise a thin client device or a portable device that has a multi-touch-based display.
In various examples, the central processing unit (CPU) contains all the circuitry needed to process input, store data, and output results. The CPU is constantly following instructions of computer programs that tell it which data to process and how to process it.
However, the CPU (central processing unit) is a general-purpose processor that can perform a variety of tasks. The CPU is suitable for a wide variety of workloads, especially those requiring low latency or high performance per core. The CPU uses its smaller number of cores to carry out individual tasks efficiently. It typically relies on sequential computing, the type of computing where one instruction is given at a particular time. The next instruction has to wait for the first instruction to execute. Parallel processing contrasts with sequential processing. It is possible to reduce processing time by using parallelism, which allows multiple instructions to be processed simultaneously.
Parallelism can be implemented by using parallel computers, i.e., a computer with many processors or multiple cores of a CPU. But most consumer CPUs feature between two and twelve cores. GPUs, on the other hand, typically have hundreds of cores or more. This massively parallel architecture is what gives the GPU its high computing performance.
GPU-accelerated computing offloads compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. The GPU thus works with and communicates with the CPU and is used to reduce the workload of the CPU, especially when running parallel-intensive software. From a user's perspective, applications simply run much faster.
A GPU may be found integrated with a CPU on the same electronic circuit, or discrete (e.g., separate from the processor). Discrete graphics has its own dedicated memory that is not shared with the CPU. In typical examples, the host is the CPU available in the system, the system memory associated with the CPU is called host memory, the GPU is called a device, and GPU memory is called device memory.
In one or more examples, to execute a CUDA program, there are three main steps: (1) copy the input data from host memory to device memory; (2) load and execute the GPU program (kernel) on the device; and (3) copy the results from device memory back to host memory.
Example methods, devices and systems according to embodiments described herein include, but are not limited to, the following (referring also to
1. A vehicle monitoring system 100, 400 for determining one or more identifying characteristics of one or more vehicles 2500 traversing (e.g., crossing, driving across, moving across) a bridge 2502, comprising:
2. A device, comprising:
3. A method of monitoring or identifying one or more characteristics of one or more vehicles, comprising:
4. The system, device, or method of any of the examples 1-3, wherein:
5. The system, method, or device of any of the examples 1-4, wherein:
6. The system, method, or device of any of the examples 1-5, wherein:
7. The system, method, or device of any of the examples 1-5, wherein:
8. The system, device, or method of any of the examples 1-7, further comprising one or more targets attached to the bridge, wherein:
9. The system, device, or method of example 8, wherein:
10. The system, device, or method of any of the examples 1-9, wherein:
11. The system, device, or method of example 10, wherein the at least one sensor measuring the training displacements comprises:
12. The system, device, or method of any of the examples 1-11, wherein:
13. The system, method, or device of example 12, wherein the computer system:
14. The system, device, or method of any of the examples 1-11, wherein:
15. The system, method, or device of any of the examples 1-11, wherein:
16. The system, method, or device of example 15, wherein:
17. The system, method, or device of example 15 or 16, wherein the computer system determines at least one of the number of contact points, the speed, the distance of the markers, and the separation of the point loads using a machine learning algorithm or computer vision analysing the video images outputted from the traffic camera.
18. The system, device, or method of example 15, 16 or 17 wherein at least one of the sensors comprises a rangefinder 300, 308 determining the distance of the markers and the separation of the point loads.
19. The system, device, or method of any of the examples 1-19, wherein the sensor devices automatically capture the signals and the computer automatically determines the weight distribution from the signals once the system is activated.
20. The system, device, or method of any of the examples 1-19, further comprising:
An internet of things (IoT) system 400 comprising the system of any of the examples 1-19, or the devices of any of the examples 1-19 configured to be linked in the IoT using the transmitter, or the method of identifying of any of the examples 1-19 using the IoT comprising the sensor devices and the computer system, wherein optionally:
21. The system, method, or device of any of the examples 1-20, comprising
22. The system, method, or device of any of the examples 1-21, wherein the bridge comprises a freeway bypass.
23. The system, method, or device of any of the examples 1-22, wherein a length of the bridge (e.g., freeway bypass) is only long enough for passage of one vehicle (e.g., truck) at a time.
24. The system, method, or device of any of the examples 1-23, wherein the vehicles comprise one or more trucks.
25. The system, method, or device of any of the examples 1-24, wherein the displacement as a function of time comprises a time series of displacements.
26. The system, method, or device of any of the examples 1-25, wherein the targets or markers on the bridge comprise visual targets or markers such as a bolt or visible feature on the bridge, one or more small holes, one or more stains, discoloration, or any visual mark, or a mark designed with a specific shape that is then attached to the bridge.
27. The system, method, or device of any of the examples, wherein the weight distribution or the load distribution is the amount of the total vehicle weight imposed on the ground at an axle, group of axles, or an individual wheel or plurality of wheels.
28. The method, system or device of any of the examples, wherein the deflection of one vehicle is extracted from the deflection caused by a plurality of vehicles by solving a linear equation. For example, for the case of two trucks on a bridge and two targets attached to the bridge (target 1 attached to lane 1 and target 2 attached to lane 2) the deflection D1 of target 1 and deflection D2 of target 2 due to the presence of both trucks is given by
Alternatively, we can define wherein D1(t) as the timeseries displacement measured with the visual target X on lane 1, and D2 (t) as the timeseries displacement measured using the visual target X on lane 2, and then set DeflectionLane1Alone(t) as the Deflection of Lane1 without the other lane and DeflectionLane2Alone(t) is the Deflection of Lane2 without other lane. Then:
and
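The linear system described in example 28 can be sketched as follows for two lanes and two targets. The coupling coefficients a11 through a22 (how strongly lane j's load deflects target i) are hypothetical calibration values, not given in the text; with them known, the per-lane deflections follow from solving a 2x2 system at each time sample:

```python
import numpy as np

def separate_lane_deflections(D1, D2, a11, a12, a21, a22):
    """Solve, at every time sample t,
        D1(t) = a11 * L1(t) + a12 * L2(t)
        D2(t) = a21 * L1(t) + a22 * L2(t)
    for L1(t) and L2(t): each lane's deflection as if its traffic were alone
    on the bridge. The aij are assumed known from calibration."""
    A = np.array([[a11, a12], [a21, a22]], dtype=float)
    L = np.linalg.solve(A, np.vstack([D1, D2]))    # shape (2, n_samples)
    return L[0], L[1]
```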
29. The system, method, or device of any of the examples, wherein the deflection caused by one vehicle is extracted from the deflection caused by multiple vehicles, by
30. In one or more examples, a large number of systems are monitoring simultaneously a large number of bridges in real-time. In this configuration, each WeighCam station is part of a larger WeighCam network. Data collected from all stations can be used to track freight movement within a region. Using such a system, freight movements and flows can be analyzed and characterized throughout the region.
31. The system, method, or device of any of the examples 1-30, wherein the vehicles comprise autonomous trucks, whose weight and other characteristics are determined by the system since they cannot be stopped.
32. The system, method, or device of any of the examples 1-31 wherein the weight distribution comprises a measure of vehicle axle weight in addition to the gross weight, when structures (local minimum and maximum values) associated with the vehicle's axles are visible inside the time series of deflection caused by the vehicle as it traverses the bridge. Using this information, the weight of each axle group of the vehicle can be determined.
33. The system, method, or device of any of the examples 1-31, wherein each axle group can be considered as a separate point load. Each axle can be assigned a weight if there are more independent measurements than unknowns (i.e., axle groups from all vehicles) and the vehicle detection and tracking can identify and track each axle group.
34. The system, method, or device of any of the examples 1-33, wherein the system accepts requests from drivers, owners, and third parties to weigh their vehicles without stopping as the vehicles traverse a WeighCam inspection site. As an example, regulation and inspection of autonomous trucks are challenging because they are not easy to stop. Law enforcement and regulators seek to monitor these trucks independently of the OEM and fleet owners sharing all trucks' electronic data with them. This problem is solved by the system described here, since vehicles are weighed as they pass through the stations.
35. The system, method, or device of any of the examples 1-34, wherein the system is used to trade merchandise and goods without stopping the freight movement. Because multiple independent parties have access to a station's output information through subscription, they could use this platform to trade the freight carried by a specific truck based on its weight.
36. The system, method, or device of any of the examples 1-35, wherein the computer system feeds a decision-making algorithm with all collected pieces of information about a vehicle to determine whether the vehicle is suspicious.
37. The system, method or device of any of the examples 1-36, wherein the weight distribution comprises identification of point loads and the point loads comprise contact points between the vehicle and the road (e.g., pairs of wheels connected to an axle), for example.
38. The system, method, or device of any of the examples 1-37, wherein the computer system executes a web application allowing a subscriber or user to request measurement and viewing of the identifying characteristic (e.g, weight distribution) of a vehicle (e.g., in real time).
39. The system, method or device of any of the examples, wherein the weight distribution, e.g., measured in newtons, comprises a weight in newtons, kg, tons, or other unit.
40. The system, method or device of any of the relevant examples, wherein curve fitting methods are used to classify and characterize vehicles. We can associate each vehicle traversing the bridge with the time-series displacement it would cause on the bridge if it were the only vehicle. We can fit a mathematical function to the series of data points characteristic of the passage of the vehicle. Vehicle characteristics are then determined by the parameters of the best-fit function.
41. The system, method, or device for classifying vehicles using acoustic time-series wherein sensors are used to continuously record acoustic time series, wherein the computer system associates the acoustic time series with individual vehicles. Vehicles can be identified by their acoustic pattern characteristics regardless of their appearance.
42. The system, method, or device of any of the examples 1-41, wherein the weight distribution comprises a comparison/output of the relative magnitude of each of the point loads/contact points in the distribution (e.g., P1 is 2 times larger than P2).
43. The system of any of the examples, using curve fitting to identify point loads in the displacement as a function of time obtained for one or more vehicles and associate point loads with specific vehicles using synchronized traffic video footage.
44. A method of making the system or device of any of the examples 1-43, comprising providing or manufacturing the one or more sensor devices and coupling the one or more sensor devices to the computer system, and optionally providing a user interface for providing inputs and outputs to an end user.
The following references are incorporated by reference herein.
This concludes the description of the preferred embodiment of the present invention. The foregoing description of one or more embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
This application claims the benefit under 35 U.S.C. Section 119(e) of co-pending and commonly-assigned U.S. provisional patent application Serial Nos. 63/302,964, filed on Jan. 25, 2022, by Shervin Taghavi, entitled “NEW NON-INVASIVE AUTOMATED SYSTEM TO MEASURE VEHICLE'S WEIGHT, DIMENSION, AND NOISE IN REAL TIME WITHOUT INTERFERING WITH THE TRAFFIC;” 63/368,652, filed on Jul. 17, 2022, by Shervin Taghavi Larigani, entitled “NEW NON-INVASIVE FULLY AUTOMATED SYSTEM IDENTIFYING AND CLASSIFYING VEHICLES AND MEASURING EACH VEHICLE'S WEIGHT, DIMENSION, VISUAL CHARACTERISTICS, ACOUSTIC PATTERN AND NOISE IN REAL-TIME WITHOUT INTERFERING WITH THE TRAFFIC;” and 63/407,662, filed on Sep. 18, 2022, by Shervin Taghavi Larigani, entitled “METHOD FOR DETERMINING THE NUMBER OF AXLES, AXLE WEIGHTS, AXLE SEPARATIONS, AND VEHICLE SPEED AS WELL AS A METHOD FOR DETERMINING IF A VEHICLE'S MAXIMUM STRESS ON THE ROAD EXCEEDS THE PERMITTED LIMIT USING THE MOTION THAT THE VEHICLE INDUCED ON THE BRIDGE AS IT TRAVERSES;” all of which applications are incorporated by reference herein.
This invention was made with government support under NSF SBIR Phase II 2051992 awarded by the National Science Foundation. The government has certain rights in the invention.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US23/61291 | 1/25/2023 | WO |
Number | Date | Country | |
---|---|---|---|
63407662 | Sep 2022 | US | |
63368652 | Jul 2022 | US | |
63302964 | Jan 2022 | US |