The present disclosure relates to solutions for locating a vehicle, for example a personal-mobility vehicle or some other type of automated-guided vehicle (AGV), such as an autonomous mobile robot (AMR), in an environment, for instance an airport, a railway station, a hospital, or a shopping mall.
Known to the art are numerous types of personal-mobility vehicles (PMVs). A sub-group of these PMVs comprises electric vehicles, the so-called vehicles for persons with reduced mobility (PRMs), which enable a person with disabilities and/or motor difficulties, such as a disabled or elderly person, to move more easily. For instance, this group of vehicles comprises wheelchairs with electric propulsion means, electric wheelchairs, or electric scooters.
Typically, these PMVs comprise a seat for a user/passenger and a plurality of wheels 40. Typically, the PMV comprises (at least) four wheels, but also known are vehicles that comprise only three wheels or self-balancing electric wheelchairs that comprise only two axial wheels (similar to a hoverboard).
As illustrated in
The vehicle 1 further comprises a control circuit 20 and a user interface 10. In particular, the control circuit 20 is configured to drive the electric actuators 30 as a function of one or more control signals received from the user interface 10. For instance, the user interface 10 may comprise a joystick, a touchscreen, or some other human-computer interface (HCI), such as an eye-tracker (i.e., an oculometry device for monitoring eye movements) or a head-tracking device, i.e., a device for monitoring the position and/or displacement of the head of a user. In particular, the user interface 10 is configured to supply a signal S1 that identifies a direction of movement and possibly a speed of movement. The control circuit 20 hence receives the signal S1 from the user interface 10 and converts this signal into driving signals for the electric actuators 30.
In particular, in the case of an assisted-driving PMV, the control signal S1 is not supplied directly to the control circuit 20, but to a processing circuit 60, which is configured to supply a signal, possibly modified, S1′ to the control circuit. Instead, in an autonomous-driving PMV, the processing circuit 60 generates the signal S1′ directly. Consequently, in both cases, the control circuit 20 is configured to generate the driving signals D for the actuators 30 as a function of the signal S1′.
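Purely by way of illustration, the following Python sketch shows how a direction/speed command such as S1 (or S1′) could be translated into wheel-level driving signals, assuming a differential-drive configuration; the two-wheel model, the function name, and the numeric values are illustrative assumptions and not part of the disclosure.

```python
# Minimal sketch: converting a direction/speed command (signal S1 or S1')
# into left/right wheel drive commands, assuming a differential-drive PMV.
# The two-wheel model and the names below are illustrative assumptions.

def s1_to_wheel_speeds(linear_speed, angular_rate, wheel_base=0.6):
    """Map a commanded forward speed [m/s] and turn rate [rad/s]
    to left/right wheel speeds [m/s] for a differential drive."""
    v_left = linear_speed - angular_rate * wheel_base / 2.0
    v_right = linear_speed + angular_rate * wheel_base / 2.0
    return v_left, v_right

# Example: move forward at 1 m/s while turning slightly left.
print(s1_to_wheel_speeds(1.0, 0.2))
```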
In particular, in many applications, a PMV also comprises a navigation system. Typically, with reference to assisted-driving or autonomous-driving vehicles, a distinction is made between a global route for reaching a given destination and a local route used for avoiding obstacles, such as pedestrians. For instance, with reference to planning of the global route, the processing system 60 may have, associated to it, a communication interface 64 to communicate with a remote server 3, which has, stored within it, a map of the environment in which the vehicle 1 moves. For instance, the communication interface 64 may comprise at least one of the following:
In general, at least a part of the map of the environment in which the vehicle 1 is moving may be stored also within a memory 62 of the processing circuit 60 or at least associated to the processing circuit 60. In addition, instead of storing a map, the server and/or the memory 62 may also store directly a plurality of routes between different destinations.
Consequently, in many applications, the processing circuit 60 has, associated to it, one or more sensors 50 that make it possible to determine the position of the vehicle 1. For instance, with reference to an outdoor environment, the sensors 50 typically comprise a satellite-navigation receiver 500, for example, a GPS, GALILEO, and/or GLONASS receiver. However, in an indoor environment, the satellite signals are frequently not available. Consequently, in this case, the sensors 50 typically comprise sensors that supply data S2 that can be used for odometry, i.e., for estimating the position of the vehicle 1 on the basis of information on the displacement of the vehicle 1. For instance, these sensors 50 may comprise at least one of the following:
However, frequently the position obtained via odometry is not very precise, in particular after long periods, because errors in the measurement of the displacement of the vehicle 1 accumulate. Hence, typically, the position of the vehicle 1 should be recalibrated using information that identifies an absolute position of the vehicle 1. Similar problems arise also for other types of autonomous-driving vehicles 1, such as AMR vehicles.
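Purely by way of illustration, the following sketch shows planar dead-reckoning odometry and the recalibration step just mentioned: each displacement sample is integrated, and the accumulated drift is cleared when an absolute position fix becomes available. The unicycle model and the names used are illustrative assumptions.

```python
import math

# Sketch of planar dead-reckoning odometry (unicycle model) and of the
# recalibration step: each wheel-displacement sample is integrated, and the
# accumulated error is cleared whenever an absolute position fix is received.
# The state layout (x, y, heading) and the function names are assumptions.

class Odometry:
    def __init__(self, x=0.0, y=0.0, theta=0.0):
        self.x, self.y, self.theta = x, y, theta

    def update(self, delta_s, delta_theta):
        """Integrate one displacement sample S2 (arc length, heading change)."""
        self.theta += delta_theta
        self.x += delta_s * math.cos(self.theta)
        self.y += delta_s * math.sin(self.theta)

    def reset(self, x_abs, y_abs, theta_abs=None):
        """Recalibrate the odometry centre to an absolute position."""
        self.x, self.y = x_abs, y_abs
        if theta_abs is not None:
            self.theta = theta_abs

odo = Odometry()
odo.update(0.5, 0.01)          # small errors in each sample accumulate...
odo.reset(12.3, 4.5, 1.57)     # ...until an absolute fix clears them
```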
For instance, in an outdoor environment the satellite-navigation receiver 502 can be used for this purpose. Instead, in an indoor environment (but the same solution could be used also out of doors), the sensors 50 may comprise a wireless receiver 508 configured to determine the distance of the vehicle 1 from a plurality of mobile communication radio transmitters installed in known positions, for example, as a function of the power of the mobile communication radio signal, which can be used for a triangulation. Additionally or alternatively, one or more cameras 506 installed on the vehicle 1 may be used for detecting the distance of the vehicle 1 from characteristic objects, where the positions of the characteristic objects are known, and stored, for example, in the server 3 and/or the memory 62, for example, together with the data of the maps.
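Purely by way of illustration, the following sketch shows one possible way of turning the received signal powers into a position estimate: a log-distance path-loss model converts each power reading into a range, and a linearized least-squares solve combines the ranges from transmitters in known positions. The path-loss parameters and the solver are illustrative assumptions, not a specific method of the disclosure.

```python
import numpy as np

# Sketch of position estimation from the received power of transmitters in
# known positions: a log-distance path-loss model gives a range estimate per
# transmitter, and a linearized least-squares solve gives the 2D position.
# The path-loss parameters and the solver choice are assumptions.

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Estimate range [m] from received power via a log-distance model."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, distances):
    """Linearized least-squares position fix from >= 3 anchors (x, y)."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    x0, y0, d0 = anchors[0, 0], anchors[0, 1], d[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d0**2 - d[1:]**2
         + anchors[1:, 0]**2 - x0**2
         + anchors[1:, 1]**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # (x, y) in the same frame as the anchors

anchors = [(0, 0), (10, 0), (0, 10)]
dists = [rssi_to_distance(r) for r in (-60.0, -55.0, -58.0)]
print(trilaterate(anchors, dists))
```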
The above solutions hence require installation of additional mobile communication radio transmitters, and/or provision and learning of visual features, for a subsequent calibration of the position of the vehicle 1. Consequently, these solutions are frequently difficult to deploy and costly, in particular in the case where just a few vehicles 1 circulate in a wide indoor environment, such as an airport, a railway station, a hospital, or a shopping mall.
As mentioned previously, to plan the local route, the sensors 50 may comprise also sensors 510 for detecting possible obstacles, which enables the processing circuit 60 to implement a local planning of the route by carrying out a (local) mapping of the environment surrounding the vehicle 1. For instance, the sensors 510 may include a SONAR (Sound Navigation and Ranging) system, comprising, for example, one or more ultrasonic transceivers, and/or a LiDAR (Light Detection and Ranging) system. Consequently, also these data may be used for identifying the absolute position of the vehicle 1, for example, by comparing the mapping data with the data stored in the memory 62 and/or in the remote server 3. However, in environments that undergo changes, in particular in crowded environments, these data cannot be used easily to determine the absolute position of the vehicle 1.
The object of the present disclosure is to provide solutions that make it possible to determine the position of a vehicle, such as an assisted-driving or autonomous-driving PMV.
In order to achieve the above object, the subject of the invention is a method for determining the position of a vehicle presenting the characteristics specified in the annexed claim 1. The invention also regards a corresponding locating system.
The claims form an integral part of the teaching provided herein in relation to the invention.
As mentioned previously, various embodiments of the present disclosure regard solutions for locating a vehicle, such as a personal-mobility vehicle, an automated-guided vehicle, or an autonomous mobile robot, in an environment. In particular, in various embodiments, the vehicle is located using a plurality of surveillance cameras installed in the environment.
In various embodiments, the vehicle comprises a plurality of sensors configured to detect data that identify a displacement of the vehicle, wherein the vehicle is configured to estimate a position of an odometry centre of the vehicle via odometry as a function of the data that identify a displacement of the vehicle. Moreover, a plurality of visual patterns are applied to the vehicle.
During a learning phase, a processor receives a map of the environment and for each camera an image acquired by the respective camera. Next, the processor generates data that enable association of a pixel of a floor/ground in the image to respective co-ordinates in the map. For instance, for this purpose, the processor can pre-process the image by means of an edge-detection/edge-extraction algorithm and identify a floor/ground in the image using the pre-processed image.
During a localization phase, the processor performs a sequence of operations for at least one of the surveillance cameras. For instance, in various embodiments, the processor receives, for each camera, respective co-ordinates in the map and the estimated position of the vehicle. Consequently, the processor can select a sub-set of cameras as a function of the estimated position of the vehicle and the co-ordinates of the cameras and receive the obfuscated images of the cameras of the sub-set of cameras.
In particular, in various embodiments, the processor receives an obfuscated image from the camera and checks whether the obfuscated image presents one or more of the visual patterns applied to the vehicle. For instance, the patterns may have one or more predetermined colours, and the obfuscated image may be obtained by means of a filtering operation that keeps only the one or more predetermined colours.
In the case where the obfuscated image presents one or more of the visual patterns applied to the vehicle, the processor computes the position of an odometry centre in the obfuscated image as a function of the positions and optionally of the dimensions of the visual patterns appearing in the obfuscated image. Next, the processor determines a position of the odometry centre in the map by mapping the position of the odometry centre in the obfuscated image into co-ordinates in the map, using the data that enable association of a pixel of a floor/ground in the image to respective co-ordinates in the map.
Finally, the processor sends the position of the odometry centre in the map to the vehicle, where the vehicle is configured for setting the estimated position of the odometry centre at the position received.
In various embodiments, a plurality of vehicles can circulate in the environment. In this case, each vehicle may comprise a combination of univocal patterns. In this case, the processor can thus store data that associate a combination of univocal patterns to a respective vehicle identified via a respective univocal vehicle code. Consequently, in the case where the obfuscated image presents one or more of the visual patterns, the processor can determine the univocal vehicle code associated to the respective combination of patterns and send the position of the odometry centre in the map to the vehicle identified via the univocal vehicle code. For instance, each combination of univocal patterns may comprise patterns with different shapes and/or colours. For instance, in various embodiments, the patterns may comprise a two-dimensional bar code, such as a QR code, where the two-dimensional bar code identifies the respective univocal vehicle code.
Additionally or alternatively, each vehicle may include a dynamic pattern comprising a plurality of indicators. In this case, the processor can receive the estimated positions of the vehicles, determine a sub-set of vehicles that are located near one another, and configure the dynamic pattern of each vehicle of the sub-set of vehicles in such a way as to switch on a different combination of the indicators for each vehicle of the sub-set. Consequently, in this case, the processor can store data that associate, to the estimated position and to the combination of the indicators of each vehicle of the sub-set, a respective vehicle identified via a univocal vehicle code. Hence, in the case where the obfuscated image presents one or more of the visual patterns, the processor can compare, for each vehicle of the sub-set, the respective estimated position with the position determined from the obfuscated image and the detected combination of patterns with the combination of the indicators, in such a way as to select a vehicle of the sub-set and send the position of the odometry centre in the map to the vehicle selected.
Embodiments of the present disclosure will now be described in detail with reference to the attached drawings, which are provided purely by way of non-limiting example and in which:
In the ensuing description, various specific details are illustrated aimed at enabling an in-depth understanding of the embodiments. The embodiments may be obtained without one or more of the specific details, or with other methods, components, materials, etc. In other cases, known structures, materials, or operations are not illustrated or described in detail so that various aspects of the embodiments will not be obscured.
Reference to “an embodiment” or “one embodiment” in the framework of the present description is intended to indicate that a particular configuration, structure, or characteristic described in relation to the embodiment is comprised in at least one embodiment. Hence, phrases such as “in an embodiment” or “in one embodiment” that may be present in different points of the present description do not necessarily refer to one and the same embodiment. Moreover, particular conformations, structures, or characteristics may be combined in any adequate way in one or more embodiments.
The references used herein are provided merely for convenience and hence do not define the sphere of protection or the scope of the embodiments.
In the following
As mentioned previously, the present disclosure provides solutions for determining the position of a vehicle, such as a PMV or an AMR. In general, this position may be absolute (for example, expressed in terms of latitude and longitude) or relative with respect to a map (for example, expressed with cartesian co-ordinates with respect to the map).
In particular, the inventors have noted that in many environments surveillance cameras 2 are already provided. Consequently, a computer 3a, such as a remote server and/or a cloud platform, can, for example via appropriate programming/software, receive from each camera 2 installed in the environment (or at least a sub-set of such cameras 2) a respective image 306 (or a sequence of images) and verify whether this image 306 comprises at least one of the vehicles 1a. Consequently, knowing the position of each camera 2, the processor 3a is able to identify the position POS of the vehicle or vehicles 1a captured in the respective image 306.
However, frequently the images 306 supplied by the surveillance cameras 2 may be used only for security purposes, and not for further processing operations, for example in order to guarantee the privacy of other people that may appear in the images.
In particular, as compared to
In the embodiment considered, the processor 20 is configured to send one or more images 312 of each camera 2 also to the processor 3a. However, in various embodiments, the processor 20 is configured not to transmit the original images 306 acquired, but pre-processes the images 306 in such a way as to obfuscate the images, in particular in such a way as to render the faces of persons that appear in the images unrecognizable. Preferably, communication between the processors 20 and 3a is implemented by means of an encrypted protocol, for example, using one of the versions of the TLS (Transport Layer Security) protocol, for example, applied to the TCP (Transmission Control Protocol) or UDP (User Datagram Protocol).
Consequently, in the embodiment considered, the processor 3a is configured to receive the obfuscated images 312 from the processor 20 and detect one or more vehicles 1a in the obfuscated images 312; i.e., the processor 3a should be able to identify a vehicle 1a also in the obfuscated images 312. In particular, for this purpose, each vehicle 1a is provided with given patterns P that enable location of the vehicle itself, in particular in order to determine the odometry centre of the vehicle 1a. Consequently, in various embodiments, the vehicle 1a is configured, for example via appropriate programming of the processing system 60, to receive a position POS from the processor 3a, reset the position of its odometry centre to the received position POS, and then continue to compute the position of the vehicle 1a via odometry as a function of the subsequent displacements of the vehicle 1a.
Consequently, in various embodiments, the patterns and the obfuscating algorithm are configured in such a way as to enable an identification of the patterns P also in the obfuscated image 312. In general, all the vehicles may have the same pattern P, or preferably each vehicle 1a has applied to it a different pattern P. For instance, in various embodiments, the patterns P applied to different vehicles may have different shapes and/or colours. For instance, in various embodiments, one or more of the patterns P applied to a vehicle 1a comprise a two-dimensional bar code, such as a QR code, where this two-dimensional bar code identifies a univocal code of the respective vehicle 1a. Preferably, the patterns P are configured to enable determination not only of the position POS of the respective vehicle 1a, but also of the orientation of the vehicle 1a.
In particular, once the learning phase 1100 has been started, the processor 3a receives, in a step 1102, data 300 that identify a map of the environment in which the vehicles 1a are moving, for example, a map that comprises the corridors and rooms of the structure, in particular at least the area to which the vehicles 1a can gain access. Preferably, this map 300 is a two-dimensional (2D) map. In general, the data 300 may also comprise data that identify points of interest, for example, the terminals of an airport, the wards of a hospital, the shops of a shopping mall, etc. Consequently, these data 300 may also correspond to or comprise the data used for planning the global routes used for navigation of the vehicles 1a (see also the description of
In a step 1104, the processor then receives data 302 that identify the position of a given camera 2 in the map 300. For instance, the position data of the map 300 and/or of the camera 2 may be expressed in terms of latitude and longitude (absolute position) or in cartesian co-ordinates x and y (relative position with respect to a reference of the map). Consequently, in step 1104, the processor can add the position of the camera 2 (possibly converted into the co-ordinates of the map) to a list 304, which hence comprises the positions of the cameras 2. In general, the data 302 can be received directly as position data, or the processor 3a could also display a screen that shows the map 300, and an operator could position the camera 2 directly in the map using a graphic interface. Moreover, the data 302, and likewise the list 304, may also comprise the orientation of the camera 2. Finally, the list 304 further comprises data that enable identification of the respective camera 2, for example, a univocal camera code, and possibly respective access data for acquiring an image of the respective camera 2 through the processor 20.
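Purely by way of illustration, an entry of the list 304 could be organized as follows; the field names are assumptions chosen to reflect the data mentioned above (univocal camera code, position and possibly orientation in the map 300, access data), and the values are fictitious.

```python
# Sketch of one entry of the camera list 304; the field names are assumptions
# chosen to reflect the data mentioned in the text (univocal camera code,
# position and orientation in the map 300, access data for image retrieval).
camera_entry = {
    "camera_id": "CAM-017",              # univocal camera code (fictitious)
    "map_position": (42.5, 18.0),        # co-ordinates in the map 300
    "orientation_deg": 135.0,            # optional camera orientation
    "access": {"url": "rtsp://surveillance.local/cam17", "user": "viewer"},
}
camera_list_304 = [camera_entry]
```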
Consequently, at the end of step 1104, the processor knows the position of the camera 2 in the map 300. For instance,
In a step 1106, the processor 3a then receives an image 306 of the respective camera 2. In particular, this image 306 is not obfuscated. An example of an image 306 for the scenario of
Moreover, the processing system 3a is configured to generate, in step 1106, mapping data 308, for example in the form of a look-up table, which make it possible to associate the co-ordinates in the image 306, in particular co-ordinates in the plane of the floor/ground, to co-ordinates in the map 300.
For instance, in various embodiments, this operation is performed manually by an operator, who selects a first position in the image 306 and a corresponding position in the map 300 (or vice versa). Alternatively, the processor can determine automatically the correspondences, for example by identifying characteristic points, such as corners, etc.
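Purely by way of illustration, the mapping data 308 could be realized as a planar homography estimated from the selected correspondences, for example with OpenCV, assuming that the floor/ground 310 is (locally) planar; the point values below are fictitious.

```python
import numpy as np
import cv2

# Sketch of how the mapping data 308 could be realized as a planar homography
# estimated from the image/map point correspondences selected in step 1106.
# Assumes the floor/ground 310 is (locally) planar; the point values are
# illustrative only.

image_points = np.array([[120, 540], [860, 560], [500, 300], [200, 330]],
                        dtype=np.float32)   # pixels on the floor in image 306
map_points = np.array([[2.0, 1.0], [8.0, 1.5], [6.0, 9.0], [2.5, 8.0]],
                      dtype=np.float32)     # corresponding map 300 co-ordinates

H, _ = cv2.findHomography(image_points, map_points)

def pixel_to_map(u, v):
    """Map a floor pixel (u, v) of image 306 to co-ordinates in the map 300."""
    p = cv2.perspectiveTransform(np.array([[[u, v]]], dtype=np.float32), H)
    return tuple(p[0, 0])

print(pixel_to_map(400.0, 500.0))
```

The same homography could then be applied, later on, to map the co-ordinates of the odometry centre detected in the image into the map 300.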
In either case, the processor 3a can then identify the floor/ground in the image 306. For this purpose, the processor 3a can pre-process the image 306, for example by means of an edge-detection/edge-extraction algorithm. For instance,
Next, the processor can calculate a vanishing point of the pre-processed image and use this information to find correspondences between the map 300 and the image 306′, in particular to identify the floor/ground 310. The person skilled in the art will appreciate that solutions of this type are known in the field of navigation of autonomous-driving vehicles, where a similar camera is mounted on the vehicle itself, so that a detailed description is superfluous here. For instance, for this purpose, there may be cited the document by Chai, Wennan & Chen, C. & Edwan, Ezzaldeen, "Enhanced Indoor Navigation Using Fusion of IMU and RGB-D Camera", 2015, DOI 10.2991/cisia-15.2015.149, or the document by François Pasteau, Vishnu Karakkat Narayanan, Marie Babel, François Chaumette, "A visual servoing approach for autonomous corridor following and doorway passing in a wheelchair", Robotics and Autonomous Systems, Elsevier, 2016, vol. 75, part A, pp. 28-40, DOI 10.1016/j.robot.2014.10.017, hal-01068163, the contents of which are incorporated herein for reference.
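Purely by way of illustration, the edge-detection pre-processing that produces the image 306′ could be implemented as follows; the Canny thresholds and the file names are illustrative assumptions.

```python
import cv2

# Sketch of the edge-detection pre-processing that produces the image 306'
# used for floor/ground identification; the Canny thresholds and the file
# names are assumptions.
image_306 = cv2.imread("camera_view.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image_306, (5, 5), 0)
image_306_edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite("camera_view_edges.png", image_306_edges)
```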
Consequently, once the processor 3a has identified the floor/ground 310 in the image 306′ (see, for example,
Additionally or alternatively, the processor 3a can determine, in step 1106, the parameters of a model of the camera 2, which enables calculation of the distance of a point in the image 306 from the camera 2, which can then be used for finding automatically the correspondences and/or for determining directly the position of a given pixel of the floor/ground 310 in the map 300. For instance, for this purpose, there may be cited the document U.S. Pat. No. 10,380,433 B2, the contents of which are incorporated herein for reference. In particular, this document describes, with reference to the respective
Consequently, at the end of step 1106, the processor 3a has stored, as the data 308, a look-up table and/or the parameters of a camera model, which enable association of each pixel of the floor/ground 310 of the image 306 to respective co-ordinates in the map 300.
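Purely by way of illustration, a simplified pinhole-camera model (not the specific model of the cited document) that returns the ground-plane point seen by a floor pixel, given the mounting height and downward tilt of the camera, could look as follows; all numeric parameters are illustrative assumptions.

```python
import math

# A minimal pinhole-camera sketch: given the camera intrinsics, its mounting
# height and downward tilt, it returns the point on the ground plane seen by
# a floor pixel, in a frame centred on the camera. All numeric parameters
# below are illustrative assumptions.

def ground_point(u, v, fx, fy, cx, cy, height_m, tilt_rad):
    """Back-project a floor pixel (u, v) onto the ground plane.

    Returns (forward, lateral) distances in metres from the point on the
    ground directly below the camera, or None if the ray does not hit the
    ground (pixel at or above the horizon)."""
    # Ray direction in camera coordinates (x right, y down, z forward).
    x = (u - cx) / fx
    y = (v - cy) / fy
    # Rotate the ray into a level (horizontal) frame: tilt about the x axis.
    y_level = y * math.cos(tilt_rad) + math.sin(tilt_rad)
    z_level = -y * math.sin(tilt_rad) + math.cos(tilt_rad)
    if y_level <= 0:
        return None                      # ray points at or above the horizon
    t = height_m / y_level               # scale at which the ray meets the ground
    return t * z_level, t * x            # (forward, lateral) in metres

# Example: 3 m mounting height, 30 degrees downward tilt.
print(ground_point(640, 600, fx=800, fy=800, cx=640, cy=360,
                   height_m=3.0, tilt_rad=math.radians(30)))
```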
The processor 3a then proceeds to a verification step 1108, in which it checks whether all the cameras 2 have been added. For instance, in the case where the position data 302 already comprise the data of a plurality of cameras 2, the processor can use this list to determine automatically whether a further camera 2 is to be processed. Otherwise, the processor 3a can display, in step 1108, a screen that makes it possible to complete the procedure or add a further camera 2. Consequently, in the case where a further camera 2 is to be added (output “Y” from the verification step 1108), the processor returns to step 1104. Instead, in the case where all the cameras 2 have been added (output “N” from the verification step 1108), the processor proceeds to an end step 1110, and the learning phase 1100 terminates.
In particular, in the embodiment considered, the processor 3a is configured to receive, in a step 1202, a location request REQ from a vehicle 1a. For instance, in a way similar to what has been described with reference to
In particular, in step 1206, the processor 3a reads the list of cameras 304 and selects a first camera 2. Next, the processor 3a obtains an obfuscated image 312 from the processor 20 for the respective camera 2, for example using, for this purpose, the identifier of the camera 2 and possibly the respective access data. Consequently, by identifying the position of one or more vehicles 1a in the obfuscated image 312, in particular a respective position on the floor/ground 310, the processor 3a can use the data 308 for determining the respective position POS of a vehicle 1a in the map 300. In various embodiments, and as will be described in greater detail hereinafter, the processor determines, in step 1208, also the identifier of the vehicle 1a that is included in the image. For instance, as mentioned previously, for this purpose there may be applied different combinations of patterns P to the vehicles 1a, and the processor can store also data that associate to each combination of patterns P a respective univocal vehicle code ID, i.e., a respective vehicle 1a.
In the embodiment considered, the processor 3a then checks, in a step 1210, whether further cameras 2 are to be processed. In the case where further cameras 2 are to be processed (output “Y” from the verification step 1210), the processor selects a next camera 2 and returns to step 1208. Instead, in the case where all the cameras 2 have been processed (output “N” from the verification step 1210), the processor 3a proceeds to a step 1212, in which it sends the position POS to the vehicle 1a. In particular, in various embodiments, the processor selects, in step 1212, the position POS that corresponds to the vehicle 1a that has sent the request REQ. For instance, for this purpose, the processor can determine the univocal vehicle code ID associated to the combination of patterns P detected and compare this univocal vehicle code ID with the univocal vehicle code ID received with the request REQ. Finally, the location method terminates in an end step 1214.
In general, the processor 3a can request, via steps 1208 and 1210, the obfuscated images 312 for all the cameras 2. Alternatively, the processor 3a can receive, in step 1202, together with the request REQ, also a position POS′ that the vehicle 1a has estimated, for example, through odometry. Consequently, knowing the estimated position POS′ of the vehicle 1a, the processor 3a can determine, in step 1206, only the list of the cameras 2 that acquire images covering the respective position POS′, using for this purpose the list 304 and/or the data 308. In this way, step 1208 is repeated only for the camera or cameras 2 that can potentially record the vehicle 1a that has sent the request REQ.
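Purely by way of illustration, the pre-selection of the cameras 2 in step 1206 could be sketched as follows, reusing the illustrative entry format of the list 304 shown earlier and assuming a simple circular coverage radius per camera; the data 308 could instead be used for an exact field-of-view test.

```python
import math

# Sketch of the camera pre-selection in step 1206: only the cameras of the
# list 304 whose coverage area can contain the estimated position POS' are
# queried for an obfuscated image 312. A circular coverage radius is assumed.

def select_cameras(camera_list_304, pos_estimate, coverage_radius_m=15.0):
    u, v = pos_estimate
    selected = []
    for cam in camera_list_304:
        cx, cy = cam["map_position"]
        if math.hypot(u - cx, v - cy) <= coverage_radius_m:
            selected.append(cam["camera_id"])
    return selected

print(select_cameras([{"camera_id": "CAM-017", "map_position": (42.5, 18.0)}],
                     pos_estimate=(40.0, 20.0)))
```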
Consequently, in
In particular, in the embodiment considered, using the verification step 1210, the processor 3a repeats step 1208 for all the cameras 2 included in the list 304. Consequently, in the embodiment considered, the processor 3a determines, via steps 1208 and 1210, the positions POS of all the vehicles 1a that appear in the obfuscated images 312.
In the embodiment considered, the processor 3a then sends, in step 1212, to each vehicle 1a that has been identified, the respective position POS, and the procedure terminates in step 1214. For instance, for this purpose, the processor can determine the univocal vehicle code ID associated to each combination of patterns P detected and send the respective position POS to the vehicle 1a identified via said univocal vehicle code ID.
Optionally, the processor 3a can also in this case verify whether the position POS is plausible. For instance, for this purpose, the processor can send, in a step 1202′, a request REQ to each vehicle 1a (identified via the respective code ID) to request the estimated position POS′ of the vehicle. Next, the processor 3a can compare, in a step 1206′, the estimated position POS′ with the position POS, for example checking whether the position POS′ can be recorded by the respective camera 2 as indicated, for example, via the list 304 and/or the data 308.
Consequently, in the embodiments considered, the processor 3a is configured for analysing, in step 1208, an obfuscated image 312 supplied by the processor 20 to identify the position POS of one or more vehicles 1a that appear in the image 312.
In particular, once the procedure 1208 has been started for a given camera 2, the processor 3a receives from the processor 20, in a step 1250, the obfuscated image 312 for the aforesaid camera 2.
In the embodiment considered, the processor 3a then determines, in a step 1252, whether the image 312 comprises one or more vehicles 1a and determines, for each vehicle 1a, the position of the odometry centre of the vehicle 1a in the image 312. In particular, the odometry centre refers to the co-ordinates around which the vehicle 1a turns and moves. The odometry centre is hence used by the vehicle 1a, in particular by the processing circuit 60, to estimate the position POS′ as a function of the displacement data S2. Typically, for a vehicle that moves in 2D, it is located at the midpoint between the drive wheels, at ground level.
For instance, this is illustrated schematically in
As explained previously, the images 312 supplied by the processor 20 are obfuscated. Consequently, to enable location of the vehicle 1a also in the obfuscated image 312, in various embodiments, each vehicle 1a comprises purposely provided visual patterns P (e.g., LEDs or specific images) that can be easily identified and that help in determining the shape and the orientation of the vehicle, and consequently the odometry centre OC. For instance, as illustrated in
In this context, it is useful to apply a number of visible patterns P for each side, because some patterns P might be covered. Preferably, these patterns P present a high contrast, in such a way that the processor 20 can easily filter the image 306 received from the camera 2 so as to leave only the patterns P in the obfuscated image 312. For instance, for this purpose, the patterns P may have one or more specific colours, which enables the processor 20 to filter the image, keeping only the pixels that have the aforesaid colour or colours. Consequently, the combinations of the patterns P applied to the various vehicles 1a can be distinguished by a different combination of colours of the patterns P applied to the vehicles 1a, and/or by varying the shape of the patterns P, for example, by applying a two-dimensional bar code to each vehicle 1a. For instance, this bar code, such as a QR code, could be provided on the patterns P2s, P2d, and P4.
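Purely by way of illustration, the colour-based filtering that produces the obfuscated image 312 could be implemented as follows, keeping only the pixels whose colour falls within a predetermined range so that faces and other details are suppressed while the patterns P remain visible; the HSV range and the file names are illustrative assumptions.

```python
import cv2
import numpy as np

# Sketch of the colour-based obfuscation performed on the image 306: only the
# pixels whose colour matches the predetermined pattern colour(s) are kept,
# so that faces and other details are suppressed while the patterns P remain
# visible in the obfuscated image 312. The HSV range below (a saturated red)
# is an illustrative assumption.

image_306 = cv2.imread("camera_frame.png")                 # assumed file name
hsv = cv2.cvtColor(image_306, cv2.COLOR_BGR2HSV)
lower = np.array([0, 120, 80])                             # pattern colour range
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
image_312 = cv2.bitwise_and(image_306, image_306, mask=mask)
cv2.imwrite("obfuscated_frame.png", image_312)
```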
For instance,
Consequently, once the patterns P are known, it is possible to define the position of the odometry centre OC and optionally the direction D of the vehicle 1a on the basis of the patterns P. For instance, this is illustrated in
In particular,
As illustrated in
Consequently, by detecting the distance dBR between the pattern P1d and the centre of the pattern P2d, the processor 3a can calculate proportionally the distances hB and dR. As illustrated in
Instead,
Consequently, by detecting the distance La between the pattern P1d and the pattern P1s, the processor 3a can calculate proportionally the distances la and dS and/or dP.
Finally,
Consequently, by detecting, for example, the distance La between the pattern P1d and the pattern P1s, the processor 3a can calculate proportionally the distances la and hS (or hB).
Consequently, in various embodiments, as also illustrated in
In this context, the inventors have noted that the position OC′ can frequently be estimated also in an approximate way via the following steps:
The vector la′ can then be added again to the position estimated in this way.
Consequently, in the embodiment considered, when a vehicle 1a is detected in the image 312, the processor 3a can determine the position (Ox, Oy) of the odometry centre OC in the image 312, and preferably also the direction β of the vehicle 1a, as a function of the patterns P that are detected, and in particular as a function of the distances between them.
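Since the exact proportional construction depends on the pattern layout shown in the figures, the following is only a generic sketch of step 1252: the centroids of two reference patterns are located in the obfuscated image 312, their separation fixes the apparent scale, and the known offset of the odometry centre OC with respect to those patterns (expressed as a fraction of that separation) is applied at that scale. The offsets and the pattern names used as inputs are illustrative assumptions.

```python
import numpy as np

# Generic sketch of the odometry-centre estimation: the centroids of the
# detected patterns P are located in the obfuscated image 312, the apparent
# scale is derived from the observed separation of two reference patterns,
# and the known offset of the odometry centre OC with respect to those
# patterns is applied at that scale. All offsets below are assumptions.

def odometry_centre_in_image(p1d_px, p1s_px, offset_ratio=(0.0, 0.35)):
    """Estimate the pixel position (Ox, Oy) of the odometry centre OC.

    p1d_px, p1s_px : pixel centroids of the patterns P1d and P1s
    offset_ratio   : offset of OC from the midpoint of P1d-P1s, expressed as
                     a fraction of the P1d-P1s separation (along, across)."""
    p1d = np.asarray(p1d_px, float)
    p1s = np.asarray(p1s_px, float)
    mid = (p1d + p1s) / 2.0
    along = p1s - p1d                                   # baseline direction
    across = np.array([-along[1], along[0]])            # perpendicular
    return tuple(mid + offset_ratio[0] * along + offset_ratio[1] * across)

print(odometry_centre_in_image((420, 510), (560, 505)))
```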
Consequently, once the processor has determined, in step 1252, the co-ordinates of the odometry centre OC as a function of the distances between the patterns (and possibly of their dimensions), the processor 3a uses, in a step 1254, the data 308 to calculate the position POS of the vehicle 1a by mapping the co-ordinates (Ox, Oy) of the image in the map 300. Finally, step 1208 terminates at an end step 1256.
As explained previously, in various embodiments, the processor 3a determines, in step 1208, the identification of the vehicle 1a that is comprised in the image 312. For instance, as mentioned previously, for this purpose, different combinations of patterns P may be applied to the vehicles 1a, and the processor 3a may also store data that associate to each combination of patterns P a respective univocal vehicle code ID, i.e., a respective vehicle 1a. Consequently, to associate a given combination of patterns P to a respective univocal vehicle code ID, the patterns may be static and univocal. For instance, as mentioned previously, the patterns P applied to different vehicles may have different shapes and/or colours. For instance, in various embodiments, one or more of the patterns P applied to a vehicle 1a comprise a two-dimensional bar code, such as a QR code, where this two-dimensional bar code identifies a univocal code of the respective vehicle 1a.
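Purely by way of illustration, a two-dimensional bar code applied to the vehicle 1a could be read as follows to obtain the univocal vehicle code ID; the use of OpenCV's QR detector and the file name are illustrative assumptions.

```python
import cv2

# Sketch of how a two-dimensional bar code applied to the vehicle could be
# read from an image to obtain the univocal vehicle code ID; the detector
# choice and the file name are assumptions.

detector = cv2.QRCodeDetector()
image = cv2.imread("obfuscated_frame.png")
vehicle_id, corner_points, _ = detector.detectAndDecode(image)
if vehicle_id:
    print("univocal vehicle code ID:", vehicle_id)
```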
However, in the case where a large number of vehicles 1a are circulating, identification via static patterns could become inefficient. Consequently, in various embodiments, the vehicles 1a (or at least some of the vehicles 1a) comprise one or more dynamic patterns. For instance, the vehicle 1a may comprise at least one dynamic pattern P5 on the left-hand side of the vehicle 1a and one dynamic pattern P5 on the right-hand side of the vehicle 1a. For instance, the pattern P5 could be used instead of, or be integrated in, the pattern P1s and/or the pattern P1d.
For instance,
In particular, in various embodiments, a dynamic pattern P5 comprises a control circuit, for example implemented by the processor 60, configured to activate or de-activate each indicator L as a function of data received from the processor 3a, for example using, for this purpose, the communication interface 64. In general, the number of the indicators L could hence be chosen to enable a univocal identification of each vehicle 1a. However, in various embodiments, the number of the indicators L is low and chosen, for example, between 3 and 10, preferably between 4 and 6. Consequently, in this case, it is not possible to identify all the vehicles 1a univocally.
However, as explained previously, the processor 3a can also receive, in step 1202/1202′, the estimated position POS′ of each vehicle 1a. Consequently, the processor 3a is able to determine which vehicles 1a may be included in a given image 312. Consequently, in the case where the image 312 shows only a single vehicle 1a and no other vehicles 1a are nearby (as indicated by the estimated positions POS′), the processor 3a can determine, in step 1208, the vehicle code ID in a univocal way using the estimated positions POS′ of the vehicles. For instance, the processor 3a can classify two vehicles 1a as being close to one another if the distance between them is less than a given threshold, for example, when the distance is less than 10 m.
Instead, in the case where there exist ambiguities, for example, because two vehicles 1a are included in one and the same image 312 and/or two vehicles 1a are in estimated positions POS′ that are close to one another, the processor 3a can configure the dynamic patterns P5 of the vehicles 1a that are close (as indicated by the estimated positions POS′), for example sending commands to the processor 60 in such a way that these vehicles 1a use different dynamic patterns P5, which at this point are not necessarily univocal for all the vehicles 1a. For instance, in the embodiment considered, a first vehicle could use the pattern “11001” for the indicators L, and a second vehicle could use the pattern “10101” for the indicators L.
Consequently, in various embodiments, the processor can receive, in step 1202, the positions POS′ of all the vehicles 1a, determine for each vehicle 1a a sub-set of vehicles 1a that are near the vehicle 1a, and configure the dynamic patterns P5 of the vehicles 1a of the sub-set in such a way that each vehicle 1a of the sub-set uses a different activation/de-activation pattern (univocal for the sub-set) for the indicators L. Consequently, in this way, the subsequent step 1208 can identify, once again univocally, each vehicle 1a detected, using for this purpose the estimated positions POS′ and the profiles of the dynamic patterns P5. Likewise, step 1202′ could be modified. In this case, step 1202′ should be carried out prior to step 1208, or step 1208 should be repeated.
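Purely by way of illustration, the grouping of nearby vehicles 1a and the assignment of distinct indicator combinations could be sketched as follows; the distance threshold, the number of indicators, and the names used are illustrative assumptions.

```python
from itertools import combinations
import math

# Sketch of the disambiguation of nearby vehicles with dynamic patterns P5:
# for each group of vehicles whose estimated positions POS' are closer than a
# threshold, a different on/off combination of the indicators L is assigned
# (univocal within the group only). Names and values are assumptions.

def group_nearby(positions, threshold_m=10.0):
    """Group vehicle IDs whose estimated positions are within threshold_m."""
    groups = {vid: {vid} for vid in positions}
    for a, b in combinations(positions, 2):
        if math.dist(positions[a], positions[b]) < threshold_m:
            merged = groups[a] | groups[b]
            for vid in merged:
                groups[vid] = merged
    return {frozenset(g) for g in groups.values() if len(g) > 1}

def assign_indicator_patterns(group, n_indicators=5):
    """Give each vehicle of a group a different n-bit indicator combination."""
    return {vid: format(i + 1, f"0{n_indicators}b")
            for i, vid in enumerate(sorted(group))}

positions = {"PMV-01": (12.0, 4.0), "PMV-02": (15.0, 6.0), "PMV-03": (80.0, 3.0)}
for group in group_nearby(positions):
    print(assign_indicator_patterns(group))
```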
In general, instead of using a dynamic pattern P5 that identifies a different code via a spatial distribution of the indicators L, the dynamic pattern P5 may also comprise a single indicator L, or in general one or more indicators L, configured to be activated and de-activated in time, thus identifying the respective vehicle 1a with a modulation in time. For instance, to implement the identification pattern “11001”, the processor 60 could switch on an indicator L for two time periods, then switch off the indicator L for two time periods, and then switch on the indicator L for one time period. The person skilled in the art will appreciate that the duration of the time period should be chosen on the basis of the maximum acquisition interval of the images 312. In this case, the processor 3a could then repeat step 1208 a plurality of times to identify, for each image 312, the respective on/off state of the indicator, which thus makes it possible to identify, once again univocally, the respective pattern and hence the respective vehicle 1a.
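Purely by way of illustration, the decoding of such a time-modulated pattern from a sequence of images 312 could be sketched as follows; the code length, the cyclic matching, and the names used are illustrative assumptions.

```python
# Sketch of decoding a time-modulated dynamic pattern P5: the on/off state of
# an indicator L is sampled once per acquired image 312, and the collected
# sequence of states is matched against the identification codes configured
# on the vehicles. Code length and sampling are illustrative assumptions.

def decode_blink_sequence(states, known_codes):
    """states: list of booleans (indicator on/off), one per image 312.
    known_codes: mapping of vehicle ID -> bit string, e.g. {"PMV-01": "11001"}."""
    observed = "".join("1" if s else "0" for s in states)
    for vehicle_id, code in known_codes.items():
        # Accept any cyclic shift, since sampling may start mid-sequence.
        if len(observed) == len(code) and observed in code + code:
            return vehicle_id
    return None

print(decode_blink_sequence([True, False, False, True, True],
                            {"PMV-01": "11001", "PMV-02": "10101"}))
```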
Of course, without prejudice to the principle of the invention, the details of construction and the embodiments may vary widely with respect to what has been described and illustrated herein purely by way of example, without thereby departing from the scope of the present invention, as defined by the ensuing claims.
Number | Date | Country | Kind
---|---|---|---
102022000006230 | Mar 2022 | IT | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IB2023/053168 | 3/30/2023 | WO |