The present invention relates to monitoring, localizing, and controlling objects using ground-mounted sensors. If an object is not controllable, for instance a human being, monitoring and partial localization (position only) are performed. The system is capable of monitoring within an implicitly defined geofenced area. For a controllable object such as a vehicle, localization and control are achieved through perception of the object by the ground-mounted sensors. A vehicle may not carry sensors to perceive its surroundings; even with sensors mounted on it, a vehicle may not perceive the surroundings of a particular place well enough to determine its own position and orientation. The present invention makes that determination possible using ground-mounted sensors.
Robotic control of an object requires the object's position and orientation, i.e., its localization. Localization can be performed using sensor data captured by sensors mounted on the object itself together with a map of the object's surroundings. However, the sensors on the object may not capture enough data, or the surroundings may not offer enough features, for localization to be performed reliably. If sensors are instead mounted on the ground, or on ground-mounted structures at known positions, localization can be performed by capturing images, 3D surface points, and distances of the object.
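For illustration only, the benefit of sensors at known positions can be sketched as a coordinate transform: points measured in a ground-mounted sensor's local frame are mapped into a shared world frame using the sensor's known mounting pose. The function name, the yaw-only rotation, and the example pose below are assumptions of this sketch, not part of the disclosure.

```python
import numpy as np

def sensor_to_world(points_sensor, sensor_position, sensor_yaw):
    """Transform 3D points from a ground-mounted sensor's local frame into
    a shared world frame, given the sensor's known mounting pose.
    points_sensor:   (N, 3) array in the sensor's local coordinates.
    sensor_position: (3,) world position of the sensor (known at install).
    sensor_yaw:      rotation of the sensor about the vertical axis, radians.
    A yaw-only rotation is assumed here for brevity; a full installation
    would use a complete 3D rotation from calibration.
    """
    c, s = np.cos(sensor_yaw), np.sin(sensor_yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points_sensor @ R.T + sensor_position

# A point 10 m straight ahead of a sensor mounted 3 m up, facing +y (yaw = 90 deg)
p = sensor_to_world(np.array([[10.0, 0.0, 0.0]]),
                    np.array([0.0, 0.0, 3.0]), np.pi / 2)
```

Because the sensor's own position and orientation are surveyed once at installation, this transform is exact regardless of how feature-poor the object's surroundings are.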
In addition, objects in a geofenced area can be monitored by marking site images and making the marked images known to a controller. Absolute positioning and control of a vehicle can be achieved by including three or more landmark points of known position in the site and by sending the vehicle's current and desired positions and orientations to the vehicle controller. Sending the localization information, object dimensions, and directions of travel to display devices, such as a mobile phone display or a vehicle display, can make an object aware of other objects in its surroundings.
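The landmark-based absolute positioning described above can be illustrated with a closed-form least-squares fit: given landmarks whose absolute positions are known and whose positions have also been measured in the local site frame, a rotation and translation mapping local coordinates to absolute coordinates can be recovered. The 2D formulation and the function name below are illustrative assumptions, not the claimed method.

```python
import numpy as np

def align_to_landmarks(local_pts, world_pts):
    """Estimate the 2D rigid transform (rotation R, translation t) that maps
    landmark coordinates measured in the local site frame onto their known
    absolute (world) coordinates: world ~= R @ local + t.
    Closed-form least-squares fit over the landmark correspondences; the
    disclosure calls for three or more landmark points.
    """
    local_pts = np.asarray(local_pts, float)
    world_pts = np.asarray(world_pts, float)
    lc, wc = local_pts.mean(axis=0), world_pts.mean(axis=0)
    A, B = local_pts - lc, world_pts - wc
    # Optimal rotation angle from the summed cross- and dot-products
    theta = np.arctan2(np.sum(A[:, 0] * B[:, 1] - A[:, 1] * B[:, 0]),
                       np.sum(A[:, 0] * B[:, 0] + A[:, 1] * B[:, 1]))
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = wc - R @ lc
    return R, t
```

Once R and t are known, any position measured in the site frame can be reported in absolute coordinates, which is what allows the controller to send absolute current and desired poses to the vehicle.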
The present invention is a system and method to monitor, localize, and control objects in an implicitly defined geofenced area and to determine object position and orientation (vehicles only) by capturing the object's image, 3D surface data, or position using camera, LiDAR, and/or RADAR sensors installed on ground-mounted structures. The sensors capture images, 3D surface data points, and distances of the surface points, which are processed to ultimately obtain 3D data of the object's surface points. The 3D data points from the different sensors are then combined, or fused, by a controller into a single set of 3D surface points, called fusion data, under one coordinate system, such as the GPS coordinate system. The controller then processes this single set of 3D surface points using a deep neural network and/or other algorithms to obtain the position and orientation of the object. Additionally, the controller or the sensors can send current and desired object positions and orientations to controllable objects. The controller and/or the sensors can send site image data to a scene-marking device and receive marked site image data to be used for geofenced monitoring of objects. The controller or the sensors send an alert to devices if objects, or abnormal behavior of objects, are detected within the geofenced area.
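The fusion step described above may be sketched as follows: each sensor's 3D surface points are transformed into the common coordinate system using that sensor's known calibration, the sets are concatenated, and near-duplicate points seen by overlapping sensors are collapsed. The voxel-based deduplication and all names here are assumptions of this sketch, not the claimed fusion method.

```python
import numpy as np

def fuse_sensor_points(captures):
    """Combine per-sensor 3D surface points into one fused set in a shared
    coordinate system. `captures` is a list of (points, R, t) tuples, where
    `points` is (N, 3) in the sensor frame and (R, t) is that sensor's
    known rotation/translation into the common frame (hypothetical
    calibration data, e.g. surveyed at installation time)."""
    return np.vstack([pts @ R.T + t for pts, R, t in captures])

def voxel_dedupe(points, voxel=0.1):
    """Collapse near-duplicate points observed by multiple overlapping
    sensors into one point per voxel (a simple way to form a single
    'fusion data' set)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

# Two sensors (identity poses for simplicity) with one overlapping point
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[1.02, 0.0, 0.0], [2.0, 0.0, 0.0]])
I, z = np.eye(3), np.zeros(3)
fused = fuse_sensor_points([(a, I, z), (b, I, z)])
unique_pts = voxel_dedupe(fused)
```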
System and method of the present invention are illustrated as an example and are not limited by the figures of the accompanying diagrams and pictures, in which:
The terminology used herein for the purpose of describing the system and method is not intended to limit the invention. The term “and/or” includes any and all combinations of one or more of the associated listed items. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprising” and/or “comprises,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
If not otherwise defined, all terms used herein have the same meaning as commonly understood by one having ordinary skill in the art to which this invention belongs. Furthermore, terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present invention and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the description of the invention, it will be understood that a number of techniques and steps are disclosed. Each of these has individual benefit and each can also be used in conjunction with one or more, or in some cases all, of the other disclosed techniques. Accordingly, for the sake of clarity, this description will refrain from repeating every possible combination of the individual steps in an unnecessary fashion. However, the specification and claims should be read with the understanding that such combinations are entirely within the scope of the invention and the claims.
The present invention, a system and method for object monitoring, localization, and control, will now be described with reference to the appended figures.
The present invention is a system and method to monitor, localize, and control objects in an implicitly defined geofenced area and to determine object position and orientation (vehicles only) by capturing the object's image, 3D data, or position using optical 1, LiDAR 2, and/or RADAR 3 sensors installed on ground-mounted structures. The sensors capture images, 3D surface data points, and distances of the surface points of the object; all captures are processed by a controller 6, with memory 5 holding the processing logic, to ultimately obtain more complete 3D data of the object's surface points. The 3D surface data points from the different sensors are then combined, or fused, by the controller into a single set of 3D surface points, called fusion data, under one coordinate system such as the GPS coordinate system. The controller then processes this single set of 3D surface points using a deep neural network and/or other algorithms to obtain the position and orientation of the object.
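As a non-learned stand-in for the deep-neural-network step described above, a position and a coarse yaw for a roughly box-shaped object such as a vehicle can be estimated from the fused points by centroid and principal-axis analysis. This is purely illustrative of what "position and orientation from fused 3D points" means; it is not the claimed algorithm.

```python
import numpy as np

def estimate_pose(fused_points):
    """Toy pose estimate from fused 3D surface points: the centroid gives
    a position, and the principal axis of the horizontal footprint gives a
    heading (defined modulo pi, since a point cloud alone does not
    distinguish front from back)."""
    pts = np.asarray(fused_points, float)
    position = pts.mean(axis=0)
    xy = pts[:, :2] - position[:2]
    cov = xy.T @ xy
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # direction of largest spread
    yaw = np.arctan2(major[1], major[0]) % np.pi
    return position, yaw

# Surface points of an elongated object lying along the 45-degree diagonal
pos, yaw = estimate_pose([[0, 0, 0], [1, 1, 0], [2, 2, 0], [3, 3, 0]])
```

A learned model, as the disclosure contemplates, would additionally resolve the front/back ambiguity and handle partial occlusion.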
Additionally, the controller or the sensors can send current and desired future object positions and orientations to controllable objects such as vehicle 8. The controller and/or the sensors can send their image data to scene-marking device 4 and receive marked image data for geofenced monitoring of objects. The controller or the sensors send an alert to display devices 7 if objects, or abnormal behavior of objects, are detected within the geofenced area.
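The geofenced monitoring step can be illustrated with a standard ray-casting point-in-polygon test, assuming the marked site image yields a polygon of fence vertices (a representation assumed here for the sketch, not specified by the disclosure); objects whose positions fall inside the polygon would then trigger an alert.

```python
def in_geofence(point, fence):
    """Ray-casting point-in-polygon test: is `point` (x, y) inside the
    geofenced area given by `fence`, a list of (x, y) vertices in order?
    Counts crossings of a horizontal ray from the point; an odd count
    means the point is inside."""
    x, y = point
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # Does the ray cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

fence = [(0, 0), (10, 0), (10, 10), (0, 10)]   # example square area
detections = [(5, 5), (12, 3)]                  # localized object positions
alerts = [obj for obj in detections if in_geofence(obj, fence)]
```

In the full system, each entry of `alerts` would be forwarded to display devices 7.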
This application is a continuation of U.S. Provisional Patent Application No. 63/241,064, filed Sep. 6, 2021, with confirmation number 9855, the contents of which are incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
63241064 | Sep 2021 | US