The present invention relates to a personal mobility vehicle. Specifically, the invention relates to the autonomous functioning of a personal mobility vehicle based on sensor inputs. More specifically, the invention relates to the arrangement of sensors on the personal mobility vehicle for the optimal autonomous function of the personal mobility vehicle.
Personal mobility vehicles are generally driven by people with restricted or limited mobility, or by those with disabilities. However, driving them sometimes requires a set of skills that takes time to master. This can be challenging for a novice user, and there is a high probability that, for lack of driving skill, the vehicle might collide with an obstacle. Even after appropriate time with the vehicle, the vehicle may need to be driven in a challenging environment, either due to the layout of the airport or the congestion involved. The environment may have multiple moving obstacles, obstacles that are narrowly spaced with respect to each other, etc. Such environments pose challenges to even skilled drivers, as a driver may misjudge an obstacle's position, which may result in a collision with the obstacle.
To address these issues of manual driving of personal mobility vehicles, some safety mechanisms may be in place that reduce reliance on manual inputs and provide protective measures through automated means. These personal mobility vehicles rely on certain sensor inputs to decide when to apply protective measures. However, there is a strong reliance on the quality of the sensor inputs received with respect to obstacles in the way of the vehicle. Such vehicles often fail to provide optimal safety measures because of a lack of proper area coverage by these sensors, and hence the inputs received by the automatic system of such vehicles carry incomplete information for decision-making. Sometimes, along with the lack of coverage, the inputs provided are unreliable and may contain many false positives or false negatives, which makes the functioning of the automatic system of these vehicles even more unreliable.
Hence, a mechanism is desired in which the automatic systems of these personal mobility vehicles are fed data covering the largest practical traversing area, and in which that data is reliable.
The objective of the invention is to provide a mechanism for providing reliable sensor inputs for the maximum possible traversing area of the personal mobility vehicle.
The objective of the invention is achieved by a sensor system to be placed onto a personal mobility vehicle. The vehicle includes a structured light sensor that senses one or more obstacles and generates a first sensor data, and a first mechanical coupling that couples the structured light sensor to either a base frame onto which the wheels of the vehicle are attached or a skirt of the vehicle. The system also includes a processing unit that receives and processes the first sensor data and determines the depth of one or more obstacles, and further generates a location information of one or more obstacles.
According to one embodiment of the sensor system, the structured light sensor senses obstacles placed in a distance range of 50 centimeters to 800 centimeters. This embodiment is helpful, as it allows depth measurement for obstacles placed at long range.
According to another embodiment of the sensor system, the system includes a vision sensor that senses one or more obstacles and generates a second sensor data, and a second mechanical coupling adapted to couple the vision sensor to either an armrest of the vehicle or a torso of the vehicle. The processing unit receives and processes the first sensor data and the second sensor data and generates the location information of one or more obstacles. This embodiment is helpful as it helps in the fusion of depth data, and image data of the obstacles which are placed at a long range, and provide for the optimal determination of the location information of the obstacles.
According to yet another embodiment of the sensor system, the vision sensor senses obstacles placed in a distance range of 50 centimeters to 800 centimeters. This embodiment is helpful, as it provides images of obstacles placed at long range.
According to one embodiment of the sensor system, the system includes an ultrasonic sensor that senses one or more obstacles and generates a third sensor data. The system also includes a third mechanical coupling which couples the ultrasonic sensor to either a base frame onto which the wheels of the vehicle are attached or a skirt of the vehicle. The processing unit receives and processes the first sensor data and the third sensor data and generates the location information of one or more obstacles. This helps to provide optimal sensor coverage at short range as well as long range. Also, the placement of ultrasonic sensors at lower heights helps in reducing blind spots near the vehicle.
According to another embodiment of the sensor system, the ultrasonic sensor senses obstacles placed in a distance range of 0 centimeters to 300 centimeters. This embodiment is helpful, as it provides location information of obstacles placed at short range, including obstacles that are close to the vehicle.
According to yet another embodiment of the sensor system, the system includes more than one ultrasonic sensor, and the processing unit activates the ultrasonic sensors based on a predefined logic where at least one of the ultrasonic sensors is activated at a different time frame with respect to the activation of the other ultrasonic sensors. This embodiment is helpful, as it fires only a few ultrasonic sensors at a time, and thus allows the sensor system to run with lower computational power.
According to one embodiment of the sensor system, the predefined logic defines activation of the ultrasonic sensors that have a field of view in the direction of movement of the vehicle; accordingly, the processing unit activates the ultrasonic sensors that have a field of view in the direction of movement of the vehicle. This embodiment is helpful, as it activates only the ultrasonic sensors that are relevant during a particular movement of the vehicle, and hence saves computational power used for running the sensor system.
According to another embodiment of the sensor system, the system includes one or more elevated sensors which include one or more depth sensors, or one or more image sensors, or both. The elevated sensors sense a fourth sensor data comprising either a depth information of the obstacles, or a location information of the obstacles, or both. The system also includes a fourth mechanical coupling which couples the elevated sensors to an elevated structure of the vehicle above the height of the head of a user of the vehicle when the user is seated on the vehicle. The processing unit receives and processes the first sensor data and the fourth sensor data and generates the location information of one or more obstacles. This embodiment is helpful, as additional vision-based sensors placed at such a height provide a wider and unobstructed view of the environment, can help in the identification of obstacles placed far away, and thus support better navigation planning.
According to yet another embodiment of the sensor system, the elevated sensors are mechanically coupled to the elevated structure such that the elevated sensors are vertically tilted downwards. Tilting the elevated sensors helps in reducing blind spots close to the vehicle.
The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments illustrated herein may be employed without departing from the principles of the disclosure described herein.
The best and other modes for carrying out the present invention are presented in terms of the embodiments, herein depicted in the drawings provided. The embodiments are described herein for illustrative purposes and are subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the spirit or scope of the present invention. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more sub-systems or elements or structures or components preceded by “comprises . . . a” does not, without more constraints, preclude the existence of other, sub-systems, elements, structures, components, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
Please note that throughout the disclosure, “powered wheelchair”, “wheelchair”, and “personal mobility vehicle” are used interchangeably. All the embodiments mentioned in the current disclosure are applicable to personal mobility vehicles as well as wheelchairs.
The invention envisages various arrangements of sensors which feed data to the automatic system of a personal mobility vehicle or wheelchair. The sensors used are long-range sensors and short-range sensors, in optimal numbers and with optimal placement of each, so that proper coverage of the traversing area of the personal mobility vehicle is achieved, and so that the captured data is reliable enough for the automatic system to determine obstacles and, based on that determination, to plan the traversing path of the vehicle.
This invention further elaborates various possible arrangements of long-range sensors and short-range sensors in a wheelchair in order to obtain complete and reliable coverage of the scene. The arrangements are also applicable for any Personal Mobility Devices (PMD).
In order to determine the space available for the movement of a wheelchair, the space around an autonomous or semi-autonomous wheelchair needs to be monitored for obstacles, stationary or moving. To achieve this, the types of sensors used and their placement are critical.
The space of solutions can be divided into those sensors that have a long range and those sensors that have a short range.
The goal of the long-range sensors is to capture scene information about obstacles that are at relatively long distances, typically greater than 50 cm and up to about 800 cm, with high spatial resolution. These long-range sensors provide a high-fidelity version of the scene. Exemplary sensors in this solution space include a 1D or 2D LIDAR sensor, a stereo-vision-based sensor, a depth sensor using structured light, or simply a monocular vision sensor with a wide field of view.
The goal of the short-range sensors is to capture scene information about obstacles that are shorter distances away, typically from ~0 cm to about 300 cm, with low spatial resolution. These short-range sensors provide a lower-fidelity view of the scene but are able to capture details of obstacles and objects at much closer proximity. Exemplary sensors in this solution space include infrared proximity sensors, infrared distance sensors, ultrasonic sensors, LIDAR range sensors, Time of Flight sensors, etc.
The goal of spatial coverage using both long- and short-range sensors is to ensure that the obstacles that are farther away from the vehicle are available in higher resolution to enable wayfinding and path planning, and at the same time to ensure that the vehicle does not collide with obstacles that are close to the vehicle. It is this combination of sensors with varying spatial coverage that is of critical importance for the wheelchair or PMD.
In furtherance, various possible placements of the sensors and their field of view are discussed from
In the plan view as shown in
Given that these sensors, ultrasonic, TOF or proximity sensors, are based on the emission of a signal and measuring the time needed for the return, in such an arrangement, it is useful to have the various short-range sensors fire their transmit signals at different times so as to not cause false estimates of obstacles. The following specific means for firing the short-range sensors can be implemented:
A similar example can be described for backward movement. Again, which and how many short-range sensors to fire simultaneously is entirely dependent on the field of view of the short-range sensors and the numbers of short-range sensors used.
In the case of a time-of-flight or IR-based proximity sensor, the time periods are significantly shorter given that the speed of light is 3×10^8 m/s; but the fields of view might be smaller, which will potentially result in more sensors being used. In the case of radar sensors, while the range detected by a radar sensor is inversely proportional to the frequency of the signal used, the biggest challenge of using radar sensors is their power budgets. With conventional automotive radars operating at 24-80 GHz, the ranges observed span from hundreds of meters down to 0.5 m. Even at the shortest range of 0.5 m (at 80 GHz), obstacle distances can be detected in a similar arrangement of sensors.
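To illustrate why optical time-of-flight readings return so much faster than ultrasonic ones, the round-trip time of an emitted pulse can be sketched as below. The 3 m obstacle distance is an assumed example at the upper end of the short-range band; this is an illustration, not a parameter from the disclosure.

```python
# Illustrative round-trip timing for short-range ranging sensors.
SPEED_OF_SOUND = 343.0   # m/s, in air at roughly 20 degrees C (assumed)
SPEED_OF_LIGHT = 3.0e8   # m/s

def round_trip_time(distance_m: float, wave_speed: float) -> float:
    """Time for an emitted pulse to reach an obstacle and return."""
    return 2.0 * distance_m / wave_speed

# An obstacle 3 m away, the upper end of the short-range band:
ultrasonic_t = round_trip_time(3.0, SPEED_OF_SOUND)   # ~17.5 ms
tof_t = round_trip_time(3.0, SPEED_OF_LIGHT)          # 20 ns
print(f"ultrasonic: {ultrasonic_t * 1e3:.2f} ms, optical ToF: {tof_t * 1e9:.1f} ns")
```

The roughly six-orders-of-magnitude gap in echo time is why optical sensors can be fired far more often per second than ultrasonic ones.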
While it is possible to make the blind spot close to zero, it is essential to also understand the ergonomics of the wheelchair, i.e., the placement of the arms, legs, and clothing of the users. It is also significant to account for the manner of interaction of the wheelchair with the world, manufacturing tolerances, and movement of the components of the wheelchair during and after repeated use. All these factors impact the accuracy that can be achieved.
The placement of long-range sensors is a function of the available computing power onboard the vehicle. For example, while one can envision the use of multiple vision-based solutions, multiple radar sensors, flash LIDAR systems, or multiple 1D LIDARs to get complete coverage, the more cost-effective solutions involve the use of cameras or alternatively 2D flash LIDARs. The placement of these sensors is described in this invention.
For illustration purposes, one camera sensor is shown in
The choice of the FOV is going to be a function of the amount of computing power available on the on-vehicle systems. The larger the FOV, the larger the sensor (and possibly resolution) needs to be, which in turn increases the demand for transmission and computation capabilities. Further, the choice of single or multiple cameras also plays a role as each view requires its own computational complexity.
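The FOV-versus-compute trade-off above can be made concrete with a back-of-the-envelope calculation. The pixels-per-degree figure, frame rate, and 3-byte pixels below are assumed illustrative values, not parameters from the disclosure; the point is only that, at fixed angular resolution, a wider FOV scales the raw data rate proportionally.

```python
# Back-of-the-envelope camera bandwidth, assuming a fixed angular
# resolution (pixels per degree), so a wider FOV forces more pixels.
def frame_bandwidth_mbps(h_fov_deg: float, v_fov_deg: float,
                         px_per_deg: float, fps: float,
                         bytes_per_px: int = 3) -> float:
    width = h_fov_deg * px_per_deg    # horizontal pixel count
    height = v_fov_deg * px_per_deg   # vertical pixel count
    return width * height * bytes_per_px * fps * 8 / 1e6  # megabits/s

# Doubling the horizontal FOV doubles the raw data rate:
narrow = frame_bandwidth_mbps(60, 45, 20, 30)   # 777.6 Mbps
wide = frame_bandwidth_mbps(120, 45, 20, 30)    # 1555.2 Mbps
print(narrow, wide, wide / narrow)
```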
In another embodiment,
In another embodiment, as shown in
These solutions are compact and relatively straightforward to implement and come with minor disadvantages of vertical FOV 124 as shown in
For implementations that require a precise calculation of depth for obstacle avoidance, depth cameras are used. These come in at least two types, i.e., depth cameras which use structured light patterns, and stereo cameras that use disparity maps to determine distance.
Depth cameras that use structured light patterns work by projecting structured light patterns (generally in near-infrared light) onto a scene and capturing them with a camera to compute the disparity in the scene. This is a common approach taken by these kinds of cameras to determine distances to objects. However, these cameras have certain shortcomings, such as reduced performance under high ambient lighting and susceptibility to occlusion of the projected patterns.
These depth cameras typically have a depth camera FOV 126 between 60° and 90°. If installed along the skirt of the vehicle, in order to get full coverage of the scene, at least six depth cameras must be used, as shown in
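The six-camera figure follows from simple angular tiling of the horizontal plane. A minimal sketch, treating each camera's horizontal FOV as a non-overlapping wedge (real installations would add overlap margin):

```python
import math

def cameras_for_full_coverage(per_camera_fov_deg: float) -> int:
    """Minimum number of fixed cameras to tile a full 360-degree sweep."""
    return math.ceil(360.0 / per_camera_fov_deg)

# For the 60-90 degree FOVs typical of depth cameras:
print(cameras_for_full_coverage(60))  # 6
print(cameras_for_full_coverage(90))  # 4
```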
However, this comes with a significant shortcoming in that the passenger on the wheelchair and the wheelchair itself have a high likelihood of blocking the dot patterns or the returning signals and hence creating a larger blind spot for the system, as shown in
An abstraction of structured light camera systems is a 2D flash LIDAR, which has similar performance characteristics except that, being a high-power laser, it is able to project the dots to a much farther distance. In the application of a wheelchair, however, while feasible, it is not practical, as it provides capabilities that are not needed: long range, high power, and high computational complexity.
Stereo camera solutions for depth estimation come with increased challenges: the computational complexity of processing disparity maps between stereo pairs, the need for the pair to be assembled in a robust manner relative to each other, and, most importantly, the need for some level of self-calibration in the field. These solutions generally work well when the physical assembly of the camera modules is not impacted by the movement of the vehicle (automotive-grade assemblies) or when the cameras themselves are stationary.
While the advantages, challenges, and design constraints of a stereo pair solution are generally similar to those of structured light solutions, one added advantage of stereo pair solutions is that they work under high ambient lighting conditions (where structured light solutions are challenged).
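Both structured light and stereo solutions ultimately recover depth from disparity under a pinhole camera model. A minimal sketch; the focal length and baseline below are assumed illustrative values, not parameters of any particular sensor in the disclosure:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo model: depth is inversely proportional to disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed parameters: 600 px focal length, 7.5 cm baseline.
print(depth_from_disparity(600, 0.075, 15))  # 3.0 m
```

The inverse relationship is why depth error grows rapidly at long range: at large distances, a small disparity error translates into a large depth error, motivating the long-range/short-range split described here.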
Given the challenges of blind spots that occur with long-range sensors and the challenges of limited range with the short-range sensors, the combination of the two types of sensors is of particular interest where the long-range sensors allow for data that can be used for path planning and the short-range sensors can be used for collision avoidance. The placement of these two types of sensors, away from the rider's arms, legs, and clothing, combined with the use cases (possible obstacles to identify and avoid) that need to be addressed give rise to a few embodiments.
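One way to sketch this division of labor is below. The 0.5-8 m planning band mirrors the long-range figures given earlier; the 0.3 m emergency-stop distance and the function shape are purely illustrative assumptions.

```python
# Minimal fusion-policy sketch: long-range readings feed the planner,
# short-range readings gate an emergency stop near the vehicle perimeter.
PLANNING_RANGE = (0.5, 8.0)   # metres, long-range band (from the text)
STOP_DISTANCE = 0.3           # metres, assumed collision threshold

def fuse(long_range_m: list, short_range_m: list) -> dict:
    obstacles_for_planning = [d for d in long_range_m
                              if PLANNING_RANGE[0] <= d <= PLANNING_RANGE[1]]
    emergency_stop = any(d <= STOP_DISTANCE for d in short_range_m)
    return {"plan_with": obstacles_for_planning, "stop": emergency_stop}

result = fuse([1.2, 6.5, 9.0], [0.25, 1.0])
print(result)  # {'plan_with': [1.2, 6.5], 'stop': True}
```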
The key use cases that are being addressed are:
In all these examples, the choice of the number of long-range sensors is a function of the computation and cost budgets of the product. Given that the short-range sensors are relatively low cost and have low computational complexity, the choice of the number and type of short-range sensors is a function of the coverage desired.
In one embodiment, as shown in
In another embodiment, long-range sensors such as structured light sensors 102, and vision sensor 105 are placed on the torso 107 of the vehicle 101, and short-range sensors, like ultrasonic sensor 108 can be placed on the skirts 104 of the vehicle 101, as shown in
In yet another embodiment, both long-range sensors such as structured light sensors 102, and vision sensor 105, and short-range sensors, like ultrasonic sensor 108 can be placed on the skirt 104 of the vehicle 101, as shown in
In yet another embodiment, the long-range sensors (herein named the elevated sensors 109) shall be placed on an elevated structure 110, while the short-range sensors, such as ultrasonic sensor 108, shall be placed on the torso 107 of the vehicle 101 or the wheelchair, as shown in
In another embodiment of
In yet another embodiment, the long-range elevated sensors 109 shall be placed on the flagpole 110, and the short-range sensors, like ultrasonic sensor 108 shall be placed on the skirt 104 of the vehicle or the wheelchair 101, as shown in
In another embodiment, the elevated sensor 109, which is a long-range vision sensor such as a 360° camera, is placed on the flagpole 110, and short-range sensors, such as ultrasonic sensor 108, are placed on the skirt 104 of the vehicle 101, as shown in
In one preferred embodiment, as shown in
Sensor placement on the vehicle is critical as it allows for the choice of sensors and desired field of view to be evaluated such that the best trade-off can be made. In
In one embodiment, a sensor system can be provided which can be attached to any existing powered wheelchair or personal mobility vehicle. In such a scenario, various mechanical coupling arrangements can be provided to couple the sensors to various locations of the vehicle. A first mechanical coupling can be provided to couple the structured light sensor to the base frame, or a skirt of the vehicle. A second mechanical coupling is adapted to couple the vision sensor to an armrest of the vehicle or the torso of the vehicle. A third mechanical coupling can be provided to couple the ultrasonic sensor to the base frame of the vehicle, or a skirt of the vehicle. The fourth mechanical coupling can be provided to couple the elevated sensors to the elevated structure of the vehicle above the height of the head of a user of the vehicle when the user is seated on the vehicle. In one scenario, not all the sensors may be used, and hence not all the mechanical couplings may be required. In another scenario, some of the sensors are already fixed to the relevant locations of the vehicle, and hence not all the sensors are required to be placed externally; accordingly, not all the mechanical couplings may be required. These sensors may be coupled to an existing computational processing unit of the vehicle, or a separate processing unit can be loaded to handle and process data from the sensors to determine the location of the obstacles.
To optimize utilization of the computational resources, the ultrasonic sensors 108 are not all fired in one go; rather, they are fired based on a predefined logic 117. The processing unit 111 activates the ultrasonic sensors 108 based on the predefined logic 117, where at least one of the ultrasonic sensors 108 is activated at a different time frame with respect to the activation of the other ultrasonic sensors 108. In one scenario, where the predefined logic 117 defines activation of the ultrasonic sensors 108 that have a field of view in the direction of movement of the vehicle, the processing unit 111 activates the ultrasonic sensors 108 that have a field of view in that direction.
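The activation logic described above can be sketched as follows. The sensor bearings, the 50° field-of-view half-angle, and the one-sensor-per-slot schedule are illustrative assumptions, not values from the disclosure.

```python
# Sketch of direction-based, staggered firing of ultrasonic sensors.
SENSOR_BEARINGS_DEG = [0, 45, 90, 135, 180, 225, 270, 315]  # around the skirt
FOV_HALF_ANGLE_DEG = 50.0  # assumed half-angle of each sensor's FOV

def sensors_to_fire(heading_deg: float) -> list:
    """Indices of sensors whose field of view covers the movement direction."""
    active = []
    for i, bearing in enumerate(SENSOR_BEARINGS_DEG):
        # Smallest angular difference between sensor bearing and heading.
        diff = abs((bearing - heading_deg + 180) % 360 - 180)
        if diff <= FOV_HALF_ANGLE_DEG:
            active.append(i)
    return active

def firing_schedule(heading_deg: float):
    """Yield one sensor per time slot so transmit pulses never overlap."""
    for idx in sensors_to_fire(heading_deg):
        yield idx

print(sensors_to_fire(0.0))  # [0, 1, 7] -- forward-facing sensors only
```

Firing the selected sensors one per slot avoids cross-talk between overlapping ultrasonic pulses, while skipping sensors facing away from the motion saves the computation the text describes.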
Of particular importance in this disclosure is the combination of data from the long-range and short-range sensors, so that there is complete coverage of the scene in the vicinity of the wheelchair.
Specifically, the inclusion of the long-range sensors on the flagpole, or torso, or the skirt of the vehicle gives the vehicle the ability to see objects and obstacles in varying degrees of distance: the flagpole being the longest range possible followed by the torso position, followed by the skirt position. However, each also comes with its shortcomings with possible occlusion by the rider or their clothing and different sizes of blind spots.
The inclusion of short-range sensors is specifically envisioned to be on the torso or the skirt of the vehicle for two reasons. Firstly, the sensors are better able to “see” the area that is not visible to the long-range sensors and ensure collision avoidance. Secondly, given the range of distances measurable by these short-range sensors, their placement near the perimeter of the vehicle, as opposed to the vantage points of the long-range sensors, is critical.
The combination of these two different types of sensors in potentially two different locations is of significance, as it provides diversity in the type of data available for collision avoidance, path planning, and wayfinding, and is of unique value.
Priority claim: U.S. Provisional Application No. 63083133, filed September 2020.