This disclosure relates to driver assist systems and methods for marine vessels.
Driver assist systems for automobiles may not be suitable for marine vessels because, for example, lane markers are not typically encountered in a marine environment, and marine vessels typically do not follow each other in line as automobiles do on a road.
This disclosure provides, according to at least one embodiment, a driver assist system that may overcome such disadvantages. Embodiments of this disclosure may provide an improved driver assist system adapted for marine vessels.
There is accordingly provided, according to at least one embodiment, a system for obtaining and analysing information from its surroundings. The system includes a sensor for obtaining data from the surroundings and a data processor which receives data from the sensor and analyses the data to obtain relevant information. The data processor utilizes machine learning to process the data. The sensor may be a camera, with the data being images of the surroundings taken by the camera.
The system may be installed on a marine vessel with the data processor being programmed to distinguish between portions of an image which represent sea and/or sky and a portion of the image representing an object near the marine vessel. The system disregards the portions which represent sea and/or sky and focuses on the portion representing the object.
There is further provided, according to at least one embodiment, a method for obtaining and analysing information from a surrounding view. A sensor is used to obtain data from the view. A data processor receives data from the sensor and analyses the data to obtain relevant information. The data processor utilizes machine learning to process the data. The sensor may be a camera, with the data being images of the view taken by the camera.
The method may be used for a marine vessel and may distinguish between portions of a view which represent sea and/or sky and a portion of the view representing an object near the marine vessel. The method disregards the portions which represent sea and/or sky and focuses on the portion representing the object.
There is further provided, according to at least one embodiment, a system for obtaining and analysing information from its surroundings, the system including a sensor for obtaining data from the surroundings and a data processor which receives data from the sensor and analyses the data to obtain relevant information, the data processor utilizing machine learning to process the data.
In some embodiments, the sensor is a camera and the data is images of the surroundings taken by the camera.
In some embodiments, the data processor is programmed to obtain information on objects in the surroundings.
In some embodiments, the system is adapted for installation on a marine vessel and the data processor is programmed to distinguish between portions of an image which represent sea and/or sky and a portion of the image representing an object near the marine vessel, the system disregarding the portions which represent sea and/or sky and focusing on the portion representing the object.
In some embodiments, the system can distinguish another boat from other objects in the surroundings.
In some embodiments, the system can determine a direction of movement of said another boat.
In some embodiments, the system interfaces with controls for said marine vessel to avoid collisions between said marine vessel and surrounding objects.
In some embodiments, the system includes adaptive cruise control.
There is further provided, according to at least one embodiment, a method for obtaining and analysing information in a surrounding view, the method including using a sensor to obtain data from the view, processing data from the sensor and analysing the data utilizing machine learning to obtain relevant information.
In some embodiments, the sensor is a camera and the data represents images of the view taken by the camera.
In some embodiments, the data is processed to obtain information on objects appearing in the view.
In some embodiments, the method is used for a subject marine vessel and the data is processed to distinguish between portions of the view which represent sea and/or sky and a portion of the view representing an object near the subject marine vessel, the method disregarding the portions which represent sea and/or sky and focusing on the portion representing the object.
In some embodiments, the method can distinguish another marine vessel from other objects in the surroundings.
In some embodiments, the method can determine a direction of movement of said another marine vessel.
In some embodiments, the method operates controls for said subject marine vessel to avoid collisions between said subject marine vessel and surrounding objects.
In some embodiments, images from the camera are segmented into different segments and, when another marine vessel is detected in one segment, the segment with said another marine vessel is prioritized during repeated cycles, thereby assisting in recognition of small vessels.
There is further provided, according to at least one embodiment, a system which obtains and analyses information from its surroundings. The system includes a sensor for obtaining data from the surroundings and a data processor which receives data from the sensor and analyses the data to obtain relevant information. The data processor utilizes machine learning to process the data. The sensor may be a camera and the data may be images of the surroundings taken by the camera. The system may be mounted on a boat and connected to the controls of the boat to assist navigation of the boat.
There is further provided, according to at least one embodiment, a driver-assist system for a marine vessel, the system comprising a camera operable to obtain data comprising images of a view of the camera. The system further comprises a data processor programmed to, at least, distinguish between: portions of the view representing water and/or sky; and a portion of the view representing an object.
In some embodiments, the data processor is further programmed to, at least, utilize machine learning trained to identify water, sky, and anything that is not water or sky.
In some embodiments, the data processor is further programmed to, at least, utilize a convolutional neural network trained to identify water, sky, and anything that is not water or sky.
In some embodiments, the data processor is further programmed to, at least, identify the object as anything that is not sky or water.
In some embodiments, the data processor is further programmed to, at least, adjust settings of the camera only according to image quality metric values of only the portions of the view not representing water and/or sky.
In some embodiments, the data processor is further programmed to, at least, identify a horizon in the view.
In some embodiments, the data processor is programmed to identify the object near the horizon.
In some embodiments, the data processor is further programmed to, at least, detect the object by, at least, causing at least some object detectors of a plurality of object detectors to search for objects only in respective different subregions of a plurality of different subregions of the view.
In some embodiments, the data processor is further programmed to, at least, cause the at least some object detectors to search for objects only in portions of the view not representing water and/or sky.
There is further provided, according to at least one embodiment, a driver-assist system for a marine vessel, the system comprising: a camera operable to obtain data comprising images of a view of the camera; and a data processor programmed to, at least, detect an object by, at least, causing at least some object detectors of a plurality of object detectors to search for objects only in respective different subregions of a plurality of different subregions of the view.
In some embodiments, the data processor is further programmed to, at least, implement the plurality of object detectors.
In some embodiments, at least one object detector of the plurality of object detectors is a single-shot detector (SSD).
In some embodiments, the plurality of subregions comprises a plurality of different predefined subregions.
In some embodiments, the predefined subregions of the plurality of predefined subregions are distributed evenly across the entire view.
In some embodiments, the predefined subregions of the plurality of predefined subregions overlap horizontally.
In some embodiments, the predefined subregions of the plurality of predefined subregions are centered vertically within the view.
In some embodiments, the data processor is further programmed to, at least: cause available object detectors of the plurality of object detectors to search only some predefined subregions of the plurality of predefined subregions in a first object detection cycle; and cause the available object detectors of the plurality of object detectors to search other predefined subregions of the plurality of predefined subregions in a second object detection cycle after the first object detection cycle.
In some embodiments, the data processor is further programmed to, at least, cause at least one object detector of the plurality of object detectors to search for objects in a dynamic subregion of the plurality of subregions, the dynamic subregion placed such that the object is within the dynamic subregion.
In some embodiments, the data processor is further programmed to, at least, cause the dynamic subregion to move in response to movement of the object.
In some embodiments, the data processor is further programmed to, at least, cause the dynamic subregion to have a size in response to a size of the object.
In some embodiments, the data processor is further programmed to, at least, cause at least one object detector of the plurality of object detectors to search for objects in the entire view.
In some embodiments, each object detector of the plurality of object detectors, in operation, detects objects at a resolution smaller than a resolution of the images.
In some embodiments, the data processor is further programmed to, at least, alert a user in response to detecting the object.
In some embodiments, the system is programmed to, at least, take collision avoidance measures in response to detecting the object.
In some embodiments, the system is programmed to take the collision avoidance measures by, at least, preventing the object from being, at any time, less than a safety distance from the marine vessel.
In some embodiments, the system is programmed to, at least, vary the safety distance according to, at least, a size of the marine vessel.
In some embodiments, the system is programmed to, at least, vary the safety distance according to, at least, a speed of the marine vessel.
In some embodiments, the system is programmed to, at least, vary the safety distance according to, at least, a size of the object.
In some embodiments, the system is programmed to, at least, vary the safety distance according to, at least, a speed of the object.
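By way of illustration only, a safety distance combining the four factors above might be computed as in the following sketch; the linear form, base clearance, and coefficients are assumptions made for this example and are not specified by this disclosure:

```python
def safety_distance(vessel_length_m: float, vessel_speed_mps: float,
                    object_length_m: float, object_speed_mps: float) -> float:
    """Illustrative safety distance (metres) varying with the size and
    speed of both the marine vessel and the object; all constants are
    hypothetical."""
    base_m = 10.0                                   # assumed minimum clearance
    size_term = 0.5 * (vessel_length_m + object_length_m)
    speed_term = 2.0 * (vessel_speed_mps + object_speed_mps)  # ~2 s closing margin
    return base_m + size_term + speed_term
```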
In some embodiments, the data processor is further programmed to, at least, identify a distance from the system to the object.
In some embodiments, the data processor is programmed to identify the distance from the system to the object according to, at least, radar data from a radar apparatus.
In some embodiments, the data processor is programmed to identify the distance from the system to the object according to, at least, lidar data from a lidar apparatus.
In some embodiments, the data processor is programmed to identify the distance from the system to the object according to, at least, data from a sensor separate from the camera.
In some embodiments, the data processor is further programmed to, at least, identify a speed of the object.
In some embodiments, the data processor is programmed to identify the speed of the object according to, at least, radar data from a radar apparatus.
In some embodiments, the data processor is programmed to identify the speed of the object according to, at least, lidar data from a lidar apparatus.
In some embodiments, the data processor is programmed to identify the speed of the object according to, at least, data from a sensor separate from the camera.
In some embodiments, the data processor is further programmed to, at least, identify a direction of movement of the object.
In some embodiments, the data processor is further programmed to, at least, identify a size of the object.
In some embodiments, the object is a detected boat.
There is further provided, according to at least one embodiment, a driver-assist system for a marine vessel, the system comprising: a camera operable to obtain data comprising images of a view of the camera; and a data processor programmed to, at least, detect a detected boat in the view.
In some embodiments, the data processor is further programmed to, at least, identify a direction of movement of the detected boat from a shape of the detected boat.
In some embodiments, the shape of the detected boat comprises a bow of the detected boat.
In some embodiments, the shape of the detected boat comprises a stern of the detected boat.
In some embodiments, the data processor is further programmed to, at least, identify one of a plurality of different classes of direction of movement in response to, at least, the direction of movement of the detected boat.
In some embodiments, the system is programmed to take the collision avoidance measures in response to, at least, the direction of movement of the detected boat.
In some embodiments, the system is programmed to, at least, cause the marine vessel to follow the detected boat.
In some embodiments, the system is programmed to, at least, cause the marine vessel to follow the detected boat at a set distance.
In some embodiments, the system is programmed to, at least, set the set distance according to, at least, a user-programmable follow sensitivity.
In some embodiments, the system is programmed to, at least, set the set distance according to, at least, a speed of the marine vessel.
There is further provided, according to at least one embodiment, a driver-assist system for a marine vessel, the system comprising: a sensor operable to obtain data from surroundings of the sensor; and a data processor programmed to, at least, cause the marine vessel to follow a detected boat.
In some embodiments, the sensor is a camera, and the data comprise images of a view of the camera.
In some embodiments, the data processor is programmed to cause the marine vessel to follow the detected boat only if the detected boat is a closest object detected by the system within an adaptive cruise control range.
In some embodiments, the data processor is programmed to cause the marine vessel to follow the detected boat only if the detected boat is at least a safe following distance from the marine vessel.
In some embodiments, the data processor is programmed to cause the marine vessel to follow the detected boat only if the detected boat is moving in a same general direction as the marine vessel.
In some embodiments, the data processor is programmed to cause the marine vessel to follow the detected boat only if the system does not detect any risk of collision with other boats detected by the system.
In some embodiments, the data processor is programmed to cause the marine vessel to follow the detected boat only if the system does not detect any objects, other than the detected boat, within a safety region.
In some embodiments, the safety region is a safety circle.
In some embodiments, the data processor is programmed to cause the marine vessel to resume following the detected boat automatically in response to detecting no objects, other than the detected boat, within the safety region.
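A minimal sketch of such a safety-region check is given below, assuming the safety region is a circle and that detected objects are reported as (x, y) positions in metres relative to the marine vessel; the object-record format and radius are illustrative assumptions:

```python
from math import hypot

def clear_to_follow(objects_xy: dict, target_id, safety_radius_m: float = 50.0) -> bool:
    """Return True when no object other than the followed (detected) boat
    lies within the safety circle; following may then resume automatically."""
    for obj_id, (x, y) in objects_xy.items():
        if obj_id == target_id:
            continue                      # the followed boat itself is allowed
        if hypot(x, y) < safety_radius_m:
            return False                  # another object is inside the circle
    return True
```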
There is further provided, according to at least one embodiment, a marine vessel comprising the system.
Other aspects and features will become apparent to those ordinarily skilled in the art upon review of the following description of illustrative embodiments in conjunction with the accompanying figures.
Embodiments of this disclosure will be more readily understood from the following description of such embodiments given, by way of example only, with reference to the accompanying drawings, in which:
A driver assist system, as described herein for example, may include a data processor that may be programmed, configured, or otherwise operable to implement some or all functions of the driver assist system as described herein, for example. More generally, a driver assist system, as described herein for example, may be programmed, configured, or otherwise operable to implement some or all functions of the driver assist system as described herein, for example.
A driver assist system, a data processor, a unit, a module, a controller, or a system as described herein, for example, may include one or more processor circuits that may include one or more central processing units (CPUs) or microprocessors, one or more machine learning chips, discrete logic circuits, or one or more application-specific integrated circuits (ASICs), or combinations of two or more thereof, for example, and that may include one or more of the same or different computer-readable storage media, which in various embodiments may include one or more of a read-only memory (ROM), a random access memory (RAM), a hard disc drive (HDD), a solid-state drive (SSD), and other computer-readable and/or computer-writable storage media. For example, one or more such computer-readable storage media may store program codes that, when executed, cause one or more processor circuits of a driver assist system, of a data processor, of a unit, of a module, of a controller, of a system, or of a combination of two or more thereof to implement functions as described herein, for example, in which case the driver assist system, the data processor, the unit, the module, the controller, the system, or the combination of two or more thereof may be programmed, configured, or operable to implement such functions. Of course, a driver assist system, a data processor, a unit, a module, a controller, a system, or a combination of two or more thereof may be configured or otherwise operable to implement other functions and to implement functions in other ways.
A driver assist system, a data processor, a unit, a module, a controller, or a system as described herein, for example, may be implemented in the same device, such as a device including one processor circuit, or in one or more separate devices in other implementations. For example, in some embodiments, the motion planner 69 and one or more other units, modules, controllers, systems, or a combination of two or more thereof as described herein may be implemented in one device including one processor circuit programmed, configured, or otherwise operable to implement some or all functions as described herein. In other embodiments, the motion planner 69 may be implemented in separate processor circuits or in multiple separate devices, and other units, modules, controllers, or systems, or a combination of two or more thereof as described herein may be implemented in one device or in separate devices.
The marine driver assist system may utilize computer vision and machine learning to process the images received from the cameras, such as camera 46, a stereo camera in this example, at the front of the craft.
The training may be simplified by focusing on objects that are neither sky nor water. Things that are not identified as sky or water may be deemed to be marine object candidates for a collision warning or avoidance. These objects could be a boat, buoy, shoreline, a bridge, a marker, a log, or a piling, for example.
By focusing on objects that are neither sky nor water, both the machine learning process and the real-time execution speed of image recognition may be accelerated. This method may also simplify the ground truth data annotation because there may be a reduction in the number of segmentation classes. Ground truth is a term used to refer to information provided by direct observation (i.e. empirical evidence) as opposed to information provided by inference. In some embodiments, such a system may be robust to classes of images that may not have been experienced before or which do not exist in training data. For example, a manatee may not have been experienced before, but the VPU may still recognize a manatee as an object with which the boat may potentially collide.
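As a non-limiting illustration of this three-class approach, the sketch below takes a per-pixel segmentation mask over the classes water, sky, and "other", and extracts connected "other" regions as marine object candidates; the class encoding and the use of SciPy connected-component labelling are assumptions for this example:

```python
import numpy as np
from scipy import ndimage

WATER, SKY, OTHER = 0, 1, 2          # assumed class encoding

def object_candidates(mask: np.ndarray):
    """Given an HxW class mask, return bounding boxes (x0, y0, x1, y1) of
    connected regions that are neither water nor sky, i.e. candidates for
    a collision warning or avoidance."""
    labels, _count = ndimage.label(mask == OTHER)
    boxes = []
    for rows, cols in ndimage.find_objects(labels):
        boxes.append((cols.start, rows.start, cols.stop, rows.stop))
    return boxes
```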
The marine driver assist system may also utilize boat identification and tracking using computer vision and machine learning. In other words, the marine driver assist system may be capable of identifying boats in a simplified image.
The marine driver assist system may use a convolutional neural network to search predefined regions of an image.
The marine driver assist system may track boats using one or more of speed and acceleration, distance, location of an object in the image, and the size of an object in the image (both height and width), and may estimate the physical size of a boat based on its size in an image and the measured distance to the boat.
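One conventional way to make such a size estimate is the pinhole camera model, sketched below; the focal length in pixels is an assumed calibration parameter, as this disclosure does not specify the estimation formula:

```python
def estimate_boat_size(width_px: float, height_px: float,
                       distance_m: float, focal_length_px: float):
    """Pinhole-model estimate: physical size = pixel size * distance / focal length."""
    width_m = width_px * distance_m / focal_length_px
    height_m = height_px * distance_m / focal_length_px
    return width_m, height_m

# Example: a detection 60 px wide at a measured 200 m with a 1200 px focal
# length suggests a boat roughly 10 m long: estimate_boat_size(60, 30, 200, 1200)
```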
Boats may have distinctive features such as bows and sterns which can be used to train the convolutional neural network using pictures of different boats facing in different directions.
If another boat is detected coming towards the subject boat, then the marine driver assist system may monitor the other boat and may alert the user or take collision avoidance measures with respect to the other boat as required. If the other boat is detected going away from the subject boat, then the subject boat can choose to follow the other boat in an adaptive cruise control following application. As another example, if the other boat is going sideways to the left of the subject boat, then the VPU can choose to steer the subject boat to the right of the other boat to avoid it. Likewise, if the other boat is going sideways to the right, then the VPU can choose to steer to the left of the other boat to avoid it.
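These responses can be summarized as a mapping from an identified class of direction of movement to a manoeuvre, as in the hypothetical dispatch table below; the class names are chosen for illustration only:

```python
# Hypothetical direction classes paraphrasing the examples above.
RESPONSES = {
    "towards":        "monitor; alert user or take collision avoidance as required",
    "away":           "optionally follow under adaptive cruise control",
    "sideways_left":  "steer the subject boat to the right of the other boat",
    "sideways_right": "steer the subject boat to the left of the other boat",
}

def plan_response(direction_class: str) -> str:
    return RESPONSES.get(direction_class, "monitor")
```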
The marine driver assist system may search through a predefined set of regions, and may check some (but not all) of the predefined regions every cycle. When objects are detected, new (or dynamic) regions may be set around them and may be checked every cycle as long as the object is still detected. Such use of predefined regions may extend detection range in an open-water marine environment and may allow better detections while still using low-resolution detectors with responsive performance. Such use of predefined regions may also save resources by only checking some fixed number of subregions in one cycle. Such use of dynamic searching algorithms with locking regions on detected objects is novel for marine object tracking and may allow efficient tracking of marine objects as the subject boat (and its sensors) pitch and roll in waves or otherwise move as the vessel travels through the water.
Many object detection algorithms run in real time. Inputs to such object detection algorithms may be at a relatively low resolution, e.g. 300×300 pixels, but at such low resolution, the model may struggle to detect small objects in the view. On the other hand, larger algorithms with larger resolution inputs may scale exponentially in complexity, and a processor may not have enough computation power and may struggle to complete object detection fast enough for real-time performance.
To enhance small object detection, or object detection more generally, the marine driver assist system may use a set of searching windows (or predefined subregions) in the view of one or more cameras.
The searching algorithm may proceed through the following numbered steps, with a code sketch following the steps:
1. Prepare predefined subregions: (a) Consider the entire region. (b) Divide this into a series of smaller predefined subregions. (c) Evenly distribute the predefined subregions across the entire region such that they horizontally overlap and are centered vertically. They are centered vertically (or otherwise including or centered on a horizon) because the concern is small boats on the horizon.
2. Prepare dynamic subregions: (a) The dynamic subregions track boats detected only in a predefined subregion. (b) A dynamic subregion is placed so that the boat is on the edge of the dynamic subregion with some offset to allow the boat to move further into the dynamic subregion on the next cycle. (c) Boats which are not found for X cycles are dropped and the dynamic subregion is abandoned. (d) If two boats are close enough together, then one dynamic subregion is placed over both boats.
3. The processor is capable of processing N regions per cycle using N single shot detectors (SSDs), although other embodiments may include other detectors that may not necessarily be SSDs. This capacity is allocated as follows: (a) The first SSD is used to detect over the entire region at lower resolution. N−1 SSDs are still available. (b) Dynamic subregions are generated to monitor objects detected in previous cycles. There are K of the dynamic subregions, to a maximum of (N−1). The dynamic subregions are repeatedly created based on results of previous iterations of steps 7-9 as described below. (N−1−K) SSDs are still available. (c) All remaining processing power is used on predefined subregions. Thus (N−1−K) detectors are available to search predefined subregions.
4. Take the N−1−K remaining single shot detectors and feed N−1−K of the M predefined subregions through them, as well as the entire image through the first SSD.
5. Move all the N−1−K single shot detectors to new predefined subregions every cycle such that it takes no more than ceiling(M/(N−1−K)) cycles to have a predefined subregion go through an SSD where M is the number of predefined subregions. As a result, the marine driver assist system may be configured to cause N−1−K available detectors to search only some predefined subregions in one object detection cycle, and to cause the N−1−K available detectors to search other predefined subregions in another object detection cycle.
6. Continue cycling through the regions until a single shot detector detects a boat which is not also detected by the whole-image single shot detector.
7. Then use one dynamic subregion to cover this boat. The marine driver assist system does not center the dynamic subregion on this boat, but rather places the subregion so that the boat is at its edge, with some offset to allow the boat to move in that direction frame to frame (or cycle to cycle). Some amount of spatial locality of boat detection from frame to frame (or cycle to cycle) is assumed.
8. Continue to search this dynamic subregion with the updated location of the boat as it is tracked. Boats not found for X cycles are dropped and the SSD is returned to search other regions.
9. If two boats sufficiently close to each other are detected by the SSDs of a predefined subregion, the marine driver assist system attempts to place one dynamic subregion over both boats with some buffer room. If this is not possible, then two dynamic subregions are used in a non-overlapping manner.
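The following condensed sketch illustrates steps 3 to 8 under stated assumptions: `run_ssd(view, region)` is a placeholder detector returning bounding boxes, the subregion geometry and drop threshold are illustrative, the exact-match novelty test stands in for an overlap (IoU) test, and the two-boat merge of step 9 is omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class Track:
    box: tuple       # (x0, y0, x1, y1) of the tracked boat in view coordinates
    missed: int = 0  # consecutive cycles without a detection

def dynamic_region_for(box, offset=0.25):
    """Place a dynamic subregion so the boat sits near one edge, leaving
    offset room for it to move further into the region next cycle (step 7)."""
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    return (x0 - w * offset, y0 - h * offset,
            x1 + w * (1 + offset), y1 + h * (1 + offset))

def detection_cycle(view, predefined, tracks, cursor, n_slots, run_ssd, drop_after=5):
    """One object detection cycle (steps 3-8). Returns the updated cursor."""
    # (a) the first SSD searches the entire view at lower resolution
    whole_view_hits = run_ssd(view, None)

    # (b) K SSDs search dynamic subregions locked onto known boats
    k = min(len(tracks), n_slots - 1)
    for track in tracks[:k]:
        hits = run_ssd(view, dynamic_region_for(track.box))
        if hits:
            track.box, track.missed = hits[0], 0
        else:
            track.missed += 1
    tracks[:] = [t for t in tracks if t.missed < drop_after]  # step 8: drop lost boats

    # (c) the remaining N-1-K SSDs sweep the predefined subregions round-robin,
    # so every subregion is visited within ceiling(M / (N-1-K)) cycles (step 5)
    available = n_slots - 1 - k
    for i in range(available):
        region = predefined[(cursor + i) % len(predefined)]
        for box in run_ssd(view, region):
            if box not in whole_view_hits:     # step 6: boat the whole-view SSD missed
                tracks.append(Track(box=box))  # step 7: lock a dynamic subregion onto it
    return (cursor + available) % len(predefined)
```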
The marine driver assist system may also employ smart brightness adjustment.
The smart brightness adjustment may proceed through the following numbered steps, with a code sketch following the steps:
1. Capture camera image and apply digital processing such as contrast or edge enhancement to the image to make some compensations and enhancements.
2. Through the use of a convolutional neural network, determine and generate foreground segmentation on the image. The foreground regions contain objects of interest for advanced driver assist functions (e.g. boats, docks, buoys) and do not contain the background regions of the image, such as sky or water. The foreground regions may be identified as regions that are not water or sky.
3. Mask the image obtained from step 1 with the foreground segmentation from step 2. This step provides a masked image that contains only the foreground regions.
4. Analyze the masked regions from step 3 and determine the image quality metrics for those regions. Examples of image quality metrics include brightness, contrast, hue, saturation, exposure, and white balance, any one thereof, or a combination of any two or more thereof. The background region is ignored in the analysis.
5. Compare the image quality metrics values of the masked regions from step 4 against desired quality metric values, and compute the difference.
6. Adjust camera settings for the masked regions from step 3 by minimizing the computed metric difference in step 5.
Continue to iterate through steps 1 through 6 for each image frame or detection cycle.
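A schematic sketch of this loop follows, assuming an OpenCV-style camera object, a segmentation function returning a boolean foreground mask, and brightness as the single metric; the camera interface, target value, and proportional gain are placeholders rather than APIs or values from this disclosure:

```python
import numpy as np

TARGET_BRIGHTNESS = 128.0   # assumed desired metric value on an 8-bit scale
GAIN = 0.01                 # assumed proportional adjustment gain

def brightness_step(camera, segment_foreground, exposure: float) -> float:
    """One iteration of steps 1-6: capture, segment, mask, measure, adjust."""
    frame = camera.capture()                    # step 1 (enhancement omitted here)
    fg_mask = segment_foreground(frame)         # step 2: True where not water/sky
    fg_pixels = frame[fg_mask]                  # step 3: masked (foreground) pixels
    if fg_pixels.size == 0:
        return exposure                         # no foreground; leave settings alone
    brightness = float(np.mean(fg_pixels))      # step 4: metric over foreground only
    error = TARGET_BRIGHTNESS - brightness      # step 5: difference from desired value
    exposure += GAIN * error                    # step 6: adjust the camera setting
    camera.set_exposure(exposure)               # hypothetical camera API
    return exposure
```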
Camera settings may be compensated with every frame or detection cycle, which may ensure consistent foreground quality even as the view transitions into and away from areas with bright backgrounds. This may give the marine driver assist system the ability to tolerate a large range of lighting conditions within the background of an image without degrading performance. The new contribution of this method is the use of segmented foreground (regions apart from water or sky areas) to isolate the regions considered for compensation of the camera and post-processing settings, enhancing only regions of interest in outdoor applications.
Embodiments of this disclosure may include sensor fusion. Different types of sensors have different attributes and are better at different things. Cameras may be good at recognition and classification. Radar may be good at measuring velocity and position and is not affected by bad weather. Stereo cameras can be used to measure distance to objects. However, radar may have a higher accuracy in measuring medium to long range distance. Lidar may be preferable to measure short to medium range distance.
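By way of example only, these complementary strengths might be realized by selecting a range source per distance band, as sketched below; the band edges and fallback order are illustrative assumptions:

```python
def fused_distance(stereo_m, lidar_m, radar_m, short_medium_limit_m=50.0):
    """Prefer lidar at short-to-medium range and radar at medium-to-long range,
    falling back to the stereo camera; None marks an unavailable reading."""
    readings = [d for d in (stereo_m, lidar_m, radar_m) if d is not None]
    if not readings:
        return None
    rough = min(readings)                        # any reading, to pick a band
    if rough < short_medium_limit_m and lidar_m is not None:
        return lidar_m                           # short/medium range
    if radar_m is not None:
        return radar_m                           # medium/long range
    return stereo_m if stereo_m is not None else lidar_m
```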
Where there is risk of collision from the forward direction of the boat, thrust may be reduced according to a control algorithm considering the collidable object's distance and relative velocity. If the control algorithm requests zero thrust, the boat is still moving, and the collidable object is within the safe distance, “brakes” may be applied. The duration and intensity of braking may be based upon a control algorithm considering the object's distance and the subject boat's speed.
Since boats have significant momentum and no conventional brakes, a series of other mechanisms may be used to accomplish braking. A number of options may be available and may be used in any combination, including friction from passing water, shifting out of gear, shifting into reverse gear, deploying the trim tabs, and deploying interceptor tabs.
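A rough, non-limiting sketch of how such mechanisms might be staged by severity follows; the thresholds, staging order, and actuator interface (`controls`) are hypothetical:

```python
def apply_braking(distance_m: float, safe_distance_m: float,
                  speed_mps: float, controls) -> None:
    """Escalate braking mechanisms as the margin to the collidable object
    shrinks; `controls` is a hypothetical actuator interface."""
    margin = distance_m - safe_distance_m
    if margin > 0.0 or speed_mps <= 0.0:
        return                             # object outside safe distance, or boat stopped
    controls.shift_neutral()               # mildest: coast on friction from passing water
    if margin < -5.0:
        controls.deploy_trim_tabs()        # add drag
        controls.deploy_interceptor_tabs()
    if margin < -15.0:
        controls.shift_reverse()           # strongest: reverse thrust
```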
The motion planner may receive sensor data 53 from the sensor fusion module 52 or 68. The sensor data may include velocities of other boats, positions of other boats, distances, positions, or both, of non-boat objects, the velocity of the subject boat (the boat where the marine driver assist system is installed), or a combination of two or more thereof. The motion planner may communicate with a steering controller 84, a shift and throttle controller 82, and a trim tab controller 86 and may provide them with steering commands, shift and throttle commands, and trim commands, respectively.
The adaptive cruise control unit or system for boats, according to an embodiment of this disclosure, may engage when following a preceding boat within a preset distance. However, other boats in a marine environment can come from different locations and can change course more suddenly compared to automobiles on a road. Also, boats have inertia but no actual brakes; it may take much longer for a boat to stop or change course, particularly a large boat, compared to an automobile on a road. Also, boats may not travel within defined lanes and may travel in a staggered pattern even when travelling in a similar direction, and boats travelling in a similar direction may do so at different speeds. Low-speed adaptive cruise control can be engaged in a busy waterway or marina where oncoming boats may come too close. Adaptive cruise control and collision avoidance systems may need to work together to reduce speed or stop the subject boat if necessary. There may be more interruptions that stop adaptive cruise control in boats compared to cars, so it may be advantageous to automatically reengage adaptive cruise control for a boat if it is interrupted. The following numbered steps characterize the operation of the adaptive cruise control for boats according to one embodiment, although alternative embodiments may differ; a code sketch follows the steps.
1. The adaptive cruise control unit or system first determines whether there are any objects within an adaptive cruise control range.
2. Where there are objects in the adaptive cruise control range, the adaptive cruise control unit or system then determines whether the closest object is a boat.
3. Where the closest object is determined to be a boat, the adaptive cruise control unit or system determines whether the preceding boat is at least a safe following distance away (i.e. it is not so close that it may cause a crash).
4. The adaptive cruise control unit or system then checks whether the preceding boat is travelling in the same general direction as the subject boat.
5. If the answer to step 2 was “no”, and the closest object is not a boat, then the closest object may be an obstacle such as a rock. If the answer to step 3 was “no”, then the subject boat may be too close to the preceding boat such that it may have a risk of crashing with it. If the answer to step 4 is “no” then the closest boat may be travelling towards the subject boat. When the answer to any of the steps 2, 3 or 4 is “no” then the adaptive cruise control will disengage and execute the collision avoidance routine to mitigate the risk of collision.
6. Before engaging a cruise control command, the adaptive cruise control unit or system will utilize its sensor data (e.g. radar, stereo camera, or lidar data) to determine the velocities and positions of all detected boats. The adaptive cruise control unit or system then uses the present position and velocity of each boat to project its future position.
7. If the projected future position of any boat indicates a risk of collision, then the adaptive cruise control will also disengage and collision avoidance will execute.
8. Where there is no risk of collision, adaptive cruise control will be engaged and locked onto the preceding boat as a target.
9. The projected positions and velocities of the preceding boat and surrounding boats are used to calculate the adaptive cruise control commands for the steering, shift and throttle, and trim tab controllers.
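The engagement logic of these steps can be condensed into the guard chain sketched below, assuming simple object records (position, velocity, distance, and an is_boat flag) and a constant-velocity projection of future positions; the record fields, projection horizon, and collision threshold are illustrative:

```python
from math import hypot

def acc_should_engage(objects, subject, acc_range_m, safe_follow_m,
                      horizon_s=10.0, collision_m=20.0):
    """Return the boat to lock onto as a target, or None to disengage and
    run collision avoidance (steps 1-8)."""
    in_range = [o for o in objects if o.distance <= acc_range_m]
    if not in_range:                                # step 1: nothing in range
        return None
    target = min(in_range, key=lambda o: o.distance)
    if not target.is_boat:                          # step 2: closest object not a boat
        return None
    if target.distance < safe_follow_m:             # step 3: too close to follow safely
        return None
    dot = sum(t * s for t, s in zip(target.velocity, subject.velocity))
    if dot <= 0:                                    # step 4: not the same general direction
        return None
    # steps 6-7: project every detected object and check for a collision risk
    sx = subject.position[0] + subject.velocity[0] * horizon_s
    sy = subject.position[1] + subject.velocity[1] * horizon_s
    for o in objects:
        if o is target:
            continue
        fx = o.position[0] + o.velocity[0] * horizon_s
        fy = o.position[1] + o.velocity[1] * horizon_s
        if hypot(fx - sx, fy - sy) < collision_m:
            return None                             # step 7: disengage, avoid collision
    return target                                   # step 8: engage, lock onto target
```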
It will be understood by someone skilled in the art that many of the details provided above are by way of example only and are not intended to limit the scope of the invention which is to be determined with reference to the following claims.
This application claims the benefit of, and priority to, U.S. provisional patent application No. 62/975,351 filed on Feb. 12, 2020, the entire contents of which are incorporated by reference herein.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CA2021/050156 | 2/12/2021 | WO |

Number | Date | Country
---|---|---
62975351 | Feb 2020 | US