In order for unmanned vehicles to be truly autonomous, they must be able to localize themselves when placed in an unknown environment and learn about the physical objects that surround them. For example, such vehicles gather information for high-level applications such as mapping and vehicle localization, as well as low-level applications such as obstacle avoidance. Once a vehicle learns such information about the environment in which it is working, it is able to move about the environment freely and in an optimized pattern to fulfill its required tasks while staying out of harm's way. While various sensors have been developed for vehicles operating out of the water, the number of sensors available for use by underwater vehicles is limited.
For example, for vehicles working in outdoor environments, localization can be accomplished using satellite-based localization sensors (e.g., GPS sensors) capable of providing accuracy in the centimeter range. Also, laser-based range finders, including Light Detection and Ranging (LiDAR) sensors, are capable of providing a vehicle with information about the surrounding environment with millimeter accuracy. LiDAR sensors, however, have a high cost that is prohibitive for low-budget applications, and neither LiDAR nor satellite-based sensors function properly in indoor (i.e., enclosed) or underwater environments.
In underwater environments, the most common sensor technologies are based on acoustics. For example, Sound Navigation and Ranging (SONAR) can provide accurate sensor data for vehicles operating in large open water environments. However, in enclosed underwater spaces, such as swimming pools, acoustic-based solutions such as SONAR are difficult to use due to the high number of multiple returns caused by reflections in the enclosed environment. As a result, some laser-based approaches have been proposed. For example, one approach includes a vehicle with a laser pointer projecting a single dot and a camera that visualizes the dot reflecting off of a wall of the enclosed space. Because of this design, such vehicles are only able to determine distance information related to a single location directly in front of the camera. Also, such designs rely heavily on calibration routines that map the laser pointer's location in an image frame to a distance. Another approach includes the use of a single laser line and camera to generate full 3D maps of underwater objects. However, it can be challenging to find the entire laser line in environments that are not extremely dark. As a result, this approach cannot be used in operating environments where large amounts of natural and artificial light may be present, such as swimming pool and spa environments.
Some embodiments provide a swimming pool cleaner. The swimming pool cleaner includes a chassis that supports a motor, and a camera that is associated with the chassis and configured to identify at least one object. A controller is in communication with the camera, and is configured to control movement of the pool cleaner based on output from the camera.
Additional embodiments provide an autonomous robotic pool cleaner for an underwater swimming pool environment. The pool cleaner includes a chassis that supports a motor, and a sensor assembly designed to map the underwater swimming pool environment. A controller is in communication with the sensor assembly and is configured to operate the sensor assembly, receive an input from the sensor assembly, and position the pool cleaner throughout the underwater swimming pool environment based on the input from the sensor assembly.
Other embodiments provide a swimming pool cleaner. The swimming pool cleaner includes a chassis that supports a motor. A camera is associated with the chassis and is configured to identify at least one object. A sensor assembly is coupled to the chassis. A controller is in communication with the sensor assembly and the camera and is configured to operate at least one of the sensor assembly or the camera, receive an input from at least one of the sensor assembly or the camera, and position the pool cleaner throughout an underwater environment based on the input from the sensor assembly or the camera.
Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Unless specified or limited otherwise, the terms “mounted,” “connected,” “supported,” and “coupled” and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. Further, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings.
The following discussion is presented to enable a person skilled in the art to make and use embodiments of the invention. Various modifications to the illustrated embodiments will be readily apparent to those skilled in the art, and the generic principles herein can be applied to other embodiments and applications without departing from embodiments of the invention. Thus, embodiments of the invention are not intended to be limited to embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein. The following detailed description is to be read with reference to the figures, in which like elements in different figures have like reference numerals. The figures, which are not necessarily to scale, depict selected embodiments and are not intended to limit the scope of embodiments of the invention. Skilled artisans will recognize that the examples provided herein have many useful alternatives that fall within the scope of embodiments of the invention.
Embodiments of the invention provide a small, low-cost, underwater vehicle for operation in enclosed underwater spaces. More specifically, embodiments of the invention provide a low-cost distance-measuring and mapping system for an autonomous robotic pool cleaner for operation in swimming pool and/or spa environments. The distance-measuring portion of the system is based upon a camera and parallel laser line setup and the mapping portion of the system allows for mapping of a swimming pool environment without previous calibration, using simultaneous localization and mapping (SLAM) techniques, in order to map cleaning routes through the swimming pool environment. This allows the pool cleaner to optimize cleaning routes, for example, in order to traverse and clean the entire swimming pool environment.
In some embodiments, the pool cleaner 12 can be supported on a surface, such as a swimming pool floor, by the scrubber assemblies 34, 36. The pool cleaner 12 can move itself across the pool floor through operation of the scrubber assemblies 34, 36 and/or the outlet nozzle assemblies 42. More specifically, each scrubber assembly 34, 36 can include a brush 56 attached to a brush plate 58. A vibration motor 60 can be mounted on each brush plate 58 to vibrate the respective scrubber assembly 34, 36, and vibration of the scrubber assemblies 34, 36 can facilitate forward and/or turning movement of the pool cleaner 12 as well as scrubbing action of the brushes 56 against the pool floor. For example, each of the scrubber assemblies 34, 36 can be vibrated at a substantially equal intensity to facilitate forward movement of the pool cleaner 12, and the vibration intensity of each vibration motor 60 can be adjusted individually to facilitate turning movement of the pool cleaner 12 (e.g., the front left vibration motor intensity can be reduced or turned off and the front right vibration motor intensity can be increased or maintained to facilitate a left turn, and vice versa). In addition, the outlet nozzle assemblies 42 can force water outward from a rear of the pool cleaner 12 in order to assist forward and/or turning movement of the pool cleaner 12. As further described below, the force and/or amount of water exiting the outlet nozzle assemblies can be adjusted individually to assist forward or turning movement of the pool cleaner 12.
The scrubber assemblies 34, 36 can be coupled relative to the chassis 28 to provide a clearance between the pool floor and the chassis 28. This clearance can be high enough to allow the pool cleaner 12 to travel over debris on the pool floor and low enough to achieve adequate suction of such debris through an intake port 63 of the chassis 28, as shown in
The outlet nozzle assemblies 42 can each include an outlet nozzle 68, a nozzle duct 70, and a motor vessel 72 in communication with the nozzle duct 70. The nozzle ducts 70 can be coupled to the center duct 66, as shown in
In some embodiments, the filter assembly 32 can include a housing 82, a filter tube 84, a diverter 86, a first end cap (not shown), and a second end cap 90. The housing 82 can include a first suction port (not shown) in fluid communication with the intake riser and the intake plenum 62 to receive water and debris from the underside of the pool cleaner 12 and a second suction port 94 to receive water and debris near the skimmer assembly 30, as further described below. The first end cap can be coupled to a first end of the housing 82 to enclose an internal space 96 of the housing 82. In addition, the first end cap can be coupled to a front filter bracket (not shown), which can be further coupled to one or more of the I-rails 54 to support the filter assembly 32. The filter tube 84 can be a cylindrical tube positioned within the internal space 96 of the housing 82 and can include a filter media that separates the internal space 96 of the housing 82 from an internal space of the filter tube 84. The filter media can permit passage of water from the internal space 96 of the housing 82 to the internal space of the filter tube 84. In addition, the second end cap 90 can be coupled to the housing 82 and the center duct 66. The second end cap 90 can enclose the internal space 96 of the housing 82 and can include a center hole to permit fluid communication between the internal space of the filter tube 84 and the center duct 66. As a result, debris can be retained within the housing 82 while water can pass through the filter tube 84, into the center duct 66, and out of the pool cleaner 12 via the nozzle ducts 70 and the outlet nozzles 68.
The diverter 86 of the filter assembly 32 can selectively close the first suction port, as shown in
When the diverter 86 is rotated to the first position, the pool cleaner 12 can vacuum water and debris near the underside of the pool cleaner 12 (i.e., along the pool floor) as it travels along the pool floor, thus providing a floor cleaning operation. In the second position, the pool cleaner 12 can vacuum water and debris near the skimmer assembly 30, for example as the pool cleaner 12 travels across a surface of the swimming pool, thus providing a skimming operation. More specifically, the skimmer assembly 30 can include inflatable bladders 95 (as shown in
Referring back to the electronics box 38 of the pool cleaner 12, in some embodiments, the electronics box 38 can include electronic components necessary to power and operate the pool cleaner 12. Such electronics can include, but are not limited to, one or more power sources (e.g., batteries) and one or more controllers (such as the controller 14 of
In some embodiments, the second sensor assembly 24 can be housed within the electronics box 38. For example, in one embodiment, the second sensor assembly 24 can include a camera. An underside of the electronics box 38 can include a clear window 105 positioned relative to a through-hole 107 in the chassis 28, as shown in
In some embodiments, the controller 14 can operate the vibration motors 60 and/or the motors 74 of the outlet nozzle assemblies 42 individually based on information received from the sensor assemblies 16, 24. For example, as shown in
With reference to distance measuring methods of some embodiments of the present invention, as described above and illustrated in
As described above, generally, the controller 14 can operate the lasers 18, 20 and the camera 22 and can determine distances between the camera 22 (and thus, the front of the pool cleaner 12) and objects in front of the pool cleaner 12, such as walls of the swimming pool or spa environment, based on output from the camera 22. For example, in some embodiments, the controller 14 can perform distance calculations based on a modified pinhole camera model. More specifically, according to a traditional pinhole model, as shown in
where $x_w$, $y_w$, and $z_w$ are the components of P corresponding to the point in world coordinates and $x_f$, $y_f$, and $f$ are the corresponding components of Q, P's projection on the camera's focal plane. The negative signs in the projected point, Q, are a consequence of the camera's focal plane 108 being located behind the aperture, O, as shown in
In order to remove confusion caused by the negative signs, a modified version of the pinhole model can be used in some embodiments. More specifically, by moving the focal plane 108 in front of the camera's aperture O, as shown in
where the corresponding components of P and Q, described above, define the relationship.
Based on the physical layout of the sensor assembly 16, as shown
where $\tilde{y}_w = y_{w,1} - y_{w,2}$ is the physical distance between the laser line generators 18, 20, $\tilde{y}_f = y_{f,1} - y_{f,2}$ is the distance between the laser lines in the image, $z_w$ is the distance between the camera's aperture O and the object 112, and $f$ is the focal length of the camera 22. Since $\tilde{y}_w$ can be known or predetermined from the physical setup of the laser range finder 16, $f$ can be known or determined as a characteristic of the camera 22 being used, and $\tilde{y}_f$ can be found through an image processing algorithm, described below, the distance to the object 112 can be calculated as
Therefore, in order to determine how far away an object 112 is from the laser range finder 16 (in particular, the camera 22), that is, the distance $z_w$, the distance $\tilde{y}_f$ between the two laser lines A, B in the image frame 108 can be determined and applied to Equation 4 above along with the known focal length $f$ and the physical distance $\tilde{y}_w$ between the lasers. According to some embodiments of the invention, as shown in
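As an illustration of the Equation 4 relationship, the short Python sketch below computes the distance from the measured pixel separation of the two laser lines. The function name, the example values, and the assumption that the focal length is expressed in pixels are illustrative and are not taken from the original description.

```python
def distance_from_laser_lines(y_f_pixels, y_w_meters, focal_length_pixels):
    """Estimate the distance z_w to an object from the apparent separation of
    two parallel laser lines in the image (Equation 4: z_w = y_w * f / y_f).

    y_f_pixels          -- separation of the two laser lines in the image (pixels)
    y_w_meters          -- physical separation of the laser line generators (meters)
    focal_length_pixels -- camera focal length expressed in pixels (assumption)
    """
    if y_f_pixels <= 0:
        raise ValueError("the two laser lines must be separated in the image")
    return y_w_meters * focal_length_pixels / y_f_pixels


# Example: generators mounted 0.05 m apart, focal length ~800 px, lines
# appearing 20 px apart give an estimated distance of about 2 m.
print(distance_from_laser_lines(20.0, 0.05, 800.0))
```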
In some embodiments, the process 116 of
More specifically, with further reference to process block 120, lens distortion can be removed from the received image. Generally, most cameras suffer from distortion caused by the lens and other manufacturing defects. For example, a model for camera distortion can include two different types of distortion existing in cameras: radial distortion and tangential distortion. Radial distortion can be described as
$x_{corrected,radial} = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$, Eq. 5
$y_{corrected,radial} = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$, Eq. 6
where x and y are the corresponding horizontal and vertical distances from the center of the camera aperture for a point in the image, $r = \sqrt{x^2 + y^2}$ is the distance of the point from the center of the camera's aperture, and the constants $k_i > 0$, $i = 1, 2, 3$, are unique constants describing the radial distortion for a given camera.
Tangential distortion can be described as
$x_{corrected,tangential} = x + [2 p_1 y + p_2 (r^2 + 2 x^2)]$, Eq. 7
$y_{corrected,tangential} = y + [p_1 (r^2 + 2 y^2) + 2 p_2 x]$, Eq. 8
where the constants $p_i > 0$, $i = 1, 2$, are camera-specific constants that describe the tangential distortion.
Removing distortion from an image can be achieved by determining the two sets of distortion constants, $k_i$, $i = 1, 2, 3$, and $p_i$, $i = 1, 2$. In some embodiments, this can be a one-time operation performed for the camera 22. By way of example, a camera calibration method, such as the Camera Calibration Toolbox for Matlab® or a similar implementation, can be used to determine the constants. The calibration method can examine a set of images of a standard checkerboard training pattern that is placed around the working space of the camera 22 (e.g., in an underwater environment). Since the dimensions and layout of the testing pattern are known, this information can be used in Equations 5-8 to solve for the camera's distortion constants. In some embodiments, along with finding the distortion parameters, the camera calibration method can also determine the focal length $f$ of the camera 22 and the location of the center point O of the aperture in the image. With the distortion removed at process block 120, the image can be assumed to substantially match that of an ideal pinhole camera model and the process can proceed to process block 122.
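By way of a hedged example, the sketch below performs an equivalent one-time calibration using OpenCV's checkerboard routines rather than the Matlab® toolbox named above; the board size, file paths, and frame name are assumptions.

```python
import glob

import cv2
import numpy as np

# One-time calibration from checkerboard images captured in the camera's
# working space (board dimensions and file pattern are assumptions).
pattern = (9, 6)  # inner corners per checkerboard row and column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calibration/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        img_size = gray.shape[::-1]

# K holds the focal length and the aperture center; dist holds k1, k2, p1, p2, k3.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)

# Each frame can then be undistorted so it approximates an ideal pinhole
# camera before the laser lines are located.
undistorted = cv2.undistort(cv2.imread("frame.png"), K, dist)
```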
With further reference to process block 122, generally, by projecting a line across the image (i.e., via the laser line generators 18, 20), the distance to an object can be determined at multiple points along the projected lines, as opposed to the single point obtained when using a single dot generated by a laser pointer. This ability to determine the distance to multiple objects or multiple locations on a single object can aid the control system's ability to better map the surrounding environment, as further described below. In order to determine the distance at multiple locations, the image can be broken down into multiple segments, for example as shown in
Following process block 122, with the image broken down into smaller segments, each segment can then be processed to extract the location of the laser lines (110, 114) in the image. First, the image can be converted from a full color image to a black and white or grayscale image (i.e., by extracting color planes at process block 126). Second, a threshold can be applied in order to extract the brightest portion of the image and an edge detection algorithm can be used to extract edges that could be lines at process block 128. Third, all of the line segments can be extracted from the image, for example, using the Hough Transform at process block 130. More specifically, the Hough Transform can take as an input an image that has been processed by the edge detection algorithm. Each point in the image, located at (x, y), that is a member of an extracted edge can be represented in slope-intercept form.
y=mx+b, Eq. 9
where m is the slope of a given line and b is the point where the line intercepts the vertical axis. Any point in the x-y coordinate system can be represented as a line in the m-b coordinate system, as shown in
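For illustration, the sketch below implements the thresholding, edge detection, and Hough transform steps (process blocks 126-130) using OpenCV's Canny detector and probabilistic Hough transform as one possible realization; the threshold and Hough parameter values are assumptions rather than values from the original description.

```python
import cv2
import numpy as np


def extract_line_segments(segment_bgr, intensity_threshold=200):
    """Extract candidate laser-line segments from one image segment.

    Mirrors process blocks 126-130: convert to grayscale, keep only the
    brightest pixels, detect edges, then run a Hough transform.
    """
    gray = cv2.cvtColor(segment_bgr, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, intensity_threshold, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(bright, 50, 150)
    # The probabilistic Hough transform returns segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=30, minLineLength=20, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```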
Example results after each of process blocks 126, 128, 130 are illustrated in
Once all of the line segments have been extracted from the image segment at process block 130, there is a chance that multiple line segments are used to represent each laser line. As a result, each of the line segments can be grouped together based on a predefined pixel separation parameter (e.g., a user-defined or preprogrammed parameter) at process block 132. This grouping step can analyze each of the extracted line segments and, if certain line segments fall within some p pixel distance of each other, these line segments can be assumed to represent the same laser line. Once the line segments corresponding to each laser line are grouped together at process block 132, each line segment can be evaluated at the midpoint of the image segment and can be averaged to estimate the exact middle of the laser line in the frame. The pixel difference between the two laser lines can be calculated, at process block 134, based on these averages so that the physical distance to the object at the center of the image segment can be calculated at process block 136, for example using Equation 4 above.
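A possible realization of the grouping and pixel-difference steps (process blocks 132-136) is sketched below, assuming near-horizontal laser lines and the segment list produced by the previous sketch; the grouping gap stands in for the user-defined pixel separation parameter p.

```python
def laser_separation_pixels(segments, segment_width, group_gap_px=15):
    """Group Hough segments into laser lines and return the pixel separation
    between the two lines at the horizontal midpoint of the image segment.

    segments      -- list of (x1, y1, x2, y2) segments from the Hough step
    segment_width -- width of the image segment in pixels
    group_gap_px  -- segments closer than this are treated as the same laser line
    """
    mid_x = segment_width / 2.0
    mids = []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # skip vertical segments; the laser lines are near-horizontal
        slope = (y2 - y1) / float(x2 - x1)
        mids.append(y1 + slope * (mid_x - x1))

    # Group midpoint heights that fall within group_gap_px of each other.
    groups = []
    for y in sorted(mids):
        if groups and y - groups[-1][-1] <= group_gap_px:
            groups[-1].append(y)
        else:
            groups.append([y])
    if len(groups) < 2:
        return None  # both laser lines were not found in this segment

    centers = [sum(g) / len(g) for g in groups]
    # The separation feeds distance_from_laser_lines() above (Equation 4).
    return abs(centers[-1] - centers[0])
```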
Based on experimental results, the above control system 10 and process 116 can be capable of providing underwater distance measurements with a maximum absolute error of about 10% of the actual distance, which can be considered accurate enough for beneficial use in autonomous pool cleaner applications. In addition, the use of laser lines as opposed to traditional laser points allows the control system 10 to obtain additional data besides a single distance measurement to an object directly in front of the sensor assembly. For example, when corners or obstacles that are not flat and perpendicular to the camera's viewing axis are encountered, the control system 10 can be capable of obtaining shape data from a single image.
As described above, the control system 10 can use output from the laser range finder 16 to control movement of the pool cleaner 12. In some embodiments, the control system 10 can be configured to use the laser range finder 16 as an obstacle or feature finder, thereby controlling turning movement of the pool cleaner 12 when a detected obstacle or feature is a certain distance directly in front of the pool cleaner 12. In some embodiments, the control system 10 can be configured to map an environment (i.e., swimming pool, spa, etc.) in which the pool cleaner 12 is placed and learn about the pool cleaner's surroundings using Simultaneous Localization and Mapping (SLAM) techniques, based on output from the laser range finder 16 and the second sensor assembly 24 (i.e., without previous environment-related calibrations or teaching). In this manner, the control system 10 can determine and optimize cleaning routes and can operate the pool cleaner 12 to follow these optimized cleaning routes (e.g., to traverse an entire swimming pool floor within a certain time period). In addition, the control system 10 can track cleaner movement in order to track routes of cleared debris and ensure that the entire swimming pool floor has been traversed within a certain time period. In some embodiments, a feature-based Extended Kalman Filter (EKF) SLAM technique can be used by the control system 10, as described below. In other embodiments, other SLAM techniques can be used.
Generally, in order for robotic vehicles to be able to autonomously perform tasks in any environment, they must be able to determine their location as well as locate and remember the location of obstacles and objects of interest in that environment or, in other words, they must be capable of SLAM. An Extended Kalman Filter (EKF) can be used to estimate the SLAM posterior. The following paragraphs provide an overview of an EKF SLAM approach, in accordance with some embodiments of the invention.
In a probabilistic sense, the goal of SLAM is to estimate the posterior of the current pose of the pool cleaner 12 along with the map of the surrounding environment, denoted by
$p(x_t, m \mid z_{1:t}, u_{1:t})$, Eq. 10
where $x_t$ is the pose of the pool cleaner 12 at time t, m is the map, $z_{1:t}$ are the measurements, and $u_{1:t}$ are the control inputs. The EKF assumes that the state transition and measurement models are defined as
$x_t = g(u_t, x_{t-1}) + \eta_{x,t}$, $t = 1, 2, \ldots$, Eq. 11
$z_t = h(x_t) + \eta_{z,t}$, Eq. 12
where $g(\cdot)$ and $h(\cdot)$ are nonlinear and the additive noise terms, $\eta_{x,t}$ and $\eta_{z,t}$, are zero-mean Gaussian processes with covariances $R_t$ and $Q_t$, respectively. The EKF solution to SLAM falls into a class of solutions referred to as feature-based approaches. In feature-based SLAM, it is assumed that the environment that surrounds the pool cleaner 12 can be represented by a set of distinct points that are referred to as features. As a result, the full SLAM state is composed of the state of the cleaner 12 and the state of the map
$x_t = [x \ y \ \theta \ M_{x_1} \ M_{y_1} \ \ldots \ M_{x_N} \ M_{y_N}]^T$, Eq. 13
where x and y are the location of the cleaner 12 in the two-dimensional (2D) plane and $\theta$ is the heading. The map is represented by N features, with the location of each feature in the 2D plane maintained in the state as $M_{x_i}$ and $M_{y_i}$.
The EKF solution to SLAM can use a classic prediction-correction model. More specifically, the prediction step of the EKF is based on the state transition model of the system given by Equation 11 above and can be defined as
$\bar{x}_t = g(u_t, x_{t-1})$, Eq. 14
$\bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + R_t$,
where $x_{t-1}$ is the state estimate from the previous time step, $\bar{x}_t$ is the prediction of the full SLAM state at the current time step, $\Sigma_{t-1}$ is the covariance estimate at the previous time step, $\bar{\Sigma}_t$ is the predicted covariance, and $G_t$ is the Jacobian of $g(\cdot)$ with respect to the state, evaluated at $x_{t-1}$ and $u_t$. The correction step then updates the prediction using the measurement model of Equation 12 above:
$K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + Q_t)^{-1}$,
$x_t = \bar{x}_t + K_t (z_t - h(\bar{x}_t))$,
$\Sigma_t = (I - K_t H_t) \bar{\Sigma}_t$,
where $H_t$ is the Jacobian of $h(\cdot)$ with respect to the state, evaluated at $\bar{x}_t$, and $z_t$ is the measurement at the current time.
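For reference, a compact numpy sketch of the prediction-correction cycle described above is shown below; the Jacobians and noise covariances are supplied by the caller, since their exact form depends on the state transition and measurement models of Equations 11 and 12.

```python
import numpy as np


def ekf_predict(x, P, g, G, R):
    """Prediction step: propagate the SLAM state and covariance (Eq. 14)."""
    x_bar = g(x)             # g(u_t, x_{t-1}) with the control input bound into g
    P_bar = G @ P @ G.T + R  # predicted covariance
    return x_bar, P_bar


def ekf_correct(x_bar, P_bar, z, h, H, Q):
    """Correction step: Kalman gain, state update, and covariance update."""
    S = H @ P_bar @ H.T + Q
    K = P_bar @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x_bar + K @ (z - h(x_bar))           # innovation-weighted state update
    P = (np.eye(len(x_bar)) - K @ H) @ P_bar
    return x, P
```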
The present EKF SLAM technique of some embodiments can include an additional step that is not present in the standard EKF, which is related to the addition of new features to the SLAM state. For example, when a new feature is encountered, it must be integrated into both the full SLAM state, $x_t$, and the SLAM covariance, $\Sigma_t$. The augmentation of the SLAM state can be defined by
where $x_t^+$ is the SLAM state after the addition of the new features and $f(\cdot)$ estimates the location of the new feature in the global frame based on the current cleaner state and the observation of the feature.
With respect to the augmentation of the SLAM covariance, an examination of the SLAM covariance shows that it takes the form
where $\Sigma_{t,v}$ is the covariance of the cleaner estimate, $\Sigma_{t,vm}$ is the covariance between the cleaner estimate and the map estimate, and $\Sigma_{t,m}$ is the covariance of the map estimate. From Bailey, et al. ("Simultaneous localization and mapping (SLAM): Part II", IEEE Robotics & Automation Magazine, 13(3), pp. 108-117), the augmented form of the SLAM covariance can be calculated as
where $\Sigma_t^+$ is the augmented SLAM covariance, $F_{t,x}$ is the Jacobian of $f(\cdot)$ with respect to $x_t$ evaluated at $x_t$ and $z_t$, and $F_{t,z}$ is the Jacobian of $f(\cdot)$ with respect to $z_t$ evaluated at $x_t$ and $z_t$.
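The sketch below illustrates one way to carry out this augmentation for a single new feature, following the partitioned covariance form described above; the function that maps an observation into the global frame and its Jacobians are assumed to be provided by the caller.

```python
import numpy as np


def augment_state(x, P, z, f_new, F_x, F_z, Q):
    """Append a newly observed feature to the SLAM state and covariance.

    x, P  -- current SLAM state vector and covariance
    z     -- observation (e.g., range and bearing) of the new feature
    f_new -- maps (x, z) to the feature location in the global frame
    F_x   -- Jacobian of f_new with respect to x, evaluated at (x, z)
    F_z   -- Jacobian of f_new with respect to z, evaluated at (x, z)
    Q     -- measurement noise covariance
    """
    m_new = f_new(x, z)                        # new feature in the global frame
    x_aug = np.concatenate([x, m_new])

    P_mm = F_x @ P @ F_x.T + F_z @ Q @ F_z.T   # covariance of the new feature
    P_xm = P @ F_x.T                           # cross covariance with existing state
    n = len(x)
    P_aug = np.zeros((n + len(m_new), n + len(m_new)))
    P_aug[:n, :n] = P
    P_aug[:n, n:] = P_xm
    P_aug[n:, :n] = P_xm.T
    P_aug[n:, n:] = P_mm
    return x_aug, P_aug
```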
With reference to the control system 10 of embodiments of the present invention, the sensor assemblies 16, 24 can provide data that represent the above-described state transition model input $u_t$ and the feature measurements $z_t$. Traditionally, for ground vehicle applications, the inputs for the state transition model are composed of odometry readings from wheel encoders, while the locations of features are calculated using Light Detection and Ranging (LiDAR). However, these types of sensors are unable to function in underwater environments. In typical underwater environments, many existing sensor technologies are based on acoustics, where odometry data is provided to the vehicle by a Doppler velocity log (DVL) and features are located using SONAR sensors. However, as described above, acoustic-based sensors are problematic due to the large number of multiple returns that can be generated in relatively small, enclosed environments such as swimming pools and spas. Additionally, there are sensor-specific issues that arise from currently available sensors. For example, as described above, the pool cleaner 12 can operate directly on, or very close to, the pool floor. In such an operating environment, DVL sensors suffer from poor performance, and their large size and high price make their use on small, inexpensive underwater vehicles prohibitive. Furthermore, a problem with SONAR sensors is that they are difficult to use for feature extraction when implementing feature-based SLAM methods. More specifically, a SONAR sensor can only report that there exists an object located at some distance within the sensor's scanning cone, which makes it difficult to identify unique features that can be used to generate a map in feature-based SLAM. As a result, a feature must be observed from multiple locations before proper data association can occur. The control system 10 of the present invention, based on computer vision algorithms and the above-described sensor assemblies 16, 24, can overcome these issues and can determine control inputs to the state transition model as well as valid landmark measurements in an enclosed underwater environment, as further described below.
With respect to the second sensor assembly 24, visual odometry data can be calculated from the downward-facing camera by tracking a set of points between consecutive images acquired by the camera. From the translation and rotation of the points between frames, the change in the cleaner's position and orientation can be determined (therefore providing the state transition model inputs). By way of example, with reference to
To track the points between frames, a multi-step algorithm can be used. First, Ip can be filtered, for example using a Laplacian filter with a kernel size of 7. The filtered images can be used for tracking as opposed to the raw images in order to account for changes in lighting conditions between the two frames (e.g., in order to prevent degradation of tracking performance due to changes in shadow or brightness).
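A hedged sketch of this tracking step is shown below, combining the Laplacian filter, OpenCV's goodFeaturesToTrack, and the window-based cross correlation described here and in the following paragraph; the window size, feature count, and quality parameters are assumptions.

```python
import cv2
import numpy as np


def track_points(prev_gray, curr_gray, max_points=50, win=21):
    """Select points in the previous frame and locate them in the current frame.

    Both frames are Laplacian-filtered (kernel size 7) before matching to
    reduce sensitivity to lighting changes between frames.
    """
    Ip = cv2.Laplacian(prev_gray, cv2.CV_32F, ksize=7)
    Ic = cv2.Laplacian(curr_gray, cv2.CV_32F, ksize=7)

    corners = cv2.goodFeaturesToTrack(prev_gray, max_points, 0.01, 10)
    if corners is None:
        return []

    matches, half = [], win // 2
    for cx, cy in corners.reshape(-1, 2):
        x, y = int(cx), int(cy)
        patch = Ip[y - half:y + half + 1, x - half:x + half + 1]
        if patch.shape != (win, win):
            continue  # skip points too close to the image border
        # Normalized cross correlation of the point window against Ic.
        response = cv2.matchTemplate(Ic, patch, cv2.TM_CCORR_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(response)
        matches.append(((x, y), (max_loc[0] + half, max_loc[1] + half)))
    return matches
```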
After filtering Ip, the GoodFeaturesToTrack function can be executed on the image to calculate the set of points to track between frames. Ic can then be filtered using the same method used on Ip. Each of the selected points from Ip can then be found in Ic using a cross correlation technique, such as that described by Nourani-vatani, et al. (“Correlation-Based Visual Odometry for Ground Vehicles”. Journal of Field Robotics, 28(5), pp. 742-768). For example, a window containing a point is selected from Ip and cross correlation can be performed between the point window and Ic. The location of the maximum of the cross correlation corresponds to the location of the point in Ic. The relationship between a point in Ip and Ic can be determined using a linearized version of the 2D homogeneous transformation equation and the small angle approximation:
where $x_p$, $y_p$, $x_c$, and $y_c$ are the x and y locations of the point in $I_p$ and $I_c$, respectively, and $\delta x$, $\delta y$, and $\delta\theta$ are the components of the change in position and orientation of the cleaner in the camera's frame of reference. Rearranging Equation 22 yields
$y_p \, \delta\theta + \delta x = x_c - x_p$, Eq. 23
$-x_p \, \delta\theta + \delta y = y_c - y_p$, Eq. 24
which can be combined for all the points being tracked as
where $i = 1, 2, \ldots, M$ and M is the number of points being tracked. The resulting change in position and orientation can be found by calculating the pseudoinverse using the SVD algorithm. The changes $\delta x$, $\delta y$, and $\delta\theta$ can then be transformed from pixels to world units using a calibration constant previously determined by running a calibration algorithm.
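As one possible implementation, the sketch below stacks Equations 23 and 24 for every tracked point pair and solves for the pose change with a pseudoinverse (which numpy computes via the SVD); the pixels-to-world calibration constant is an assumed placeholder.

```python
import numpy as np


def estimate_motion(matches, pixels_to_meters=0.001):
    """Solve for (dx, dy, dtheta) from tracked point pairs ((x_p, y_p), (x_c, y_c)).

    Each pair contributes two rows following Equations 23 and 24:
        y_p * dtheta + dx = x_c - x_p
       -x_p * dtheta + dy = y_c - y_p
    """
    A, b = [], []
    for (xp, yp), (xc, yc) in matches:
        A.append([1.0, 0.0, yp])
        b.append(xc - xp)
        A.append([0.0, 1.0, -xp])
        b.append(yc - yp)

    # The pseudoinverse (computed via SVD) gives the least-squares solution.
    dx, dy, dtheta = np.linalg.pinv(np.array(A)) @ np.array(b)
    # Translations convert from pixels to world units with a calibration constant.
    return dx * pixels_to_meters, dy * pixels_to_meters, dtheta
```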
There are two reference frames that can be taken into account in the development of the state transition model: the vehicle reference frame 169, where the odometry data is collected, and the global reference frame 167, in which the cleaner 12 operates, both of which are illustrated in
Δx=Δy′ cos(θ)+Δx′ sin(θ), Eq. 26
Δy=Δy′ sin(θ)−Δx′ cos(θ), Eq. 27
where Δx and Δy are the translation of the cleaner in the global frame and Δx′ and Δy′ are the translation in the vehicle frame. The resulting state transition matrix is defined as
$x_t = x_{t-1} + \Delta y' \cos(\theta) + \Delta x' \sin(\theta)$, Eq. 28
$y_t = y_{t-1} + \Delta y' \sin(\theta) - \Delta x' \cos(\theta)$, Eq. 29
$\theta_t = \theta_{t,m}$, Eq. 30
where $\theta_{t,m}$ is a measurement from a compass. The resulting control input $u_t = [\Delta x' \ \Delta y' \ \theta_{t,m}]^T$ is a noisy measurement. To fit the form required by the EKF, an assumption can be made that the sensor noise is a zero-mean Gaussian process with covariance $M_t$. The resulting state transition model of the system can be defined as
which has covariance $R_t = V_t M_t V_t^T$, where $V_t$ is the Jacobian of $g(\cdot)$ with respect to $u_t$ evaluated at $x_{t-1}$ and $u_t$. Thus, using the above methods, odometry data from the second sensor assembly 24 can be used to determine the state transition model inputs.
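A brief sketch of Equations 28-30 follows; it assumes the heading used in the rotation is the compass measurement that Equation 30 assigns to the new pose.

```python
import math


def apply_state_transition(x_prev, y_prev, dx_v, dy_v, theta_compass):
    """Propagate the cleaner pose using vehicle-frame odometry (Eqs. 28-30).

    dx_v, dy_v    -- translation measured in the vehicle frame
    theta_compass -- absolute heading measurement from the compass
    """
    theta = theta_compass                                             # Eq. 30
    x_new = x_prev + dy_v * math.cos(theta) + dx_v * math.sin(theta)  # Eq. 28
    y_new = y_prev + dy_v * math.sin(theta) - dx_v * math.cos(theta)  # Eq. 29
    return x_new, y_new, theta
```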
With respect to the laser range finder 16, shape information can be determined, thus allowing for feature detection. The determined range and relative heading to the feature can be used to determine the measurement model for the EKF SLAM (i.e., feature measurements). There are two frames of reference in which the laser range finder 16 works, as shown in
$r = \sqrt{M_{x,L}^2 + M_{y,L}^2}$, Eq. 32
$\phi = \operatorname{atan2}(M_{x,L}, M_{y,L})$, Eq. 33
where $\phi$ is the relative heading to the feature, r is the distance to the feature, and $M_{x,L}$ and $M_{y,L}$ are the coordinates of the feature in the local frame 170. In the global frame 172, r and $\phi$ can be defined as
$r = \sqrt{(M_{y,G} - y)^2 + (M_{x,G} - x)^2}$, Eq. 34
where $M_{x,G}$ and $M_{y,G}$ are the location of the feature in the global frame 172. The resulting measurement model is
which has zero-mean Gaussian additive noise with covariance $Q_t$, which matches the form required by the EKF.
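For illustration, the sketch below evaluates the range and relative heading in the local frame (Equations 32 and 33) and predicts them for a feature stored in the global frame (Equation 34); subtracting the cleaner heading from the global bearing is an assumption, since that part of the model is not reproduced above.

```python
import math


def measure_local(m_x_l, m_y_l):
    """Range and relative heading to a feature expressed in the local frame."""
    r = math.hypot(m_x_l, m_y_l)          # Eq. 32
    phi = math.atan2(m_x_l, m_y_l)        # Eq. 33 (argument order as written above)
    return r, phi


def predict_measurement_global(x, y, theta, m_x_g, m_y_g):
    """Expected range and bearing to a feature stored in the global frame."""
    r = math.hypot(m_y_g - y, m_x_g - x)  # Eq. 34
    # Assumed form: global bearing expressed relative to the cleaner heading.
    phi = math.atan2(m_x_g - x, m_y_g - y) - theta
    return r, phi
```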
As described above, EKF SLAM is a feature-based technique and, as a result, feature detection is a key aspect of implementing this technique. Based on the sensor assembly 16 used in some embodiments, almost anything in the environment can be used as a feature. For example, in indoor environments, common features include walls and corners, as these are easy-to-identify static objects. As described above, features such as corners can be extracted from distance measurements of the laser range finder 16. For example, a slightly modified version of a Random Sample Consensus (RANSAC) algorithm for line identification can first be used. The modification made to RANSAC line identification relates to how the random sample set is generated. For example, in a standard RANSAC algorithm, the sample set is composed of random possible inliers that are not already attached to an object model. This can be modified to reduce the misidentification of lines that are not actual walls in the environment. More specifically, in order to overcome this misidentification issue, the sample set can be generated by first selecting a single possible inlier at random and then using all possible inliers that are located within a window around the selected point as the sample set. Following the line identification step, intersections between lines can be found and, if the minimum angle between those lines is greater than a predefined threshold, the intersection can be characterized as a corner. An example of this resulting corner identification is illustrated in
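The sketch below approximates this modified RANSAC line search and the corner test; the window size, inlier tolerance, iteration count, and minimum angle are all assumed values, and lines are represented in slope-intercept form for simplicity.

```python
import math
import random

import numpy as np


def windowed_ransac_lines(points, window=0.5, inlier_tol=0.02,
                          min_inliers=15, iterations=50):
    """Fit lines to range-finder points, drawing each sample set from a window
    around one randomly selected point (the modification described above).
    Returns fitted lines as (slope, intercept) pairs."""
    remaining = list(points)
    lines = []
    for _ in range(iterations):
        if len(remaining) < min_inliers:
            break
        sx, sy = random.choice(remaining)
        sample = [(x, y) for x, y in remaining
                  if abs(x - sx) < window and abs(y - sy) < window]
        xs = np.array([p[0] for p in sample])
        ys = np.array([p[1] for p in sample])
        if len(sample) < 2 or np.ptp(xs) == 0:
            continue
        m, b = np.polyfit(xs, ys, 1)  # least-squares line through the sample
        inliers = [(x, y) for x, y in remaining
                   if abs(y - (m * x + b)) / math.sqrt(1 + m * m) < inlier_tol]
        if len(inliers) >= min_inliers:
            lines.append((m, b))
            remaining = [p for p in remaining if p not in inliers]
    return lines


def corners_from_lines(lines, min_angle_deg=45.0):
    """Mark line intersections as corners when the angle between the lines
    exceeds a minimum threshold."""
    corners = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (m1, b1), (m2, b2) = lines[i], lines[j]
            if m1 == m2:
                continue  # parallel lines do not intersect
            angle = abs(math.degrees(math.atan(m1) - math.atan(m2)))
            if min(angle, 180.0 - angle) > min_angle_deg:
                x = (b2 - b1) / (m1 - m2)
                corners.append((x, m1 * x + b1))
    return corners
```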
Another component of EKF SLAM related to features is data association, that is, associating an observed feature with a feature that has already been seen, or adding it as a new feature if it has never been seen. In some embodiments, a gated search algorithm can be used. More specifically, for each observation, the predicted location, based on the current estimate of the cleaner state, can be compared to each of the currently tracked features. If the observation falls within the gating distance of a currently tracked feature, the observation can be associated with that feature; if the observation is not associated with any of the tracked features, it can be assumed to be a new feature and can be added to the current state estimate. Other, more complex approaches may be used in some embodiments. Because the state estimate also contains all of the features currently describing the map, continuously or periodically updating the state estimate of the cleaner also updates those feature estimates, so these data association methods can help provide a better estimate of the cleaner's true position and reduce error.
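A minimal sketch of such a gated search is given below, assuming features are compared by Euclidean distance in the global frame; the gate value is an assumption.

```python
import math


def associate(observation_xy, tracked_features, gate=0.3):
    """Return the index of the tracked feature that matches an observation,
    or None if the observation should be added as a new feature.

    observation_xy   -- predicted global location of the observed feature,
                        computed from the current cleaner state estimate
    tracked_features -- list of (index, (x, y)) entries already in the map
    """
    ox, oy = observation_xy
    best_idx, best_dist = None, gate
    for idx, (fx, fy) in tracked_features:
        d = math.hypot(ox - fx, oy - fy)
        if d < best_dist:                 # inside the gate and closest so far
            best_idx, best_dist = idx, d
    return best_idx
```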
In some embodiments, using the above methods and techniques, the control system 10 can continuously or periodically measure object distances in front of the pool cleaner 12, map the surrounding environment, identify objects within the environment, locate the pool cleaner's position within the environment, and/or navigate the pool cleaner 12 throughout the environment. For example, based on the mapping and localization, the control system 10 can track and control the movement of the pool cleaner 12 to optimize cleaning routes of the pool cleaner 12 throughout the environment. This can include determining and storing a cleaning route and controlling the pool cleaner 12 to follow the cleaning route, or tracking movement routes of the pool cleaner 12 and periodically adjusting movements of the pool cleaner 12 to ensure all areas of the environment are traversed within a certain time period.
It will be appreciated by those skilled in the art that while the invention has been described above in connection with particular embodiments and examples, the invention is not necessarily so limited, and that numerous other embodiments, examples, uses, modifications and departures from the embodiments, examples and uses are intended to be encompassed by the claims attached hereto. The entire disclosure of each patent and publication cited herein is incorporated by reference, as if each such patent or publication were individually incorporated by reference herein. Various features and advantages of the invention are set forth in the following claims.
This application is a continuation of U.S. application Ser. No. 14/730,068 filed on Jun. 3, 2015, which is a continuation of U.S. application Ser. No. 13/929,715 filed on Jun. 27, 2013, which claims priority under 35 U.S.C. § 119 to U.S. Provisional Patent Application No. 61/664,945 filed on Jun. 27, 2012, the entire contents of which are incorporated herein by reference.