Autonomous mobile robot

Information

  • Patent Grant
    9554508
  • Patent Number
    9,554,508
  • Date Filed
    Tuesday, March 17, 2015
  • Date Issued
    Tuesday, January 31, 2017
Abstract
A robot lawnmower includes a robot body, a drive system, a localizing system, a teach monitor, and a controller in communication with one another. The drive system is configured to maneuver the robot lawnmower over a lawn. The teach monitor determines whether the robot lawnmower is in a teachable state or an unteachable state. The controller includes a data processing device and non-transitory memory in communication with the data processing device. The data processing device executes a teach routine when the controller is in a teach mode for tracing a confinement perimeter around the lawn as a human operator pilots the robot lawnmower. When the robot lawnmower is in the teachable state, the teach routine stores global positions determined by the localizing system in the non-transitory memory, and when the robot lawnmower is in the unteachable state, the teach routine issues an indication of the unteachable state.
Description
TECHNICAL FIELD

This disclosure relates to an autonomous mobile robot for grass cutting.


BACKGROUND

Autonomous robots that perform household functions such as floor cleaning and lawn cutting are now readily available consumer products. Commercially successful robots are not unnecessarily complex, and generally operate randomly within a confined area. In the case of floor cleaning, such robots are generally confined within (i) touched walls and other obstacles within the rooms of a dwelling, (ii) IR-detected downward staircases (cliffs), and/or (iii) user-placed detectable barriers such as directed IR beams, physical barriers, or magnetic tape. Walls provide most of the confinement perimeter. Other, much less ubiquitous robots may try to localize or to map the dwelling using a complex system of sensors and/or active or passive beacons (e.g., sonar, RFID or bar code detection, or various kinds of machine vision).


Some consumer robotic lawn mowers use a similar “invisible” barrier—a continuous guide conductor boundary proposed for confining random-motion robotic mowers by the early 1960s. The guide conductor is intended to confine the robot within the lawn or other appropriate area, so as to avoid damaging non-grassy areas of the yard or intruding onto a neighboring property. The conductor is one continuous loop around the property to be mowed. Although the guide conductor can be drawn into the property in peninsulas to surround gardens or other off-limits areas, it remains a continuous loop, and is energized with an AC current detectable as a magnetic field within a few feet. The guide conductor must be supplied with power, usually from a wall socket. Within the bounded area, the known robots may “bounce” randomly as the robot nears the guide conductor, or may follow along the guide conductor. Some of the mowers also touch and bounce from physical barriers. More complex commercial mowers may try to localize or to map the mowing area, again using a complex system of sensors and/or active or passive beacons (e.g., sonar, encoded optical retro-reflector detection, machine vision).


SUMMARY

According to one aspect of the invention, a robot lawnmower includes a robot body, a drive system supporting the robot body and configured to maneuver the robot lawnmower over a lawn, a localizing system configured to determine perimeter positions of the robot lawnmower with respect to an origin, a teach monitor in communication with the localizing system and configured to determine whether the robot lawnmower is in a teachable state (with the robot lawnmower localized and on traversable terrain) or in an unteachable state, and a controller in communication with the drive system, the localizing system, and the teach monitor. The controller includes a data processing device and non-transitory memory in communication with the data processing device, and is configured to execute a teach routine when the controller is in a teach mode for tracing a confinement perimeter around the lawn as a human operator pilots the robot lawnmower. While the robot lawnmower is in the teachable state, the teach routine stores perimeter positions determined by the localizing system in the non-transitory memory.


In some embodiments the robot lawnmower also includes an operator feedback unit in communication with the teach monitor or the controller. The operator feedback unit is configured to emit, in response to determining that the robot lawnmower is in the unteachable state, a human-perceptible unteachable state alert signal. In some cases, the unteachable state alert signal indicates a piloting correction in a pose or a motion of the robot lawnmower calculated to return the robot lawnmower to the teachable state. The operator feedback unit may be in wireless communication with one or more boundary markers positioned along the perimeter of the lawn, and/or may be in wireless communication with the controller and comprise a user interface configured for remotely piloting the robot lawnmower, for example.


In some examples the robot lawnmower also has a sensor system in communication with the teach monitor and includes at least one of: an inertial measurement unit responsive to a moment of inertia of the robot lawnmower, an obstacle sensor responsive to proximity of an obstacle or water along a drive path of the robot lawnmower, a tilt sensor responsive to tilt of the robot body, a cliff sensor responsive to a discrete ground elevation change proximate the robot body or a drive element of the drive system, a drop sensor responsive to a drop of a drive element of the drive system, an accelerometer responsive to speed of the robot lawnmower across the lawn, and a confinement sensor responsive to proximity of the robot lawnmower to a boundary marker. In some cases the teach monitor is configured to determine that the robot lawnmower is in the unteachable state in response to a signal from the sensor system.


In some cases the controller, executing the teach routine, is configured to determine a traveled path of the robot lawnmower based on the stored perimeter positions and whether the traveled path begins and ends proximate the origin.


The origin may comprise coordinates marked by one or more boundary markers positioned in the lawn, for example. In some applications the robot lawnmower includes a boundary detection scanner disposed on the robot body and configured to perform a scan match on three or more adjacent boundary markers, with each of the three or more boundary markers being individually identifiable by adjacent scan match data. The teach routine may determine a travel path of the robot lawnmower by scan matching a current travel path scan with a stored travel path scan.


Another aspect of the invention features a method of configuring a robotic lawnmowing system for autonomous operation. The method includes receiving, at a data processing device, perimeter positions of a robot lawnmower with respect to an origin, receiving, at the data processing device, a state of the robot lawnmower indicating whether the robot lawnmower is in a teachable state, with the robot lawnmower localized and on traversable terrain, or in an unteachable state, and executing, on the data processing device, a teach routine that traces a confinement perimeter around a lawn as a human operator pilots the robot lawnmower, while the teach routine stores received perimeter positions in non-transitory memory when the robot lawnmower is in the teachable state, and issues an unteachable state indication when the robot lawnmower is in the unteachable state.


In some examples of the method, the unteachable state indication comprises a human-perceptible signal emitted by the robot lawnmower. For example, the unteachable state indication may indicate a piloting correction in a pose or a motion of the robot lawnmower selected to return the robot lawnmower to the teachable state.


The origin may comprise coordinates marked by one or more boundary markers positioned in the lawn.


In some embodiments the teach routine, executing on the data processing device, compares a current perimeter position stored in the non-transitory memory to previously stored perimeter positions and determines whether a variance is greater than a threshold variance from a stored travel path.


Another aspect of the invention features a robotic lawnmower system with a localizing system that records each global position of a robot lawnmower with respect to a global origin as a human operator pilots the robot lawnmower to trace a confinement perimeter around a lawn in a manual confinement perimeter teaching mode, contour memory in which a geometric contour of the confinement perimeter is recorded while the human operator pilots the robot lawnmower for a duration of operation in manual confinement perimeter teaching mode, the geometric contour defining a perimeter that the robot lawnmower is to avoid crossing in an autonomous mode, and a teaching property monitor configured to detect whether or not the robot lawnmower is piloted in a recordable state and on traversable terrain. The robotic lawnmower system also includes an operator feedback unit in communication with the teaching property monitor and comprising a progress indicator capable of emitting a human-perceptible signal configured to alert the human operator to an unrecordable state or untraversable terrain, and to indicate a piloting correction to return the robot lawnmower to a recordable state or traversable terrain.


In some examples the operator feedback unit is mounted on a push bar of the robot lawnmower.


In some embodiments the operator feedback unit is in wireless communication with one or more boundary markers positioned along the confinement perimeter, and is configured to indicate a piloting correction in response to the robot lawnmower travelling proximate to or beyond the perimeter.


In some cases the travel path of the robot lawnmower is localized by determining angle and range of the robot lawnmower to three or more boundary markers.


Further features and advantages will be apparent from the following description of embodiments.





DESCRIPTION OF DRAWINGS


FIG. 1A is a perspective view of an exemplary autonomous lawn care mobile robot and its base station.



FIG. 1B is a schematic view of an exemplary autonomous lawn care mobile robot.



FIG. 1C is a schematic view of an exemplary autonomous lawn care mobile robot.



FIG. 2 is a schematic view of a method of lawn cutting with an autonomous lawn care mobile robot.



FIG. 3 is a schematic view of an exemplary autonomous lawn care mobile robot.



FIG. 4 is a schematic view of an exemplary controller for an autonomous lawn care mobile robot.



FIG. 5 is a perspective view of an exemplary teaching routine.



FIGS. 6A-6B are schematic views of navigating an autonomous lawn care mobile robot.



FIG. 7A is a schematic view of an exemplary autonomous lawn care mobile robot docked in its base station.



FIG. 7B is a perspective view of an exemplary operator feedback unit of the autonomous lawn care mobile robot of FIG. 7A.



FIG. 7C is a schematic view of the exemplary autonomous lawn care mobile robot of FIG. 7A as it is being pulled back from its base station.



FIG. 7D is a schematic view of the exemplary autonomous lawn care mobile robot of FIG. 7A as it is being pushed by a user to measure the lawn perimeter.



FIG. 7E is a schematic view of the exemplary autonomous lawn care mobile robot of FIG. 7A as it is being pushed by a user, at a speed greater than a threshold speed, to measure the lawn perimeter.



FIG. 7F is a schematic view of an exemplary autonomous lawn care mobile robot being docked in its base station.



FIG. 7G is a schematic view of the exemplary autonomous lawn care mobile robot of FIG. 7A as it is being pushed by a user for the second time to measure the lawn perimeter.



FIG. 7H is a perspective top view of an exemplary perimeter traversed by the autonomous lawn care mobile robot of FIG. 7A.



FIG. 8A is a perspective view of exemplary beacons placed along the perimeter of the lawn.



FIG. 8B is a top view of an exemplary lawn showing a first beacon positioned in a corner of the lawn.



FIG. 8C is a perspective view of an exemplary stake for supporting a beacon.



FIG. 8D is a perspective view of an exemplary beacon positioned on top of the stake of FIG. 8C.



FIG. 8E is a top view of the exemplary lawn of FIG. 8B showing a second beacon placed at a second corner of the lawn.



FIG. 8F is a top view of the exemplary lawn of FIG. 8B having first through seventh beacons positioned about the perimeter of the lawn.



FIG. 8G is a perspective top view of an exemplary perimeter having beacons traversed by an autonomous lawn care mobile robot.



FIG. 9 is a schematic view of an exemplary arrangement of operations for operating the autonomous lawn care mobile robot.





Like reference symbols in the various drawings indicate like elements.


DETAILED DESCRIPTION

An autonomous robot may be designed to mow a lawn. For example, the autonomous robot may move about the lawn and cut the grass as it is traversing the lawn. Referring to FIGS. 1A-1C, a robot lawnmower 10 includes a body 100, a surface treater 200 secured to the body 100, a drive system 400, and a sensor system 300 having at least one surface sensor 310, 320, 330 carried by the body 100 and responsive to at least one surface characteristic. The drive system 400 is carried by the body 100 and configured to maneuver the robot 10 across a surface 20 (e.g., lawn) while following at least one surface characteristic. Examples of the surface treater 200 include a reciprocating symmetrical cutter floating on a following wheel 410, a rotary cutter, a spreader, and a gatherer. The robot body 100 supports a power source 106 (e.g., a battery) for powering any electrical components of the robot 10. The robot lawnmower 10 may be docked at a base station 12. In some examples, the base station 12 includes a charging system for charging a battery 160 housed by the robot body 100.


The body 100, as shown in FIG. 1B, has a substantially circular perimeter, and the body 100 shown in FIG. 1C has a substantially rectangular perimeter; however, other shapes may be suitable as well, such as a substantially pentagonal or tombstone shape. In some implementations, the body 100 includes a two-part articulated body, each part having a different shape than the other part. For example, the two-part articulated body has a front portion and a rearward portion. The front portion has a circular shape and the rearward portion has a rectangular or square shape. In some implementations, the robot 10 includes a frame-and-body structure or a substantially monocoque structure.


In some examples, one or more edge following sensors 310 (also referred to as cut edge detectors) and edge calibrators 320 (e.g., a grass character sensor) are mounted on the body 100. FIGS. 1B and 1C depict an exemplary placement of boundary sensors 340, a bumper 110 (which may be coupled to two or more displacement sensors to provide impact directionality), and at least one grass sensor 330 (e.g., a sensor that determines a presence of grass) on the body 100. An active or passive fore grass comber 510 precedes the surface treater 200 and an aft grass comber 520 follows each wheel 410, 420, 430. Referring to FIG. 1C, the bumper 110 includes a first portion 110a and a second portion 110b. A configuration and height of the bumper 110, in some instances, are arranged according to a ground clearance or a cut height of the cutter 200. The bumper height may be lower than the cut height of the cutter 200. Also, the bumper 110 may rise and lower with the cutter 200.


Referring to FIG. 2, a method of lawn cutting with a robotic lawnmower 10 having a drive system 400, a sensor system 300, and a cutter system 200 carried by a body 100 includes step S10 of activating the drive system 400 to maneuver the robotic lawnmower 10 across a lawn 20, step S30 of detecting a swath edge 26 with the swath edge detector 310, and step S40 of following a detected swath edge. The method may include step S20 of orienting blades of grass of the lawn 20 with a grass arranger 440 carried by the body 100 forward of a swath edge detector 310 carried by the body 100. The method includes step S50 of erecting blades of grass of the lawn 20 with a fore grass comber 510 carried by the body 100 forward of the cutter 200, step S60 of cutting the lawn 20 with the cutter 200, and step S70 of arranging blades of grass of the lawn 20 with an aft grass comber 520 carried by the body 100 rearward of the cutter 200 and/or the drive system 400. In some examples, the method includes one or more of the following steps: step S80 of continuously scanning for an absence of lawn 20 with a lawn detector 330 carried by the body 100, where the drive system 400 redirects the robot 10 in response to detecting an absence of lawn 20; step S90 of continuously scanning for a body of liquid proximate the robot 10 with a liquid detector carried by the body 100, where the drive system 400 redirects the robot 10 in response to detecting a body of liquid; step S100 of continuously scanning for a potential obstacle proximate the robot 10 with a proximity sensor (e.g., an infrared sensor) carried by the body 100, where the drive system 400 redirects the robot 10 in response to detecting a potential obstacle; and step S110 of continuously scanning for boundary markers 810 with a boundary detector 1500 carried by the body 100, where the drive system 400 redirects the robot 10 in response to detecting a boundary marker 810 (discussed below).


Referring to FIGS. 3 and 4, the robot 10 includes a robot controller 150 disposed within the robot body 100. The robot controller 150 (executing a control system 152) may execute routines 155 and behaviors 157 that cause the robot 10 to take an action, such as maneuvering in a (virtual) wall following manner, or changing its direction of travel when an obstacle is detected. The robot controller 150 can maneuver the robot 10 in any direction across the lawn by independently controlling the rotational speed and direction of each wheel module 411, 421, 431 (FIG. 1B) rotating the wheels 410, 420, 430, respectively. For example, the robot controller 150 can maneuver the robot 10 in the forward F, reverse (aft) A, right R, and left L directions. The robot controller 150 may direct the robot 10 over a substantially random (e.g., pseudo-random) path while traversing the lawn 20. The robot controller 150 can be responsive to one or more sensors (e.g., bump, proximity, wall, stasis, and cliff sensors) disposed about the robot body 100. The robot controller 150 can redirect the wheel modules 411, 421, 431 in response to signals received from the sensors (e.g., sensor system 300), causing the robot 10 to avoid obstacles and clutter while traversing the lawn.
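As a concrete illustration of how a controller of this kind might translate a commanded heading change and forward speed into independent wheel commands, the following minimal sketch converts a forward velocity and turn rate into left and right wheel speeds for a two-wheel differential drive. The wheel base value and function names are assumptions for illustration, not taken from the patent.

    # Minimal differential-drive sketch (hypothetical values, not from the patent):
    # convert a commanded forward speed and turn rate into per-wheel speeds.
    WHEEL_BASE_M = 0.4  # assumed distance between the drive wheels

    def wheel_speeds(forward_m_s, turn_rad_s, wheel_base_m=WHEEL_BASE_M):
        """Return (left, right) wheel speeds in m/s for a differential drive."""
        left = forward_m_s - turn_rad_s * wheel_base_m / 2.0
        right = forward_m_s + turn_rad_s * wheel_base_m / 2.0
        return left, right

    # Example: drive forward at 0.5 m/s while turning left at 0.2 rad/s.
    print(wheel_speeds(0.5, 0.2))  # -> (0.46, 0.54)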


In some implementations, to achieve reliable and robust autonomous movement, the robot 10 may include a sensor system 300 having several different types of sensors, which can be used in conjunction with one another to create a perception of the robot's environment sufficient to allow the robot 10 to make intelligent decisions about actions to take in that environment. The sensor system 300 may include one or more types of sensors 310, 320, 340 (shown in FIGS. 1B and 1C) supported by the robot body 100, which may include obstacle detection obstacle avoidance (ODOA) sensors, communication sensors, navigation sensors, etc. The ODOA sensors may include, but are not limited to: a cliff sensor detecting a cliff proximate the robot body 100 or proximate a drive element of the drive system 400; a drop sensor detecting a drop of a drive element (e.g., wheels 410, 420, 430) of the drive system 400; an accelerometer detecting a speed of the robot lawnmower 10; and/or a confinement sensor determining a proximity of the robot lawnmower to a boundary marker 810. Additional sensors 310, 320, 330 may include, but are not limited to, following sensors 310, 320, 330, edge calibrators 320, grass sensors 330, proximity sensors, contact sensors, a camera (e.g., volumetric point cloud imaging, three-dimensional (3D) imaging or depth map sensors, visible light camera and/or infrared camera), sonar, radar, LIDAR (Light Detection and Ranging, which can entail optical remote sensing that measures properties of scattered light to find range and/or other information of a distant target), LADAR (Laser Detection and Ranging), etc. In some implementations, the sensor system 300 includes ranging sonar sensors, proximity cliff detectors, contact sensors, a laser scanner, and/or an imaging sonar.


In some examples, the sensor system 300 includes an inertial measurement unit (IMU) 360 in communication with the controller 150 to measure and monitor a moment of inertia of the robot lawnmower 10 with respect to the overall center of gravity CGR of the robot lawnmower 10. The IMU 360 may monitor a tilt of the robot 10 to allow the robot 10 to avoid mowing or maneuvering above a maximum robot tilt angle. For example, when the IMU 360 detects a robot tilt, the robot lawnmower 10 may compare a measured robot inclination with known values to determine whether it is maneuvering over a threshold, tree roots, humps, hillocks, small hills, or other surface phenomena that may be treated as obstacles but are not easily detectable by bumpers or proximity sensors. The controller 150 may monitor any deviation in feedback from the IMU 360 from a threshold signal corresponding to normal unencumbered operation. For example, if the robot 10 begins to pitch away from an upright position, it may be impeded, or someone may have suddenly added a heavy payload. In these instances, it may be necessary to take urgent action (including, but not limited to, evasive maneuvers, recalibration, and/or issuing an audio/visual warning) in order to assure safe operation of the robot 10.
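One simple way to realize a tilt check of this kind is to estimate the tilt angle from the IMU's gravity vector and compare it against a maximum allowed angle. The sketch below is illustrative only; the threshold value and function name are assumptions rather than the patent's implementation.

    import math

    MAX_TILT_DEG = 20.0  # assumed maximum safe tilt angle (illustrative)

    def tilt_exceeded(accel_x, accel_y, accel_z, max_tilt_deg=MAX_TILT_DEG):
        """Estimate tilt from the gravity vector reported by the IMU and
        flag whether it exceeds the allowed maximum."""
        tilt_rad = math.atan2(math.hypot(accel_x, accel_y), accel_z)
        return math.degrees(tilt_rad) > max_tilt_deg

    # A reading dominated by the z-axis (robot nearly upright) is within limits.
    print(tilt_exceeded(0.5, 0.3, 9.7))  # False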


When accelerating from a stop, the controller 150 may take into account a moment of inertia of the robot 10 about its overall center of gravity CGR to prevent the robot 10 from tipping. The controller 150 may use a model of its pose, including its current moment of inertia. When payloads are supported, the controller 150 may measure a load impact on the overall center of gravity CGR and monitor movement of the moment of inertia of the robot 10. If this is not possible, the controller 150 may apply a test torque command to the drive system 400 and measure actual linear and angular acceleration of the robot using the IMU 360, in order to experimentally determine safe limits.
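One way to read the test-torque procedure is as a direct application of the relation torque = inertia x angular acceleration: apply a known torque command, measure the resulting angular acceleration with the IMU, and infer an effective moment of inertia. The short sketch below shows that arithmetic under those assumptions; the values are invented for illustration.

    def estimate_moment_of_inertia(test_torque_nm, measured_angular_accel_rad_s2):
        """Infer an effective moment of inertia from a test torque command
        and the angular acceleration measured by the IMU (tau = I * alpha)."""
        if measured_angular_accel_rad_s2 == 0:
            raise ValueError("no measurable acceleration; cannot estimate inertia")
        return test_torque_nm / measured_angular_accel_rad_s2

    # Example with invented values: 2.0 N*m producing 0.8 rad/s^2.
    print(estimate_moment_of_inertia(2.0, 0.8))  # 2.5 kg*m^2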


In some examples, the drive system 400 includes left and right driven wheels 410, 420 and a trailing wheel 430 (e.g., a caster) (see FIG. 1B). In some implementations, the drive system 400 includes at least one drive wheel 410, 420 rotated by a respective wheel module 411, 421, such as a motor or other drive mechanism (e.g., an electric motor supplied power from a consumer-level battery, fuel cell, large capacitors, microwave receiver, an internal/external combustion engine powered by an onboard fuel source, a hydraulic/pneumatic motor powered by one of the aforementioned power sources, or large potential energy sources such as wound or compressed springs, hydraulic or pneumatic accumulators, vacuum accumulators, flywheels, or compressed air).


Referring again to FIG. 3, in some implementations, the robot 10 includes a navigation system 500 configured to allow the robot 10 to navigate the lawn 20 without colliding into obstacles or going outside a configured perimeter 21 of the lawn 20. Moreover, the navigation system 500 can maneuver the robot 10 in deterministic and pseudo-random patterns across the lawn 20. The navigation system 500 may be a behavior based system stored and/or executed on the robot controller 150. The navigation system 500 may communicate with the sensor system 300 to determine and issue drive commands to the drive system 400. The navigation system 500 influences and configures the robot behaviors 157, thus allowing the robot 10 to move in a systematic, preplanned manner. In some examples, the navigation system 500 receives data from the sensor system 300 and plans a desired path for the robot 10 to traverse.


In some implementations, the navigation system 500 includes a localization system 550. The localization system 550 determines a global position of the robot lawnmower 10 with respect to a global origin. In some implementations, the global origin coordinates coincide with a base station 12 from which the robot lawnmower 10 launches a run. In some examples, the localization system 550 stores the global position of the robot lawnmower 10 in the non-transitory memory 152b at a threshold period of time, such as every 10, 20, or 30 seconds, or any other interval. In some examples, the localizing system 550 includes the IMU 360 or a global positioning sensor (GPS) for determining the position of the robot 10 with respect to a global origin (e.g., the base station 12).
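The periodic storage of global positions could look like the following sketch, which samples the localization system at a fixed interval and appends each pose to memory. The interval, the `localizer.current_position()` call, and the `stop_event` object are hypothetical names assumed for illustration.

    import time

    SAMPLE_PERIOD_S = 10.0  # assumed logging interval (e.g., 10, 20, or 30 s)

    def log_global_positions(localizer, stored_positions, stop_event):
        """Append the robot's global position to memory every SAMPLE_PERIOD_S
        seconds until asked to stop. 'localizer' is a hypothetical object
        exposing current_position() -> (x, y) relative to the global origin;
        'stop_event' is a threading.Event-style flag."""
        while not stop_event.is_set():
            stored_positions.append(localizer.current_position())
            time.sleep(SAMPLE_PERIOD_S)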


In some implementations, the robot 10 includes a teach system or teach monitor 600 in communication with one or more of the controller 150, the sensor system 300, the drive system 400, and the navigation system 500. The teach monitor 600 is also in communication with an operator feedback unit 700 (FIG. 7B) having an operator display 750. The operator feedback unit 700 may receive an input from an operator indicating that the operator wants to teach a perimeter 21 (FIG. 7D) of the lawn 20 to the robot lawnmower 10 (e.g., by pressing a START/PAUSE button 720, discussed below). The teach monitor 600 determines whether the robot lawnmower 10 is in a teachable state or an unteachable state. The teach monitor 600 may make this determination based on sensor signals received from the sensor system 300 and/or the navigation system 500. During the teachable state, the robot lawnmower 10 is localized (i.e., it can determine its absolute or relative position with respect to a reference point) and on traversable terrain. During the unteachable state, the robot lawnmower 10 is not localized and/or not on traversable terrain.


Referring to FIGS. 1B and 4, in some implementations, the controller 150 (e.g., a device having one or more computing processors 152a in communication with non-transitory memory 152b capable of storing instructions executable on the computing processor(s) 152a) executes a control system 152, which includes a routine system 154, a behavior system 156, and a control arbitration system 158 in communication with each other. The control arbitration system 158 allows robot applications 158b to be dynamically added and removed from the control system 152, and facilitates allowing applications 158b to each control the robot 10 without needing to know about any other applications 158b. In other words, the control arbitration system 158 provides a simple prioritized control mechanism between applications 158b and resources 159 of the robot 10.


The applications 158b can be stored in non-transitory memory 152b or communicated to the robot 10 to run concurrently (e.g., on a processor) and simultaneously control the robot 10. The applications 158b may access behaviors 157 of the behavior system 156 and routines 155 of the routine system 154. The independently deployed applications 158b are combined dynamically at runtime to share robot resources 159 (e.g., drive system 400 and/or cutting system 200). A low-level policy is implemented for dynamically sharing the robot resources 159 among the applications 158b at run-time. The policy determines which application 158b has control of the robot resources 159 as required by that application 158b (e.g., a priority hierarchy among the applications 158b). Applications 158b can start and stop dynamically and run completely independently of each other. The control system 152 also allows for complex behaviors 157 which can be combined together to assist each other.


The control arbitration system 158 includes one or more application(s) 158b in communication with a control arbiter 158c. The control arbitration system 158 may include components that provide an interface to the control arbitration system 158 for the applications 158b. Such components may abstract and encapsulate away the complexities of authentication, distributed resource control arbiters, command buffering, coordination of the prioritization of the applications 158b, and the like. The control arbiter 158c receives commands from every application 158b, generates a single command based on the applications' 158b priorities, and publishes it for its associated resources 159. The control arbiter 158c receives state feedback from its associated resources 159 and may send it back up to the applications 158b. The robot resources 159 may be a network of functional modules (e.g., actuators, drive systems, and groups thereof) with one or more hardware controllers. The commands of the control arbiter 158c are specific to the resource 159 to carry out specific actions. A dynamics model 158a executable on the controller 150 is configured to compute the center of gravity (CG), moments of inertia, and cross products of inertia of various portions of the robot 10 for assessing a current robot state.
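A minimal illustration of prioritized arbitration of this kind: each application submits a command with a priority, and the arbiter publishes only the highest-priority command to the shared resource. This is a sketch under assumed data structures and an assumed tie-breaking policy, not the patent's control arbiter.

    from dataclasses import dataclass

    @dataclass
    class Command:
        source: str    # name of the application issuing the command
        priority: int  # higher value wins arbitration (assumed convention)
        payload: dict  # resource-specific command, e.g. wheel speeds

    def arbitrate(commands):
        """Return the single command to publish to the resource, chosen by
        priority (assumed policy: highest priority wins; ties go to the
        first submitter)."""
        if not commands:
            return None
        return max(commands, key=lambda c: c.priority)

    # Example: an obstacle-avoidance application outranks the cutting behavior.
    winner = arbitrate([
        Command("cutting", priority=1, payload={"forward_m_s": 0.5}),
        Command("obstacle_avoidance", priority=5, payload={"forward_m_s": 0.0}),
    ])
    print(winner.source)  # obstacle_avoidance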


In some implementations, a routine 155 or a behavior 157 is a plug-in component that provides a hierarchical, stateful evaluation function that couples sensory feedback from multiple sources, such as the sensor system 300, with a-priori limits and information into evaluation feedback on the allowable actions of the robot 10. Since the routines 155 and behaviors 157 are pluggable into the application 158b (e.g., residing inside or outside of the application 158b), they can be removed and added without having to modify the application 158b or any other part of the control system 152. Each routine 155 and/or behavior 157 is a standalone policy. To make routines 155 and/or behaviors 157 more powerful, it is possible to attach the output of multiple routines 155 and/or behaviors 157 together into the input of another to form complex combination functions. The routines 155 and behaviors 157 are intended to implement manageable portions of the total cognizance of the robot 10.


In the example shown, the behavior system 156 includes an obstacle detection/obstacle avoidance (ODOA) behavior 157a for determining responsive robot actions based on obstacles perceived by the sensor system 300 (e.g., turn away, turn around, or stop before the obstacle). Another behavior 157 may include a virtual wall following behavior 157b for driving adjacent a detected virtual wall or boundary markers 810. The behavior system 156 may include a cutting behavior 157c for cutting the grass in a cornrow pattern, and a cliff avoidance behavior 157d (e.g., the robot 10 detects an incline and avoids falling from the incline).


Referring to FIGS. 4 and 5, in some implementations, the routine system 154 includes a teach routine 155. The data processing device 152a of the controller 150 executes the teach routine 155 when the controller 150 is in a teach mode for tracing a confinement perimeter 21 (FIG. 7D) around the lawn 20 as a human operator pilots the robot lawnmower 10. When the robot lawnmower 10 is in the teachable state, the teach routine 155 stores global positions 21a determined by the localization system 550 in the non-transitory memory 152b, and when the robot lawnmower 10 is in the unteachable state, the teach routine 155 issues an alert or indication of the unteachable state to the operator, which may be output via the operator feedback unit 700 (FIG. 7B). The teach routine 155 has a goal of storing enough perimeter positions 21i (i=1-n) (e.g., global positions determined by the localizing system 550) to map a geometric contour of the confinement perimeter 21 while the human operator pilots the robot lawnmower 10 for a duration of the teach mode. The duration of the teach mode depends on the size of the lawn 20 and on the speed at which the operator is piloting the robot lawnmower 10. The geometric contour defines a confinement perimeter 21 that the robot lawnmower 10 avoids crossing while it is autonomously cutting the lawn 20 in the autonomous mode. The teach routine 155 determines a traveled path of the robot lawnmower based on the stored perimeter positions/global positions 21a and indicates whether the traveled path begins at the global origin and ends at the global origin. In some examples, during the teachable state, the robot lawnmower 10 is on a terrain of the lawn 20 traversable by the robot lawnmower 10 during an autonomous mode and piloted at a speed of between about 0.5 meters/second and about 1.5 meters/second.
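Put procedurally, the teach routine can be pictured as a loop that records a perimeter position whenever the teach monitor reports a teachable state, alerts the operator otherwise, and finally checks whether the traced path closes on the global origin. The sketch below is a minimal illustration under assumptions: the `monitor`, `localizer`, and `feedback` objects and the closure tolerance are hypothetical, not the patent's implementation.

    import math

    CLOSURE_TOLERANCE_M = 1.0  # assumed distance for "proximate the origin"

    def teach_run(monitor, localizer, feedback, origin=(0.0, 0.0)):
        """Record perimeter positions while the operator pilots the robot,
        then report whether the traced path starts and ends near the origin."""
        positions = []
        while not feedback.stop_pressed():        # hypothetical STOP button query
            if monitor.is_teachable():            # localized and on traversable terrain
                positions.append(localizer.current_position())
            else:
                feedback.alert_unteachable()      # human-perceptible alert
        closed = (
            len(positions) >= 2
            and math.dist(positions[0], origin) < CLOSURE_TOLERANCE_M
            and math.dist(positions[-1], origin) < CLOSURE_TOLERANCE_M
        )
        return positions, closed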


In some implementations, the teach monitor 600 determines that the robot lawnmower 10 is in the unteachable state when the sensor system 300 detects at least one of an obstacle or water 25 along the drive path of the robot lawnmower 10 (e.g., via the sensor system 300), a tilt of the robot body 100 greater than a threshold tilt (e.g., via the IMU 360), a cliff, an activation of the drop sensor, a speed of the robot lawnmower 10 outside a threshold speed range, or a threshold proximity to a boundary marker. In such an instance, the operator feedback unit 700 provides the operator with feedback indicating that the operator should change the direction of the robot 10 or the method of piloting the robot 10 (discussed below).
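The unteachable-state test reads as a simple disjunction over the listed sensor conditions. A compact sketch of that check, with invented threshold values and a hypothetical `sensors` snapshot object, might look like this:

    # Invented thresholds for illustration only.
    MAX_TILT_DEG = 20.0
    SPEED_RANGE_M_S = (0.5, 1.5)
    MIN_MARKER_DISTANCE_M = 0.3

    def is_unteachable(sensors):
        """Return True if any condition from the teach monitor's list holds.
        'sensors' is a hypothetical snapshot with the attributes used below."""
        low, high = SPEED_RANGE_M_S
        return (
            sensors.obstacle_or_water_ahead
            or sensors.tilt_deg > MAX_TILT_DEG
            or sensors.cliff_detected
            or sensors.wheel_drop
            or not (low <= sensors.speed_m_s <= high)
            or sensors.nearest_marker_m < MIN_MARKER_DISTANCE_M
        )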


In some examples, the teach routine 155 requests that the human operator pilot the robot lawnmower 10 to trace the perimeter 21 about the lawn 20 a second time, and in some examples, a third or more times. The teach routine 155 compares a current global position 21a stored in the non-transitory memory to previously stored perimeter positions/global positions 21a and determines whether a variance between the two is greater than a threshold variance from a stored travel path. In some examples, the threshold variance comprises +/−30 cm. In some examples, if a current perimeter 21 is different than a previous perimeter 21, the robot lawnmower 10 requests that the operator re-pilot the robot 10 about the perimeter 21, since the robot 10 was not able to accurately determine the perimeter. For example, an operator might have hit the stop command rather than the pause command, thereby indicating teaching run completion instead of a pause. When the operator hits play to resume the teaching run, the robot lawnmower 10 does not recognize the current perimeter as previously traversed because the teaching run is still not completed. In other examples, an operator may change course on a second teaching run to pick a different path, and the robot lawnmower 10 will request a third teaching run to validate adjustments between the first and second teaching runs. In still other examples, the operator may intentionally or unintentionally move part or all of a second run so that the traced perimeter 21 is partially or completely different from the first perimeter 21 traced on the first teaching run. The robot lawnmower 10 may then determine a coordinate path falling between the first and second taught perimeters 21 and set that determined coordinate path as the taught perimeter. The more samples (i.e., perimeter runs about the perimeter 21) a robot 10 stores during the teach mode, the more accurately the robot determines the perimeter 21 of the lawn 20 during its autonomous grass cutting.
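One plausible way to compare a second teaching run against the stored path is to measure, for each newly recorded position, the distance to the nearest stored position and flag the run if any deviation exceeds the threshold variance (about 30 cm in the example above). The sketch below uses that nearest-point approach as an assumption; the patent does not prescribe a specific comparison.

    import math

    THRESHOLD_VARIANCE_M = 0.30  # +/- 30 cm, as in the example above

    def run_deviates(new_positions, stored_positions,
                     threshold_m=THRESHOLD_VARIANCE_M):
        """Return True if any point of the new run lies farther than the
        threshold from every point of the stored travel path."""
        for p in new_positions:
            nearest = min(math.dist(p, q) for q in stored_positions)
            if nearest > threshold_m:
                return True
        return False

    # If the runs agree, the taught perimeter could be kept; otherwise the
    # robot would request another teaching run, as described above.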


Referring to FIG. 6A, in some implementations, an operator manually leads the robot 10 to teach the perimeter 21 of the lawn 20 by using an IR or wireless operator feedback unit 700 that sends a signal to a receiver 151 on the robot 10 that is in communication with the controller 150. The drive system 400 is configured to follow the signal received from the operator feedback unit 700.


Another method includes guiding the robot 10 with a push bar 116 attached to the body 100. The push bar 116 may be detachable from or stowable on the robot body 100. In some cases, the push bar 116 includes a switch, speed setting, or joystick to advance and steer the robot 10. In one instance, the push bar 116 includes one or more pressure or strain sensors, monitored by the robot 10 to move or steer in a direction of pressure (e.g., two sensors monitoring left-right pressure or bar displacement to turn the robot 10). In another instance, the push bar 116 includes a dead man or kill switch 117A in communication with the drive system 400 to turn off the robot 10. The switch 117A may be configured as a dead man switch to turn off the robot 10 when a user of the push bar 116 ceases use or no longer maintains contact with the push bar 116. The switch 117A may be configured to act as a kill switch when the push bar 116 is stowed, allowing a user to turn off the robot 10. The dead man or kill switch 117A may include a capacitive sensor or a lever bar. In another instance, the push bar 116 includes a clutch 117B to engage/disengage the drive system 400. The robot lawnmower 10 may be capable of operating at a faster speed while manually operated by the push bar 116. For example, the robot 10 may operate at an autonomous speed of about 0.5 m/sec and a manual speed greater than 0.5 m/sec (including a “turbo” speed actuatable to 120-150% of normal speed). In some examples, the push bar 116 may be foldable or detachable during the robot's autonomous lawn mowing.


Referring to FIG. 6B, in yet another method of navigating the robot 10, the robot 10 includes a pull leash or retractable lead wire 118 fed through a wire guide 120 from a spool 122. In this example, the drive system 400 includes a controller 450 carried by the body 100 and controlling the release and retraction of the spool 122. The pull wire extends for 6-10 feet, for example, and the robot 10 monitors the amount of extension directly or indirectly (encoder, etc.) as well as the direction in which the wire is pulled (monitoring a position of the wire guide 120). The robot 10 follows the direction of the pull and controls speed to maintain a wire length.
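The leash-following behavior amounts to steering toward the direction of pull and regulating speed so that the extended wire length stays near a set point. The proportional-control sketch below is a rough illustration under assumed gains, set point, and sensor names; it is not taken from the patent.

    # Assumed gains and set point for illustration; not taken from the patent.
    TARGET_EXTENSION_M = 2.0
    SPEED_GAIN = 0.5
    TURN_GAIN = 1.0
    MAX_SPEED_M_S = 1.0

    def leash_follow(extension_m, pull_angle_rad):
        """Return (forward speed, turn rate) that chases the pulled leash:
        speed up as the wire pays out beyond the target length, and turn
        toward the direction reported by the wire guide."""
        forward = SPEED_GAIN * (extension_m - TARGET_EXTENSION_M)
        forward = max(0.0, min(MAX_SPEED_M_S, forward))
        turn = TURN_GAIN * pull_angle_rad
        return forward, turn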



FIGS. 7A-7H illustrate a method of piloting the robot lawnmower 10 around the lawn 20 to determine a perimeter 21 of the lawn 20. Referring to FIG. 7A, a robot lawnmower 10 is depicted being charged on a base station 12.



FIG. 7B illustrates an example user interface. The user interface may be an operator feedback unit 700 in communication with the teach monitor 600 or the controller 150. The operator feedback unit 700 may be releasably mountable on a push bar 116 of the robot 10 (e.g., attached to the robot body 100). The operator feedback unit 700 may be configured to allow an operator to remotely drive the robot lawnmower 10. Moreover, the operator feedback unit 700 may emit a human-perceptible signal configured to alert the human operator of an unteachable state and indicate a piloting correction in a pose or motion of the robot lawnmower 10 that returns the robot lawnmower 10 to the teachable state. The human-perceptible signal may be one or more of a visual, audible, or sensory feedback alert, such as vibration. In one example, the human-perceptible signal is a visual green-yellow-red light indicating the teachable state, movement away from the teachable state, and an unteachable state, respectively. For example, if the teachable condition is that of an operator pushing the robot lawnmower within a threshold speed range during a teaching run, the visual indicator would remain green while the walking speed remained within an upper and lower speed threshold and turn yellow upon approaching either speed threshold. The light would then turn red if the operator failed to heed the yellow warning. In other examples, the visual indicator may be a light of intensifying brightness indicating movement toward an unteachable state, the brightest setting being an indication of the unteachable state. In other examples, the visual indicator may be a light that changes from solid during a teachable state to a slow blink near thresholds indicating an approaching unteachable state and a high rate of blinking in an unteachable state. In some examples, the green, yellow, and red visual indication light may be combined with intensity settings and/or variable blinking to provide an operator with noticeable visual feedback.
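The green-yellow-red speed feedback can be summarized as a classification of the measured piloting speed against the teaching speed window. The window boundaries and the width of the warning band below are assumptions chosen for illustration.

    # Assumed teaching speed window and warning margin (illustrative only).
    SPEED_MIN_M_S = 0.5
    SPEED_MAX_M_S = 1.5
    WARNING_MARGIN_M_S = 0.1

    def speed_indicator(speed_m_s):
        """Map the piloting speed to the indicator color described above."""
        if speed_m_s < SPEED_MIN_M_S or speed_m_s > SPEED_MAX_M_S:
            return "red"      # unteachable: outside the speed window
        if (speed_m_s < SPEED_MIN_M_S + WARNING_MARGIN_M_S
                or speed_m_s > SPEED_MAX_M_S - WARNING_MARGIN_M_S):
            return "yellow"   # approaching a threshold
        return "green"        # comfortably within the window

    print(speed_indicator(1.0))   # green
    print(speed_indicator(1.45))  # yellow
    print(speed_indicator(1.7))   # red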


In some examples, the operator feedback unit 700 includes an inertial measurement unit 702a, an accelerometer 702b, a gyroscope 702c, a microphone 702d, a visual display 750, a speaker 760, a user command interface 702e, a global positioning sensor 702f, a vibrational unit 702g, a Bluetooth communication module 702h, a wireless communication module 702i, one or more locally hosted applications 702j, and/or one or more web hosted applications 702k. Therefore, the operator feedback unit 700 may be a smart phone or smart device. In some examples where the human operator places boundary markers 810 about the perimeter 21 of the lawn 20 (discussed below), the operator feedback unit 700 may be in wireless communication with the one or more boundary markers 810 positioned along the perimeter 21 of the lawn 20. The operator feedback unit 700 may display the state (e.g., an autonomous mode, or a teaching mode including a teachable state and an unteachable state of the robot 10) on its display 750. The operator feedback unit 700 receives one or more user commands and/or displays a status of the robot 10. The operator feedback unit 700 is in communication with the controller 150 such that one or more commands received by the operator feedback unit 700 can initiate execution of a routine 155 by the robot 10. In some examples, the operator feedback unit 700 includes a power button 710, which allows a user to turn the robot 10 on/off, and a STOP button 740 for indicating completion of a teaching lap around the perimeter.


In some implementations, a human operator places the robot lawnmower 10 on the charging base station 12, allowing the robot lawnmower 10 to charge. Once the robot lawnmower 10 is charged, the operator may remove the robot lawnmower 10 from the charging base station 12, by pulling the robot 10 in a rearward direction R, to teach the robot lawnmower 10 the perimeter 21 of the lawn 20. The operator pushes the power button 710 to turn the robot lawnmower 10 on. When the operator is ready to start measuring the perimeter 21 of the lawn 20, the operator presses the START/PAUSE button 720, as shown in FIG. 7C, to begin a teach run (i.e., piloting the robot 10 to determine a perimeter 21). The human operator starts a first walk around the perimeter 21 in any direction as close to the outer edges of the lawn 20 as possible, e.g., as shown in FIG. 7D. In some implementations, during the teach mode, the operator may pause the teach run and resume at a later time. Therefore, if the operator pushes the START/PAUSE button 720 during a teach run, the operator may resume measuring the perimeter at a later time, from the position where he or she stopped, by pushing the START/PAUSE button 720 again.


In some implementations, the human operator may be piloting the robot lawnmower 10 in a manner that requires correction, thus putting the robot 10 in an unteachable state. When the robot lawnmower 10 detects that it is in the unteachable state during a teach run, the robot lawnmower 10 alerts the operator (e.g., via the operator feedback unit 700) to change a direction or speed of the robot lawnmower 10, so that the robot lawnmower 10 continues to record the perimeter 21 and/or returns to traveling on traversable terrain. For instance, the robot lawnmower 10 may enter the unteachable state when the operator pushes the robot lawnmower 10 in an area of the lawn 20 where the robot 10 loses localization, when the user is on a second teaching path that varies from the first teaching path, or when the user pushes the robot lawnmower 10 too fast or pushes the robot lawnmower 10 over terrain that is too bumpy or tilted.


In some examples, the terrain is too bumpy. The operator may try to push the robot lawnmower 10 between a divot and a rock, causing the robot lawnmower 10 to tilt at an angle (e.g., 30 degrees). The operator may not teach the robot lawnmower 10 a path that goes through topography that the robot 10 cannot traverse in the autonomous mode. Therefore, the robot lawnmower 10 alerts the operator (e.g., via the operator feedback unit 700) to select a different path. As previously described, the robot lawnmower 10 may alert the operator via the operator feedback unit 700, which provides one of a visual signal on the display 750, an audible signal through the speaker 760, or a tactile signal such as a vibration from the vibrational unit 702g of the operator feedback unit 700.


In some examples, the operator is pushing the robot lawnmower 10 too fast or too slow during the teaching mode (see FIG. 7E), thus placing the robot in the unteachable state. Therefore, the robot lawnmower 10 alerts the user to either increase the speed of the robot lawnmower 10 or decrease the speed of the robot lawnmower 10. In some examples, the operator feedback unit 700 includes a speed bump 730 that will light or flash (green, yellow, or red light) when the robot lawnmower 10 is going at a speed greater than a threshold speed or lower than a threshold speed.


In some instances, as will be discussed later, boundary markers 810 placed along the perimeter of the lawn 20 aid localization of the robot lawnmower 10. When the robot lawnmower 10 loses communication with the boundary markers 810, the robot lawnmower 10 may alert the user to change paths to remain within the confinement of the boundary markers 810.


In some examples, the teaching routine 155 requires the operator to traverse the perimeter 21 of the lawn 20 a second time (or more). Once the operator completes the first teaching run, the robot 10 alerts the operator that a second run is needed. In one example, the operator hits the STOP button 740 to affirmatively indicate completion of a teaching run around the perimeter 21 of the lawn 20. In some examples, the robot 10 allows the operator to either complete the second teaching run right after the first teaching run or wait until later. In some examples, if the operator completed the second teaching run and it is different from the first teaching run, i.e., there is a variance between the two perimeters 21 greater than a threshold variance, then the robot 10 alerts the user that the second run (or a subsequent run) is too different from the previous run, and thus the robot 10 requires another teaching run to learn the perimeter 21 of the lawn 20. In some examples, when the operator completes a teaching run, the display 750 of the operator feedback unit 700 displays a first shape of the first perimeter 21f. After the operator completes a subsequent teaching run, the display 750 of the operator feedback unit 700 displays the shape of the most recent perimeter 21s overlaid on top of the shape of the first perimeter 21f. Therefore, the operator is capable of viewing the different shapes 21f, 21s of the traversed perimeter 21. When the shapes 21f, 21s are significantly different (e.g., having a variance greater than a threshold), the display 750 of the operator feedback unit 700 alerts the operator to verify that the operator pressed the START/PAUSE button 720 and pressed the STOP button 740 indicating that the teaching run was completed. For example, the teaching run may be completed when the robot lawnmower 10 is returned to the base station 12 (global origin).


When the user has completed the teaching run, the user may dock the robot lawnmower 10 in its base station 12, allowing the robot lawnmower 10 to recharge. When the robot lawnmower 10 is in its base station 12, the operator may press the stop button 740 (FIGS. 7B and 7F), indicating that the teach run is complete.


Referring to FIGS. 3 and 8A-8G, in some implementations, the robot 10 includes a boundary detection system 800 that includes a boundary detection scanner 350 (see FIGS. 1B and 1C) disposed on the robot body 100 and configured to perform a scan match of the perimeter 21. Additionally, in implementations including boundary markers 810 placed along the perimeter 21 of the lawn 20, the boundary markers 810 are individually identifiable by adjacent scan match data and the robot lawnmower 10 is able to localize to the individually identifiable boundary markers 810. In some implementations, the boundary markers 810 may include other individual identification means perceptible to the robot lawnmower 10, such as a bar code or encoded signal, to enable the robot lawnmower 10 to localize to them.


As shown in the figures, boundary markers 810 (e.g., beacons) are placed around the perimeter of the lawn 20 to constrain or influence a behavior 157 of the robot 10. The boundary markers 810 create a virtual wall that constrains the robot 10 from going outside its boundaries. To create the virtual wall, the boundary markers 810 are each within a line of sight of another adjacent beacon 810. The boundary markers 810 may include a home marker 810a that an operator can place in a position indicating a global origin (e.g., the base station 12 or two boundary markers placed side by side). The operator distributes the boundary markers 810 as evenly as possible along the perimeter of the lawn 20 or confinement area, making sure that the major corners within the lawn 20 are occupied by the boundary markers 810. Each boundary marker 810 may be positioned to be in the line of sight of a forward boundary marker 810 and a rearward boundary marker 810.


In some implementations, the teach routine 155 determines the travel path of the robot lawnmower 10 by determining an angle and range of the robot lawnmower 10 to three or more boundary markers 810 in the line of sight of the robot 10. The types of boundary markers 810 may include LIDAR scan-match targets, passive LIDAR retro-reflectors (beacons), or both of those together. In some examples, the boundary markers 810 include RADAR scan-matching targets (blips), RADAR retro-reflectors (passive beacons), or both. The boundary markers 810 may include Ultra-wide Band (UWB) beacons (which operate on time of flight and require an emitter on the robot 10). Other types of boundary markers 810 may also be used, provided that the robot 10 can communicate with the boundary marker 810 (i.e., the robot 10 includes a receiver communicating with the boundary marker 810).
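Localizing from angle and range to visible boundary markers can be illustrated with a small estimator: each (range, bearing) observation of a marker at a known position yields one estimate of the robot's position, and averaging the estimates gives a simple fix. This sketch assumes the robot's heading is known (e.g., from the IMU) and is only one of many possible formulations, not the patent's method.

    import math

    def localize(observations, robot_heading_rad):
        """Estimate the robot's (x, y) from (marker_x, marker_y, range_m,
        bearing_rad) tuples, where bearing is measured relative to the
        robot's heading. Each observation implies robot = marker - offset."""
        xs, ys = [], []
        for marker_x, marker_y, range_m, bearing_rad in observations:
            world_angle = robot_heading_rad + bearing_rad
            xs.append(marker_x - range_m * math.cos(world_angle))
            ys.append(marker_y - range_m * math.sin(world_angle))
        return sum(xs) / len(xs), sum(ys) / len(ys)

    # Example: three markers observed from a robot at roughly (2, 3), heading 0.
    obs = [(5.0, 3.0, 3.0, 0.0), (2.0, 7.0, 4.0, math.pi / 2), (-1.0, 3.0, 3.0, math.pi)]
    print(localize(obs, 0.0))  # approximately (2.0, 3.0)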


Referring to FIGS. 7H and 9, in some implementations, a method 900 teaches a robot lawnmower the perimeter of an area within the lawn, allowing the robot to autonomously mow the lawn 20 at a later time. The method 900 includes receiving 902, at a data processing device, global positions 21a of a robot lawnmower with respect to a global origin. The method 900 includes receiving 904, at the data processing device, a state of the robot lawnmower, the state comprising a teachable state or an unteachable state, wherein, in the teachable state, the robot lawnmower is localized and on traversable terrain. Moreover, the method 900 includes executing 906, on the data processing device, a teach routine 155 for tracing a confinement perimeter around a lawn 20 as a human operator pilots the robot lawnmower. When the robot lawnmower is in the teachable state, the teach routine 155 stores received global positions 21a in non-transitory memory, and when the robot lawnmower is in the unteachable state, the teach routine 155 issues an indication of the unteachable state.


In some implementations, during the teachable state, the robot lawnmower 10 is on a terrain of the lawn 20 traversable by the robot lawnmower 10 during an autonomous mode and piloted at a speed of between about 0.5 meters/second and about 1.5 meters/second. The method 900 includes emitting a human-perceptible signal when the robot lawnmower 10 is in the unteachable state. The human-perceptible signal is configured to alert a human operator of the unteachable state and indicate a piloting correction in a pose or a motion of the robot lawnmower 10 that returns the robot lawnmower 10 to the teachable state. The teach routine 155 stores enough perimeter positions/global positions 21a to map a geometric contour of the confinement perimeter 21 while a human operator pilots the robot lawnmower 10 for a duration of the teach mode. The geometric contour defines a confinement perimeter 21 that the robot lawnmower 10 avoids crossing in the autonomous mode.


In some examples, the robot lawnmower 10 includes an operator feedback unit 700. The method 900 includes displaying the state of the robot lawnmower 10, e.g., on the display 750 of the operator feedback unit 700. The human-perceptible signal includes one or more of a visual signal displayed on the display 750 of the operator feedback unit 700, an audible signal outputted from the speaker 760 of the operator feedback unit 700, and a tactile signal such as a vibration from the vibrational unit 702g of the operator feedback unit 700. In some examples, the operator pilots the robot lawnmower 10 via the operator feedback unit 700. Therefore, the method 900 may include receiving, at the data processing device 152a, a piloting command for remotely piloting the robot 10.


In some implementations, the method 900 includes receiving, at the data processing device, a moment of inertia of the robot lawnmower, a proximity of an obstacle or water along a drive path of the robot lawnmower, a tilt of the robot body, a cliff proximate the robot body or a drive element of the drive system, a drop of a drive element of the drive system, a speed of the robot lawnmower, and a proximity of the robot lawnmower to a boundary marker. The method 900 includes determining that the robot lawnmower 10 is in the unteachable state when the data processing device 152a receives at least one of an obstacle or water along the drive path of the robot, a tilt of the robot body greater than a threshold tilt, a cliff, an activation of the drop sensor, a speed of the robot lawnmower outside a threshold speed range, or a threshold proximity to a boundary marker.


The teach routine 155, executing on the data processing device 152a, determines a traveled path of the robot lawnmower 10 based on the stored global positions 21a and indicates whether the traveled path begins at the global origin (e.g., the base station 12) and ends at the global origin. In some examples, the global origin includes coordinates marked by one or more boundary markers 810 positioned in the lawn 20. The boundary markers 810 may include LIDAR retro-reflectors, RADAR retro-reflectors, or ultra-wide band beacons. The teach routine 155, executing on the data processing device 152a, compares a current global position 21a stored in the non-transitory memory 152b to previously stored global positions 21a and determines whether a variance is greater than a threshold variance (e.g., +/−30 cm) from a stored travel path. The travel path may be localized to three or more boundary markers 810.


In some implementations, the method 900 includes performing a scan match on three or more adjacent boundary markers 810, where each of the three or more boundary markers 810 is individually identifiable by adjacent scan match data. The teach routine 155 determines the travel path of the robot lawnmower 10 by scan matching a current travel path scan with a stored travel path scan. The teach routine 155 determines the travel path of the robot lawnmower 10 by determining an angle and range of the robot lawnmower 10 to the three or more boundary markers.


Various implementations of the systems and techniques described here can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.


These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.


Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Moreover, subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The terms “data processing apparatus”, “computing device” and “computing processor” encompass all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information for transmission to suitable receiver apparatus.


A computer program (also known as an application, program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.


The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).


Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.


To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.


One or more aspects of the disclosure can be implemented in a computing system that includes a backend component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a frontend component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such backend, middleware, or frontend components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).


The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some implementations, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.


While this specification contains many specifics, these should not be construed as limitations on the scope of the disclosure or of what may be claimed, but rather as descriptions of features specific to particular implementations of the disclosure. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.


Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multi-tasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.


A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results.

Claims
  • 1. A robot lawnmower comprising: a robot body; a drive system supporting the robot body and configured to maneuver the robot lawnmower over a lawn; a localizing system configured to determine perimeter positions of the robot lawnmower with respect to an origin; a teach monitor in communication with the localizing system and configured to determine whether the robot lawnmower is in a teachable state or in an unteachable state, the robot lawnmower being in the teachable state when the robot lawnmower is localized and on traversable terrain; a controller in communication with the drive system, the localizing system, and the teach monitor, the controller comprising: a data processing device; and non-transitory memory in communication with the data processing device; wherein the controller is configured to execute a teach routine when the controller is in a teach mode for tracing a confinement perimeter around the lawn as a human operator pilots the robot lawnmower with the robot lawnmower in the teachable state, wherein the teach routine stores perimeter positions determined by the localizing system in the non-transitory memory, and wherein the teach monitor of the robot lawnmower determines whether the robot lawnmower is in the teachable state or the unteachable state; and an operator feedback unit in communication with the teach monitor or the controller and configured to emit an unteachable state indication when the teach monitor determines the robot lawnmower is in the unteachable state, the unteachable state indication configured to alert the human operator to the unteachable state and to further indicate a piloting correction of the robot lawnmower to return the robot lawnmower to the teachable state.
  • 2. The robot lawnmower of claim 1, wherein the piloting correction includes a piloting correction in a pose or a motion of the robot lawnmower calculated to return the robot lawnmower to the teachable state.
  • 3. The robot lawnmower of claim 1, wherein the operator feedback unit is in wireless communication with one or more boundary markers positioned along the perimeter of the lawn.
  • 4. The robot lawnmower of claim 1, wherein the operator feedback unit is in wireless communication with the controller and comprises a user interface configured for remotely piloting the robot lawnmower.
  • 5. The robot lawnmower of claim 1, further comprising a sensor system in communication with the teach monitor, the sensor system comprising at least one of: an inertial measurement unit responsive to a moment of inertia of the robot lawnmower; an obstacle sensor responsive to proximity of an obstacle or water along a drive path of the robot lawnmower; a tilt sensor responsive to tilt of the robot body; a cliff sensor responsive to a discrete ground elevation change proximate the robot body or a drive element of the drive system; a drop sensor responsive to a drop of a drive element of the drive system; an accelerometer responsive to speed of the robot lawnmower across the lawn; and a confinement sensor responsive to proximity of the robot lawnmower to a boundary marker.
  • 6. The robot lawnmower of claim 5, wherein the teach monitor is configured to determine that the robot lawnmower is in the unteachable state in response to a signal from the sensor system.
  • 7. The robot lawnmower of claim 1, wherein the controller, executing the teach routine, is configured to determine a traveled path of the robot lawnmower based on the stored perimeter positions and whether the traveled path begins and ends proximate the origin.
  • 8. The robot lawnmower of claim 1, wherein the origin comprises coordinates marked by one or more boundary markers positioned in the lawn.
  • 9. The robot lawnmower of claim 8, further comprising a boundary detection scanner disposed on the robot body and configured to perform a scan match on three or more adjacent boundary markers, each of the three or more boundary markers individually identifiable by adjacent scan match data.
  • 10. The robot lawnmower of claim 9, wherein the teach routine determines a travel path of the robot lawnmower by scan matching a current travel path scan with a stored travel path scan.
  • 11. A method of configuring a robotic lawnmowing system for autonomous operation, the method comprising: receiving perimeter positions of a robot lawnmower with respect to an origin; receiving a state of the robot lawnmower indicating whether the robot lawnmower is in a teachable state or in an unteachable state, the robot lawnmower being in the teachable state when the robot lawnmower is localized and on traversable terrain; and executing a teach routine that traces a confinement perimeter around a lawn as a human operator pilots the robot lawnmower, wherein the teach routine stores the received perimeter positions in non-transitory memory when the received state indicates that the robot lawnmower is in the teachable state and issues an unteachable state indication when the received state indicates that the robot lawnmower is in the unteachable state, wherein the unteachable state indication includes a human-perceptible signal emitted by an operational feedback unit to alert the human operator of the unteachable state and to further indicate a piloting correction of the robot lawnmower to return the robot lawnmower to the teachable state.
  • 12. The method of claim 11, wherein the piloting correction includes a piloting correction in a pose or a motion of the robot lawnmower selected to return the robot lawnmower to the teachable state.
  • 13. The method of claim 11, wherein the origin comprises coordinates marked by one or more boundary markers positioned in the lawn.
  • 14. The method of claim 11, wherein the teach routine compares a current perimeter position stored in the non-transitory memory to previously stored perimeter positions and determines whether a variance is greater than a threshold variance from a stored travel path.
  • 15. A robotic lawnmower system, comprising: a localizing system that records each global position of a robot lawnmower with respect to a global origin as a human operator pilots the robot lawnmower to trace a confinement perimeter around a lawn in a manual confinement perimeter teaching mode; contour memory in which a geometric contour of the confinement perimeter is recorded while the human operator pilots the robot lawnmower for a duration of operation in the manual confinement perimeter teaching mode, the geometric contour defining a perimeter that the robot lawnmower is to avoid crossing in an autonomous mode; a teaching property monitor configured to detect whether or not the robot lawnmower is piloted in a recordable state and on traversable terrain; and an operator feedback unit in communication with the teaching property monitor and comprising a progress indicator configured to, as the human operator pilots the robot lawnmower in the manual confinement perimeter teaching mode, emit a human-perceptible signal when the teaching property monitor detects that the robot lawnmower is not piloted in the recordable state or is not on traversable terrain, the human-perceptible signal being configured to alert the human operator to an unrecordable state or to an untraversable terrain and to further indicate a piloting correction to return the robot lawnmower to a recordable state or traversable terrain.
  • 16. The robotic lawnmower system of claim 15, wherein the operator feedback unit is mounted on a handle of the robot lawnmower.
  • 17. The robotic lawnmower system of claim 15, wherein the operator feedback unit is in wireless communication with one or more boundary markers positioned along the confinement perimeter and is configured to indicate a piloting correction in response to the robot lawnmower travelling proximate to or beyond the perimeter.
  • 18. The robotic lawnmower system of claim 15, wherein a travel path of the robot lawnmower is localized by determining angle and range of the robot lawnmower to three or more boundary markers.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application Ser. No. 61/972,752, filed on Mar. 31, 2014, the entire contents of which are hereby incorporated by reference.

Related Publications (1)
Number Date Country
20150271991 A1 Oct 2015 US
Provisional Applications (1)
Number Date Country
61972752 Mar 2014 US