The present application relates generally to nursery and greenhouse operations and, more particularly, to an adaptable container handling system including one or more robots for picking up and transporting containers such as plant containers to specified locations.
Nurseries and greenhouses regularly employ workers to reposition plants such as shrubs and trees in containers on plots of land as large as thirty acres or more. Numerous containers, e.g., hundreds or even thousands, may be brought to a field and then manually placed in rows at a designated spacing. Periodically, the containers are re-spaced, typically as the plants grow. Other operations include jamming (e.g., for plant retrieval in the fall), consolidation, and collection.
The use of manual labor to accomplish these tasks is both costly and time consuming. Attempts at automating such container handling tasks have met with limited success.
An adaptable container handling robot in accordance with one or more embodiments includes a chassis, a container transport mechanism, a drive subsystem for maneuvering the chassis, a boundary sensing subsystem configured to reduce adverse effects of outdoor deployment, and a controller subsystem responsive to the boundary sensing subsystem. The controller subsystem is configured to detect a boundary, control the drive subsystem to turn in a given direction to align the robot with the boundary, and control the drive subsystem to follow the boundary.
A method of operating an adaptable container handling robot in an outdoor environment in accordance with one or more embodiments includes providing a boundary outside on the ground, and maneuvering a robot equipped with a boundary sensing subsystem to: detect the boundary, turn in a given direction to align the robot with the boundary, and follow the boundary. The robot is operated to reduce adverse effects of outdoor boundary sensing and following.
FIGS. 20a and 20b are schematic views illustrating operation of a sensing module utilizing a shadow wall in accordance with one or more embodiments.
Each robot 20,
Electronic controller 34 is responsive to the outputs of both boundary sensing subsystem 30 and container detection subsystem 32 and is configured to control robot drive subsystem 36 and container lift mechanism 38 based on certain robot behaviors as explained below. Controller 34 is also responsive to user interface 100. The controller typically includes one or more microprocessors or equivalent programmed as discussed below. The power supply 31 for all the subsystems typically includes one or more rechargeable batteries, which can be located in the rear of the robot.
In one particular example, robot 20,
A drive train is employed to rotate yoke 46,
In one preferred embodiment, controller 34,
Controller 34,
Once positioned at the container source location, controller 34 controls drive subsystem 36 and lift mechanism 38 to retrieve another container as shown in
Thereafter, the remaining rows are filled with properly spaced containers as shown in
Similarly, distributed containers at source A,
By using multiple fairly inexpensive and simple robots that operate reliably and continuously, large and even moderately sized growing operations can save money in labor costs.
The general positioning of features on the robot is shown in
The determination of the position of a container relative to the robot may be accomplished several ways including, e.g., using a camera-based container detection system.
A flowchart of a container centering/pickup method is shown in
The preferred system in accordance with one or more embodiments generally minimizes cost by avoiding high-performance but expensive solutions in favor of lower cost systems that deliver only as much performance as required and only in the places that performance is necessary. Thus, navigation and container placement are not typically enabled using, for example, a carrier-phase differential global positioning system. Instead, a combination of boundary following, beacon following, and dead-reckoning techniques is used. The boundary subsystem provides an indication for the robot regarding where to place containers, greatly simplifying the user interface.
The boundary provides a fixed reference and the robot can position itself with high accuracy with respect to the boundary. The robot places containers typically within a few feet of the boundary. This arrangement affords little opportunity for dead-reckoning errors to build up when the robot turns away from the boundary on the way to placing a container.
After the container is deposited, the robot returns to collect the next container. Containers are typically delivered to the field by the wagonload. By the time one wagonload has been spaced, the next will have been delivered further down the field. In order to indicate the next load, the user may position a beacon near that load. The robot follows this procedure: when no beacon is visible, the robot uses dead-reckoning to travel as nearly as possible to the place it last picked up a container. If it finds a container there, it collects and places the container in the usual way. If the robot can see the beacon, it moves toward the beacon until it encounters a nearby container. In this way, the robot is able to achieve the global goal of spacing all the containers in the field, using only local knowledge and sensing. Relying only on local sensing makes the system more robust and lower in cost.
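By way of illustration only, the local decision rule described above can be sketched as follows (Python; the function, names, and coordinates are hypothetical and not part of this disclosure):

```python
from typing import Optional, Tuple

Point = Tuple[float, float]  # (x, y) field coordinates; illustrative only

def choose_goal(beacon: Optional[Point], last_pickup: Point) -> Point:
    """Where to look for the next container, using only local knowledge:
    head for the beacon when one is visible; otherwise dead-reckon back to
    the place the last container was picked up."""
    return beacon if beacon is not None else last_pickup

# No beacon visible: return to the previous pickup location.
assert choose_goal(None, (12.0, 0.5)) == (12.0, 0.5)
# A beacon marks the next wagonload: head toward it instead.
assert choose_goal((30.0, 1.0), (12.0, 0.5)) == (30.0, 1.0)
```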
Users direct the robot by setting up one or two boundary markers, positioning a beacon, and dialing in several values. No programming is needed. The boundary markers show the robots where containers are to be placed. The beacon shows the robots where to pick up the containers.
Several engineering challenges present themselves in boundary detection and following by robots in an outdoor environment. At any given time the day may be sunny or cloudy, dirt may be present on the boundary tape, shadows may fall across it (including shadows cast by the robot itself), and the like. Accordingly, in accordance with one or more embodiments, various techniques are provided to reduce adverse effects of outdoor deployment of the container handling robot.
In accordance with one or more embodiments, to reduce the adverse effects of outdoor deployment, microcontroller 204, which is a component of the overall robot controller subsystem, may include a circuit or functionality configured to modulate LEDs 202. The LEDs are modulated so that the optical signal they produce can be detected under variable ambient light conditions, often exacerbated by robot movement and shadows. The modulation frequency can be generated using a pulse width modulation function implemented in microcontroller 204. The LEDs can be modulated at a 50% duty cycle; that is, for 50% of the modulation period the LEDs emit light, and for the other 50% they are off. If infrared emitters are used, a modulation frequency of between 10 and 90 kHz is sufficient.
In accordance with one or more alternate embodiments, circuitry on circuit board 206 and/or functionality within microcontroller 204 may be configured to subtract or otherwise compensate for the detector current produced in response to sunlight from the overall detector signal. As shown in
The amplified photodiode signal 205 is passed through a low pass filter 207. In an exemplary implementation, the LEDs are modulated at 40 kHz and the low pass filter 207 has a corner frequency of 400 Hz (passes DC to 400 Hz, attenuates higher frequencies). This effectively eliminates the modulation signal and yields a signal that represents the background ambient light level (with frequencies below 400 Hz).
This ambient signal is converted to a current 209 of polarity opposite to that of the current generated in the photodiode by ambient light. The two opposing currents cancel at the summing node, and the result is input to the photodiode amplifier 203.
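The cancellation described above is performed in analog circuitry; as a rough discrete-time illustration of the same idea, the following sketch applies a first-order 400 Hz low-pass to a simulated detector signal and subtracts the result. The sample rate, amplitudes, and filter form are assumptions for illustration only:

```python
import math

FS = 1_000_000   # simulation sample rate (Hz); assumed
F_MOD = 40_000   # LED modulation frequency, 40 kHz per the example above
F_CUT = 400      # low-pass corner frequency, 400 Hz per the example above

# Coefficient of a first-order IIR low-pass with the stated corner frequency.
alpha = 1.0 - math.exp(-2.0 * math.pi * F_CUT / FS)

ambient_est = 0.0
lo, hi = float("inf"), float("-inf")
for n in range(50_000):
    t = n / FS
    ambient = 5.0 + 0.5 * math.sin(2 * math.pi * 10 * t)  # slowly drifting sunlight
    led_on = (t * F_MOD) % 1.0 < 0.5                      # 50% duty modulation
    detector = ambient + (0.2 if led_on else 0.0)         # photodiode signal
    ambient_est += alpha * (detector - ambient_est)       # tracks ambient (plus the
                                                          # modulation's small DC part)
    residual = detector - ambient_est                     # analog of the summing node
    if n > 10_000:                                        # skip the filter's start-up
        lo, hi = min(lo, residual), max(hi, residual)

# The residual swings roughly +/-0.1 about zero: the large, drifting ambient
# term is cancelled, leaving only the modulated target return for detection.
print(round(lo, 2), round(hi, 2))
```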
In accordance with one or more alternate embodiments, a shadow wall structure is provided in the boundary sensing module to reduce the adverse effects of outdoor deployment as illustrated by way of example in
A robust boundary follower can be constructed by using two photodiodes that are shadowed in a particular way using a shadow wall structure. The output of the system is the actual absolute displacement of the retro-reflective target from the center of the detector.
Referring to
Because detectors A and B are nearly co-located, were it not for the shadow wall, each detector would produce the same signal. However, because A is shadowed, it produces a smaller signal. Thus:
SA=kI*b/L (7)
and
SB=kI (8)
where I is the intensity of the light at the detector, k is a constant that accounts for detector gain, L is the width of the detector's active material, and b is the bright (not shadowed) portion of the detector. The shadowed part is d. As the target moves toward the center of the detector, b goes to L and the signals from the two detectors become equal.
From this geometry we see that L=b+d and that d/h=e/a. Substituting we get:
e=L(1−SA/SB)*a/h (9)
This is true as long as SA<SB. That condition holds when the target is to the right of the detectors. If SB<SA, then the target must be to the left of the detectors and an analogous computation can be done to determine e in that case.
Thus without a lens system, without correcting for range, and using direct ADC readings (for SA and SB), an accurate, absolute value for the position of the boundary relative to the sensor can be obtained.
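For concreteness, equation (9) and its mirrored case can be evaluated directly from raw detector readings, as in the following sketch (the geometry values are illustrative, and the negative sign for the left-side case is a convention chosen here):

```python
def boundary_offset(sa: float, sb: float, L: float, a: float, h: float) -> float:
    """Displacement e of the retro-reflective target from the detector center.

    Implements e = L*(1 - SA/SB)*(a/h) from equation (9) when SA < SB (target
    to the right of the detectors) and the analogous expression when SB < SA.
    L is the width of a detector's active material; a and h describe the
    shadow-wall geometry.
    """
    if sa < sb:
        return L * (1.0 - sa / sb) * a / h   # target right of the detectors
    if sb < sa:
        return -L * (1.0 - sb / sa) * a / h  # target left: mirrored computation
    return 0.0                               # equal signals: target centered

# Example with assumed values: detector A reads 60% of detector B.
print(boundary_offset(sa=0.6, sb=1.0, L=5.0, a=20.0, h=10.0))  # 4.0
```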
Note that the robot can maintain a generally constant distance with only one boundary sensor (front or back). However, using both sensors, and maintaining a generally constant distance for both, will allow the robot to follow the boundary (and maintain proper heading) more accurately. (Depending on mounting, the front and rear sensors may be calibrated differently; i.e., e=0 may correspond to a different distance for the front and rear sensors.)
A robot 20 can use the boundary sensor subsystem to orient and position itself, find and follow the edge(s) of the spacing area, and position containers with greater accuracy.
The boundary itself is preferably defined by a retro-reflective tape, rope, painted surface, or other element that is placed on the ground to run alongside the long edge of the active spacing area. Each robot has four very similar boundary sensors 80a, 80b, 80c, 80d positioned roughly at the four corners of the robot as shown in
The four sensors 80a, 80b, 80c, 80d can be mounted on the robot pointing outward and toward the ground as illustrated in the rear view of the robot shown in
Regardless of how they are used, the boundary sensors 80a, 80b, 80c, 80d in accordance with various embodiments have the ability to detect a relatively small target signal in bright sunlight. Each boundary sensor includes an array of infrared emitters 202 and one or more photodetectors 200a, 200b as shown in the exemplary circuit board of
S=Son−Soff (10)
The subtraction operation removes the ambient light from the signal, leaving only the light reflected from the target. The intensity of this light falls off with distance according to the inverse-square law, although this dependence can be ignored for simplicity. Each sensor can therefore detect the boundary when a portion of the boundary lies within that sensor's field of view.
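A minimal sketch of this synchronous measurement, using placeholder ADC readings, is:

```python
def target_signal(samples_led_on, samples_led_off):
    """Synchronous detection per equation (10): S = Son - Soff.

    Ambient light contributes equally to both measurements and cancels in
    the subtraction; averaging several on/off pairs reduces noise. The
    sample values below stand in for raw ADC readings.
    """
    s_on = sum(samples_led_on) / len(samples_led_on)
    s_off = sum(samples_led_off) / len(samples_led_off)
    return s_on - s_off

# Ambient adds ~800 counts to every reading; the retro-reflective target
# adds ~50 counts only while the emitters are lit.
print(target_signal([850, 848, 852], [800, 799, 801]))  # 50.0
```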
It should be noted that these fields of view are not completely discrete; the robot typically does not see perfectly within the field of view, nor is it completely blind to the boundary outside of the field of view.
After picking up a pot, the robot turns to face the boundary (based on its assumption about the correct heading to the boundary). The robot drives forward until it detects the boundary (which is also described herein as “seek” behavior), then uses boundary sensor data to position itself alongside the boundary (which is also described herein as “acquire” behavior). The front boundary sensors are used to detect and acquire the boundary.
When the Seek behavior is active, the robot moves in the (anticipated) direction of the boundary until it detects the boundary. As discussed above, in one or more embodiments, each sensor has two detectors 200a, 200b, with their signals being denoted SA and SB. When boundary material 24 comes within the field of view of the sensor and is illuminated by the emitters 202, the sum of the signals from each detector, SA and SB, increases. As the boundary approaches the center of the field of view, the sum of signals increases further. If the increase exceeds a threshold, the robot determines that it is within range of a boundary.
As the robot continues travelling forward with a boundary in the sensor's field of view, the boundary fills an increasing portion of the field of view. Then, as the field of view crosses the boundary, the boundary fills a decreasing portion. Thus, the sum of the detector signals first increases, then decreases. The peak in the signal corresponds to the boundary being centered in the field of view of the detector, allowing the robot to determine the robot's distance from the boundary. The robot might slow down to more precisely judge peak signals.
By measuring the distance to the boundary with both the left and right boundary sensors 80a, 80b, the robot can determine its angle with (i.e., orientation relative to) the boundary. This information can then be used to determine the best trajectory for the robot to follow in order to align itself parallel to the boundary. The robot can then align itself more precisely by using front and rear sensor data.
In one or more embodiments, in the Seek/Acquire behavior, the front boundary sensors 80a, 80b do not function as general-purpose range sensors. They do, however, provide limited information that can be used to determine the distance to the boundary. The following describes the information the sensors provide the robot during Seek behavior.
Let RD represent the (on-the-ground) distance from the robot's center to the center of a front sensor field of view. Let F represent the radius of that field of view. During the Seek/Acquire behavior, each front sensor 80a, 80b can provide the following information to the robot: (a) if the sum of signals exceeds a threshold, the boundary is in the field of view, so the robot knows its distance from the boundary is between (RD+F) and (RD−F); and (b) if the sum of signals peaks and starts to decrease, the boundary has just crossed the center of the sensor's field of view, so the robot knows its distance has just passed RD. By comparing the distances from the two sensors 80a, 80b, the robot can tell its approach angle.
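This per-sensor information can be summarized as a distance interval, as in the following sketch (the threshold and geometry values are illustrative only):

```python
def boundary_distance_bounds(signal_sum, threshold, peak_seen, RD, F):
    """Distance knowledge available from one front sensor during Seek.

    RD is the on-the-ground distance from robot center to the center of the
    sensor's field of view; F is the radius of that field of view. Returns a
    (lower, upper) bound on the robot's distance to the boundary, or None
    while no boundary has been detected.
    """
    if peak_seen:
        return (RD, RD)            # signal peaked: boundary crossed the center
    if signal_sum > threshold:
        return (RD - F, RD + F)    # boundary lies somewhere in the field of view
    return None

# Assumed values: RD = 750 mm (the example range below), F = 12.5 mm.
print(boundary_distance_bounds(1.2, 1.0, False, 750.0, 12.5))  # (737.5, 762.5)
```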
Alternatively, if one front sensor crosses the boundary and too much time elapses without the other front sensor detecting the boundary, the robot can infer that its approach angle is very shallow.
Ideally, the front sensors 80a, 80b would look very far in front of the robot to give the robot space to react at high speeds. However, the distance the boundary sensor can look forward is geometrically limited by the maximum angle at which retro-reflection from the boundary marker is reliable (typically about 30°) and the maximum height at which the boundary sensor can be mounted on the robot. The sensor mountings are designed to balance range and height limitations, resulting in a preferred range requirement wherein the front sensors are able to detect boundary distance at a minimum range of about 750 mm in one example.
Boundary sensor mountings may be adjusted to improve performance, so the range could potentially increase or decrease slightly. Adjustments could also be made to cope with undulations in the ground.
The fore/aft field of view of the boundary sensor should be sufficiently large that, as the robot approaches the boundary at a maximum approach speed during Seek behavior, the boundary will be seen multiple times (i.e., over multiple CPU cycles of the microcontroller 204) within the field of view. In one example, if the robot travels at 2 m/s and the update rate is 400 Hz, then the robot travels 2/400=0.005 m or 5 mm between updates. Assuming that 5 update cycles are sufficient for detection, a minimum field of view of 25 mm should suffice. The front sensors' field of view preferably has a minimum fore/aft length (robot X length) of 25 mm (i.e., center ±12.5 mm).
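The field-of-view arithmetic above can be captured in one line (speed, update rate, and sample count taken from the example):

```python
def min_fore_aft_fov_mm(speed_m_s: float, update_hz: float, samples: int) -> float:
    """Minimum fore/aft field of view (mm) so the boundary is sampled
    `samples` times while crossing it at the given speed and update rate."""
    return 1000.0 * samples * speed_m_s / update_hz

print(min_fore_aft_fov_mm(2.0, 400.0, 5))  # 25.0 mm, matching the example
```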
After the robot has acquired the boundary, the Follow Boundary behavior will become active. In Follow Boundary behavior, the front sensors should overlap the boundary.
While the robot moves to acquire the boundary, it will continue sensing. (It does not need to plan a perfect blind trajectory based on the data it obtains during Seek behavior.) As a result, the robot is fairly tolerant to errors in distance. As long as the robot detects the boundary during Seek behavior, it knows the boundary is roughly within its field of view, which enables it to begin to turn. As it turns, it continues to receive data from the front boundary sensors 80a, 80b. If the front sensors' field of view crosses the boundary too quickly, the robot can adjust its perceived position. The front sensors 80a, 80b should detect the boundary at a consistent point within their field of view, within ±38 mm in one example.
A robot can also use the difference between the two sensors 80a, 80b to compute its angle of approach. The robot, in one example, can reliably Acquire the boundary if it can determine its approach angle to within ±10 degrees. Assume the robot approaches the boundary at an angle A, let w be the distance between the two fields of view, and let T be the distance the robot will have to travel before the second sensor detects the boundary.
To ensure that the robot's reported angle is within 10 degrees of the actual angle A, the robot should know T within some range ±x.
It can be assumed that the robot must be approaching at an angle somewhat close to perpendicular (or the robot's search will time out before the second sensor detects the boundary). Assume, for example, the robot is within 30° of perpendicular. From the geometry, T=w/tan(A). Given, in one example, that w=748 mm and A=60°, we can compute T≈431 mm. If the actual angle were instead A+10°=70°, the travel distance would be w/tan(70°)≈272 mm; the difference defines x.
Solving for x, we get x≈−159 mm.
So, if the robot approaches the boundary at an angle, e.g., within 30° of perpendicular, and wants to detect its heading within 10°, the second sensor should detect the boundary within an accuracy of about 160 mm. This is much more forgiving than the 38 mm example noted above, so heading does not impose any additional constraints. (Likewise, at a 60° approach, solving for A−10° is also more forgiving.)
Note that the distance sensitivities become higher as the robot approaches closer to perpendicular. Even at 88°, however, the robot need only detect the crossing distance to within about 130 mm, which is still much less stringent than the 38 mm example above. Also, the worst case has the first sensor detecting as soon as possible and the second sensor detecting as late as possible, so in practice, in some embodiments, the distances may be cut in half. But this is likely to be rare, and even so, the accuracy requirements are still less stringent than the 38 mm example above.
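These tolerances follow from T=w/tan(A) and can be checked numerically; the sketch below reproduces the worked numbers above up to rounding:

```python
import math

def travel_between_crossings(w_mm: float, approach_deg: float) -> float:
    """Signed distance T the robot travels between the first and second
    front-sensor boundary crossings, for fields of view w apart and an
    approach angle A measured from the boundary: T = w / tan(A)."""
    return w_mm / math.tan(math.radians(approach_deg))

w = 748.0
T60 = travel_between_crossings(w, 60.0)       # ~431 mm, as computed above
x = travel_between_crossings(w, 70.0) - T60   # tolerance for a 10 degree error
print(round(T60), round(x))                   # 432 -160 (text: 431, -159)

# Near perpendicular the 10 degree window straddles 90 degrees; the signed
# span at an 88 degree approach is still ~130 mm, as noted above.
print(round(travel_between_crossings(w, 98.0) - travel_between_crossings(w, 88.0)))
```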
The Follow Boundary behavior becomes active once the robot is positioned generally parallel to the boundary with the intent to travel along it. The robot servos along the boundary and attempts to maintain a constant distance.
The robot uses two side boundary sensors (front and rear) to follow the boundary. (It is possible to perform this function less accurately with only one sensor.) Each sensor reports an error signal that indicates the horizontal distance from the boundary to the center of its field of view (or some other point determined by bias).
When the robot is following the boundary, there are preferably a few inches between the wheel and the boundary tape (e.g., 3″ or 76 mm) when the boundary tape is centered in the sensors' lateral field of view. The sensor mountings are designed to balance range and height limitations. The mountings are the same for Seek/Acquire and Follow behavior, so the range values are the same as well.
The width of the boundary sensor field of view (i.e., its diameter in robot Y) defines the range over which the robot can servo on the boundary marker during Follow behavior. In one example, this number is on the order of 7 inches (178 mm). To support boundary following, the front sensors' left/right field of view (robot Y width) is preferably at least 157 mm wide in one example.
The illuminated patch on the ground visible to the robot is a conic section, and the patch is longer in the fore/aft direction of the robot than it is transverse to the robot. As a result, a larger section of retro-reflective boundary is illuminated (and visible) during follow behavior, and the signal strength may be substantially higher than during seek behavior. This effect may result in less than desirable signal levels during seek behavior or, alternatively, may cause saturation during follow behavior. In accordance with one or more embodiments, the effect can be mitigated through a brute-force solution using an A/D converter with higher dynamic range. Alternately, in accordance with one or more further embodiments, the effect can be mitigated using a mask structure 300 placed over the detectors 200a and 200b to equalize the fore/aft and lateral fields of view as illustrated in the example of
Similarly, it can be noted that, particularly for the forward-facing boundary detectors, the desired size of the illuminated area on the ground visible to the robot is small relative to the distance between the light source and the illuminated area. In accordance with one or more embodiments, in the interest of minimizing power consumption, the emission angle of the light source should be matched to the geometry of the system. The emission angle can be controlled through optical means such as a collimating lens, or through the use of extremely narrow-beam LEDs (e.g., OSRAM LED part number SFH4550 (±3 degrees)).
In one example, the front sensors have a 770 mm range to the ground, and the rear sensors have a 405 mm range—so the rear sensor field of view can be proportionately smaller. The rear sensors' left/right field of view (robot Y width) in this example should be at least 113 mm wide.
Localization refers to the process of tracking and reporting the robot's absolute position and heading. In accordance with one or more embodiments, the robot's controller 34 executes Localizer software for performing these functions. There are a number of inputs to the robot's Localizer software. These can include dead reckoning, gyro input, and the like, but the boundary is preferably the only absolute position reference. It forms the spacing area's Y axis. In one example, the boundary is a primary input to localization and is used in several ways: it provides an absolute Y position reference (denoting Y=0), and it provides an absolute heading reference. The robot can derive its angle to the boundary by looking at the difference between the two front sensor distances during Seek/Acquire behavior, or between the front and back sensor distances during Follow behavior. Since the boundary forms the absolute Y axis, the robot can derive its absolute Y heading from its angle to the boundary.
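As an illustration of the heading derivation, the angle to the boundary follows from the difference between two sensor distances and their separation along the robot. The values and sign convention below are assumed for illustration:

```python
import math

def heading_from_boundary(d_front_mm: float, d_rear_mm: float,
                          separation_mm: float) -> float:
    """Angle (degrees) between the robot's axis and the boundary, from the
    lateral distances reported by two sensors mounted `separation_mm` apart.
    Positive means the nose points away from the boundary (a convention
    chosen here)."""
    return math.degrees(math.atan2(d_front_mm - d_rear_mm, separation_mm))

# Front sensor 80 mm from the tape, rear sensor 70 mm, mounted 500 mm apart.
print(round(heading_from_boundary(80.0, 70.0, 500.0), 2))  # 1.15 degrees
```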
In accordance with one or more embodiments, the boundary can include tick marks to provide an absolute indicator for where container rows may be placed. As discussed above, the boundary can be defined by a retro-reflective tape 24.
The retro-reflective tape with tick marks can be formed in various ways. In accordance with one or more embodiments, the non-reflective portions of the tape defining the tick marks 224 can comprise a non-reflective tape, paint, or other material selectively covering the retro-reflective tape 24. In one or more alternate embodiments, the retro-reflective tape 24 is formed to have an absence of reflective material in the locations of the tick marks.
The tick marks on the boundary can be used to judge distance traveled. The robot knows the width of each tick mark, and it can determine the number of ticks it has passed. Thus, the robot can determine and adjust its X position as it moves, by multiplying the number of ticks passed by the tick width. This can allow the robot to more accurately track its absolute X position.
Boundary sensor data is used for localization while executing Boundary Follow behavior. While the Boundary Follow behavior is active, the robot servos along the boundary. Thus, if the robot is following accurately, it knows its distance (i.e., the constant servo distance) and heading (i.e., parallel to the boundary).
The robot should know its Y (absolute) position relative to the boundary with good accuracy, which in some examples can be on the order of a millimeter. Sensor signal strength and accuracy are likely to be affected by environmental conditions like temperature, crooked boundaries, etc.
The robot can determine the position and orientation of the boundary by various techniques, including, e.g., integration or using a Kalman filter as it moves along the boundary. This somewhat relaxes the single-measurement accuracy requirement of the sensor.
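The source names the filtering technique without detailing it; a minimal scalar Kalman-style update for the Y offset, with assumed noise values, might look like the following sketch:

```python
def kalman_update(y_est, p_est, z, r, q):
    """One scalar Kalman step fusing a new boundary-offset measurement z.

    y_est, p_est: current offset estimate (mm) and its variance; r is the
    measurement noise variance; q is process noise added per step to model
    motion uncertainty. All noise values here are illustrative.
    """
    p_pred = p_est + q               # predict: uncertainty grows with motion
    k = p_pred / (p_pred + r)        # Kalman gain
    y_new = y_est + k * (z - y_est)  # correct toward the measurement
    return y_new, (1.0 - k) * p_pred

# Fusing a few noisy offset readings tightens the estimate as the robot moves,
# relaxing the accuracy demanded of any single measurement.
y, p = 0.0, 1e6                      # deliberately uninformative prior
for z in [78.0, 81.0, 79.5, 80.2]:
    y, p = kalman_update(y, p, z, r=4.0, q=0.25)
print(round(y, 1), round(p, 2))      # ~79.7, variance well below r
```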
In accordance with one or more embodiments, the robot can use boundary sensor data to compute two kinds of localization data: Y offset (distance to boundary) and heading (with respect to boundary). Accuracy requirements can be expressed in terms of overall robot performance (computed over multiple measurements and while executing behaviors) and in terms of sensor performance.
Over 1 meter of travel, the robot's measured Y offset from the boundary is preferably accurate within ±0.25 inches in one example. (This is determined by the accuracy requirements of pot spacing.) In order to space pots in rows that “appear straight,” pots should be placed along rows within ±1.5 inches, or about 38 mm, in one example.
In one example, using trigonometry, we can compute that for the pot furthest from the boundary (12′, or 144 inches), to achieve an error e of ±1.5 inches, the boundary angle error θ should be within approximately 0.60 degrees (tan θ = 1.5/144).
Over 1 meter of travel, the robot's measured angle from the boundary should be accurate within ±0.60 degrees in one example. In one example, individual sensors can provide error offset (as in Follow Boundary) resolution of ±1 mm.
Retro-reflectivity enables the robot to discriminate between the boundary marker and other reflective features in the environment. Typically, the retro-reflective marker will be the brightest object in the sensor's field of view. If this is true then a simple threshold test applied to the return signal strength is sufficient to eliminate false targets. However, bright features (or tall features not on the ground) could result in false boundary detections. In accordance with one or more embodiments, a simple addition to the detector board can improve performance in these cases. In
As previously discussed, the boundary tape may have a periodic pattern of reflective and non-reflective material. These alternating sections encode absolute reference points along the boundary. The non-reflective sections are referred to as “tick bars,” and the reference points are referred to as “tick marks.” During spacing, the robot can use these tick marks to more accurately determine its absolute X position. This serves the following purposes. The tick marks help determine the legal X position of rows of containers, which enables the system to avoid an accumulation of spacing error in the global X dimension. Accumulated spacing error might (a) challenge the system's ability to meet field efficiency (space utilization) requirements, and (b) make spaced pots appear irregular and inaccurate. In accordance with one or more embodiments, for teaming, each robot will broadcast its global X and Y coordinates, which requires a common coordinate reference. Because the tick sections repeat, the tick mark scheme does not provide a truly global X reference, but the sections will be large enough that this is not likely to be a problem. The robots would know their position within a section and so would be able to avoid collisions. For example, suppose the tick marks are encoded such that the pattern repeats every 100 feet. Every tick mark within a 100-foot section is then unique, but across sections the marks are not. Thus it might be possible for a first robot to believe that it is operating near a second robot when in fact the second robot is actually operating in a different 100-foot section. This will be rare in practice.
In accordance with one or more embodiments, the boundary tape can contain a series of repeating sections of regular length. Each section will be longer than the distance the robot will typically drive from source to destination, e.g., 20 meters. Each section will have the same pattern of tick bars. The relative width and pattern of the bars encodes a series of numbers indicating absolute ‘tick mark’ positions within each section.
In accordance with one or more embodiments, the robot's front sensors' field of view is longer along the fore/aft (robot X dimension) axis than that of the rear sensors. The front sensors' field of view is longer than the non-reflective sections are wide. As a result, the front sensors can disregard the non-reflective bars, although the tick marks will make the front sensors' signal strength both weaker and more variable. The rear sensors can include a lens or collimating element that makes their field of view shorter along the fore/aft (robot X dimension) axis; i.e., they cease to detect the boundary when the robot passes a non-reflective bar. However, their field of view will still be wide enough along the left/right (robot Y dimension) axis to meet the Boundary Follow behavior requirements described above.
In accordance with one or more embodiments, the rear sensors' sampling rate is high enough that the sensor signal will alternate on/off as the robot moves along the boundary. The robot can use its expected velocity and sensor data across time to compute the length of the non-reflective bars as it passes them. It can thus read the code to determine its absolute tick mark position within the section.
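For illustration, converting rear-sensor transition times into bar widths is straightforward once the velocity is known. The timestamps and speed below are hypothetical:

```python
def bar_widths_mm(transitions_s, velocity_m_s):
    """Widths (mm) of the intervals between successive rear-sensor on/off
    transitions, computed from the robot's expected velocity."""
    return [round((t1 - t0) * velocity_m_s * 1000.0)
            for t0, t1 in zip(transitions_s, transitions_s[1:])]

# The sensor goes dark over each non-reflective bar; at 1 m/s a 25 ms gap
# corresponds to a 25 mm bar. The decoded width pattern yields the tick code.
print(bar_widths_mm([0.000, 0.025, 0.100, 0.150], 1.0))  # [25, 75, 50]
```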
Pots are placed only at legal points along the boundary. In one or more embodiments, there is always a legal row at every code-repeat point (i.e., beginning of a tick section). There are other legal rows between code repeat points, referenced to positions indicated by tick marks.
When the robot is given the user-specified spacing width, it can compute the number of rows that must fit within a section (i.e., between two code-repeat points). The robot can also compute the legal X position (starting place) of every row along the boundary, relative to the tick mark positions. Note that the legal row locations do not necessarily line up with the tick mark positions. This absolute reference eliminates error in the number of rows the robot will place within a given area.
In accordance with one or more embodiments, because a row always starts at the beginning of a section (code-repeat point), the pots are not necessarily placed at exactly the user-specified width. The actual spacing width may be rounded slightly to ensure that the code-repeat point is at a legal row. But because each section is long relative to the spacing width, this difference is not significant.
More specifically, if s is the width of each section, n is the number of tick marks per section, w is the spacing width (as determined by user setting), q=floor(s/w) is the number of pots actually fitted within a section, and xt is the robot's X location (absolute within the repeating section, not absolute within the spacing area), as decoded from tick marks, then each legal row will occur where:
xt=k(n/q), where k=0, . . . , q−1 (15)
When placing a pot, the robot preferably ensures that the pot is placed in a legal row, i.e., where this condition is true.
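Equation (15) can be applied directly; the sketch below (section length, tick count, and spacing are assumed example values) also illustrates the slight rounding of the effective spacing noted above:

```python
import math

def legal_rows(section_mm: float, ticks_per_section: int, spacing_mm: float):
    """Legal row positions within one section, in tick units, per eq. (15):
    xt = k*(n/q) for k = 0..q-1, where q = floor(s/w) rows fit per section."""
    q = math.floor(section_mm / spacing_mm)
    return [k * (ticks_per_section / q) for k in range(q)]

# A 20 m section with 100 tick marks and a 600 mm user spacing gives q = 33
# rows; the effective spacing is 20000/33 ~ 606 mm, slightly rounded from 600.
rows = legal_rows(20_000, 100, 600)
print(len(rows), [round(x, 2) for x in rows[:3]])  # 33 [0.0, 3.03, 6.06]
```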
The front sensors 80a, 80b should be able to detect any portion of the boundary at least as long as the smallest diameter (currently the width) of the front sensor field of view. The tick marks may reduce the front sensors' signal strength. But even when the field of view covers the most non-reflective possible portion of the boundary, the sensors should still produce a signal strong enough to detect, and robust enough for the robot to reliably detect the signal's peak. The front sensors 80a, 80b should be able to see the boundary and effectively ignore the tick marks during both Seek and Follow behavior. As a result, the width and length of the front sensors' field of view should be larger than, e.g., at least several times, the width of the widest tick mark bar.
Likewise, in order to see ticks, the fore/aft field of view of the rear sensors should be less than the width of the narrowest bar on the boundary marker.
A maximum emitter response can be achieved using a pristine boundary tape, under bright ambient light conditions, at full range. The reading without the boundary tape, on a worst-case surface (perhaps clean ground cloth), should be significantly lower. The sensors should be able to detect reflected LED emitter light while compensating for ambient light. Emitter strength should be set properly to achieve that across a range of ambient lighting conditions. The sensors should be able to achieve the specified accuracy under a range of non-changing or slowly varying lighting conditions. These include full sunlight, darkness, and shade.
In accordance with one or more embodiments, the sensors should be insensitive to varying ambient light levels as the robot moves at its maximum velocity. These include the conditions noted above. For example, the sensor should respond robustly even while the robot moves from full shade to full sunlight. It is assumed that the frequency at which the ambient light varies will be relatively low (below 400 Hz) even when the robot is in motion. The most dramatic disruptive pattern that would be sustained in the environment over many samples could be a snow fence, e.g., with 2.5 mm slats spaced 2.5 mm apart. Assuming the robot travels at a maximum of 2 m/s, a shadow cast by this fence would result in a 400 Hz ambient light signal (a 5 mm pattern period traversed at 2 m/s). The robot should preferably be able to compensate for such a signal.
Robots in accordance with various embodiments can be configured to follow both straight and irregular boundary markers. As shown in
Having thus described several illustrative embodiments, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to form a part of this disclosure, and are intended to be within the spirit and scope of this disclosure. While some examples presented herein involve specific combinations of functions or structural elements, it should be understood that those functions and elements may be combined in other ways according to the present disclosure to accomplish the same or different objectives. In particular, acts, elements, and features discussed in connection with one embodiment are not intended to be excluded from similar or other roles in other embodiments. Additionally, elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions.
Accordingly, the foregoing description and attached drawings are by way of example only, and are not intended to be limiting.
This application is a continuation-in-part of prior U.S. patent application Ser. No. 12/378,612 filed Feb. 18, 2009, which claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 61/066,768, filed on Feb. 21, 2008; each said application incorporated herein by this reference.
Provisional Application: No. 61/066,768, filed Feb. 2008, US.
Parent Case: Ser. No. 12/378,612, filed Feb. 2009, US; Child: Ser. No. 13/100,763, US.