This application relates in general to robotic guidance and, in particular, to a system and method for planning and indirectly guiding robotic actions based on external factor tracking and analysis.
Robotic control includes providing mobile effectors, or robots, with data necessary to autonomously move and perform actions within an environment. Movement can be self-guided using, for instance, environmental sensors for determining relative location within the environment. Frequently, movement is coupled with self-controlled actions to perform a task, such as cleaning, sensing, or directly operating on the environment.
Conventionally, self-guided robots use self-contained on-board guidance systems, which can include environmental sensors to track relative movement, detect collisions, identify obstructions, or provide an awareness of the immediate surroundings. Sensor readings are provided to a processor that executes control algorithms over the sensor readings to plan the next robotic movement or function to be performed. Movement can occur in a single direction or could be a sequence of individual movements, turns, and stationary positions.
Two forms of navigation are commonly employed in self-guided robots. “Dead reckoning” navigation employs movement coupled with obstruction avoidance or detection. Guided navigation employs movement performed with reference to a fixed external object, such as a ceiling or stationary marker. Either form of navigation can be used to guide a robot's movements. In addition, stationary markers can be used to mark off an area as an artificial boundary.
Dead reckoning and guided navigation allow a robot to move within an environment. However, guidance and, consequently, task completion, are opportunistic because the physical operating environment is only discovered by chance, that is, as exploration of the environment progresses. For example, a collision would teach a robot of the presence of an obstruction. Opportunistically-acquired knowledge becomes of less use over time, as non-fixed objects can move to new locations and the robot has to re-learn the environment. Moreover, opportunistic discovery does not allow a robot to observe activities occurring within the environment when the robot is idle.
Continually tracking the activity levels and usage patterns occurring within an environment over time can help avoid robotic movement inefficiencies. For example, knowledge of interim changes affecting the environment between robotic activations permits planning of coverage areas and of the frequency with which tasks are performed. Opportunistic discovery, by contrast, does not provide information sufficient to allow efficient task planning, and the single perspective generated by an individual robot affords only a partial view of the environment, which is of limited use in coordinating the actions of a plurality of robots for efficient multitasking behavior.
Therefore, there is a need for tracking temporally-related factors occurring in an environment for planning task execution of one or more self-guided robots to provide efficient movement and control.
A system and method are provided for planning and indirectly guiding the actions of robots within a two-dimensional planar or three-dimensional surface projection of an environment. The environment is monitored from a stationary perspective continually, intermittently, or as needed, and the monitoring data is provided to a processor for analysis. The processor identifies levels of activity and patterns of usage within the environment, which are provided to a robot that is configured to operate within the environment. The processor determines those areas within the environment that require the attention of the robot and the frequency with which the robot will visit or act upon those areas. In one embodiment, the environment is monitored through visual means, such as a video camera, and the processor can be a component separate from or integral to the robot. The robot and monitoring means operate in an untethered fashion.
One embodiment provides a system and method for guiding robotic actions based on external factor tracking and analysis. External factors affecting a defined physical space are tracked through a stationary environmental sensor. The external factors are analyzed to determine one or more of activity levels and usage patterns occurring within the defined physical space. At least one of movements and actions to be performed by a mobile effector that operates untethered from the stationary environmental sensor within the defined physical space are determined. The movements and actions are autonomously executed in the defined physical space through the mobile effector.
A further embodiment provides a system and method for planning and indirectly guiding robotic actions based on external factors and movements and actions. A mobile effector that operates untethered within a defined physical space is provided. External factors affecting the defined physical space and movements and actions performed by the mobile effector in the defined physical space are tracked through a stationary environmental sensor. The external factors and the movements and actions are analyzed to determine activity levels and usage patterns occurring within the defined physical space. Further movements and actions to be performed by the mobile effector are planned based on the activity levels and usage patterns. The further movements and actions are communicated to the mobile effector for autonomous execution.
Still other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein are described embodiments by way of illustrating the best mode contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and the scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
Components
Each mobile effector, or robot, is capable of autonomous movement in any direction within an environment under the control of on-board guidance. Robotic actions necessary to perform a task are also autonomously controlled. Robotic movement may be remotely monitored, but physical movements and actions are self-controlled.
The robot 11 and video camera 12 are physically separate, untethered components. The robot 11 is mobile, while the video camera 12 provides a stationary perspective. The processor 13 can either be separate from or integral to the robot 11 and functions as an intermediary between the video camera 12 and the robot 11. In one embodiment, the processor 13 is a component separate from the robot 11. The processor 13 is interfaced to the video camera 12 through either a wired or wireless connection 14 and to the robot 11 through a wireless connection 15. Video camera-to-processor connections 14 include both digital, such as serial, parallel, or packet-switched, and analog, such as CYK signal lead, interconnections. Processor-to-robot connections 15 include bi-directional interconnections. Serial connections include RS-232 and RS-422 compliant interfaces, and parallel connections include Bitronics compliant interfaces. Packet-switched connections include Transmission Control Protocol/Internet Protocol (TCP/IP) compliant network interfaces, including IEEE 802.3 (“Ethernet”) and 802.11 (“WiFi”) standard interconnections. Other types of wired and wireless interfaces, both proprietary and open standard, are possible.
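By way of illustration only, the following sketch shows how a packet-switched processor-to-robot connection 15 might carry movement and action commands and return robot feedback; the port number and the JSON message format are assumptions made for the example and form no part of the interfaces described above.

```python
# Hypothetical sketch of a packet-switched (TCP/IP) processor-to-robot link.
# The port number and the JSON message format are illustrative assumptions only.
import json
import socket

def send_command(robot_host: str, command: dict, port: int = 5000) -> dict:
    """Send one movement/action command to the robot and return its reply."""
    with socket.create_connection((robot_host, port), timeout=5.0) as conn:
        conn.sendall(json.dumps(command).encode("utf-8") + b"\n")
        reply = conn.makefile("r", encoding="utf-8").readline()
    return json.loads(reply)

# Example (placeholder host and message):
#   feedback = send_command("robot.local", {"action": "clean", "area": [2, 3]})
```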
The robot 11 includes a power source, motive power, a self-contained guidance system, and an interface to the processor 13, plus components for performing a function within the environment. The motive power moves the robot 11 about the environment. The guidance system navigates the robot 11 autonomously within the environment and can direct the robot 11 in a direction selected by, or to a marker identified by, the processor 13 based on an analysis of video camera observation data and robot feedback. In a further embodiment, the robot 11 can also include one or more video cameras (not shown) to supply live or recorded observation data to the processor 13 as feedback, which can be used to plan and indirectly guide further robotic actions. Other robot components are possible.
The video camera 12 actively senses the environment from a stationary position, which can include a ceiling, wall, floor, or other surface, and the sensing can be in any direction that the video camera 12 is capable of observing in either two or three dimensions. The video camera 12 can provide a live or recorded video feed, a series of single-frame images, or another form of observation or monitoring data. The video camera 12 need not be limited to providing visual observation data and could also provide other forms of environment observation or monitoring data. However, the video camera 12 must be able to capture changes that occur in the environment due to the movement and operation of the robot 11 and due to external factors acting upon the environment, including, for example, the movements or actions of fixed and non-fixed objects that occur within the environment over time between robot activations. The video camera 12 can directly sense the changes of objects or indirectly sense the changes by the effect made on the environment or on other objects. Direct changes include, for instance, differences in robot position or orientation, and indirect changes include, for example, changes in lighting or shadows. The video camera 12 can monitor the environment on either a continual or intermittent basis, as well as on demand of the processor 13. The video camera 12 includes an optical sensor, imaging circuitry, and an interface to the processor 13. In a further embodiment, the video camera 12 can include a memory for transiently storing captured imagery, such as a frame buffer. Other video camera components, as well as other forms of cameras or environment monitoring or observation devices, are possible.
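As a non-limiting illustration, the following sketch shows one way continual or intermittent monitoring with a transient frame buffer might be implemented; the use of an OpenCV-accessible camera, the capture interval, and the buffer depth are assumptions made for the example.

```python
# Illustrative monitoring sketch with a transient frame buffer.
# Assumes an OpenCV-accessible camera at index 0; the interval and buffer
# depth are illustrative choices, not requirements of the described system.
import time
from collections import deque

import cv2

def monitor(interval_s: float = 1.0, buffer_depth: int = 30):
    camera = cv2.VideoCapture(0)          # stationary video camera
    frames = deque(maxlen=buffer_depth)   # transient frame buffer
    try:
        while True:
            ok, frame = camera.read()
            if ok:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                frames.append((time.time(), gray))  # timestamped observation
            time.sleep(interval_s)        # intermittent rather than continual sensing
    finally:
        camera.release()
```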
The processor 13 analyzes the environment as visually tracked in the observation data by the video camera 12 to plan and remotely guide the movement and operation of the robot 11. The processor 13 can be separate from or integral to the robot 11 and includes a central processing unit, memory, persistent storage, and interfaces to the video camera 12 and the robot 11. The processor 13 includes functional components to analyze the observation data and to indirectly specify, verify, and, if necessary, modify robotic actions, as further described below.
Preferably, the processor 13 is either an embedded microprogrammed system or a general-purpose computer system, such as a personal desktop or notebook computer. In addition, the processor 13 is a programmable computing device that executes software programs and includes, for example, a central processing unit (CPU), memory, a network interface, persistent storage, and various components for interconnecting these elements.
Observation and Action Modes
Robotic actions are planned and indirectly guided through observation and action modes of operation. For ease of discussion, planning and indirect guidance are described with reference to two-dimensional space, but apply equally to three-dimensional space mutatis mutandis.
Processor
The processor can be a component separate from or integral to the robot. The same functions are performed by the processor independent of physical location. The movements and actions performed by a plurality of robots 11 can be guided by a single processor using monitoring data and feedback provided by one or more video cameras 12.
The processor 41 includes at least two interfaces 42 for robotic 47 and camera 48 communications. The processor 41 receives activity and usage data 53 and observations data 55 through the camera interface 48. The processor 41 also receives feedback 54 and sends robot movements and actions 56 and modified robot movements and actions 57 through the robotic interface 47. If the processor 41 is implemented as a component separate from the robot, the robotic interface 47 is wireless to allow the robot to operate in an untethered fashion. The camera interface 48, however, can be either wireless or wired. If the processor 41 is implemented as a component integral to the robot, the robotic interface 47 is generally built in and the camera interface 48 is wireless. Other forms of interfacing are possible, provided the robot operates in an autonomous manner without physical, that is, wired, interconnection with the video camera.
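For purposes of illustration only, the following sketch summarizes the two deployment configurations described above; the class and field names are hypothetical and do not correspond to components of the system.

```python
# Illustrative sketch of the two processor deployment configurations described
# above; the class and field names are hypothetical, not part of the system.
from dataclasses import dataclass

@dataclass
class InterfaceConfig:
    processor_integral_to_robot: bool
    camera_link: str   # "wired" or "wireless"
    robot_link: str    # "wireless" or "built-in"

    def validate(self) -> None:
        if self.processor_integral_to_robot:
            # Integral processor: robot link is built in; camera link must be wireless.
            assert self.robot_link == "built-in" and self.camera_link == "wireless"
        else:
            # Separate processor: robot link must be wireless so the robot stays untethered.
            assert self.robot_link == "wireless" and self.camera_link in ("wired", "wireless")

# Example: a processor housed separately from the robot.
InterfaceConfig(False, camera_link="wired", robot_link="wireless").validate()
```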
The image processing module 43 receives the activity and usage data 53 and the observations data 55 from the video camera 12. These data sets are analyzed by the processor 41 to respectively identify activity levels and usage patterns during observation mode 20 and robotic progress during action mode 30. One commonly used image processing technique for identifying changes occurring within a visually monitored environment is to identify changes in lighting or shadow intensity by subtracting video frames captured at different times. Any differences can be analyzed by the analysis module 44 to identify activity levels, usage patterns, and other data, such as dirt or dust accumulation. The activity levels and usage patterns can be quantized and mapped into histograms projected over a two-dimensional planar space or a three-dimensional surface space, as further described below.
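By way of example and not limitation, the following sketch shows how frame subtraction of this kind might be quantized into a two-dimensional activity histogram; the grid resolution, threshold, and function names are assumptions made for the example rather than features of the image processing module 43.

```python
# Illustrative frame-differencing sketch; the grid size and threshold are assumptions.
import numpy as np

def update_activity_histogram(prev_frame: np.ndarray,
                              curr_frame: np.ndarray,
                              histogram: np.ndarray,
                              threshold: int = 25) -> np.ndarray:
    """Accumulate per-cell activity counts from two grayscale frames of equal size."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > threshold                  # pixels whose lighting or shadow changed
    rows, cols = histogram.shape
    h, w = changed.shape
    for r in range(rows):                       # map pixels into histogram cells
        for c in range(cols):
            cell = changed[r * h // rows:(r + 1) * h // rows,
                           c * w // cols:(c + 1) * w // cols]
            histogram[r, c] += int(cell.sum())
    return histogram

if __name__ == "__main__":
    # Simulated pair of frames with one changed region (in lieu of camera input).
    base = np.full((240, 320), 128, dtype=np.uint8)
    later = base.copy()
    later[100:140, 160:220] = 255
    activity = update_activity_histogram(base, later, np.zeros((10, 10), dtype=np.int64))
```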
The activity levels and usage patterns are used by the planning module 45 to plan robot movements and actions 56 that specify the areas of coverage 58 and the frequencies of operation 59, for instance, cleaning, to be performed by the robot 11 within the environment. Although movements and actions are provided to the robot 11 by the processor 41, physical robotic operations are performed autonomously. The planning module 45 uses a stored environment map 50 that represents the environment in two dimensions projected onto a planar space or in three dimensions projected onto a surface space. In a further embodiment, the robot sends feedback 54, which, along with the observations data 55, the feedback processing module 46 uses to generate modified robot movements and actions 57. Other processor modules are possible.
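As a non-limiting illustration, the following sketch shows how areas of coverage 58 and frequencies of operation 59 might be derived from an activity histogram and an environment map; the activity threshold, visit scale, and data structures are assumptions made for the example rather than features of the planning module 45.

```python
# Illustrative planning sketch; the activity threshold and visit scale are assumptions.
import numpy as np

def plan_coverage(activity: np.ndarray,
                  free_space: np.ndarray,
                  min_activity: float = 0.1,
                  max_visits_per_day: int = 4):
    """Return (row, col, visits_per_day) entries for map cells needing attention."""
    usable = activity * free_space              # ignore cells the robot cannot reach
    peak = usable.max() or 1
    plan = []
    for (r, c), level in np.ndenumerate(usable / peak):
        if level >= min_activity:
            visits = max(1, int(round(level * max_visits_per_day)))
            plan.append((r, c, visits))
    return sorted(plan, key=lambda item: item[2], reverse=True)  # busiest areas first

if __name__ == "__main__":
    # Example: a 10 x 10 environment map in which every cell is reachable.
    schedule = plan_coverage(np.random.poisson(3.0, (10, 10)), np.ones((10, 10)))
```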
Environment Example
The robot 11, video camera 12, and processor 13 function as a logically unified system to plan and indirectly guide robotic actions within an environment. The physical environment over which a robot can operate under the planning and guidance of a processor is logically represented as a two-dimensional planar space or as a three-dimensional surface space that represents the area monitored by a video camera.
The environment 72 provides a defined physical space mappable into two dimensions and over which a robot can move and function.
The environment 72 can contain both dynamic moving objects 82 and static stationary objects 83 in either two or three dimensions. For instance, two-dimensional observation data from the video camera 12 can be used to plan the vacuuming of a floor or couch. Similarly, three-dimensional observation data can be used to assist the robot 11 in climbing a set of stairs or painting walls. Two- and three-dimensional data can be used together or separately.
Generally, the processor 13 will recognize the stationary objects 83 as merging into the background of the planar space, while the moving objects 82 can be analyzed for temporally-changing locations, that is, activity level, and physically-displaced locations, that is, patterns of usage or movement.
By comparing subsequent frames of video feed that include a reference background frame, the processor 13 can identify changes occurring within the environment 72 over time.
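By way of illustration only, the following sketch shows one way a current frame might be compared against a reference background frame to separate moving objects from the stationary background; the threshold and the summary statistics are assumptions made for the example.

```python
# Illustrative background-comparison sketch; the threshold is an assumption.
import numpy as np

def detect_changes(background: np.ndarray,
                   frame: np.ndarray,
                   threshold: int = 30):
    """Separate moving objects from the stationary background of a grayscale frame."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    foreground = diff > threshold               # pixels belonging to moving objects
    if not foreground.any():
        return 0.0, None                        # nothing moved; scene matches background
    ys, xs = np.nonzero(foreground)
    changed_fraction = foreground.mean()        # coarse activity level for this frame
    centroid = (float(ys.mean()), float(xs.mean()))  # where the change is concentrated
    return changed_fraction, centroid
```

In such a sketch, tracking the changed fraction over time would give a coarse measure of activity level, while tracking where the changes are concentrated from frame to frame would hint at patterns of usage or movement.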
Both the level of activity and patterns of usage can be evaluated to determine movements and actions for the robot.
While the invention has been particularly shown and described as referenced to the embodiments thereof, those skilled in the art will understand that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope of the invention.