The embodiments described herein relate to semi-autonomous cleaning devices and more particularly, to a system and method for detecting the status of one or more components and/or systems in a semi-autonomous cleaning device for improved cleaning of surfaces.
The use of semi-autonomous devices configured to perform a set of tasks is known. For example, robots can be used to clean a surface, mow a lawn, collect items from a stocked inventory, etc. In some instances, however, known robots fail to provide a user with an indication of the robot's position, progress, and/or the status of one or more components of the system. For example, debris accumulation in the back squeegee of a cleaning robot or floor scrubber is a common problem. In manual floor scrubbers, the operator can prevent the problem by observing debris on the floor and avoiding driving the floor scrubber over it. The operator can also detect whether the squeegee is blocked by debris by visually inspecting the operation of one or more functions of the floor scrubber such as, for example, the quality of water pick-up provided by the back squeegee. In self-driving or semi-automatic floor scrubbers, the prevention and detection of debris in the back squeegee currently presents challenges that can reduce the efficacy and/or efficiency of these devices.
In order to perform autonomous cleaning, semi-autonomous cleaning devices such as floor scrubbers and sweepers need to be equipped with reliable obstacle detection and avoidance. Technologies such as three-dimensional (3D) Light Detection and Ranging (LIDAR) are expensive, and achieving 270 degrees of protection with 3D LIDAR is uneconomical.
There are cheaper vision-based alternatives available, such as active stereo and structured infrared (IR) lighting, but these technologies pose their own challenges (e.g., limited commercial-grade reliability).
Active stereo technologies may be sensitive to environmental aspects such as scene texture and illumination, and matching artifacts make it challenging to separate small objects from noise. Structured IR lighting is practically unusable under direct sunlight and cannot detect IR-absorbing or reflective materials.
Semi-autonomous cleaning devices contain motors and actuators. In instances where a motor or actuator fails, it is difficult to determine the nature of the failure.
Autonomous or semi-autonomous devices are used for cleaning or similar applications. The ability to robustly detect obstacles to avoid collisions, and sense cliffs to avoid falls, is an essential feature to allow the machine to successfully operate in a wide range of commercial, industrial, institutional, and other locations with a variety of lighting and obstruction characteristics. To achieve such ability, the robot must be equipped with a robust sensing system that observes the world along all possible motion directions of the robot. The system must also be affordable enough for customers to purchase the machine.
The active behavior and intent of various machinery are often communicated via contextual graphical user interfaces (GUIs), beeps, lights, and the like. On a self-driving device (i.e., a self-driving robot), the autonomous operation of a large machine requires an intuitive, novel arrangement of communication, so that the device can be seen and heard from a multitude of distances, at times with occlusions, and so that nearby operators can be informed of the device's presence and future actions. This allows human-device teams to work together effectively and safely in the shared workspace.
There is a desire to provide improved reliable obstacle detection and avoidance, improved sensing, improved design, improved failure detection, advanced diagnostics and expandability capabilities on semi-autonomous cleaning devices.
Embodiments described herein relate to a system that provides semi-autonomous cleaning of surfaces by a semi-autonomous cleaning device. The system provides for improved reliable obstacle detection and avoidance, improved sensing, improved design, improved failure detection, advanced diagnostics and expandability capabilities.
A system and/or method can be provided for detecting the status of one or more components and/or systems of, for example, a manual, semi-autonomous, or fully autonomous cleaning device or the like.
An exemplary embodiment of a semi-autonomous cleaning device is shown in
The frame 102 of cleaning device 100 can be any suitable shape, size, and/or configuration. For example, in some embodiments, the frame 102 can include a set of components or the like, which are coupled to form a support structure configured to support the drive system 104, the cleaning assembly 108, and the electronic system 106. Cleaning assembly 108 may be connected directly to frame 102 or to an alternate suitable support structure or sub-frame (not shown). The frame 102 of cleaning device 100 further comprises a strobe light 110, front lights 112, a front sensing module 114 and a rear sensing module 128, rear wheels 116, a rear skirt 118, a handle 120 and a cleaning hose 122. The frame 102 also includes one or more internal storage tanks or storage volumes for storing water, disinfecting solutions (e.g., bleach, soap, cleaning liquid, etc.), debris (dirt), and dirty water. More information on the cleaning device 100 is disclosed in PCT publication WO2016/168944, entitled “APPARATUS AND METHODS FOR SEMI-AUTONOMOUS CLEANING OF SURFACES”, filed on Apr. 25, 2016, which is incorporated herein by reference in its entirety.
More particularly, in this embodiment, the front sensing module 114 further comprises structured light sensors in vertical and horizontal mounting positions, an active stereo sensor and an RGB camera. The rear sensing module 128, as seen in
The back view of a semi-autonomous cleaning device 100, as seen in
According to
Right Module 703: 1 Active Stereo (RealSense D435), 1 or more Structured Lighting (Orbbec Astra Mini).
Center Module 704: 1 Active Stereo (RealSense D435), 1 or more Structured Lighting (Orbbec Astra Mini).
Left Module 705: 1 Active Stereo (RealSense D435), 1 or more Structured Lighting (Orbbec Astra Mini).
The cameras are positioned so that each modality can cover 270 degrees around the cleaning head, giving the robot full coverage across all possible motions. Such visual coverage is required so that when the robot is required to move in any one of a forward direction, a right turning direction, a left turning direction, or any combination of these vectors, the robot will have the ability to detect obstacles or abnormalities in the environment in those directions. These obstacles can be detected over the vertical range from the ground up to the robot's height. The system switches between any combination of cameras based on the configuration, available cameras, system resource usage, operator settings, plan settings, and environmental factors.
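By way of illustration, the selection of sensing modules to cover a commanded motion direction may be sketched as follows. The module names and angular sectors are illustrative assumptions, not the actual device configuration.

```python
# Hypothetical sketch: activate the sensor modules whose angular sector
# covers the commanded heading (degrees in the robot frame, 0 = forward).
# Sector boundaries below are assumed values for illustration only.
MODULE_SECTORS = {
    "left":   (45, 135),
    "center": (-45, 45),
    "right":  (-135, -45),
}

def modules_for_heading(heading_deg):
    """Return the modules whose sector contains the commanded heading."""
    active = []
    for name, (lo, hi) in MODULE_SECTORS.items():
        if lo <= heading_deg <= hi:
            active.append(name)
    return sorted(active)
```

For example, a pure forward motion would activate only the center module, while a 45-degree left turn would activate both the center and left modules, matching the requirement that coverage span every motion vector.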
The hybrid sensing solution as seen in
Further complicating the problem of artifact detection for a robot is that variable lighting in the environment can render one or both detection systems less useful. For example, in dark areas a passive stereo image detection system has much more difficulty detecting objects. Similarly, in extremely bright areas such as direct sunlight, active stereo systems may have difficulty distinguishing the projected grid in the bright ambient light. Aspects of the current invention address both of these failings of traditional single-sensor systems.
A further complication can arise from the interaction between the structured lighting system and the stereo image detection system. An aspect of the current invention is to avoid such interactions by shifting the light frequency that is being used for each system, for example, using infrared for the structured lighting system and visual optical for the active stereo system. Another embodiment of the current system multiplexes the camera systems in time, such that the systems take turns measuring the environment. If either camera system shows evidence that is correlated with compromised detection, the frequency of operation of the alternate, non-compromised camera system can be increased.
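The time-multiplexing behavior described above may be sketched as follows. The class name, weighting scheme, and slot counts are assumptions for illustration; the actual scheduling policy may differ.

```python
# Illustrative sketch of time-multiplexing two sensing systems: the systems
# take turns measuring the environment, and when one shows evidence of
# compromised detection, the schedule is biased toward the other. The 3:1
# weighting is an assumed value, not taken from the described system.
class SensorScheduler:
    def __init__(self):
        # Relative share of measurement slots per system.
        self.weights = {"structured_light": 1, "active_stereo": 1}

    def report_compromised(self, system):
        """Evidence of compromised detection: shift slots to the other system."""
        other = ("active_stereo" if system == "structured_light"
                 else "structured_light")
        self.weights[system] = 1
        self.weights[other] = 3

    def schedule(self, n_slots):
        """Allocate measurement slots proportionally to the current weights."""
        total = sum(self.weights.values())
        slots = []
        for name, w in sorted(self.weights.items()):
            slots += [name] * round(n_slots * w / total)
        return slots
```

Under this sketch, reporting the active stereo system as compromised causes three of every four measurement slots to go to the structured lighting system, increasing the frequency of operation of the non-compromised modality.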
Using both detection systems in parallel can improve the effectiveness of the environmental detection system. In addition, the camera systems normally feed into a system that identifies in a 3D grid around the robot, where detected artifacts are located. The artifact identifications from each detection system can be combined, and also combined with probability information indicating to what degree each detection system may have been compromised.
The current system further improves on simple detection systems by integrating information over multiple cameras at multiple times in different ways. The information from multiple detection systems can be combined locally in 3D cells via the voxel grid. This provides a composite model map of detected environmental artifacts and provides a mechanism to evaluate the superset of environmental artifacts for further processing.
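The per-voxel combination of detections with per-system confidence may be sketched as follows. The weighted-average fusion rule and the confidence values are assumptions for illustration.

```python
# Minimal sketch of combining artifact detections from multiple detection
# systems in a shared 3D voxel cell, weighting each detection by a
# confidence reflecting how compromised that system may be. The weighted
# average below is an assumed fusion rule, not the system's actual one.
def fuse_voxel(detections):
    """detections: list of (occupancy_probability, system_confidence)."""
    total_w = sum(conf for _, conf in detections)
    if total_w == 0:
        return 0.0
    return sum(p * conf for p, conf in detections) / total_w

# A voxel seen as occupied by structured light (high confidence) while a
# compromised active-stereo reading (low confidence) reports it empty:
fused = fuse_voxel([(0.9, 0.8), (0.1, 0.2)])
```

Here the fused occupancy remains high because the compromised system's contrary reading carries little weight, which is the intended effect of combining detections with compromise probability information.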
Another approach that can be used is to globally merge the images to create a ground plane of navigable space for those vertical slices that do not contain obstacle artifacts. In addition, the detected artifacts can be compared to known artifacts in the environment, and obstacles can be semantically identified using per-frame reflection.
Referring to
Furthermore, the manager provides the ability to capture individual frames from the non-selected or idle sensors in order to provide additional contextual information regarding potential ambiguities, for example regarding positive and negative detections. These modules 806, 807, 808 are shown in Per Frame Individual Processing Modules 813. In some embodiments, the secondary camera systems can be interrogated to provide additional information that can resolve an unexpected detection result, like a visual detection result that is inconsistent with the preprogrammed floor plan over which the cleaning device is navigating. The floor plan mapper is shown as module 810. Similarly, the Voxel Grid Mapper is shown as module 811. In complex sensing environments, particularly in environments that include obstacles that can change location from time to time, the resolution of false positives and false negatives is a paramount concern for both safety and for efficient operation. The environment is further complicated by the potential for variable lighting or variable surface reflectivity. The results of the processing modules are fed to the Decision Maker module 812.
As an example of automatically resolving a false detection of a cliff, a forward sensor may detect that the observable surface is far away. The detection of range can be accomplished by detecting the focus distance at which features become sharpest, or by many other algorithmic methods that are well known in the art. The detection of a faraway or distant surface can often indicate that a cliff is present, such as the entry to a stairway. Usually such cliffs are to be avoided lest the cleaning robot fall from the cliff and cause injury or damage. Several possible optical situations can cause a false cliff detection, for example, a particularly reflective floor, water pooling, thermal gradient optical effects, or saturated lighting conditions. In all of these conditions, analysis of whether a false cliff is detected can be resolved by evaluating the size of the detected feature and comparing it both to the floor plan stored in memory and to the secondary sensor system, which operates on different detection principles, to confirm or refute the presence of the cliff.
Majority or probabilistic logic can then be employed to determine if the initial detection was false or has even a small probability of being correct. In addition, the actions taken on such a detection can be to: a) act directly on the logic and assume the false detection, b) send an alert to an operator or logging system, c) pause and wait for manual or remote approval to continue or d) employ a tentative or exploratory motion toward the detected obstacle. Similar false positive detection logic can be used for detected obstacles, including detected reflective obstacles.
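A minimal sketch of the majority-logic resolution step, under assumed inputs (a primary detection, the stored floor plan, and a secondary sensor on different detection principles), might look as follows. The vote thresholds and action names are illustrative assumptions.

```python
# Hedged sketch of majority logic for a suspected cliff: combine the primary
# detection with the stored floor plan and a secondary sensor reading, then
# choose one of the actions described above. Thresholds are assumed values.
def resolve_cliff(primary_detects, floor_plan_has_cliff, secondary_detects):
    """Return an action: 'avoid', 'alert', or 'proceed'."""
    votes = sum([primary_detects, floor_plan_has_cliff, secondary_detects])
    if votes >= 2:
        return "avoid"    # act directly on the logic: treat the cliff as real
    if votes == 1:
        return "alert"    # small probability of being correct: alert/await approval
    return "proceed"      # detection refuted by plan and secondary sensor
```

A single unsupported detection thus produces an alert or exploratory behavior rather than a full stop, while agreement between two sources is treated as a real cliff.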
The Dynamic Calibration 905 component works as follows: for pitch, roll and height calibration, values can be estimated based on the ground plane. Calibration mode can be automatically selected as the robot is moving and a good instance of the ground plane is observed, i.e., where the plane is not too noisy and is of sufficient size. If the change in calibration exceeds a threshold, the selected data is used to recalibrate. The offset calibration can be stored in memory and used until the next calibration event.
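The ground-plane-based pitch, roll and height estimation may be sketched as follows. The plane parameterization (unit normal plus distance) and the recalibration threshold are assumptions for illustration.

```python
# Illustrative sketch: derive pitch, roll and height offsets from a fitted
# ground plane (unit normal nx, ny, nz and distance d in the sensor frame),
# and recalibrate only when the change exceeds a threshold, as described
# above. The threshold value is an assumed placeholder.
import math

def plane_to_offsets(nx, ny, nz, d):
    """Pitch/roll (radians) and sensor height implied by the ground plane."""
    pitch = math.atan2(nx, nz)
    roll = math.atan2(ny, nz)
    height = d
    return pitch, roll, height

def maybe_recalibrate(current, measured, threshold=0.01):
    """Adopt the measured offsets only if some component moved past threshold."""
    if any(abs(c - m) > threshold for c, m in zip(current, measured)):
        return measured
    return current
```

Small fluctuations below the threshold leave the stored offsets untouched, so the stored calibration persists until the next genuine calibration event.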
For yaw, x, and y calibration, values can be estimated based on consistency with the 2D motion of the robot. The calibration proceeds by computing RGB-D odometry and estimating the yaw, x and y values that best align the computed odometry with the robot wheel odometry 906. An alternative approach is to align a slice of the observed depth data at LIDAR height (the device is equipped with a 2D LIDAR unit) with the LIDAR data across a set of selected images; the slice should match the depth cloud via iterative closest point (ICP), with frames selected via a quality threshold.
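The yaw component of this alignment may be sketched with a closed-form 2D rotation fit between the two odometry streams. This Kabsch-style formulation is an assumed illustration, not the device's actual solver.

```python
# A minimal sketch of estimating the yaw offset that best aligns RGB-D
# odometry increments with wheel odometry increments. Each input is a list
# of (dx, dy) motion increments expressed in its own frame; the optimal 2D
# rotation angle has the closed form atan2(S, C) over cross/dot sums.
import math

def yaw_offset(camera_steps, wheel_steps):
    """Least-squares yaw rotating camera increments onto wheel increments."""
    s = sum(cx * wy - cy * wx
            for (cx, cy), (wx, wy) in zip(camera_steps, wheel_steps))
    c = sum(cx * wx + cy * wy
            for (cx, cy), (wx, wy) in zip(camera_steps, wheel_steps))
    return math.atan2(s, c)
```

The x and y offsets could then be recovered from the residual translation after applying the estimated rotation; that step is omitted here for brevity.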
Components of the Voxel Grid Mapper module 1001 of
Components of this module are as follows:
Additionally, the Decision Maker module 1201 determines if images from the alternative sensor modality need to be checked. Examples include situations where there are large areas of missing depth from the SL sensors, which could indicate either a cliff or a false negative due to an unobservable material. Another example could be a measurement that is not consistent with the models built from previous data. In such cases the Decision Maker module 1201 requests another frame, and the alternative sensor triggers the Processing Pipeline to process it accordingly.
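The missing-depth trigger for a cross-modality check may be sketched as follows. The depth representation and the threshold value are assumptions for illustration.

```python
# Illustrative sketch of the Decision Maker's cross-check trigger: when a
# structured-light frame has a large fraction of missing depth readings
# (which could indicate either a cliff or an unobservable material), request
# a confirming frame from the alternative modality. The 30% threshold is an
# assumed placeholder, not a value from the described system.
def needs_cross_check(depth_frame, missing_fraction_threshold=0.3):
    """depth_frame: iterable of depth values, None where depth is missing."""
    values = list(depth_frame)
    missing = sum(1 for v in values if v is None)
    return missing / len(values) > missing_fraction_threshold
```

When this returns true, the Decision Maker would request a frame from the alternative sensor and route it through the Processing Pipeline.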
According to
An autonomous or semi-autonomous device for cleaning or other purposes includes an onboard energy source; a combination of actuators to move around its environment; sensors to detect the environment and proximity to various landmarks; one or more computers with internal state machines in memory that transition based on the detected features of the environment, along with registers accumulating the completion of tasks; a file of customizable configurations; a set of commands from the user; a digital-to-analog or equivalent audio peripheral interface device with audio signal-producing capability of a certain bit rate and frequency; and a sound-emitting device such as a loudspeaker and a microphone. Audible announcements in the language of the locale are made at specific state transitions, or continuously in specific states of the machine's operation.
These announcements can have a hysteresis such that the same message is not repeated unnecessarily, which may confuse operators or nearby persons. The historical sequence of messages is maintained and analyzed to produce a set of consistent and natural-sounding outputs, so that the device's intent can be effectively comprehended by nearby operators to facilitate either further interactions with the device, or to warn them to stay clear at an effective volume. The decibel level can be adjusted based on the ambient sound levels and other configurations, to the point of being silent in designated zones such as quiet zones in hospitals.
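The hysteresis and volume-adjustment behavior may be sketched as follows. The hold-off window, volume rule, and class name are illustrative assumptions.

```python
# A minimal sketch of announcement hysteresis: suppress repeats of the same
# message within a hold-off window, keep a message history, stay silent in
# quiet zones, and speak just above ambient level. The 10-second window and
# the +10 dB rule are assumed placeholders, not values from the description.
class Announcer:
    def __init__(self, holdoff_s=10.0):
        self.holdoff_s = holdoff_s
        self.last_spoken = {}   # message -> timestamp of last announcement
        self.history = []       # maintained historical sequence of messages

    def announce(self, message, now_s, ambient_db=50.0, quiet_zone=False):
        """Return the decibel level to speak at, or None to stay silent."""
        if quiet_zone:
            return None
        last = self.last_spoken.get(message)
        if last is not None and now_s - last < self.holdoff_s:
            return None  # hysteresis: same message announced too recently
        self.last_spoken[message] = now_s
        self.history.append((now_s, message))
        return ambient_db + 10.0  # adjust volume relative to ambient sound
```

The retained history could then be analyzed, as described above, to keep the sequence of outputs consistent and natural-sounding.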
The device can signal intent, arrival at unique destinations, and other situations which need to inform nearby operators or users of the state of the device. For example, when the device is experiencing some mode of failure which requires human intervention on-site, the device can emit sounds and lights which correspond to the particular failure to facilitate resolution of the problem.
When the device travels in reverse, the device can, for example, automatically initiate a periodic beeping sound produced through the speakers. Other tones or sequences of tones can be produced to signify intent, such as the intent to perform various actions like cleaning. In addition to loudspeakers, the device is also outfitted with a graphical user interface screen capable of producing rich colours, text, and animation; other computer-controlled light emitters and illuminated arrays augment the signaling capabilities of the device. In further embodiments, smart alerts could also include videos of state transitions or states, such as the cleaning head lowering, and warnings that the device is about to turn.
These screens and audio annunciators can signify machine intent in ways that mimic a wide variety of anthropomorphized interactions, to make the semi-autonomous machine less threatening and to communicate next actions more clearly. For example, if the cleaning robot intends to turn left, eyes pictured on the robot's display could point to the left to help observers predict the next action. Similarly, audio signals can indicate intent, observations about the environment, or general messages promoting cleanliness or facility policy.
The various operating states of the device coincide with the display generated on a graphical user interface which can provide graphical depictions of these states, including processing, cleaning, being stuck, being in a failure mode, being in a remote teleoperation state, as examples of states which the device is in. This user interface is also useful for indicating the status of various hardware and software components, such as in a diagnostic and a command-and-control capacity by displaying measures such as toggles, metrics, set-points, calibration parameters, and the like.
Blinking and solid lights can indicate the direction of travel, automatically triggered on the activation or projection of a change in direction on the incoming path. These paths can be computed in advance or in real-time based on immediate sensor feedback. Different regions of the map, or light intensity measured by an ambient light sensor, can trigger different patterns or luminosity suitable for the circumstance.
Colour-coded strobe lights can indicate the future path of travel. Brake lights automatically turn on, either solid or blinking, to indicate differing rates of stopping when the device senses obstructions or is about to slow down, and turn off when the obstruction is removed and the device speeds up. The lights are arranged at the rear of the device, with at least two or three lights at various positions and elevations so that they are visible from afar in potentially dusty environments, including to operators of other large equipment such as forklifts and trucks. These can be operated through the computer and control boards.
Colour coding of lights can indicate the modality of operation, including mapping, cleaning, warning, emergency, awaiting operator input, and other functions. The lights can be animated in intensity and sequencing to indicate motions resembling hand gestures, etc.
In further embodiments, the autonomous or semi-autonomous cleaning device may include a platform for expandability as a multi-purpose intelligent system. In one embodiment, the system payload bay will allow for future sensor add-ons. Further features of the expandability platform may include:
Floor scrubbing robots often cannot clean directly adjacent to walls and other objects, due to the risk of damage to walls and objects from rigid components which support the scrubbing device, and due to the requirement by most operators that dispensed cleaning liquid be collected (usually by a device wider than the cleaning device). As a result, floor areas adjacent to walls and other objects go uncleaned.
When cleaning with a floor scrubbing machine, manual labour is required to sweep and mop (“dust mopping” and “trail mopping”) along walls and between the scrubber and obstacles. This is due to the distance between the scrubber, and the wall along which it is driving. In some instances, performing dust mopping and trail mopping is unaffordable. Additionally, when dust mopping is not performed, or is performed poorly, floor scrubber performance can be negatively affected.
Reducing the need for dust mopping by collecting debris improves facility cleanliness, and reduces labour required; removing debris near walls and obstacles (whose presence will accelerate the development of stains) reduces the need for trail mopping. At a constant wall distance, the proportion of a floor that remains uncleaned increases as sector width decreases. As a result, the side sweeper will provide the greatest benefit to operators who intend to operate the cleaning device in “corridor” style environments which have long, narrow cleaning sectors.
Debris contacted by the brush is pushed toward the robot centre and into the primary cleaning path of the machine to which the side sweeper is attached. There, debris can be swept by cylindrical brushes into a debris bin for later disposal. Relatively large and heavy debris can be captured by the spinning sweeper.
The side sweeper module 1302 component consists of the following features:
Detection of failures of motors and actuators, and of components or systems generally, is a concern. This application describes a system in which one can command a motor or actuator and then look for corroborating feedback from that motor or actuator, or from some other actuator, to verify the command was successful. Just as independent instruments can be crosschecked to determine the failure of any one of the instruments, the sensors and actuators in a semi-autonomous cleaning device can be crosschecked to identify failures. Essentially, a stimulus-response mechanism may be employed, where a stimulus to one or more components or subsystems may generate responses in one or more component or subsystem measurements. These stimuli and measurements may be initiated, received, stored and correlated on the semi-autonomous vehicle itself in an advanced diagnostics module, may be sent to a central server for storage and correlation, or may employ a hybrid approach. In addition, failures of instrumentation or actuation systems may result in an immediate change of machine or subsystem state to ensure the safest possible mode of operation. For example, if a drive motor or optical sensor system is found to be improperly functioning, the robot may enter a safe mode where movement is disabled.
The central server may employ simple algorithmic or heuristic rules to correlate stimuli with measurements and thereby determine present failure conditions, potential future failure conditions or likelihoods of future failures. Alternatively, the central server may employ a machine learning or artificial intelligence subsystem to perform these analyses. For these correlations and predictions, the central server may rely on historical data or data trends from a single or multiple semi-autonomous vehicles.
The central server may perform an action based on these correlations, such as displaying a relevant message to the user, updating information in a central database, ordering replacement parts in anticipation of failure, or taking some other action. In taking these actions, confidence in the performance and reliability of the system is maximized, as is the uptime of the system.
An exemplary implementation of this system would be a diagnostic system built around lowering the squeegee. From a system point of view, there is a squeegee motor and a vacuum motor. The squeegee may be in the up (undeployed) position or state, or the down (deployed) position or state, and may be commanded by the control system to move from one state to the other. This command is translated into squeegee motor currents. The initiation of vacuuming may be controlled by controlling vacuum motor currents. The currents of both of these motors may be measured by a measuring system when they are engaged. Using these measurements and the squeegee position, a failure or later anticipated failure may be detected or predicted. In these cases, the motors would be components under test, receiving the stimulus of a control current. The timing of the squeegee motor turning on and off, relative to the commands sent to turn it on and off, may indicate motor failure. The timing of the vacuum turning on when commanded may indicate vacuum failure. The vacuum current measured while the squeegee is in the up position may indicate hose blockage if the current is outside a normal range. A lack of change in vacuum current as the squeegee supposedly moved from the up to the down position may indicate a missing squeegee, hose blockage or squeegee motor failure.
The regular operation mode of the system may be ascertained in some cases by running motors in an open-loop configuration to detect dirt or expected damage. During regular cleaning operation, a change in vacuum current may indicate a failure condition: a sudden decrease in current may indicate the squeegee has fallen off, while a sudden increase in current may indicate the squeegee hose has become blocked.
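The current-based squeegee diagnostic described above may be sketched as follows. The current ranges are illustrative placeholder values, not measured device parameters.

```python
# Hedged sketch of the stimulus-response diagnostic: compare the measured
# vacuum motor current against an expected range for the commanded squeegee
# state. The amp ranges below are assumed placeholders for illustration.

# Expected vacuum current (amps) per squeegee state: (low, high).
EXPECTED_VACUUM_CURRENT = {
    "up":   (4.0, 6.0),
    "down": (7.0, 9.0),
}

def diagnose_vacuum(squeegee_state, vacuum_current_a):
    """Map a measured current to a possible failure condition."""
    lo, hi = EXPECTED_VACUUM_CURRENT[squeegee_state]
    if vacuum_current_a < lo:
        # Sudden decrease in current: squeegee may have fallen off.
        return "possible missing squeegee"
    if vacuum_current_a > hi:
        # Sudden increase in current: hose may be blocked.
        return "possible hose blockage"
    return "ok"
```

A lack of change in current across a commanded up-to-down transition could be detected the same way, by comparing readings taken before and after the command against the two expected ranges.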
Other system currents and measurements which may be monitored include such actuator currents as brush motor current, cleaning head lift motor current, vacuum current, squeegee motor current, drive motor current, steering motor current, water pump current, back wheel encoders, steering wheel encoders or IMU data. Brush over currents, or divergences between the currents on multiple brush motors, may detect cleaning faults. Loss of communications, protocol violations or latency violations on an internal or inter-system bus may indicate a failure of a particular system.
In one implementation, this information may be fed into a machine learning model which may be used to determine the likelihood of specific failures. Such failures may include a missing cleaning brush, a broken cleaning brush belt, actuator failure, a missing squeegee, vacuum hose blockage, water contamination, or encoder failure.
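As a simplified stand-in for such a model, the mapping from a monitored measurement to a failure likelihood may be sketched with a hand-tuned logistic score. In the described implementation a trained machine learning model would play this role; the weight and bias below are placeholder assumptions.

```python
# Illustrative sketch: map a normalized current deviation (0 = nominal,
# 1 = far outside the normal range) to a probability-like failure score,
# e.g. for a broken cleaning brush belt. The logistic weights are assumed
# placeholders standing in for a trained machine learning model.
import math

def failure_likelihood(current_deviation, weight=4.0, bias=-2.0):
    """Return a score in (0, 1) for the likelihood of a specific failure."""
    return 1.0 / (1.0 + math.exp(-(weight * current_deviation + bias)))
```

A nominal measurement yields a low score, while a large deviation yields a high one; in a deployed system each monitored failure mode would have its own learned parameters and feature set.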
In a further embodiment, a Health Monitor node or module is created. The Health Monitor module will monitor for any new conditions not already monitored. The Health Monitor module will publish diagnostics messages which are captured by the Diagnostics Collector module and fed into the Diagnostic Aggregator, Safety Monitor, and Executive/UI pipeline for action and reporting purposes. For this stage, the existing UI is modified to display a “warning”, instead of a “safety error”, when abnormal operating conditions are detected by the Health Monitor module.
All preexisting conditions continue to result in an error message displayed in the user interface (UI) when abnormal operation is detected. Otherwise, the functionality of the Safety Monitor/UI pipeline remains unchanged.
Longer term, the non-critical conditions monitored in the system, as well as associated actions and reporting functions will be shifted to the Health Monitor module. The critical conditions (i.e., anything that can result in personal injury or property damage) will be the subject of a simpler Safety Monitor module that will act to augment a safety board.
The Health Monitor module is implemented using Health Monitor classes. As an example, the HMNode instantiates Monitor classes in priority order, subscribes to topics that need monitoring, notifies the interested monitors when new messages are available (not all monitors are interested in all messages) and periodically calls Check() on each monitor, in order of their priority. Each monitor implements Check() and tells the HMNode if its monitored condition is in a normal or abnormal status via a diagnostics.msg reference.
Further, the HMNode publishes a periodic diagnostics message that will be processed by the existing safety pipeline/UI starting with the Diagnostics Collector. Even though a priority handling mechanism is built into the framework, given that the robot reaction is the same for all encountered conditions, the same priority has been assigned to all conditions for the time being.
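The HMNode/Monitor arrangement described above may be sketched as follows. The class and method names follow the description; the message routing, the example monitor, and its threshold are illustrative assumptions.

```python
# Simplified sketch of the Health Monitor framework: the node keeps monitors
# in priority order, routes new messages only to interested monitors, and
# periodically calls Check() on each. The VacuumCurrentMonitor and its
# 9-amp threshold are assumed examples, not part of the described system.
class Monitor:
    def __init__(self, name, topic, priority):
        self.name, self.topic, self.priority = name, topic, priority
        self.last_value = None

    def on_message(self, value):
        self.last_value = value

    def Check(self):
        """Return 'normal' or 'abnormal' for the monitored condition."""
        raise NotImplementedError

class VacuumCurrentMonitor(Monitor):
    def Check(self):
        if self.last_value is None:
            return "normal"
        return "abnormal" if self.last_value > 9.0 else "normal"

class HMNode:
    def __init__(self, monitors):
        self.monitors = sorted(monitors, key=lambda m: m.priority)

    def dispatch(self, topic, value):
        for m in self.monitors:
            if m.topic == topic:   # only interested monitors are notified
                m.on_message(value)

    def publish_diagnostics(self):
        """One status entry per monitor, evaluated in priority order."""
        return [(m.name, m.Check()) for m in self.monitors]
```

The list returned by publish_diagnostics() stands in for the periodic diagnostics message consumed by the Diagnostics Collector at the head of the existing safety pipeline.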
At the warning page at block 1403, if the user presses the reset button at block 1407, the system checks to see if a cleaning plan was executed. If so, cleaning resumes at block 1408 until the flow chart concludes at block 1410. Otherwise, the system will transition to a manual plan at block 1409.
At the warning page at block 1403, if the user presses the home button at block 1411, the system checks to see if a cleaning plan was executed at block 1412. If so, the system ends the cleaning and generates a report at block 1413. Otherwise, the system will transition to a manual plan at block 1409. The method concludes at block 1410.
In a further embodiment, the system has a method and module to detect a squeegee fault.
Referring to
At the FAILURE CLEARED event at block 1505, the system checks for a HOME PRESSED (Home button pressed) event at block 1506, an AUTO event at block 1507, and a CANCEL CLEAN event at block 1508. If these events occur, the system returns to the home page at block 1509.
Returning to the FAILURE CLEARED event at block 1505, the system checks for a RESET PRESSED (RESET button pressed) event at block 1510 and an SQ ATTACHED event at block 1511. If these events occur, the system returns to the beginning to monitor for SQ DOWN, SQ COMMANDED DOWN, or SQ DETACHED events at block 1501.
Further embodiments of the cleaning device include mechanical systems comprising a front wheel assembly and cleaning head control. The front wheel assembly further includes a Drive Flex cable management system.
The cleaning head control system further includes the following features:
Some features of the electrical system include the following:
In a further embodiment, the semi-autonomous cleaning device includes a disinfection module.
According to
The disinfection module 1702 will contain a solution tank, an atomizing system, a dispersion system, and an electrostatic system. The system will be mounted so the disinfectant solution 1710 can spread at an appropriate height and within a 1.5 m distance from the cleaning device. By utilizing an electrostatic system, the module can maximize total coverage of disinfectant regardless of spray angle. Further information on the disinfection module can be found in U.S. provisional application No. 63/055,919, entitled “DISINFECTION MODULE FOR A SEMI-AUTONOMOUS CLEANING AND DISINFECTION DEVICE”, filed on Jul. 24, 2020, which is incorporated herein by reference in its entirety.
Implementations disclosed herein provide systems, methods and apparatus for generating or augmenting training data sets for machine learning training.
The functions described herein may be stored as one or more instructions on a processor-readable or computer-readable medium. The term “computer-readable medium” refers to any available medium that can be accessed by a computer or processor. By way of example, and not limitation, such a medium may comprise RAM, ROM, EEPROM, flash memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. It should be noted that a computer-readable medium may be tangible and non-transitory. As used herein, the term “code” may refer to software, instructions, code or data that is/are executable by a computing device or processor. A “module” can be considered as a processor executing computer-readable code.
A processor as described herein can be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, a microcontroller, combinations of the same, or the like. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, any of the signal processing algorithms described herein may be implemented in analog circuitry. In some embodiments, a processor can be a graphics processing unit (GPU). The parallel processing capabilities of GPUs can reduce the amount of time for training and using neural networks (and other machine learning models) compared to central processing units (CPUs). In some embodiments, a processor can be an ASIC including dedicated machine learning circuitry custom-built for one or both of model training and model inference.
The disclosed or illustrated tasks can be distributed across multiple processors or computing devices of a computer system, including computing devices that are geographically distributed.
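As one illustrative sketch (not part of the claimed subject matter), distributing independent tasks across multiple processors can be expressed with Python's standard `concurrent.futures` module; the `analyze_tile` task below is a hypothetical stand-in for any per-unit computation:

```python
# Illustrative sketch: distributing independent tasks across multiple
# processor cores with Python's standard library. "analyze_tile" is a
# hypothetical placeholder for any CPU-bound per-unit computation; with a
# cluster-aware executor, the same pattern extends to computing devices
# that are geographically distributed.
from concurrent.futures import ProcessPoolExecutor

def analyze_tile(tile_id: int) -> int:
    # Placeholder workload: any computation on one unit of work.
    return tile_id * tile_id

def run_distributed(tile_ids):
    # Each submitted task may execute on a different processor core;
    # map() returns results in the order of the inputs.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(analyze_tile, tile_ids))

if __name__ == "__main__":
    print(run_distributed(range(4)))  # → [0, 1, 4, 9]
```

The `if __name__ == "__main__"` guard is required on platforms that spawn worker processes by re-importing the main module.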
The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is required for proper operation of the method that is being described, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
As used herein, the term “plurality” denotes two or more. For example, a plurality of components indicates two or more components. The term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
The phrase “based on” does not mean “based only on,” unless expressly specified otherwise. In other words, the phrase “based on” describes both “based only on” and “based at least on.”
While the foregoing written description of the system enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The system should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the system. Thus, the present disclosure is not intended to be limited to the implementations shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
This application is a National Phase application that claims priority to and the benefit of International Application Serial No. PCT/CA2020/051100, entitled “SYSTEM AND METHOD OF SEMI-AUTONOMOUS CLEANING OF SURFACES”, filed on Aug. 12, 2020, which claims the benefit of U.S. Provisional Patent Application Ser. No. 62/885,375, entitled “SYSTEM AND METHOD OF SEMI-AUTONOMOUS CLEANING OF SURFACES”, filed on Aug. 12, 2019, No. 63/030,053, entitled “SYSTEM AND METHOD OF SEMI-AUTONOMOUS CLEANING OF SURFACES”, filed on May 26, 2020, and No. 63/055,919, entitled “DISINFECTION MODULE FOR A SEMI-AUTONOMOUS CLEANING AND DISINFECTION DEVICE”, filed on Jul. 24, 2020, the disclosures of which are incorporated herein by reference in their entirety.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CA2020/051100 | 8/12/2020 | WO |
Number | Date | Country
---|---|---
62885375 | Aug 2019 | US
63030053 | May 2020 | US
63055919 | Jul 2020 | US