This disclosure relates to robots and, more particularly, to autonomous robots.
Autonomous mobile robots (AMRs) are robots that can move around and perform tasks without the need for human guidance or control. The development of autonomous mobile robots has been driven by advances in robotics, artificial intelligence, and computer vision. The concept of autonomous robots has been around for several decades, but it was not until the late 20th century that the technology became advanced enough to make it a reality. In the early days, autonomous robots were limited to industrial applications, such as manufacturing and assembly line tasks.
However, with the advancements in computer processing power and sensors, autonomous robots have become more sophisticated and can now perform a wide range of tasks. Today, AMRs are used in a variety of applications, including warehousing and logistics, agriculture, healthcare, and even in military and defense.
The development of autonomous mobile robots has been driven by the need for more efficient and cost-effective solutions for various tasks. AMRs can operate around the clock, without the need for breaks or rest, making them ideal for repetitive tasks that would otherwise require human intervention.
In one implementation, a computer implemented method is executed on a computing device and includes: navigating an autonomous mobile robot (AMR) within a defined space; acquiring imagery at one or more defined locations within the defined space; processing the imagery using an ML model to define a completion percentage for the one or more defined locations within the defined space; and reporting the completion percentage of the one or more defined locations within the defined space to a user.
One or more of the following features may be included. The defined space may be a construction site. The imagery may include one or more of: flat images; 360° images; and videos. Navigating an autonomous mobile robot (AMR) within a defined space may include one or more of: navigating an autonomous mobile robot (AMR) within a defined space via a predefined navigation path; navigating an autonomous mobile robot (AMR) within a defined space via GPS coordinates; and navigating an autonomous mobile robot (AMR) within a defined space via a machine vision system. The machine vision system may include one or more of: a LIDAR system; and a plurality of discrete machine vision cameras. The plurality of defined locations may include one or more of: at least one human defined location; and at least one machine defined location. Processing the imagery using an ML model to define a completion percentage for the one or more defined locations within the defined space may include one or more of: comparing the imagery to visual training data to define the completion percentage for the one or more defined locations within the defined space; and comparing the imagery to user-defined completion content to define the completion percentage for the one or more defined locations within the defined space. The ML model may be trained using visual training data that identifies construction projects or portions thereof in various levels of completion so that the ML model may associate various completion percentages with visual imagery. Training the ML model using visual training data that identifies construction projects or portions thereof in various percentages of completion may include: having the ML model make an initial estimate concerning the completion percentage of a specific visual image within the visual training data; and providing the specific visual image and the initial estimate to a human trainer for confirmation and/or adjustment.
In another implementation, a computer program product resides on a computer readable medium and has a plurality of instructions stored on it. When executed by a processor, the instructions cause the processor to perform operations including: navigating an autonomous mobile robot (AMR) within a defined space; acquiring imagery at one or more defined locations within the defined space; processing the imagery using an ML model to define a completion percentage for the one or more defined locations within the defined space; and reporting the completion percentage of the one or more defined locations within the defined space to a user.
One or more of the following features may be included. The defined space may be a construction site. The imagery may include one or more of: flat images; 360° images; and videos. Navigating an autonomous mobile robot (AMR) within a defined space may include one or more of: navigating an autonomous mobile robot (AMR) within a defined space via a predefined navigation path; navigating an autonomous mobile robot (AMR) within a defined space via GPS coordinates; and navigating an autonomous mobile robot (AMR) within a defined space via a machine vision system. The machine vision system may include one or more of: a LIDAR system; and a plurality of discrete machine vision cameras. The plurality of defined locations may include one or more of: at least one human defined location; and at least one machine defined location. Processing the imagery using an ML model to define a completion percentage for the one or more defined locations within the defined space may include one or more of: comparing the imagery to visual training data to define the completion percentage for the one or more defined locations within the defined space; and comparing the imagery to user-defined completion content to define the completion percentage for the one or more defined locations within the defined space. The ML model may be trained using visual training data that identifies construction projects or portions thereof in various levels of completion so that the ML model may associate various completion percentages with visual imagery. Training the ML model using visual training data that identifies construction projects or portions thereof in various percentages of completion may include: having the ML model make an initial estimate concerning the completion percentage of a specific visual image within the visual training data; and providing the specific visual image and the initial estimate to a human trainer for confirmation and/or adjustment.
In another implementation, a computing system includes a processor and a memory system configured to perform operations including: navigating an autonomous mobile robot (AMR) within a defined space; acquiring imagery at one or more defined locations within the defined space; processing the imagery using an ML model to define a completion percentage for the one or more defined locations within the defined space; and reporting the completion percentage of the one or more defined locations within the defined space to a user.
One or more of the following features may be included. The defined space may be a construction site. The imagery may include one or more of: flat images; 360° images; and videos. Navigating an autonomous mobile robot (AMR) within a defined space may include one or more of: navigating an autonomous mobile robot (AMR) within a defined space via a predefined navigation path; navigating an autonomous mobile robot (AMR) within a defined space via GPS coordinates; and navigating an autonomous mobile robot (AMR) within a defined space via a machine vision system. The machine vision system may include one or more of: a LIDAR system; and a plurality of discrete machine vision cameras. The plurality of defined locations may include one or more of: at least one human defined location; and at least one machine defined location. Processing the imagery using an ML model to define a completion percentage for the one or more defined locations within the defined space may include one or more of: comparing the imagery to visual training data to define the completion percentage for the one or more defined locations within the defined space; and comparing the imagery to user-defined completion content to define the completion percentage for the one or more defined locations within the defined space. The ML model may be trained using visual training data that identifies construction projects or portions thereof in various levels of completion so that the ML model may associate various completion percentages with visual imagery. Training the ML model using visual training data that identifies construction projects or portions thereof in various percentages of completion may include: having the ML model make an initial estimate concerning the completion percentage of a specific visual image within the visual training data; and providing the specific visual image and the initial estimate to a human trainer for confirmation and/or adjustment.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features and advantages will become apparent from the description, the drawings, and the claims.
Like reference symbols in the various drawings indicate like elements.
Referring to
Autonomous mobile robot process 10 may be implemented as a server-side process, a client-side process, or a hybrid server-side/client-side process. For example, autonomous mobile robot process 10 may be implemented as a purely server-side process via autonomous mobile robot process 10s. Alternatively, autonomous mobile robot process 10 may be implemented as a purely client-side process via one or more of autonomous mobile robot process 10c1, autonomous mobile robot process 10c2, autonomous mobile robot process 10c3, and autonomous mobile robot process 10c4. Alternatively still, autonomous mobile robot process 10 may be implemented as a hybrid server-side/client-side process via autonomous mobile robot process 10s in combination with one or more of autonomous mobile robot process 10c1, autonomous mobile robot process 10c2, autonomous mobile robot process 10c3, and autonomous mobile robot process 10c4. Accordingly, autonomous mobile robot process 10 as used in this disclosure may include any combination of autonomous mobile robot process 10s, autonomous mobile robot process 10c1, autonomous mobile robot process 10c2, autonomous mobile robot process 10c3, and autonomous mobile robot process 10c4.
Autonomous mobile robot process 10s may be a server application and may reside on and may be executed by computing device 12, which may be connected to network 14 (e.g., the Internet or a local area network). Examples of computing device 12 may include, but are not limited to: a personal computer, a server computer, a series of server computers, a mini computer, a mainframe computer, a smartphone, or a cloud-based computing platform.
The instruction sets and subroutines of autonomous mobile robot process 10s, which may be stored on storage device 16 coupled to computing device 12, may be executed by one or more processors (not shown) and one or more memory architectures (not shown) included within computing device 12. Examples of storage device 16 may include but are not limited to: a hard disk drive; a RAID device; a random-access memory (RAM); a read-only memory (ROM); and all forms of flash memory storage devices.
Network 14 may be connected to one or more secondary networks (e.g., network 18), examples of which may include but are not limited to: a local area network; a wide area network; or an intranet, for example.
Examples of autonomous mobile robot processes 10c1, 10c2, 10c3, 10c4 may include but are not limited to a web browser, a game console user interface, a mobile device user interface, or a specialized application (e.g., an application running on e.g., the Android™ platform, the iOS™ platform, the Windows™ platform, the Linux™ platform or the UNIX™ platform). The instruction sets and subroutines of autonomous mobile robot processes 10c1, 10c2, 10c3, 10c4, which may be stored on storage devices 20, 22, 24, 26 (respectively) coupled to client electronic devices 28, 30, 32, 34 (respectively), may be executed by one or more processors (not shown) and one or more memory architectures (not shown) incorporated into client electronic devices 28, 30, 32, 34 (respectively). Examples of storage devices 20, 22, 24, 26 may include but are not limited to: hard disk drives; RAID devices; random access memories (RAM); read-only memories (ROM), and all forms of flash memory storage devices.
Examples of client electronic devices 28, 30, 32, 34 may include, but are not limited to a personal digital assistant (not shown), a tablet computer (not shown), laptop computer 28, smart phone 30, smart phone 32, personal computer 34, a notebook computer (not shown), a server computer (not shown), a gaming console (not shown), and a dedicated network device (not shown). Client electronic devices 28, 30, 32, 34 may each execute an operating system, examples of which may include but are not limited to Microsoft Windows™, Android™, iOS™, Linux™, or a custom operating system.
Users 36, 38, 40, 42 may access autonomous mobile robot process 10 directly through network 14 or through secondary network 18. Further, autonomous mobile robot process 10 may be connected to network 14 through secondary network 18, as illustrated with link line 44.
The various client electronic devices (e.g., client electronic devices 28, 30, 32, 34) may be directly or indirectly coupled to network 14 (or network 18). For example, laptop computer 28 and smart phone 30 are shown wirelessly coupled to network 14 via wireless communication channels 44, 46 (respectively) established between laptop computer 28, smart phone 30 (respectively) and cellular network/bridge 48, which is shown directly coupled to network 14. Further, smart phone 32 is shown wirelessly coupled to network 14 via wireless communication channel 50 established between smart phone 32 and wireless access point (i.e., WAP) 52, which is shown directly coupled to network 14. Additionally, personal computer 34 is shown directly coupled to network 18 via a hardwired network connection.
WAP 52 may be, for example, an IEEE 802.11a, 802.11b, 802.11g, 802.11n, Wi-Fi, and/or Bluetooth device that is capable of establishing wireless communication channel 50 between smart phone 32 and WAP 52. As is known in the art, IEEE 802.11x specifications may use Ethernet protocol and carrier sense multiple access with collision avoidance (i.e., CSMA/CA) for path sharing. As is known in the art, Bluetooth is a telecommunications industry specification that allows e.g., mobile phones, computers, and personal digital assistants to be interconnected using a short-range wireless connection.
Referring also to
The key components of an AMR may include a mobile base (e.g., mobile base 104), a navigation subsystem (e.g., navigation subsystem 106), a controller subsystem (e.g., controller subsystem 108), and a power source (e.g., battery 110). The mobile base (e.g., mobile base 104) may be a wheeled or tracked platform, or it may use legs to move like a quadruped robot. The sensors (e.g., navigation subsystem 106) may provide information about the robot's surroundings, such as obstacles, people, or other objects. The controller (e.g., controller subsystem 108) may process this information and generate commands for the robot's actuators to move and interact with the environment.
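For illustration only, the following sketch shows, in simplified Python, how a controller subsystem of the kind described above might turn sensor readings into actuator commands. The class and function names (e.g., SensorReading, plan, control_loop) are hypothetical placeholders and are not part of controller subsystem 108.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    obstacle_ahead: bool
    distance_m: float

def plan(reading: SensorReading) -> str:
    """Controller logic: turn one sensor reading into an actuator command."""
    if reading.obstacle_ahead and reading.distance_m < 1.0:
        return "stop"
    if reading.obstacle_ahead:
        return "slow"
    return "forward"

def control_loop(readings):
    """One command per sample; a real controller subsystem runs continuously."""
    return [plan(r) for r in readings]

if __name__ == "__main__":
    samples = [SensorReading(False, 5.0), SensorReading(True, 2.0), SensorReading(True, 0.5)]
    print(control_loop(samples))  # ['forward', 'slow', 'stop']
```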
Referring also to
Autonomous mobile robot process 10 may navigate 200 an autonomous mobile robot (AMR) 100 within a defined space (e.g., defined space 102). An example of this defined space (e.g., defined space 102) may include but is not limited to a construction site.
Referring also to
To operate autonomously, autonomous mobile robot (AMR) 100 may use various algorithms such as simultaneous localization and mapping (SLAM) to create a map of the environment and localize itself within it. Autonomous mobile robot (AMR) 100 may also use path planning algorithms to find the best route to navigate through the environment, avoiding obstacles and other hazards.
As is known in the art, Simultaneous Localization and Mapping (SLAM) is a computational technique used by AMRs to map and navigate an unknown environment (e.g., defined space 102). SLAM works by using sensor data from laser range finders, cameras, or other sensors to gather information about the AMR's environment. The AMR may use this data to create a map (e.g., floor plan 114) of its surroundings while also estimating its own location within the map (e.g., floor plan 114). The process is called “simultaneous” because the AMR is building the map (e.g., floor plan 114) and localizing itself at the same time.
The SLAM algorithm involves several steps, including data acquisition, feature extraction, data association, and estimation. In the data acquisition step, the AMR collects sensor data about its environment. In the feature extraction step, the algorithm extracts key features from the data, such as edges or corners in the environment. In the data association step, the algorithm matches the features in the current sensor data to those in the existing map. Finally, in the estimation step, the algorithm uses statistical methods to estimate the robot's position in the map.
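By way of a non-limiting, toy illustration of the data-association and estimation steps described above, the following Python sketch assumes a small map of known landmarks, range/bearing observations, and a known heading; it matches each observation to the nearest map landmark and averages the robot positions those matches imply. Real SLAM implementations are considerably more involved (e.g., filter- or graph-based), and the names used here are placeholders rather than elements of autonomous mobile robot process 10.

```python
import math

# Known landmark positions (a stand-in for features on floor plan 114).
MAP_LANDMARKS = {"corner_a": (0.0, 0.0), "corner_b": (10.0, 0.0), "door": (10.0, 8.0)}

def predict(landmark, pose):
    """Predicted (range, bearing) of a landmark from an assumed pose (x, y, heading)."""
    x, y, heading = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    return math.hypot(dx, dy), math.atan2(dy, dx) - heading

def associate(observation, pose_guess):
    """Data association: match one (range, bearing) observation to the closest map landmark."""
    rng, bearing = observation
    def mismatch(name):
        pr, pb = predict(MAP_LANDMARKS[name], pose_guess)
        return abs(pr - rng) + abs(math.atan2(math.sin(pb - bearing), math.cos(pb - bearing)))
    return min(MAP_LANDMARKS, key=mismatch)

def estimate_position(observations, pose_guess):
    """Estimation: average the positions implied by each matched observation."""
    heading = pose_guess[2]
    xs, ys = [], []
    for rng, bearing in observations:
        lx, ly = MAP_LANDMARKS[associate((rng, bearing), pose_guess)]
        xs.append(lx - rng * math.cos(heading + bearing))
        ys.append(ly - rng * math.sin(heading + bearing))
    return sum(xs) / len(xs), sum(ys) / len(ys)

if __name__ == "__main__":
    truth = (4.0, 3.0, 0.0)                                    # actual robot pose
    observations = [predict(p, truth) for p in MAP_LANDMARKS.values()]
    print(estimate_position(observations, pose_guess=(3.0, 2.0, 0.0)))  # ~(4.0, 3.0)
```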
SLAM is a critical technology for many applications, such as autonomous vehicles, mobile robots, and drones, as it enables these devices to operate in unknown and dynamic environments and navigate safely and efficiently. AMRs may be used in a wide range of applications, including manufacturing, logistics, healthcare, agriculture, and security, wherein these AMRs may perform a variety of tasks such as transporting materials, delivering goods, cleaning, and inspection. With advances in artificial intelligence and machine learning, AMRs are becoming more sophisticated and capable of handling more complex tasks.
Autonomous mobile robot process 10 may acquire 208 time-lapsed imagery (e.g., imagery 116) at a plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) within the defined space (e.g., defined space 102) over an extended period of time. Examples of such time-lapsed imagery may include but are not limited to flat images, 360° images, and videos.
The time-lapsed imagery (e.g., imagery 116) may be collected via a vision system (e.g., vision system 132) mounted upon/included within/coupled to autonomous mobile robot (AMR) 100. Vision system 132 may include one or more discrete camera assemblies that may be used to acquire 208 the time-lapsed imagery (e.g., imagery 116).
The time-lapsed imagery (e.g., imagery 116) may be collected on a regular/recurring basis. For example, autonomous mobile robot process 10 may acquire 208 an image from each of the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) within the defined space (e.g., defined space 102) at regular intervals (e.g., every day, every week, every month, every quarter) over an extended period of time (e.g., the life of a construction project).
The plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) may include one or more of: at least one human defined location; and at least one machine defined location. For example, one or more administrators/operators (e.g., one or more of users 36, 38, 40, 42) of autonomous mobile robot process 10 may define the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) using GPS coordinates to which autonomous mobile robot (AMR) 100 may navigate. Additionally/alternatively, autonomous mobile robot process 10 and/or autonomous mobile robot (AMR) 100 may define the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) along (in this example) predefined navigation path 112, wherein the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) are defined to, e.g., be spaced every 50 feet to provide overlapping visual coverage or located based upon some selection criteria (e.g., larger spaces, smaller spaces, more complex spaces as defined within a building plan, more utilized spaces as defined within a building plan).
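Purely as a sketch of the machine-defined-location case described above, the following Python snippet spaces candidate locations at a fixed interval (e.g., every 50 feet) along a polyline approximating a predefined navigation path. The function and variable names are hypothetical, and the spacing rule is only one of the possible selection criteria mentioned above.

```python
import math

def waypoints_along_path(path, spacing_ft=50.0):
    """Return machine-defined locations every `spacing_ft` feet along a polyline path."""
    points, since_last = [path[0]], 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        dist = spacing_ft - since_last
        while dist <= seg_len:
            t = dist / seg_len
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            dist += spacing_ft
        since_last = (since_last + seg_len) % spacing_ft
    return points

if __name__ == "__main__":
    nav_path = [(0, 0), (120, 0), (120, 80)]     # e.g., predefined navigation path 112, in feet
    print(waypoints_along_path(nav_path, 50.0))  # defined locations roughly every 50 ft
```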
As is known in the art, GPS (i.e., Global Positioning System) is a satellite-based navigation system that allows users to determine their precise location on Earth. GPS uses a network of satellites, ground-based control stations, and receivers to provide accurate positioning, navigation, and timing information.
Generally speaking, GPS satellites are positioned in orbit around the Earth. The GPS constellation typically consists of 24 operational satellites, arranged in six orbital planes, with four satellites in each plane. These satellites are constantly transmitting signals that carry information about their location and the time the signal was transmitted. GPS receivers are devices that users carry or that are installed in vehicles, smartphones, or other devices, wherein these GPS receivers receive signals from multiple GPS satellites overhead. Once the GPS receiver receives signals from at least four GPS satellites, the GPS receiver uses a process called trilateration to determine the user's precise location. Trilateration involves measuring the time it takes for the signals to travel from the satellites to the receiver and using that information to calculate the distance between the receiver and each satellite. Using the distances calculated through trilateration, the GPS receiver may determine the user's precise location by finding the point where the circles (or spheres in three-dimensional space) representing the distances from each satellite intersect. This point represents the user's position on Earth. Once the user's position is determined, GPS may be used for navigation by calculating the user's direction, speed, and time to reach a desired destination based on their position and movement.
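As a simplified, two-dimensional worked example of the trilateration described above (GPS itself solves the three-dimensional problem using at least four satellites and also accounts for receiver clock error), the following Python sketch recovers a receiver position from three known beacon positions and measured distances by subtracting the circle equations and solving the resulting linear system. The names and values are illustrative only.

```python
import math

def trilaterate_2d(beacons, distances):
    """Solve for (x, y) given three known (xi, yi) beacon positions and distances di."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    # Subtracting circle equation i from circle equation 1 gives a linear equation a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1                     # solve the 2x2 system by Cramer's rule
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

if __name__ == "__main__":
    beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
    receiver = (30.0, 40.0)
    dists = [math.dist(receiver, b) for b in beacons]
    print(trilaterate_2d(beacons, dists))       # ~ (30.0, 40.0)
```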
Autonomous mobile robot process 10 may store 210 the time-lapsed imagery (e.g., imagery 116) within a user-accessible location (e.g., image repository 54). Examples of image repository 54 may include any data storage structure that enables the storage/access/distribution of the time-lapsed imagery (e.g., imagery 116) for one or more users (e.g., one or more of users 36, 38, 40, 42) of autonomous mobile robot process 10.
When storing 210 the time-lapsed imagery (e.g., imagery 116) within a user-accessible location (e.g., image repository 54), autonomous mobile robot process 10 may wirelessly upload time-lapsed imagery (e.g., imagery 116) to the user-accessible location (e.g., image repository 54) via e.g., a wireless communication channel (e.g., wireless communication channel 134) established between autonomous mobile robot (AMR) 100 and docking station 136, wherein docking station 136 may be coupled to network 138 to enable communication with the user-accessible location (e.g., image repository 54). Additionally/alternatively, autonomous mobile robot (AMR) 100 may upload time-lapsed imagery (e.g., imagery 116) to the user-accessible location (e.g., image repository 54) via a wired connection between autonomous mobile robot (AMR) 100 and docking station 136 that is established when autonomous mobile robot (AMR) 100 is e.g., docked for charging purposes.
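A minimal sketch of the two upload paths described above follows; it merely selects between a wired transfer (when docked) and a wireless upload, and the function and parameter names are assumptions rather than part of autonomous mobile robot process 10.

```python
def upload_imagery(images, docked: bool, wireless_available: bool) -> str:
    """Return which transfer path would be used for the captured images."""
    if docked:
        return f"wired transfer of {len(images)} image(s) via docking station"
    if wireless_available:
        return f"wireless upload of {len(images)} image(s) to the image repository"
    return "buffered locally until a connection to the docking station is available"

if __name__ == "__main__":
    print(upload_imagery(["img_0001.jpg"], docked=False, wireless_available=True))
    print(upload_imagery(["img_0001.jpg", "img_0002.jpg"], docked=True, wireless_available=False))
```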
Autonomous mobile robot process 10 may organize 212 the time-lapsed imagery (e.g., imagery 116) within a user-accessible location (e.g., image repository 54) based, at least in part, upon the defined location and acquisition time of the images within the time-lapsed imagery (e.g., imagery 116). Accordingly:
Referring also to
For example, assume that autonomous mobile robot process 10 gathers one image per week (for a year) for each of the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130), wherein these images are stored 210 on image repository 54. Accordingly, autonomous mobile robot process 10 may render user interface 140 that allows the user (e.g., one or more of users 36, 38, 40, 42) to select a specific location (from the plurality of locations 118, 120, 122, 124, 126, 128, 130) via, e.g., drop-down menu 142. Assume for this example that the user (e.g., one or more of users 36, 38, 40, 42) selects “Elevator Lobby, East Wing, Building 14”. Accordingly, autonomous mobile robot process 10 may retrieve from image repository 54 the images included within the time-lapsed imagery (e.g., imagery 116) that are associated with the location “Elevator Lobby, East Wing, Building 14”.
As autonomous mobile robot process 10 gathered one image per week (for a year) for each of the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130), autonomous mobile robot process 10 may retrieve fifty-two images from the time-lapsed imagery (e.g., imagery 116) that are associated with the location “Elevator Lobby, East Wing, Building 14”. These fifty-two images may be presented to the user (e.g., one or more of users 36, 38, 40, 42) in a time-sequenced fashion that allows 216 the user (e.g., one or more of users 36, 38, 40, 42) to review the time-lapsed imagery (e.g., imagery 116) for a specific defined location over the extended period of time. For example, the user (e.g., one or more of users 36, 38, 40, 42) may select forward button 144 to view the next image (e.g., image 146) in the temporal sequence of the images associated with the location “Elevator Lobby, East Wing, Building 14” and/or select backwards button 148 to view the previous image (e.g., image 150) in the temporal sequence of the images associated with the location “Elevator Lobby, East Wing, Building 14”.
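For illustration, the following Python sketch organizes imagery by defined location and acquisition time and steps through the time sequence for one location, in the spirit of the “Elevator Lobby” example above. The class, field, and file names are hypothetical and do not describe the internal structure of image repository 54.

```python
from collections import defaultdict
from datetime import date, timedelta

class ImageRepository:
    def __init__(self):
        self._by_location = defaultdict(list)   # location -> [(acquired_on, image_ref)]

    def store(self, location, acquired_on, image_ref):
        self._by_location[location].append((acquired_on, image_ref))
        self._by_location[location].sort()      # keep each location time-sequenced

    def sequence(self, location):
        return [ref for _, ref in self._by_location[location]]

if __name__ == "__main__":
    repo = ImageRepository()
    loc = "Elevator Lobby, East Wing, Building 14"
    for week in range(52):                      # one image per week for a year
        repo.store(loc, date(2023, 1, 2) + timedelta(weeks=week), f"wk{week:02d}.jpg")
    images = repo.sequence(loc)
    cursor = 0
    cursor = min(cursor + 1, len(images) - 1)   # forward button 144
    cursor = max(cursor - 1, 0)                 # backwards button 148
    print(len(images), images[cursor])
```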
Accordingly and through the use of autonomous mobile robot process 10, the user (e.g., one or more of users 36, 38, 40, 42) may visually “go back in time” and e.g., remove drywall, remove plumbing systems, remove electrical systems, etc. to see areas that are no longer visible in a completed construction project, thus allowing, e.g., the locating of a hidden standpipe, the locating of a hidden piece of ductwork, etc.
Referring also to
As discussed above, autonomous mobile robot process 10 may navigate 300 an autonomous mobile robot (AMR) 100 within a defined space (e.g., defined space 102), an example of which may include but is not limited to a construction site. As also discussed above, when navigating 300 an autonomous mobile robot (AMR) 100 within a defined space (e.g., defined space 102), autonomous mobile robot process 10 may do so via a predefined navigation path (e.g., predefined navigation path 112), via GPS coordinates, and/or via a machine vision system.
As discussed above, autonomous mobile robot process 10 may acquire 308 imagery (e.g., imagery 116) at one or more defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) within the defined space (e.g., defined space 102). Examples of such imagery may include but are not limited to flat images, 360° images, and videos.
As discussed above, the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) may include at least one human defined location and/or at least one machine defined location. As also discussed above, autonomous mobile robot process 10 may store the imagery (e.g., imagery 116) within image repository 54.
Autonomous mobile robot process 10 may process 310 the imagery (e.g., imagery 116) using an ML model (e.g., ML model 56) to define a completion percentage (e.g., completion percentage 58) for the one or more defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) within the defined space (e.g., defined space 102).
As is known in the art, ML models may be utilized to process images (e.g., imagery 116). Specifically, ML models (e.g., ML model 56) may process training data (e.g., visual training data 60) so that the ML model (e.g., ML model 56) may be used to process the imagery (e.g., imagery 116) stored within image repository 54. Specifically and with respect to training the ML model (e.g., ML model 56), several processes may be performed as follows:
Specifically and with respect to the training of ML model 56, autonomous mobile robot process 10 may train 312 the ML model (e.g., ML model 56) using visual training data (e.g., visual training data 60) that identifies construction projects or portions thereof in various levels of completion so that the ML model (e.g., ML model 56) may associate various completion percentages (e.g., completion percentage 58) with visual imagery. For example, assume that visual training data 60 includes 110,000 discrete images, wherein:
Accordingly and when training 312 the ML model (e.g., ML model 56) using visual training data (e.g., visual training data 60) that identifies construction projects or portions thereof in various percentages of completion, autonomous mobile robot process 10 may: have the ML model (e.g., ML model 56) make an initial estimate concerning the completion percentage of a specific visual image within the visual training data (e.g., visual training data 60); and provide the specific visual image and the initial estimate to a human trainer for confirmation and/or adjustment.
For example, if ML model 56 applies a completion percentage of 60% to a discrete image (i.e., the initial estimate), autonomous mobile robot process 10 may provide 316 this specific visual image and the initial estimate (60%) to a human trainer (e.g., one or more of users 36, 38, 40, 42) for confirmation and/or adjustment (e.g., confirming 60%, lowering 60% to 50%, or raising 60% to 70%).
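The human-in-the-loop step described above might be sketched as follows, where a stand-in model proposes an initial completion percentage and a trainer either confirms it (by returning nothing) or supplies a corrected value. The function names and the fixed 60% estimate are illustrative assumptions, not ML model 56 itself.

```python
from typing import Dict, Iterable, Optional

def propose_completion(image_id: str) -> int:
    """Stand-in for the ML model's initial estimate (e.g., 60%)."""
    return 60

def confirm_or_adjust(initial_estimate: int, trainer_response: Optional[int]) -> int:
    """The trainer returns None to confirm the estimate, or a corrected percentage."""
    return initial_estimate if trainer_response is None else trainer_response

def build_training_labels(image_ids: Iterable[str],
                          trainer_responses: Dict[str, int]) -> Dict[str, int]:
    labels = {}
    for image_id in image_ids:
        estimate = propose_completion(image_id)
        labels[image_id] = confirm_or_adjust(estimate, trainer_responses.get(image_id))
    return labels

if __name__ == "__main__":
    # The trainer confirms the first image and lowers the second from 60% to 50%.
    print(build_training_labels(["img_a.jpg", "img_b.jpg"], {"img_b.jpg": 50}))
```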
As discussed above, autonomous mobile robot process 10 may process 310 the imagery (e.g., imagery 116) using the (now trained) ML model (e.g., ML model 56) to define a completion percentage (e.g., completion percentage 58) for the one or more defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) within the defined space (e.g., defined space 102).
When processing 310 the imagery (e.g., imagery 116) using an ML model (e.g., ML model 56) to define a completion percentage (e.g., completion percentage 58) for the one or more defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) within the defined space (e.g., defined space 102), autonomous mobile robot process 10 may: compare the imagery (e.g., imagery 116) to the visual training data (e.g., visual training data 60) to define the completion percentage for the one or more defined locations within the defined space; and/or compare the imagery (e.g., imagery 116) to user-defined completion content (e.g., defined completion content 64) to define the completion percentage for the one or more defined locations within the defined space.
An example of defined completion content 64 may include but is not limited to CAD drawings (e.g., internal/external elevations) that show the construction project at various stages of completion (e.g., 0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100%). Defined completion content 64 may then be processed by autonomous mobile robot process 10/ML model 56 in a fashion similar to the manner in which visual training data 60 was processed so that ML model 56 may “learn” what these various stages of completion look like.
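Purely as a toy illustration of comparing acquired imagery against reference completion content at known stages, the following Python sketch matches a feature vector to the nearest reference stage. In practice the comparison would operate on learned image representations; the toy feature vectors and names here are assumptions, not the behavior of ML model 56.

```python
import math

def nearest_stage(image_features, stage_references):
    """Return the completion percentage whose reference features are closest."""
    def distance(stage):
        ref = stage_references[stage]
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(image_features, ref)))
    return min(stage_references, key=distance)

if __name__ == "__main__":
    # Hypothetical 3-value feature vectors for reference stages 0%, 10%, ..., 100%
    # (e.g., derived from the CAD elevations in defined completion content 64).
    references = {pct: [pct / 100.0, (pct / 100.0) ** 2, 1.0] for pct in range(0, 101, 10)}
    observed = [0.62, 0.36, 1.0]                 # features extracted from acquired imagery
    print(nearest_stage(observed, references))   # -> 60
```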
Autonomous mobile robot process 10 may report 316 the completion percentage (e.g., completion percentage 58) of the one or more defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) within the defined space (e.g., defined space 102) to a user (e.g., one or more of users 36, 38, 40, 42).
Referring also to
As discussed above, autonomous mobile robot process 10 may navigate 400 an autonomous mobile robot (AMR) 100 within a defined space (e.g., defined space 102), an example of which may include but is not limited to a construction site. As also discussed above, when navigating 400 an autonomous mobile robot (AMR) 100 within a defined space (e.g., defined space 102), autonomous mobile robot process 10 may:
When navigating 400 an autonomous mobile robot (AMR) 100 within a defined space (e.g., defined space 102), autonomous mobile robot process 10 may:
As discussed above, the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) may include at least one human defined location and/or at least one machine defined location.
As autonomous mobile robot (AMR) 100 patrols defined space 102 and/or visits the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) within defined space 102, autonomous mobile robot process 10 may acquire 412 sensory information (e.g., sensory information 152) proximate the autonomous mobile robot (AMR) 100, wherein autonomous mobile robot process 10 may process 414 the sensory information (e.g., sensory information 152) to determine if an unsafe condition is occurring proximate the autonomous mobile robot (AMR) 100.
Examples of such unsafe conditions occurring proximate the autonomous mobile robot (AMR) 100 may include but are not limited to:
Autonomous mobile robot process 10 may effectuate 416 a response if an unsafe condition is occurring proximate the autonomous mobile robot (AMR) 100.
For example and when effectuating 416 a response if an unsafe condition is occurring proximate the autonomous mobile robot (AMR), autonomous mobile robot process 10 may effectuate 418 an audible response if an unsafe condition is occurring proximate autonomous mobile robot (AMR) 100. For example, autonomous mobile robot process 10 may sound a siren (not shown) included within autonomous mobile robot (AMR) 100 and/or play/synthesize an evacuation order.
Further and when effectuating 416 a response if an unsafe condition is occurring proximate autonomous mobile robot (AMR) 100, autonomous mobile robot process 10 may effectuate 420 a visual response if an unsafe condition is occurring proximate the autonomous mobile robot (AMR) 100. For example, autonomous mobile robot process 10 may flash a strobe (not shown) or warning light (not shown) included on autonomous mobile robot (AMR) 100.
Additionally and when effectuating 416 a response if an unsafe condition is occurring proximate autonomous mobile robot (AMR) 100, autonomous mobile robot process 10 may effectuate 422 a reporting response if an unsafe condition is occurring proximate the autonomous mobile robot (AMR) 100.
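For illustration only, the following Python sketch maps detected unsafe-condition categories (e.g., life-threatening conditions, safety violations, and property issues, as discussed below) to combinations of the audible, visual, and reporting responses described above. The specific notification targets are hypothetical examples, not requirements of autonomous mobile robot process 10.

```python
def respond_to_condition(condition: str) -> list:
    """Map a detected unsafe condition to a list of responses to effectuate."""
    responses = []
    if condition in ("fire", "flood", "explosion hazard"):
        responses += ["sound siren / synthesize evacuation order",    # audible response
                      "flash warning strobe",                         # visual response
                      "notify emergency services and site manager"]   # reporting response
    elif condition == "safety violation":
        responses += ["notify site safety officer with location and imagery"]
    elif condition in ("theft", "burglary", "vandalism"):
        responses += ["flash warning strobe",
                      "notify security personnel with location and imagery"]
    return responses

if __name__ == "__main__":
    print(respond_to_condition("fire"))
    print(respond_to_condition("safety violation"))
```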
When effectuating 422 a reporting response if an unsafe condition is occurring proximate autonomous mobile robot (AMR) 100, autonomous mobile robot process 10 may:
For example and in response to an unsafe condition that can be life threatening (e.g., fire/flood/explosion hazard), autonomous mobile robot process 10 may:
Further and in response to an unsafe condition concerning a safety violation, autonomous mobile robot process 10 may:
Further and in response to an unsafe condition concerning a property issue (e.g., theft/burglary/vandalism), autonomous mobile robot process 10 may:
Referring also to
As discussed above, autonomous mobile robot process 10 may navigate 500 an autonomous mobile robot (AMR) 100 within a defined space (e.g., defined space 102), an example of which may include but is not limited to a construction site. As also discussed above, when navigating 500 an autonomous mobile robot (AMR) 100 within a defined space (e.g., defined space 102), autonomous mobile robot process 10 may:
As also discussed above, when navigating 500 an autonomous mobile robot (AMR) 100 within a defined space (e.g., defined space 102), autonomous mobile robot process 10 may:
As discussed above, the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) may include at least one human defined location and/or at least one machine defined location.
As autonomous mobile robot (AMR) 100 patrols defined space 102 and/or visits the plurality of defined locations (e.g., locations 118, 120, 122, 124, 126, 128, 130) within defined space 102, autonomous mobile robot process 10 may acquire 512 housekeeping information (e.g., housekeeping information 156) proximate autonomous mobile robot (AMR) 100 and may process 514 the housekeeping information (e.g., housekeeping information 156) to determine if remedial action is needed proximate autonomous mobile robot (AMR) 100.
Examples of such remedial action needed may include but are not limited to one or more of:
Autonomous mobile robot process 10 may effectuate 516 a response if remedial action is needed proximate autonomous mobile robot (AMR) 100.
For example and when effectuating 516 a response if remedial action is needed proximate autonomous mobile robot (AMR) 100, autonomous mobile robot process 10 may effectuate 518 an audible response if remedial action is needed proximate autonomous mobile robot (AMR) 100. For example, autonomous mobile robot process 10 may sound a siren (not shown) included within autonomous mobile robot (AMR) 100 and/or play/synthesize a warning signal.
Further and when effectuating 516 a response if remedial action is needed proximate autonomous mobile robot (AMR) 100, autonomous mobile robot process 10 may:
For example and in response to remedial action being needed concerning a cleaning issue (e.g., litter on the floor/ground, a water spill, a stain on a wall) proximate autonomous mobile robot (AMR) 100, autonomous mobile robot process 10 may:
For example and in response to remedial action being needed concerning a storage/retrieval issue (e.g., tools/specialty equipment that needs to be put away) proximate autonomous mobile robot (AMR) 100, autonomous mobile robot process 10 may:
Further and when effectuating 516 a response if remedial action is needed proximate autonomous mobile robot (AMR) 100, autonomous mobile robot process 10 may effectuate 520 a physical response if remedial action is needed proximate autonomous mobile robot (AMR) 100. For example, autonomous mobile robot (AMR) 100 may be equipped with specific functionality (e.g., a vacuum system 158) to enable autonomous mobile robot (AMR) 100 to respond to minor housekeeping issues, such as vacuuming up minor debris (e.g., sawdust, metal filings, etc.).
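A minimal sketch of choosing between a physical response (e.g., vacuuming minor debris) and a reporting response for larger housekeeping issues follows. The issue categories track the examples above, while the particular strings and the notion of a "minor" flag are illustrative assumptions rather than part of autonomous mobile robot process 10.

```python
def remedial_response(issue: str, minor: bool) -> str:
    """Select a remedial response for a detected housekeeping issue."""
    if issue == "debris" and minor:
        return "physical response: vacuum up debris (e.g., sawdust, metal filings)"
    if issue in ("debris", "spill", "stain"):
        return "reporting response: notify cleaning crew with location and imagery"
    if issue == "equipment left out":
        return "reporting response: notify crew responsible for storage/retrieval"
    return "no remedial action needed"

if __name__ == "__main__":
    print(remedial_response("debris", minor=True))
    print(remedial_response("spill", minor=False))
    print(remedial_response("equipment left out", minor=False))
```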
As will be appreciated by one skilled in the art, the present disclosure may be embodied as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, the present disclosure may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium.
Any suitable computer usable or computer readable medium may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission media such as those supporting the Internet or an intranet, or a magnetic storage device. The computer-usable or computer-readable medium may also be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code may be transmitted using any appropriate medium, including but not limited to the Internet, wireline, optical fiber cable, RF, etc.
Computer program code for carrying out operations of the present disclosure may be written in an object oriented programming language such as Java, Smalltalk, C++ or the like. However, the computer program code for carrying out operations of the present disclosure may also be written in conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through a local area network/a wide area network/the Internet (e.g., network 14).
These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
A number of implementations have been described. Having thus described the disclosure of the present application in detail and by reference to embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the disclosure defined in the appended claims.
This application claims the benefit of U.S. Provisional Application No. 63/328,993, filed on 8 Apr. 2022, the entire contents of which are incorporated herein by reference.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 63328993 | Apr 2022 | US |
| Child | 18298039 | | US |