Multimodal localization and mapping for a mobile automation apparatus

Information

  • Patent Grant
  • Patent Number
    10,949,798
  • Date Filed
    Monday, May 1, 2017
  • Date Issued
    Tuesday, March 16, 2021
Abstract
A device and method for multimodal localization and mapping for a mobile automation apparatus is provided. Features are extracted from images acquired by a ceiling-facing image device of the mobile automation apparatus in an environment, and stored in association with estimated positions of the mobile automation apparatus in the environment as determined from one or more sensors, as well as in association with features extracted from depth data acquired from a depth-sensing device, for example as map data. The map data is later used by the mobile automation apparatus to navigate the environment based, at least in part, on further images acquired by the ceiling-facing image device.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is related to U.S. Provisional Application No. 62/492,670 entitled “Product Status Detection System,” filed on May 1, 2017, by Perrella et al., which is incorporated herein by reference in its entirety.


BACKGROUND OF THE INVENTION

Mobile automation apparatuses are increasingly being used in retail environments to perform pre-assigned tasks such as out of stock detection, low stock detection, price label verification, plug detection (a SKU (stock keeping unit) in the wrong position) and planogram compliance (e.g. determining whether products on a “module” of shelving conform to a specified plan). Such mobile automation apparatuses are generally equipped with various sensors to perform such tasks, as well as to autonomously navigate paths within the retail environment. Such navigation can include both mapping the environment and performing localization to determine a position of a mobile automation apparatus in the environment. Near-ground planar lidar is often used to assist with such navigation, including both mapping and localization; however, near-ground planar lidar can occasionally be highly incorrect, for example when the environment changes due to structural changes at the near-ground level (e.g. obstacles and/or objects on the ground change position, and/or are added to or removed from the environment).


Accordingly, there is a need for a device and method for multimodal localization and mapping for a mobile automation apparatus.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and explain various principles and advantages of those embodiments.



FIG. 1 is a block diagram of a mobile automation system in accordance with some embodiments.



FIG. 2 is a side view of a mobile automation apparatus in accordance with some embodiments.



FIG. 3 is a block diagram of the mobile automation apparatus of FIG. 2 in accordance with some embodiments.



FIG. 4 is a block diagram of a control application in accordance with some embodiments.



FIG. 5 is a block diagram of a control application in accordance with some embodiments.



FIG. 6 is a flowchart of a method of mapping an environment using a ceiling-facing image device in accordance with some embodiments.



FIG. 7 is a flowchart of a method of generating a map of an environment using images of a ceiling of the environment in accordance with some embodiments.



FIG. 8 is a top view of the mobile automation apparatus of FIG. 2 in an environment in accordance with some embodiments.



FIG. 9 depicts example image data, depth data and proprioceptive data acquired by the mobile automation apparatus at different positions along a path in the environment of FIG. 8 in accordance with some embodiments.



FIG. 10 depicts example map data generated from the data of FIG. 9 in accordance with some embodiments.



FIG. 11 depicts a closed loop path of the mobile automation apparatus in accordance with some embodiments.





Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.


The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.


DETAILED DESCRIPTION OF THE INVENTION

An aspect of the specification provides a mobile automation apparatus for navigating an environment having a ceiling comprising: a movement module for moving the mobile automation apparatus in the environment; an image device configured to acquire images in a ceiling-facing direction from the mobile automation apparatus; a proprioceptive sensor configured to monitor movement of the mobile automation apparatus; and a controller configured to: associate the images received from the image device with proprioceptive data received from the proprioceptive sensor.


In some implementations, the controller is further configured to one or more of: store, to a memory, the images with associated proprioceptive data; and transmit, using a communication interface, the images with associated proprioceptive data to a mapping device.


In some implementations, the image device comprises one or more of a monocular camera, a stereo camera, a color camera, a monochrome camera, a depth camera, a structured light camera, a time-of-flight camera, and a LIDAR device.


In some implementations, the proprioceptive data acquired by the proprioceptive sensor is indicative of a distance traveled by the mobile automation apparatus as the movement module moves the mobile automation apparatus through the environment.


In some implementations, the proprioceptive data is further indicative of changes in direction of the mobile automation apparatus as the movement module moves the mobile automation apparatus through the environment.


In some implementations, the proprioceptive sensor comprises one or more of an odometry device, a wheel odometry device, an accelerometer, a gyroscope and an inertial measurement unit.


In some implementations, the proprioceptive data and the images are one or more of: time-stamped; acquired periodically; and acquired as a function of time.


In some implementations, the mobile automation apparatus further comprises a depth-sensing device configured to acquire depth data in one or more directions from the mobile automation apparatus, the controller further configured to: associate depth data received from the depth-sensing device with the images and the proprioceptive data. In some of these implementations, the controller is further configured to one or more of: store, to the memory, the images with the associated proprioceptive data and associated depth data; and transmit, using the communication interface, the images with the associated proprioceptive data and the associated depth data. In some of these implementations, the depth-sensing device comprises a LIDAR device.


In some implementations, the controller is further configured to one or more of navigate the environment and determine an adjusted position in the environment by comparing further features extracted from further images acquired by the image device with the extracted features from the images that are associated with positions in the environment.


A further aspect of the specification provides a method comprising: at a mobile automation apparatus comprising: a movement module for moving the mobile automation apparatus in an environment; an image device configured to acquire images in a ceiling-facing direction from the mobile automation apparatus; a proprioceptive sensor configured to monitor movement of the mobile automation apparatus; and a controller, associating, using the controller, the images received from the image device with proprioceptive data received from the proprioceptive sensor. In some implementations, the method further comprises one or more of: storing, to a memory, the images with associated proprioceptive data; and transmitting, using a communication interface, the images with associated proprioceptive data to a mapping device.


Yet a further aspect of the specification provides a non-transitory computer-readable medium storing a computer program, wherein execution of the computer program is for: at a mobile automation apparatus comprising: a movement module for moving the mobile automation apparatus in an environment; an image device configured to acquire images in a ceiling-facing direction from the mobile automation apparatus; a proprioceptive sensor configured to monitor movement of the mobile automation apparatus; and a controller, associating, using the controller, the images received from the image device with proprioceptive data received from the proprioceptive sensor. In some implementations, execution of the computer program is further for one or more of: storing, to a memory, the images with associated proprioceptive data; and transmitting, using a communication interface, the images with associated proprioceptive data to a mapping device.


Yet a further aspect of the specification provides a device comprising: a communication interface; a memory; and a mapping controller configured to: receive, from one or more of the memory and the communication interface, images with associated proprioceptive data, the images representing a ceiling of an environment acquired by a ceiling-facing imaging device at a mobile automation apparatus and the associated proprioceptive data indicative of movement of the mobile automation apparatus through the environment; extract features from the images; determine, from the associated proprioceptive data, a respective position of a mobile automation apparatus in the environment where each of the images was acquired; and, store, in the memory, extracted features from the images with associated positions.


In some implementations, the mapping controller is further configured to: receive, from one or more of the memory and the communication interface, the images with the associated proprioceptive data and associated depth data indicative of objects located in one or more directions from the mobile automation apparatus; and store, in the memory, the extracted features from the images with the associated positions and further features extracted from the associated depth data. In some of these implementations, the associated positions are stored in association with the extracted features and the further features as map data, and the mapping controller is further configured to refine the map data based on one or more of further images, further depth data, and further proprioceptive data received from the mobile automation apparatus.


Yet a further aspect of the specification provides a method comprising: at a device comprising: a communication interface; a memory; and a mapping controller, receiving, from one or more of the memory and the communication interface, images with associated proprioceptive data, the images representing a ceiling of an environment acquired by a ceiling-facing imaging device at a mobile automation apparatus and the associated proprioceptive data indicative of movement of the mobile automation apparatus through the environment; extracting, using the mapping controller, features from the images; determining, using the mapping controller, from the associated proprioceptive data, a respective position of the mobile automation apparatus in the environment where each of the images was acquired; and, storing, using the mapping controller, in the memory, extracted features from the images with associated positions.


In some implementations, the method further comprises: receiving, from one or more of the memory and the communication interface, the images with the associated proprioceptive data and associated depth data indicative of objects located in one or more directions from the mobile automation apparatus; and storing, in the memory, the extracted features from the images with the associated positions and further features extracted from the associated depth data. In some of these implementations, the associated positions are stored in association with the extracted features and the further features as map data, and the method further comprises refining the map data based on one or more of further images, further depth data, and further proprioceptive data received from the mobile automation apparatus.


Yet a further aspect of the specification provides a non-transitory computer-readable medium storing a computer program, wherein execution of the computer program is for: at a device comprising: a communication interface; a memory; and a mapping controller, receiving, from one or more of the memory and the communication interface, images with associated proprioceptive data, the images representing a ceiling of an environment acquired by a ceiling-facing imaging device at a mobile automation apparatus and the associated proprioceptive data indicative of movement of the mobile automation apparatus through the environment; extracting, using the mapping controller, features from the images; determining, using the mapping controller, from the associated proprioceptive data, a respective position of the mobile automation apparatus in the environment where each of the images was acquired; and, storing, using the mapping controller, in the memory, extracted features from the images with associated positions.


In some of these implementations, execution of the computer program is further for: receiving, from one or more of the memory and the communication interface, the images with the associated proprioceptive data and associated depth data indicative of objects located in one or more directions from the mobile automation apparatus; and storing, in the memory, the extracted features from the images with the associated positions and further features extracted from the associated depth data. In some of these implementations, the associated positions are stored in association with the extracted features and the further features as map data, and execution of the computer program is further for refining the map data based on one or more of further images, further depth data, and further proprioceptive data received from the mobile automation apparatus.



FIG. 1 depicts a mobile automation system 100 in accordance with the teachings of this disclosure. The system 100 includes a server 101 in communication with at least one mobile automation apparatus 103 (also referred to herein simply as the apparatus 103) and at least one mobile device 105 via communication links 107, illustrated in the present example as including wireless links. The system 100 is deployed, in the illustrated example, in a retail environment including a plurality of modules 110 of shelves each supporting a plurality of products 112. More specifically, the apparatus 103 is deployed within the retail environment, and controlled by the server 101 (via the link 107) to travel a path within the retail environment that may include passing the length of at least a portion of the modules 110. The apparatus 103 is equipped with a plurality of data capture sensors, such as image sensors (e.g. one or more digital cameras) and depth sensors (e.g. one or more lidar sensors), and is further configured to employ one or more of the sensors to capture shelf data. In the present example, the apparatus 103 is configured to capture a series of digital images of the modules 110, as well as a series of depth measurements, each describing the distance and direction between the apparatus 103 and a point on a module 110 or a product on the module 110.


The server 101 includes a special purpose controller 120 specifically designed to generate map data from data received from the mobile automation apparatus 103. The controller 120 is interconnected with a non-transitory computer readable storage medium, such as a memory 122. The memory 122 includes any suitable combination of volatile (e.g. Random Access Memory or RAM) and non-volatile memory (e.g. read only memory or ROM, Electrically Erasable Programmable Read Only Memory or EEPROM, flash memory). In general, the controller 120 and the memory 122 each comprise one or more integrated circuits. The controller 120, for example, includes any suitable combination of central processing units (CPUs) and graphics processing units (GPUs) and/or any suitable combination of field-programmable gate arrays (FPGAs) and/or application-specific integrated circuits (ASICs) configured to implement the specific functionality of the server 101. In an embodiment, the controller 120 further includes one or more central processing units (CPUs) and/or graphics processing units (GPUs). In an embodiment, a specially designed integrated circuit, such as a Field Programmable Gate Array (FPGA), is designed to perform specific functionality of the server 101, either alternatively or in addition to the controller 120 and the memory 122.


The server 101 also includes a communications interface 124 interconnected with the controller 120. The communications interface 124 includes any suitable hardware (e.g. transmitters, receivers, network interface controllers and the like) allowing the server 101 to communicate with other computing devices—particularly the apparatus 103 and the mobile device 105—via the links 107. The links 107 may be direct links, or links that traverse one or more networks, including both local and wide-area networks. The specific components of the communications interface 124 are selected based on the type of network or other links that the server 101 is required to communicate over. In the present example, a wireless local-area network is implemented within the retail environment via the deployment of one or more wireless access points. The links 107 therefore include both wireless links between the apparatus 103 and the mobile device 105 and the above-mentioned access points, and a wired link (e.g. an Ethernet-based link) between the server 101 and the access point.


The memory 122 stores a plurality of applications, each including a plurality of computer readable instructions executable by the controller 120. The execution of the above-mentioned instructions by the controller 120 configures the server 101 to perform various actions discussed herein. The applications stored in the memory 122 include a control application 128, which may also be implemented as a suite of logically distinct applications. In general, via execution of the control application 128 or subcomponents thereof, the controller 120 is configured to implement various functionality. The controller 120, as configured via the execution of the control application 128, is also referred to herein as the controller 120. As will now be apparent, some or all of the functionality implemented by the controller 120 described below may also be performed by preconfigured hardware elements (e.g. one or more ASICs) rather than by execution of the control application 128 by the controller 120.


In general, the controller 120 is configured to control the apparatus 103 to capture data, and to obtain the captured data via the communications interface 124 and store the captured data in a repository 132 in the memory 122. The server 101 is further configured to perform various post-processing operations on the captured data, and to detect the status of the products 112 on the modules 110. When certain status indicators are detected, the server 101 is also configured to transmit status notifications to the mobile device 105.


For example, in some embodiments, the server 101 is configured via the execution of the control application 128 by the controller 120, to process image and depth data captured by the apparatus 103 to identify portions of the captured data depicting a back of a module 110, and to detect gaps between the products 112 based on those identified portions. Furthermore, the server 101 is configured to provide a map and/or paths and/or path segments and/or navigation data and/or navigation instructions to the apparatus 103 to enable the apparatus 103 to navigate the path which may pass the modules 110 within a retail environment.


Attention is next directed to FIG. 2 and FIG. 3 which respectively depict: a schematic side perspective view of the apparatus 103, showing particular sensors of the apparatus 103; and a schematic block diagram of the apparatus 103.


With reference to FIG. 2, the apparatus 103 is being operated adjacent the modules 110. The apparatus 103 generally comprises: a base 210 configured to move on wheels 212 (e.g. on a floor 213 of the environment), and the like; and a mast 214 and/or support structure extending in an upward direction (e.g. vertically) from the base 210. However, in other embodiments, the base 210 moves on devices other than wheels, for example casters, tracks and the like. Furthermore, a ceiling 220 of the environment is depicted, the ceiling 220 comprising, for example, ceiling tiles 221, and the like, other than in a region 222, and a fire detector 223. While the ceiling 220 is depicted only with the ceiling tiles 221 and the fire detector 223, the depicted ceiling 220 is an example only, and it is appreciated that the ceiling 220 comprises any items and/or objects that are generally included on ceilings including, but not limited to, lights, other fire detectors, parts of a fire extinguishing system, beams (e.g. structural support beams), skylights, step discontinuities (e.g. due to beams and other structural features), security equipment, network access points, vents, speakers, signage, status indicators, cameras, beacons, electrical equipment, cabling, labels and the like. In particular, the apparatus 103 is configured to navigate the environment without reference to external devices and/or external sensors. In this manner, the ceiling 220 of the environment is not changed to accommodate navigation and/or localization of the apparatus 103.


The base 210 and mast 214 and/or support structure may together form a housing; however, in other embodiments, other shapes and/or housings of the apparatus 103 may occur to persons skilled in the art.


The apparatus 103 further includes an image device 228 configured to acquire images in a ceiling-facing direction from the apparatus 103 and/or the mast 214 and/or support structure. For example, as depicted, the image device 228 is mounted in and/or on the mast 214 in an upward-facing and/or ceiling-facing direction. However, the image device 228 need not be on the mast 214; rather the image device 228 is located at the apparatus 103 in a position where the image device 228 is ceiling facing. Hence, the image device 228 is, in some embodiments, mounted in and/or on the base 210.


While the image device 228 is described as comprising a camera, the image device 228 comprises any device which acquires images, including, but not limited to, one or more of a monocular camera, a stereo camera, a color camera, a monochrome camera, a depth camera, a structured light camera, a time-of-flight camera, and a lidar device. In some of these embodiments, the images include depth information about the ceiling 220 and/or any object for which images are being acquired.


Regardless, image data from the image device 228 is usable to identify ceiling feature positions in the environment (e.g. using localization references based on the images, such as features extracted from the images), when navigating to a destination physical position.


As depicted, the apparatus 103 further comprises a proprioceptive sensor 229 (interchangeably referred to hereafter as the sensor 229) configured to monitor movement of the apparatus 103 and, in particular, incremental motion. As the sensor 229 is generally internal to the apparatus 103, the sensor 229 is depicted in broken lines in FIG. 2. In particular, proprioceptive data acquired by the proprioceptive sensor 229 is indicative of a distance traveled by the apparatus 103 as a movement module (described with reference to FIG. 3) of the apparatus 103 moves the apparatus 103 through the environment, for example in and around the modules 110. Indeed, the proprioceptive data is further indicative of changes in direction of the apparatus 103 as the movement module moves the apparatus 103 through the environment along a path.


The proprioceptive sensor 229 comprises one or more of an odometry device, a wheel odometry device, an accelerometer, a gyroscope and an inertial measurement unit (IMU), and the like. Indeed, the sensor 229 generally monitors internal signals and/or data of the apparatus 103. For example, when the sensor 229 comprises a wheel odometry device, and the like, the wheel odometry device monitors rotation of the wheels 212 (e.g. using a shaft encoder, and the like) to determine a distance traveled by the apparatus 103, for example since a last measurement of the rotation of the wheels 212 (e.g. as an incremental movement). In these embodiments, the proprioceptive data comprises a count of rotations (and/or portions of rotations) of a shaft of the wheels 212. Such determination of distance and/or incremental movement, occurs, in some embodiments, without reference to a global physical position determining device, such as a GPS (global positioning system device), and the like. Indeed, while in some embodiments the apparatus 103 comprises a GPS device, in an environment in which the apparatus 103 is to navigate, it can be difficult for such a GPS device to acquire a satellite signal. Hence, the sensor 229 is generally configured to determine a distance traveled without reference to any device which globally determines physical position.
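By way of illustration only, the following Python sketch shows one way such a shaft-encoder reading could be converted to an incremental distance; the encoder resolution and wheel radius shown are hypothetical values, not parameters taken from this disclosure.

    import math

    # Hypothetical calibration values; real values come from the wheel
    # odometry device and the wheels 212, not from this disclosure.
    TICKS_PER_REVOLUTION = 4096
    WHEEL_RADIUS_M = 0.10  # metres

    def incremental_distance(ticks_now: int, ticks_prev: int) -> float:
        """Convert the change in shaft-encoder ticks since the last reading
        into a distance traveled, in metres."""
        revolutions = (ticks_now - ticks_prev) / TICKS_PER_REVOLUTION
        return revolutions * 2.0 * math.pi * WHEEL_RADIUS_M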


Indeed, while the environment could be equipped with devices that assist the apparatus 103 with determining position reading (e.g. devices which assist the apparatus 103 by triangulating a location and/or position, or sensing proximity), in some deployments of the apparatus 103, for example in retail environments, it is a general requirement that the environment not be changed to accommodate the apparatus 103.


Similarly, an IMU, and the like of the sensor 229, and/or proprioceptive data indicative of a steering angle of the wheels 212, enables a determination of a change in direction of the apparatus 103.


Hence, by monitoring the proprioceptive data, a distance traveled and changes in direction are determined. Hence, a local position reading of the apparatus 103 can be generated from the proprioceptive data. In some embodiments, when a common coordinate is provided between a local coordinate system and a global coordinate system, the proprioceptive data is also used to determine a position and/or global coordinate of the apparatus 103 in the global coordinate system.


In yet further implementations, positions of the apparatus 103 are determined with respect to both a physical position of the apparatus 103 in the environment, as well as an orientation of the apparatus 103 at a given physical position, for example an angle of the apparatus 103 with respect to a coordinate system associated with the environment. Such position data that includes orientation can be referred to as pose data and/or as a pose.
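As a minimal sketch of how incremental distance and heading changes can be integrated into a local pose (position plus orientation), the following assumes a simple dead-reckoning model, not the specific method of any embodiment; the class and function names are illustrative only.

    import math
    from dataclasses import dataclass

    @dataclass
    class Pose2D:
        x: float = 0.0      # metres from the chosen origin
        y: float = 0.0      # metres from the chosen origin
        theta: float = 0.0  # heading in radians, relative to the origin frame

    def dead_reckon(pose: Pose2D, delta_distance: float, delta_heading: float) -> Pose2D:
        """Advance a local pose estimate by one odometry increment:
        delta_distance is the distance traveled since the last reading and
        delta_heading is the change in direction (e.g. from a gyroscope or IMU)."""
        theta = pose.theta + delta_heading
        return Pose2D(
            x=pose.x + delta_distance * math.cos(theta),
            y=pose.y + delta_distance * math.sin(theta),
            theta=theta,
        )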


As depicted, the apparatus 103 further comprises at least one depth-sensing device 230 configured to acquire depth data in one or more directions from the apparatus 103 and/or the base 210. For example, the at least one depth-sensing device 230 includes, but is not limited to, one or more near-ground planar lidar (Light Detection and Ranging) sensors and/or devices configured to sense depth in one or more directions, for example in one or more movement directions. Indeed, at near-ground level (e.g. between about 5 cm and about 10 cm), the depth-sensing device 230 “observes” shelf structures (e.g. of the modules 110) and/or pallet structures (e.g. pallets stacked on the floor 213) and/or display structures (e.g. displays located on the floor 213 and not on the modules 110). Hence, depth data from the depth-sensing device 230, when present, is usable to identify a position in the environment (e.g. using localization references based on the depth data, such as features extracted from the depth data), when navigating to a destination physical position.


Indeed, in some embodiments, the apparatus 103 is non-holonomic and hence configured to move in a constrained number of directions (e.g. a forwards direction, a backwards direction, steering and/or turning left, steering and/or turning right) and the one or more depth-sensing devices 230 is configured to sense depth in one or more of the directions in which the apparatus 103 is configured to move.


While not depicted, the base 210 and the mast 214 are provisioned with other various navigation sensors for navigating in the environment in which the modules 110 are located and/or various obstacle detection sensors for detecting and avoiding obstacles and/or one or more data acquisition sensors for capturing and/or acquiring data associated with products 112, labels and the like on shelves of the modules 110.


Attention is next directed to FIG. 3 which depicts a schematic block diagram of further components of the apparatus 103. In particular, the apparatus 103 includes a special purpose controller 320 specifically designed to associate images from the image device 228, and depth data from the depth-sensing device 230, with positions as described herein. The controller 320 is interconnected with the image device 228, the sensor 229, the depth-sensing device 230, and a non-transitory computer readable storage medium, such as a memory 322. The memory 322 includes any suitable combination of volatile (e.g. Random Access Memory or RAM) and non-volatile memory (e.g. read only memory or ROM, Electrically Erasable Programmable Read Only Memory or EEPROM, flash memory). In general, the controller 320 and the memory 322 each comprise one or more integrated circuits. The controller 320, for example, includes any suitable combination of central processing units (CPUs) and graphics processing units (GPUs) and/or any suitable combination of field-programmable gate arrays (FPGAs) and/or application-specific integrated circuits (ASICs) configured to implement the specific functionality of the apparatus 103. Further, as depicted, the memory 322 includes a repository 332 for storing data, for example data collected by the image device 228, the sensor 229, the depth-sensing device 230 and/or other sensors 330. In an embodiment, the controller 320 further includes one or more central processing units (CPUs) and/or graphics processing units (GPUs). In an embodiment, a specially designed integrated circuit, such as a Field Programmable Gate Array (FPGA), is designed to implement obstacle detection as discussed herein, either alternatively or in addition to the controller 320 and memory 322. As those of skill in the art will realize, the mobile automation apparatus 103 can also include one or more special purpose controllers or processors and/or FPGAs, in communication with the controller 320, specifically configured to control navigational and/or data capture aspects of the mobile automation apparatus 103.


The apparatus 103 also includes a communications interface 324 interconnected with the controller 320. The communications interface 324 includes any suitable hardware (e.g. transmitters, receivers, network interface controllers and the like) allowing the apparatus 103 to communicate with other devices—particularly the server 101 and, optionally, the mobile device 105—for example via the links 107, and the like. The specific components of the communications interface 324 are selected based on the type of network or other links over which the apparatus 103 is required to communicate, including, but not limited to, the wireless local-area network of the retail environment of the system 100.


The memory 322 stores a plurality of applications, each including a plurality of computer readable instructions executable by the controller 320. The execution of the above-mentioned instructions by the controller 320 configures the apparatus 103 to perform various actions discussed herein. The applications stored in the memory 322 include a control application 328, which may also be implemented as a suite of logically distinct applications. In general, via execution of the control application 328 or subcomponents thereof, the controller 320 is configured to implement various functionality. The controller 320, as configured via the execution of the control application 328, is also referred to herein as the controller 320. As will now be apparent, some or all of the functionality implemented by the controller 320 described below may also be performed by preconfigured hardware elements (e.g. one or more ASICs) rather than by execution of the control application 328 by the controller 320.


The apparatus 103 further includes sensors 330, including, but not limited to, any further navigation sensors and/or data acquisition sensors and/or obstacle avoidance sensors.


The apparatus 103 further includes a navigation module 340 configured to move the apparatus 103 in an environment, for example the environment of the modules 110. The navigation module 340 comprises any suitable combination of motors, electric motors, stepper motors, and the like configured to drive and/or steer the wheels 212, and the like, of the apparatus 103. In particular, the navigation module 340 is controllable using control inputs, and further error signals associated with the navigation module 340 are determinable (e.g. when slippage of one or more of the wheels 212 occurs, and the like, which is also determinable using the sensor 229).


Hence, in general, the controller 320 is configured to control the apparatus 103 to navigate the environment of the module 110 using data from the navigation sensors to control the navigation module 340 including, but not limited to, avoiding obstacles. For example, while not depicted, the memory 322 further stores a map, and the like, of the environment, which is used for navigation in conjunction with the navigation sensors. The map, and the like, is generally generated from a combination of images from the image device 228, the sensor 229 and, optionally, the depth-sensing device 230 as described hereafter.


In the present example, in particular, the apparatus 103 is configured via the execution of the control application 328 by the controller 320 to: associate images received from the image device 228 with proprioceptive data received from the proprioceptive sensor 229; and one or more of: store, to the memory 322, the images with associated proprioceptive data; and transmit, using the communication interface 324, the images with associated proprioceptive data to a mapping device which includes, but is not limited to, the server 101.


Furthermore, the apparatus 103 is generally powered by a battery 399.


While not depicted, in some embodiments, the apparatus 103 comprises a clock and/or a timing device, for example in the controller 320.


Furthermore, as will be described in further detail below, the apparatus 103 is generally configured to operate in at least two modes: a mapping mode in which the apparatus 103 is manually operated, for example using a remote-control device, such as the mobile device 105, to manually navigate the apparatus 103 and during which the apparatus 103 acquires data used to generate map data representing a map of the environment; and a localization mode in which the apparatus 103 autonomously navigates the environment using the map data acquired in the mapping mode. In some implementations, in the mapping mode, navigation also occurs autonomously according to navigation sensors and/or obstacle avoidance sensors at the apparatus 103; in other words, in these implementations, the apparatus 103 autonomously moves around the environment acquiring the data used to generate the map data.


Turning now to FIG. 4, before describing the operation of the application 328 (e.g. to associate images received from the image device 228 with proprioceptive data received from the proprioceptive sensor 229), certain components of the application 328 will be described in greater detail. As will be apparent to those skilled in the art, in other examples the components of the application 328 may be separated into distinct applications, or combined into other sets of components. Some or all of the components illustrated in FIG. 4 may also be implemented as dedicated hardware components (e.g. one or more FPGAs and/or one or more ASICs).


The control application 328 includes a data coordinator 401, the data coordinator 401 configured to associate images received from the image device 228 with proprioceptive data received from the proprioceptive sensor 229, for example during the mapping mode. The data coordinator 401 includes: an image device controller 418 configured to control the image device 228 to acquire images; a proprioceptive sensor controller 419 configured to control the sensor 229 to acquire proprioceptive data; and, optionally, a depth-sensing device controller 420 configured to control the depth-sensing device 230 to acquire depth data.


For example, each of the image device controller 418, the proprioceptive sensor controller 419, and the depth-sensing device controller 420 are configured to acquire (respectively) the images, the proprioceptive data and the depth data one or more of: periodically; and as a function of time. Alternatively, the images, the proprioceptive data and the depth data are time-stamped as they are acquired, for example using the clock and/or timing device of the controller 320. Alternatively, the images, the proprioceptive data and the depth data are acquired as a function of distance and/or incremental motion, for example for a given rotation of the wheels 212 as determined using the proprioceptive data (e.g. every given number of centimeters, and the like).


Regardless, the data coordinator 401 is configured to associate images received from the image device 228 with proprioceptive data received from the proprioceptive sensor 229 (and optionally depth data from the depth-sensing device 230) based on one or more of respective time stamps, periodicity, and the like. Indeed, in general, images acquired by the image device 228 at a given time are associated with proprioceptive data acquired by the sensor 229 at the given time; and further, when acquired, depth data acquired by the depth-sensing device 230 at the given time is associated with the images and the proprioceptive data acquired at the given time.
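A minimal sketch of one such association, assuming each sensor produces time-stamped records and that each image is simply paired with the proprioceptive (and, optionally, depth) record nearest in time; the record layout and function names are illustrative only.

    from bisect import bisect_left

    def nearest_index(stamps, t):
        """Index of the entry in the sorted list `stamps` closest to time t."""
        i = bisect_left(stamps, t)
        if i == 0:
            return 0
        if i == len(stamps):
            return len(stamps) - 1
        return i if stamps[i] - t < t - stamps[i - 1] else i - 1

    def associate(image_records, odom_records, depth_records=None):
        """Pair each (timestamp, image) with the proprioceptive record, and
        optionally the depth record, acquired closest to the same time."""
        odom_stamps = [t for t, _ in odom_records]
        depth_stamps = [t for t, _ in depth_records] if depth_records else None
        associated = []
        for t_img, image in image_records:
            record = {"timestamp": t_img, "image": image,
                      "proprioceptive": odom_records[nearest_index(odom_stamps, t_img)][1]}
            if depth_records:
                record["depth"] = depth_records[nearest_index(depth_stamps, t_img)][1]
            associated.append(record)
        return associated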


In some embodiments, the data coordinator 401 stores the associated images, proprioceptive data and depth data to the memory 322, for example at the repository 332. In other embodiments, the data coordinator 401 transmits, using the interface 324, the associated images, proprioceptive data and depth data to a mapping device, such as the server 101. The mapping device is configured to generate a map and/or map data representing, for example, a path through the environment. In yet further embodiments, the repository 332 comprises a removable memory (including, but not limited to, a removable hard-drive, a flash-drive, and the like) which is removed from the apparatus 103 and transported to the mapping device; the mapping device interfaces with the repository 332 to receive the associated images, proprioceptive data and depth data.


Regardless, the apparatus 103 and/or the mapping device receives the associated images, proprioceptive data and depth data and converts the associated images, proprioceptive data and depth data into a map and/or map data, and the like.


For example, when the map/map data generation is performed at the apparatus 103, the application 328 further comprises a mapping controller 451 which performs the conversion. An arrow is depicted from the data coordinator 401 to the mapping controller 451 indicating output and/or data from the data coordinator 401 is received at the mapping controller 451.


In particular, the mapping controller 451 comprises a feature extractor 458 configured to extract features from the images acquired by the image device 228 and, optionally, further configured to extract respective features from the depth data acquired by the depth-sensing device 230. The feature extractor 458 is hence configured to implement one or more feature extraction techniques including, but not limited to Oriented FAST and rotated BRIEF (“ORB”), computer vision feature detection, scale-invariant feature transforms, blob detection, gradient location and orientation histograms, local energy based shape histograms, and the like.
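For example, a minimal sketch of ORB extraction from a single ceiling image using OpenCV; the library choice and parameter values are illustrative assumptions, not mandated by any embodiment.

    import cv2  # OpenCV, used here only to illustrate ORB feature extraction

    def extract_ceiling_features(image_bgr, max_features=500):
        """Extract ORB keypoints and descriptors from one ceiling image.
        Returns (keypoints, descriptors); descriptors is None when no
        features are found (e.g. a featureless region of the ceiling)."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        orb = cv2.ORB_create(nfeatures=max_features)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        return keypoints, descriptors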


The mapping controller 451 further comprises a position determiner 459 configured to convert the proprioceptive data to a respective position, and the like. For example, when the proprioceptive data comprises wheel odometry data, the wheel odometry data is converted to a position reading such as an estimate of a distance from a starting point and/or an origin to a subsequent physical position. The origin position is generally predetermined and/or manually determined and/or set to a starting position as mapping begins including, but not limited to, a dock position (e.g. a position of a dock with which the apparatus 103 is configured to mate), a charging station position (e.g. a position of a charging station configured to charge the apparatus 103), and the like; the origin position may also be defined with a given error and/or uncertainty as a unary constraint on a determination of position of the apparatus 103. For example, each turn and/or partial turn of a shaft of the wheels 212 is converted to a distance traveled. Similarly, when the proprioceptive data includes accelerometer data, IMU data, and the like, changes in direction of the apparatus 103 are determined, and the position determiner 459 converts the proprioceptive data into a position reading such as two-dimensional (2D) coordinates, based on an origin, for example a starting position as mapping is initiated. However, the origin position can be set to any local position within the environment, and/or according to a global coordinate system.


Hence, features extracted by the feature extractor 458 are associated with a ceiling feature position, for example by a map generator 460 configured to generate a map of the environment. The map comprises ceiling feature positions in the environment and features associated with each of the ceiling feature positions, the features including at least ceiling features extracted from the images acquired by the image device 228 at a ceiling feature position, and, optionally, features extracted from the depth data acquired by the depth-sensing device 230 at a depth feature position. The extracted features comprise data that defines intersections, points, angles, depth, three-dimensional features, colors, and the like, in the image data and/or depth data.
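One possible in-memory representation of such map data, sketched under the assumption that each entry pairs a position (or pose) with the features observed there; the class and field names are illustrative and not taken from the claims.

    from dataclasses import dataclass, field
    from typing import Any, List, Optional

    @dataclass
    class MapEntry:
        position: Any                         # e.g. a Pose2D as sketched earlier
        ceiling_features: Any                 # e.g. ORB descriptors from a ceiling image
        depth_features: Optional[Any] = None  # features from depth data, when acquired

    @dataclass
    class MapData:
        entries: List[MapEntry] = field(default_factory=list)

        def add(self, position, ceiling_features, depth_features=None):
            self.entries.append(MapEntry(position, ceiling_features, depth_features))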


Furthermore, the map and/or map data represents paths and/or segments of paths through the environment as generally the ceiling feature positions and associated features, and depth feature positions and associated features are only at physical positions where the apparatus 103 can navigate. Indeed, the map generator 460 is, in some embodiments, configured to further segment a path through the environment into smaller segments, and the apparatus 103 later navigates through the environment on a segment-by-segment basis.
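A minimal sketch of one way a path could be segmented, assuming the path is an ordered list of (x, y) waypoints split into segments of roughly fixed length; the 2 m default is an illustrative value only, not a parameter of any embodiment.

    import math

    def segment_path(waypoints, segment_length_m=2.0):
        """Split an ordered list of (x, y) waypoints into consecutive segments
        of roughly segment_length_m, so that navigation can later proceed
        segment by segment."""
        segments, current, traveled = [], [waypoints[0]], 0.0
        for prev, point in zip(waypoints, waypoints[1:]):
            traveled += math.hypot(point[0] - prev[0], point[1] - prev[1])
            current.append(point)
            if traveled >= segment_length_m:
                segments.append(current)
                current, traveled = [point], 0.0
        if len(current) > 1:
            segments.append(current)
        return segments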


In any event, the mapping controller 451, when present at the apparatus 103, generally stores map data at the memory 322, for example in the repository 332. The map data is then used by the apparatus 103 to navigate through the environment represented by the map data.


Indeed, while not depicted, the data coordinator 401, in some embodiments, further stores control inputs and error signals for the navigation module 340 as a function of time and/or periodically. In these embodiments, the control inputs and error signals for the navigation module 340 are provided to the mapping controller 451 by the data coordinator 401, and the mapping controller 451 stores the control inputs and error signals in the map data according to an associated ceiling feature position or depth data position. For example, the control inputs and error signals are stored on a segment-by-segment basis and/or in association with nodes (e.g. where two adjacent segments meet).


Indeed, the application 328 further comprises a localizer and navigator 461 (referred to interchangeably hereafter as the localizer 461) which performs localization and/or navigation of the apparatus 103 using the map data. An arrow is depicted from the mapping controller 451 to the localizer 461 indicating that map data from the mapping controller 451 is received at the localizer 461.


While not depicted, the localizer 461, in some embodiments, receives a destination position (e.g. in the form of a coordinate), for example a destination in the environment of the modules 110 that is to be navigated to, and which is identifiable using the map data. The localizer 461 then navigates to the destination position using the control inputs and error signals in the map data, for example, as well as using further images, proprioceptive data and, optionally, depth data.


In particular, the localizer 461 comprises an image device controller 468, a proprioceptive sensor controller 469 and, optionally (e.g. when the depth-sensing device 230 is present), a depth-sensing device controller 470, each of which is similar to the image device controller 418, the proprioceptive sensor controller 419 and the depth-sensing device controller 420. Indeed, the localizer 461 generally shares resources with the data coordinator 401 such that the various components of the data coordinator 401 and the localizer 461 are common to both (e.g. the data coordinator 401 and the localizer 461 share a common image device controller 418, 468, etc.). Similarly, the localizer 461 comprises a feature extractor 478 and position determiner 479 similar to the feature extractor 458 and the position determiner 459, and hence a common feature extractor and a common position determiner are shared between the mapping controller 451 and the localizer 461.


Hence, at any given position of the apparatus 103, the localizer 461 determines a current position using at least the proprioceptive data acquired by the sensor 229, using the proprioceptive sensor controller 469. The current position is compared with the same position in the map data to determine features stored in association with the same position, presuming that the apparatus 103 has previously mapped the current position. Such a situation also occurs when a “loop closure” occurs and the apparatus 103 “loops” back around to a position previously visited.


The localizer 461 compares these features with features extracted from images acquired by the image device 228 at a physical position corresponding to the current position reading, using the image device controller 468, (and optionally compares features extracted from depth data acquired by the depth-sensing device 230 at a physical position corresponding to the current position reading, using the depth-sensing device controller 470).
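As a sketch of such a comparison, assuming ORB descriptors as in the earlier extraction example and a brute-force Hamming matcher with a ratio test; the returned match count is only a crude localization score, not the confidence measure of any particular embodiment.

    import cv2

    def match_against_map(current_descriptors, stored_descriptors, ratio=0.75):
        """Match ORB descriptors extracted from the current ceiling image
        against the descriptors stored in the map data for a candidate
        position, and return the number of matches passing the ratio test."""
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        knn_matches = matcher.knnMatch(current_descriptors, stored_descriptors, k=2)
        good = [pair[0] for pair in knn_matches
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
        return len(good)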


A determination of position occurs, in some implementations, within respective error boundaries; for example, as the apparatus 103 navigates through the environment, error on the proprioceptive data tends to increase; hence a position estimate determined according to the proprioceptive data has an error that may be defined using an ellipse of uncertainty. Similarly, respective position estimates may be determined according to the features extracted from current image data acquired by the image device 228 (e.g. by comparing the extracted features to stored extracted image features in the map data), and according to the features extracted from current depth data acquired by the depth-sensing device 230 (e.g. by comparing the extracted features to stored extracted depth data features in the map data), each with respective ellipses of uncertainty. The intersection of the various ellipses of uncertainty may be used to estimate a current position, also within some error bounds. Furthermore, techniques such as ICP (iterative closest point) may be used in comparing current estimates of position with the map data, which may include using various correction factors, as understood by persons skilled in the art.
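As a simplified illustration of combining such estimates, the following sketch fuses two position estimates by inverse-covariance weighting, where each covariance matrix corresponds to an ellipse of uncertainty; this is a standard approximation assumed for illustration, not the specific fusion of any embodiment.

    import numpy as np

    def fuse_estimates(mean_a, cov_a, mean_b, cov_b):
        """Fuse two independent 2D position estimates, each given as a mean
        and a covariance matrix. Inverse-covariance (information) weighting
        approximates taking the intersection of the two uncertainty ellipses."""
        info_a = np.linalg.inv(np.asarray(cov_a))
        info_b = np.linalg.inv(np.asarray(cov_b))
        fused_cov = np.linalg.inv(info_a + info_b)
        fused_mean = fused_cov @ (info_a @ np.asarray(mean_a) + info_b @ np.asarray(mean_b))
        return fused_mean, fused_cov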


In any event, presuming that a current estimated position is within a given error boundary, the localizer 461 generally confirms that the apparatus 103 is navigating successfully and navigation proceeds. Furthermore, in some embodiments, the control inputs and error signals of the navigation module 340 are updated when the current error signals are smaller than stored error signals and/or error signal thresholds. Such error signals generally comprise data indicative of a navigation error and the like and hence can alternatively be referred to as error data. Regardless, the error signals are storable as data at the memory 322. The error signals include, but are not limited to, one or more of: error signals from navigation sensors, and the like, indicating whether the apparatus 103 is deviating from a path; error signals from proprioceptive sensors that monitor, for example, slippage by the wheels 212; error signals from the navigation module 340 that indicate issues in implementing the control signals; error signals indicating whether the apparatus 103 is violating any navigation constraints, for example being outside an imaging range, violating an angle constraint and the like. Violating an angle constraint includes determining that the apparatus 103 is at an angle to a module 110 where high-quality images cannot be acquired (e.g. images where labels and the like on the shelves of the modules 110 cannot be imaged adequately for computer vision algorithms to extract data therefrom). Other types of error signals will occur to those skilled in the art.


In some instances, however, a current estimated position is outside a given error boundary. Such an instance may occur, for example, when the apparatus 103 is within a pathological region such as the region 222 (e.g. a pathological region can include regions where mapping and/or localization is difficult due to, for example, a lack of objects in the region). In some of these implementations, the apparatus 103 is navigated to another region where localization is more likely to occur, for example outside of the region 222.


In other implementations, however, the apparatus 103 one or more of: stops; transmits a notification to the server 101 to indicate a position error and/or the like; and/or is placed back into the mapping mode, for example to refine the map data. Indeed, in these implementations, the map data may be inaccurate and regions of the environment may be remapped using, for example, a map refiner 480 (and/or the mapping controller 451), which is configured to refine the map data by, for example, replacing data stored therein with current data, acquired as described above. Put another way, the associated ceiling feature positions of the extracted features are refined based on further images and further proprioceptive data received from the mobile automation apparatus 103 (e.g. the further images acquired close to the associated ceiling feature positions already stored in the memory 322, which may be deleted due to inaccuracy). Put yet another way, the apparatus 103 either manually or automatically reenters the mapping mode to remap at least a portion of the environment.


Indeed, such inaccuracy in the map data may occur when structural changes have occurred in the environment (e.g. new displays, pallets, modules, signs and/or displays on the ceiling, etc.), and/or when the map data is otherwise inaccurate.


While FIG. 4 was described with respect to a mapping controller 451 being present at the apparatus 103, in some embodiments, the apparatus 103 transmits the associated images, proprioceptive data and depth data to a mapping device such as the server 101 using, for example, link 107a and/or a removable repository 332. In these embodiments, the server 101 generates the map and/or the map data and transmits the map and/or the map data to the apparatus 103.


Hence, attention is directed to FIG. 5 which depicts a block diagram of an aspect of the control application 128 of the server 101, the application 128 comprising a mapping controller 551 having the same functionality as the mapping controller 451, with similar components including a feature extractor 558, a position determiner 559 and a map generator 560. The mapping controller 551 receives the associated images, proprioceptive data and depth data from the apparatus 103 using the links 107 and/or a removable repository 332, transmits the map data to the apparatus 103 and/or stores the map data in the repository 132 and/or the removable repository 332 (which is returned to the apparatus 103), and otherwise functions similar to the mapping controller 451.


Attention is now directed to FIG. 6 which depicts a flowchart representative of an example method 600 of mapping an environment using a ceiling-facing image device. The example operations of the method 600 of FIG. 6 correspond to machine readable instructions that are executed by, for example, the apparatus 103, and specifically by the controller 320 and/or the various components of the control application 328 including, but not limited to, the data coordinator 401. Indeed, the example method 600 of FIG. 6 is one way in which the apparatus 103 is configured. However, the following discussion of the example method 600 of FIG. 6 will lead to a further understanding of the apparatus 103, and its various components. However, it is to be understood that in other embodiments, the apparatus 103 and/or the method 600 are varied, and hence need not work exactly as discussed herein in conjunction with each other, and that such variations are within the scope of present embodiments.


Furthermore, the example method 600 of FIG. 6 need not be performed in the exact sequence as shown and likewise, in other embodiments, various blocks may be performed in parallel rather than in sequence. Accordingly, the elements of method 600 are referred to herein as “blocks” rather than “steps.” The example method 600 of FIG. 6 may be implemented on variations of the example apparatus 103, as well.


It is further assumed in the method 600, and indeed throughout the present specification, that the image device 228, the proprioceptive sensor 229, and the depth-sensing device 230 are calibrated.


At block 601, the controller 320 associates images received from the image device 228 with proprioceptive data received from the proprioceptive sensor 229.


At block 602, assuming the apparatus 103 further comprises the depth-sensing device 230, the controller 320 associates depth data received from the depth-sensing device 230 with the images and the proprioceptive data.


At block 603, the controller 320 stores, to the memory 322, the images with associated proprioceptive data (and associated depth data, when present).


At block 605, the controller 320 transmits, using the communication interface 324, the images with associated proprioceptive data (and associated depth data, when present) to a mapping device, for example the server 101.


One of the blocks 603, 605 is optional in some embodiments, depending on whether the apparatus 103 and/or the server 101 is configured as a mapping device and is hence implementing a respective mapping controller 451, 551.
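To make the flow of blocks 601 through 605 concrete, the following is a sketch of one capture cycle; the device objects and their read(), store and transmit callables are placeholders assumed for illustration, not interfaces defined by this specification.

    def run_capture_cycle(image_device, proprio_sensor, depth_device=None,
                          store=None, transmit=None):
        """One mapping-mode capture cycle along the lines of blocks 601-605:
        read each sensor, associate the readings in a single record, then
        store and/or transmit the record."""
        record = {
            "image": image_device.read(),
            "proprioceptive": proprio_sensor.read(),  # block 601: associate image and odometry
        }
        if depth_device is not None:
            record["depth"] = depth_device.read()     # block 602: associate depth data
        if store is not None:
            store(record)                             # block 603: store to memory
        if transmit is not None:
            transmit(record)                          # block 605: transmit to a mapping device
        return record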


While not depicted, in some embodiments, the method 600 further includes the controller


Attention is now directed to FIG. 7 which depicts a flowchart representative of an example method 700 of generating a map and/or map data of an environment using images of a ceiling of an environment. The example operations of the method 700 of FIG. 7 correspond to machine readable instructions that are executed by, for example, the apparatus 103, and specifically by the controller 320 and/or the various components of the control application 328 including, but not limited to, the mapping controller 451.


However, in other embodiments, the example operations of the method 700 of FIG. 7 also correspond to machine readable instructions that are executed by, for example, the server 101, and specifically by the controller 120 and/or the various components of the control application 128 including, but not limited to, the mapping controller 551.


Indeed, the example method 700 of FIG. 7 is one way in which the apparatus 103 and/or the server 101 is configured, and the following discussion of the example method 700 of FIG. 7 will lead to a further understanding of the apparatus 103, the server 101 and their various components. However, it is to be understood that in other embodiments, the apparatus 103 and/or the server 101 and/or the method 700 are varied, and hence need not work exactly as discussed herein in conjunction with each other, and that such variations are within the scope of present embodiments.


Furthermore, the example method 700 of FIG. 7 need not be performed in the exact sequence as shown and likewise, in other embodiments, various blocks may be performed in parallel rather than in sequence. Accordingly, the elements of method 700 are referred to herein as “blocks” rather than “steps.” The example method 700 of FIG. 7 may be implemented on variations of the example apparatus 103 and/or the example server 101, as well.


At block 701, a mapping controller receives, from one or more of a memory and a communication interface, images with associated proprioceptive data, the images representing a ceiling of an environment acquired by a ceiling-facing imaging device at a mobile automation apparatus and the associated proprioceptive data indicative of movement of the mobile automation apparatus through the environment; the mapping controller optionally further receives depth data indicative of objects located in one or more directions from the mobile automation apparatus 103.


At block 703, the mapping controller extracts features from the images (e.g. using the feature extractor 458, 558), as well as from the depth data, when received.


At block 705, the mapping controller determines, from the associated proprioceptive data, a respective ceiling feature position of the mobile automation apparatus 103 in the environment where each of the images was acquired (e.g. using the position determiner 459, 559).


At block 707, the mapping controller stores, in a memory (e.g. the memory 122, 322), extracted features from the images with associated ceiling feature positions (e.g. using the map generator 460, 560) as well as further features extracted from the depth data, when received.
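
A non-limiting Python sketch of blocks 701 to 707 follows; the callables passed in stand in for the feature extractors 458, 558 and the position determiners 459, 559, and are assumptions for illustration only.

```python
def build_map_data(records, extract_image_features, extract_depth_features,
                   proprioceptive_to_position):
    """For each associated record (block 701), extract image and depth features
    (block 703), convert the proprioceptive data to a position (block 705), and
    store them together as map data (block 707). Illustrative sketch only; the
    three callables are assumed helpers."""
    map_data = []
    for record in records:
        entry = {
            "position": proprioceptive_to_position(record["proprioceptive"]),
            "image_features": extract_image_features(record["image"]),
        }
        if record.get("depth") is not None:
            entry["depth_features"] = extract_depth_features(record["depth"])
        map_data.append(entry)
    return map_data
```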


The methods 600, 700 are next described with reference to FIG. 8, FIG. 9 and FIG. 10.


In particular, FIG. 8 depicts a top view of the apparatus 103 in the environment of the modules 110, for example in a mapping mode, with the relative physical positions of the ceiling tiles 221 and the fire detector 223 of the ceiling 220 depicted in broken lines. Further depicted are four positions, in the form of coordinates: Position 1, Position 2, Position 3, and Position 4 between two of the modules 110. The apparatus 103 is navigated between two of the modules 110 to each of the Positions 1, 2, 3, 4 in a mapping mode (e.g. using a remote-control device, such as the mobile device 105, to manually navigate the apparatus 103). Alternatively, a first time the apparatus 103 automatically navigates between Position 1, Position 2, Position 3, and Position 4, the apparatus 103 navigates using obstacle avoidance sensors and/or by maintaining a given distance from one or more of the modules 110.


Regardless of the mode of initial navigation in the mapping mode, at each of the Positions 1, 2, 3, 4, the apparatus 103 acquires an image of the ceiling 220 using the image device 228, depth data using the depth-sensing device 230, and proprioceptive data from the sensor 229. It is assumed that the apparatus 103 is travelling in a direction in which the depth-sensing device 230 is able to acquire depth data usable to define a position in the environment, including, but not limited to, a movement direction, a forward direction, a backwards direction, and the like.


While only four coordinates are discussed in this example, the number of coordinates at which images and data are acquired is configurable, and acquisition may be periodic, a function of time, and the like; indeed, in some embodiments, an image (and depth data) is acquired at given distances of travel determined from the proprioceptive data, for example, wheel odometry data and the like.
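
For example, distance-triggered acquisition could be implemented along the following lines; the 2 m interval and the metres-per-rotation value are assumptions chosen only to match the worked example below, and the class name is hypothetical.

```python
class DistanceTrigger:
    """Signal an acquisition every `interval_m` of travel, with distance
    estimated from cumulative wheel rotations. Illustrative sketch only."""
    def __init__(self, interval_m=2.0, metres_per_rotation=1.0):
        self.interval_m = interval_m
        self.metres_per_rotation = metres_per_rotation
        self._last_trigger_m = 0.0

    def should_acquire(self, total_rotations):
        travelled = total_rotations * self.metres_per_rotation
        if travelled - self._last_trigger_m >= self.interval_m:
            self._last_trigger_m = travelled
            return True
        return False
```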


Attention is next directed to FIG. 9 which depicts example associated data 900 acquired by the apparatus 103 at each of the Positions 1, 2, 3, 4 for example using the method 600. In particular, at each of the Positions 1, 2, 3, 4 an image of the ceiling 220 is acquired, for example of joins in the ceiling tiles 221 and/or other objects, such as the fire detector 223 (e.g. at the Position 3). Furthermore, the associated data 900 includes depth data representing a horizontal near-ground scan by the depth-sensing device 230 (e.g. indicating depth of the modules 110 in a planar horizontal scan). Furthermore, the associated data 900 includes proprioceptive data comprising a number of rotations of the wheels 212 from the initial Position 1 (e.g. at the Position 1, the wheels 212 have not turned, so the number of rotations is “0”). However, the proprioceptive data comprises any proprioceptive data indicative of movement. For example, rather than a total number of rotations, the proprioceptive data comprises a number of rotations from a last position reading. While not depicted, turns and/or steering and/or changes in directions are also stored in the proprioceptive data, for example if the apparatus 103 is navigated around a module 110 to a next row of modules 110.


Furthermore, the example data 900 is associated using a time-based technique as described elsewhere in the present specification.


While not depicted, in some embodiments the example data 900 includes control input and error signals (e.g. error data) associated with the navigation module 340 at each Position 1, 2, 3, 4. Indeed, in some embodiments, each Position 1, 2, 3, 4 comprises a node between segments of a path that the apparatus 103 is to navigate.


Attention is next directed to FIG. 10 which depicts example map data 1000 generated from the example associated data 900 using the method 700. It is assumed that the example associated data 900 has been received at the server 101 and/or retrieved from the memory 322 as described elsewhere in the present specification.


For each of the Positions 1, 2, 3, 4 where images, depth data and proprioceptive data has been acquired, features are extracted from the images using, for example, one or more of the feature extractors 458, 558. The features are indicated in FIG. 10 as an “IMAGE FEATURE” or a “DEPTH FEATURE”, however the features are stored as any feature data that results from the feature extraction technique (e.g. ORB) used to extract features from images and/or depth data. For example, each of the image feature data and the depth feature data comprise data that defines intersections, points, angles, depth, three-dimensional features, colors, and the like, in the image data and/or depth data.
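
As the paragraph above names ORB as one possible feature extraction technique, the following sketch uses the ORB implementation available in OpenCV; the use of OpenCV, the function name and the parameter values are assumptions for illustration and do not limit the feature extractors 458, 558.

```python
import cv2

def extract_ceiling_features(ceiling_image, n_features=500):
    """Detect ORB keypoints and compute binary descriptors for a grayscale
    ceiling image; descriptors is an N x 32 uint8 array, or None if no
    features are found (e.g. in a featureless region of the ceiling)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(ceiling_image, None)
    return keypoints, descriptors
```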


For each of the Positions 1, 2, 3, 4 where images, depth data and proprioceptive data has been acquired, the proprioceptive data is converted to position reading data, for example as a 2D coordinate. For example, if Position 1 represents one or more of a global origin in the environment and a local origin between the modules 110 where the apparatus 103 is navigating, at "0" rotations, the 2D coordinates for the position is (0,0), as indicated in "X,Y" coordinates. Furthermore, if each rotation of the wheels 212 represents a 1 m distance, the "Y" coordinate increases by 2 between successive ones of the Positions 1, 2, 3, 4, while the "X" coordinate remains at "0" as no movement has yet occurred in the "X" direction.
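
The conversion in this simple straight-line example can be expressed as follows; the assumption of travel purely along the "Y" axis at 1 m per rotation mirrors the example above and is illustrative only.

```python
def rotations_to_coordinates(rotation_counts, metres_per_rotation=1.0):
    """Convert cumulative wheel-rotation counts into local 2D coordinates for
    straight-line travel along the "Y" axis from the local origin."""
    return [(0.0, n * metres_per_rotation) for n in rotation_counts]

# rotations_to_coordinates([0, 2, 4, 6])
# -> [(0.0, 0.0), (0.0, 2.0), (0.0, 4.0), (0.0, 6.0)], i.e. Positions 1 to 4
```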


Indeed, assuming that one of the coordinates generated is associated with a global coordinate system, correspondences between the global coordinate system and the local coordinate system are determined, in some embodiments.


In any event, the coordinates (0,0), (0,2), (0,4), (0,6) represent a path between two of the modules 110. Furthermore, in some embodiments, a distance between the modules 110 is determined from the depth data and/or the depth features and stored in the example map data 1000. In some of these embodiments, an alternate parallel path between the two modules can be determined; for example, assuming that the depth data indicates that the apparatus 103 could also navigate a path a half meter to the right of the path (0,0), (0,2), (0,4), (0,6), the example map data 1000 can also store a path (0.5,0), (0.5,2), (0.5,4), (0.5,6) between the two modules 110. While images and/or depth data (and/or control inputs and/or error signals) associated with the (0.5,0), (0.5,2), (0.5,4), (0.5,6) are not yet stored in the example map data 1000, they can be stored at a later time, for example the first time the apparatus 103 navigates the path (0.5,0), (0.5,2), (0.5,4), (0.5,6). Such navigation occurs, in some embodiments, when the apparatus 103 is avoiding an obstacle along the path (0,0), (0,2), (0,4), (0,6).
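
A parallel path such as the one described above can be derived by offsetting each stored coordinate, for example as in the following sketch; the 0.5 m offset is simply the illustrative value used above.

```python
def offset_path(path, offset_x=0.5):
    """Return a path parallel to `path`, shifted by `offset_x` metres in the
    "X" direction, e.g. when the depth data indicates the aisle is wide enough."""
    return [(x + offset_x, y) for x, y in path]

# offset_path([(0, 0), (0, 2), (0, 4), (0, 6)])
# -> [(0.5, 0), (0.5, 2), (0.5, 4), (0.5, 6)]
```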


Attention is next directed to FIG. 11 which is similar to FIG. 8, with like elements having like numbers; however, FIG. 11 depicts a path 1100 of the apparatus 103 around the modules 110, the path 1100 including the Positions 1, 2, 3, 4 of FIG. 8, and further the path 1100 loops back around on itself, which is referred to as a loop closure. Indeed, in the example depicted in FIG. 11, it is assumed that the apparatus 103 has entered a localization mode and is determining an estimate of its current position using, for example, the localizer 461. As depicted, the apparatus 103 is again positioned at the coordinates of Position 1, and further acquires a further image and further depth data respectively from the image device 228 and the depth-sensing device 230. It is further assumed that as the apparatus 103 travels the path 1100, images, depth data and proprioceptive data are acquired as described above.


Furthermore, once the apparatus 103 loops back around to a previously visited position, and/or when a position is determined and/or when the apparatus 103 is in the localization mode, the position is publishable (e.g. by transmitting the position to the server 101) as a local coordinate and/or as a global coordinate. In some embodiments, when the apparatus 103 is within a (configurable) threshold distance of a destination position, the apparatus 103 begins to publish the position in at least the local coordinate system, which can be more accurate than a global position as the determination of the position is based on the incremental motion determined from the proprioceptive data. In other words, the position in the local coordinate system represents, in some embodiments, a distance from an origin position that is relative to one or more of the modules 110 and/or relative to the environment.
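
For example, a position expressed in the local (aisle-relative) coordinate system can be converted to the global coordinate system with a two-dimensional rigid transform once the global pose of the local origin is known; the origin and heading values below are assumptions for illustration only.

```python
import math

def local_to_global(local_xy, origin_xy, heading_rad):
    """Convert a local (aisle-relative) coordinate into the global coordinate
    system, given the global position and heading of the local origin."""
    lx, ly = local_xy
    ox, oy = origin_xy
    gx = ox + lx * math.cos(heading_rad) - ly * math.sin(heading_rad)
    gy = oy + lx * math.sin(heading_rad) + ly * math.cos(heading_rad)
    return gx, gy

# e.g. local (0, 6) in an aisle whose origin sits at global (12.0, 3.0) with a
# 90 degree heading: local_to_global((0, 6), (12.0, 3.0), math.pi / 2) -> (6.0, 3.0)
```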


Furthermore, as the loop back around to the region of Position 1 occurs, as determined at least in part from the further images acquired by the image device 228, the control inputs and further error signals associated with the navigation module 340 are optimized to align the apparatus 103 with the path 1100 to produce a "best" trajectory back to Position 1.


In some embodiments, the proprioceptive data from the sensor 229 indicates that the apparatus 103 is located at Position 1; however, it is assumed that an estimated position is determined with an error boundary as described above. In this situation, further extracted features from the further image and the further depth data are compared to the example map data 1000 to further determine the estimated position as described above, for example, by comparing the further extracted features from the further image and the further depth data with the features stored in the map data 1000, including, but not limited to, "IMAGE FEATURE 1" and "DEPTH FEATURE 1" and/or any other image features and/or depth features using ICP techniques, and the like, as described above.
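
Such a comparison of image features could, for instance, count descriptor matches between the further image and the features stored for a candidate position; the sketch below assumes ORB descriptors and OpenCV's brute-force matcher, and the distance threshold is an assumed tuning value, not a value prescribed herein.

```python
import cv2

def count_feature_matches(current_descriptors, stored_descriptors, max_distance=40):
    """Match binary (ORB) descriptors from a newly acquired ceiling image
    against descriptors stored in the map data for a candidate position and
    return the number of sufficiently close matches."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(current_descriptors, stored_descriptors)
    return sum(1 for m in matches if m.distance < max_distance)
```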


However, in some instances, the error boundary on an estimated position is greater than a threshold error boundary and/or a position of the apparatus 103 may not be determinable. For example, the map data 1000 may be inaccurate and/or the environment may have changed. In general, the likelihood of the ceiling changing is smaller than the likelihood of the environment around the modules 110 changing, as the environment around the modules 110 is generally dynamic. For example, in a simple case, when the further extracted image features match "IMAGE FEATURE 1", but the further extracted depth features do not match "DEPTH FEATURE 1", the apparatus 103 determines that it is located at Position 1, and the apparatus 103 may, in some implementations, update the depth features at Position 1 (and/or a current position) with current depth data acquired at Position 1 (and/or the current position). In this manner, the apparatus 103 may perform remapping of the environment while navigating according to the ceiling features.


However, when the error boundary exceeds a threshold error boundary, the apparatus 103, in some implementations, reenters the mapping mode and remaps at least a portion of the environment, as described above.


For example, with further reference to FIG. 11, the path 1100 further includes the region 222 of the ceiling 220 where there are no ceiling tiles 221, or any objects, and the like. Indeed, feature extraction from images of the region 222 generally results in a null result (and/or a small number of features) as there are no features in the region 222. Such areas of the ceiling 220 may be referred to as pathological regions for the image device 228. Indeed, the environment of the modules 110, in some embodiments, includes pathological regions for the depth-sensing device 230. Hence, in some embodiments, the map data includes a sensor weighting for each mapped ceiling feature position indicative of usability, and the like, of respective data at the ceiling feature position for navigation. For example, for ceiling feature positions in the region 222, the weighting on one or more of the images acquired by the image device 228 and extracted features from the images is low as compared to a weighting on one or more of the depth data acquired by the depth-sensing device 230 and extracted features from the depth data. Furthermore, the weightings, when present, are smoothly changed from outside the region 222 to inside the region 222. Hence, when the apparatus 103 is navigating through and/or close to the region 222, the weightings stored in the map data indicate whether the extracted image features or the extracted depth features are to be used to confirm position readings, and/or a weighted combination of both. Such weightings are used by the localizer 461, for example, when determining a weight to place on position estimates, and associated error boundaries, associated with each of the image device 228 and the depth-sensing device 230.
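
Purely as an illustration of the weighting idea, a localizer could blend the image-based and depth-based position estimates as follows; the function name and the simple linear blend are assumptions, not the actual behaviour of the localizer 461.

```python
def fuse_position_estimates(image_estimate, depth_estimate, image_weight):
    """Blend image-based and depth-based (x, y) position estimates using the
    sensor weighting stored in the map data for the current ceiling feature
    position: 1.0 relies entirely on the ceiling image, 0.0 entirely on depth."""
    w = max(0.0, min(1.0, image_weight))
    return tuple(w * i + (1.0 - w) * d
                 for i, d in zip(image_estimate, depth_estimate))

# Near the region 222 (few ceiling features) the stored image_weight is low,
# so the fused estimate leans on the depth-based estimate, and vice versa.
```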


Furthermore, in highly pathological regions, for example, in regions where the features from the images and the depth data are null, and/or include small numbers of features, the apparatus 103 is configured to avoid such regions; alternatively, where such avoidance is not possible, the apparatus 103 is configured to navigate to a region where weightings are relatively high to confirm a present position reading of the apparatus 103, as described above.


In yet further implementations, the memory 322 is configured with data which defines objects (e.g. as object data) that may be deployed in the environment, for example given types of displays, pallets and the like. In these implementations, when such an object is encountered by the apparatus 103 in the localization mode, the depth data from the depth-sensing device 230 can indicate that the environment has changed while the image data from the image device 228 indicates that the ceiling has not changed. In some of these implementations, the depth data from the depth-sensing device 230 is compared to the object data stored in the memory 322 and, when a match is found (e.g. within an error boundary), the map data is updated with one or more of the depth data and the object data to indicate the position of the object in the environment.
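
One simple way to compare acquired depth data against stored object data is a nearest-point residual check after centring both point sets, as sketched below; this stands in for the ICP-style comparison mentioned above, and the function name and tolerance are assumed values for illustration.

```python
import numpy as np

def matches_known_object(depth_points, object_template, tolerance_m=0.05):
    """Return True when a cluster of near-ground depth points resembles a
    stored object footprint (e.g. a pallet): both point sets are centred and
    the mean nearest-point distance is compared to a tolerance."""
    scan = np.asarray(depth_points, dtype=float)
    template = np.asarray(object_template, dtype=float)
    scan = scan - scan.mean(axis=0)
    template = template - template.mean(axis=0)
    # Distance from each scan point to its nearest template point.
    dists = np.linalg.norm(scan[:, None, :] - template[None, :, :], axis=2).min(axis=1)
    return float(dists.mean()) <= tolerance_m
```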


In yet further implementations, the map data generated based on data acquired by the apparatus 103 is provisioned at other mobile automation apparatuses within the environment, assuming that the other mobile automation apparatuses are similar and/or the same as the apparatus 103.


Disclosed herein are devices and methods for multimodal localization and mapping for a mobile automation apparatus. For example, the apparatus navigates by imaging a ceiling and, in some embodiments, by simultaneously acquiring near-ground depth data using a depth-sensing device, such as a LIDAR device. In particular, a ceiling-facing imaging device acquires images of a ceiling of an environment in which the apparatus is navigating, and features are extracted from the images (e.g. using ORB, and the like) in the environment. Features are also extracted, in some embodiments, from depth data acquired by the depth-sensing device including, but not limited to, three-dimensional near-ground features. Furthermore, incremental motion of the apparatus is tracked using a proprioceptive sensor, such as a wheel odometer and the like. The position of the apparatus is hence saved with the features to produce a "map" of the environment. When a physical position is revisited, a "loop closure" is recognized, and the incremental motion and the loop closures can be simultaneously optimized to produce a best estimate of a trajectory to the physical position being revisited. Furthermore, after mapping occurs, pathological regions for each of the image device and the depth-sensing device are determined, and weightings are placed on each to avoid highly pathological regions and/or identify non-pathological regions used to "recalibrate" localization.


In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings.


The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued.


Moreover, in this document, language of “at least one of X, Y, and Z” and “one or more of X, Y and Z” can be construed as X only, Y only, Z only, or any combination of two or more items X, Y, and Z (e.g., XYZ, XY, YZ, XZ, and the like). Similar logic can be applied for two or more items in any occurrence of “at least one . . . ” and “one or more . . . ” language.


Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has", "having," "includes", "including," "contains", "containing" or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a", "has . . . a", "includes . . . a", "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. The terms "substantially", "essentially", "approximately", "about" or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not listed.


It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used.


Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.


The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.

Claims
  • 1. A mobile automation apparatus for navigating an environment having a ceiling comprising: a movement module for moving the mobile automation apparatus in the environment; an image device configured to acquire images in a ceiling-facing direction from the mobile automation apparatus; a sensor configured to monitor internal signals of the mobile automation apparatus in response to movement of the mobile automation apparatus; and a controller configured to: (a) in a mapping mode, interface with a remote control to manually navigate the mobile automation apparatus and generate map data by associating the images received from the image device with sensor data received from the sensor in response to the movement of the mobile automation apparatus, and (b) in a localization mode, autonomously navigate the mobile automation apparatus based on the map data generated in the mapping mode.
  • 2. The mobile automation apparatus of claim 1, wherein the controller is further configured to one or more of: store, to a memory, the images with associated sensor data; and transmit, using a communication interface, the images with associated sensor data to a mapping device.
  • 3. The mobile automation apparatus of claim 1, wherein the image device comprises one or more of a monocular camera, a stereo camera, a color camera, a monochrome camera, a depth camera, a structured light camera, a time-of-flight camera, and a LIDAR device.
  • 4. The mobile automation apparatus of claim 1, wherein the sensor data acquired by the sensor is indicative of a distance traveled by the mobile automation apparatus as the movement module moves the mobile automation apparatus through the environment.
  • 5. The mobile automation apparatus of claim 1, wherein the sensor data is further indicative of changes in direction of the mobile automation apparatus as the movement module moves the mobile automation apparatus through the environment.
  • 6. The mobile automation apparatus of claim 1, wherein the sensor comprises one or more of an odometry device, a wheel odometry device, an accelerometer, a gyroscope and an inertial measurement unit.
  • 7. The mobile automation apparatus of claim 1, wherein the sensor data and the images are one or more of: time-stamped; acquired periodically; and acquired as a function of time.
  • 8. The mobile automation apparatus of claim 1, further comprising a depth-sensing device configured to acquire depth data in one or more directions from the mobile automation apparatus, the controller further configured to: associate depth data received from the depth-sensing device with the images and the sensor data.
  • 9. The mobile automation apparatus of claim 8, wherein the controller is further configured to one or more of: store, to a memory, the images with associated sensor data and associated depth data; and transmit, using a communication interface, the images with the associated sensor data and the associated depth data.
  • 10. The mobile automation apparatus of claim 8, wherein the depth-sensing device comprises a LIDAR device.
  • 11. The mobile automation apparatus of claim 1, wherein the controller is further configured to one or more of navigate the environment and determine an adjusted position in the environment by comparing further features extracted from further images acquired by the image device with extracted features from the images, that are associated with positions in the environment.
  • 12. A method in a mobile automation apparatus, the method comprising: acquiring images in a ceiling facing direction from the mobile automation apparatus; acquiring sensor data from a sensor configured to monitor movement of the mobile automation apparatus; in a mapping mode of the mobile automation apparatus: interfacing with a remote control to manually navigate the mobile automation apparatus and generating map data by associating the images with sensor data received from the sensor in response to the movement of the mobile automation apparatus, and in a localization mode of the mobile automation apparatus: autonomously navigating the mobile automation apparatus based on the map data generated in the mapping mode.
  • 13. The method of claim 12, further comprising one or more of: storing, to a memory, the images with associated sensor data; and transmitting, using a communication interface, the images with associated sensor data to a mapping device.
  • 14. A device comprising: a communication interface; a memory; and a mapping controller configured to: receive, from one or more of the memory and the communication interface, images with associated sensor data, the images representing a ceiling of an environment acquired by a ceiling-facing imaging device at a mobile automation apparatus and the associated sensor data indicative of movement of the mobile automation apparatus through the environment; extract features from the images; determine, from the associated sensor data, a respective position of a mobile automation apparatus in the environment where each of the images was acquired; and, store, in the memory, extracted features from the images with associated positions.
  • 15. The device of claim 14, wherein the mapping controller is further configured to: receive, from one or more of the memory and the communication interface, the images with the associated sensor data and associated depth data indicative of objects located in one or more directions from the mobile automation apparatus; and store, in the memory, the extracted features from the images with the associated positions and further features extracted from the associated depth data.
  • 16. The device of claim 15, wherein the associated positions are stored in association with the extracted features and the further features as map data, and the mapping controller is further configured to refine the map data based on one or more of further images, further depth data, and further sensor data received from the mobile automation apparatus.
  • 17. A method comprising: at a device comprising: a communication interface; a memory; and a mapping controller, receiving, from one or more of the memory and the communication interface, images with associated sensor data, the images representing a ceiling of an environment acquired by a ceiling-facing imaging device at a mobile automation apparatus and the associated sensor data indicative of movement of the mobile automation apparatus through the environment; extracting, using the mapping controller, features from the images; determining, using the mapping controller, from the associated sensor data, a respective position of a mobile automation apparatus in the environment where each of the images was acquired; and, storing, using the mapping controller, in the memory, extracted features from the images with associated positions.
  • 18. The method of claim 17, further comprising: receiving, from one or more of the memory and the communication interface, the images with the associated sensor data and associated depth data indicative of objects located in one or more directions from the mobile automation apparatus; and storing, in the memory, the extracted features from the images with the associated positions and further features extracted from the associated depth data.
  • 19. The method of claim 18, wherein the associated positions are stored in association with the extracted features and the further features as map data, and the method further comprises refining the map data based on one or more of further images, further depth data, and further sensor data received from the mobile automation apparatus.
US Referenced Citations (396)
Number Name Date Kind
5209712 Ferri May 1993 A
5214615 Bauer May 1993 A
5408322 Hsu et al. Apr 1995 A
5414268 McGee May 1995 A
5534762 Kim Jul 1996 A
5566280 Fukui et al. Oct 1996 A
5953055 Huang et al. Sep 1999 A
5988862 Kacyra et al. Nov 1999 A
6026376 Kenney Feb 2000 A
6034379 Bunte et al. Mar 2000 A
6075905 Herman et al. Jun 2000 A
6115114 Berg et al. Sep 2000 A
6141293 Amorai-Moriya et al. Oct 2000 A
6304855 Burke Oct 2001 B1
6442507 Skidmore et al. Aug 2002 B1
6549825 Kurata Apr 2003 B2
6580441 Schileru-Key Jun 2003 B2
6711293 Lowe Mar 2004 B1
6721769 Rappaport et al. Apr 2004 B1
6836567 Silver et al. Dec 2004 B1
6995762 Pavlidis et al. Feb 2006 B1
7090135 Patel Aug 2006 B2
7137207 Armstrong et al. Nov 2006 B2
7245558 Willins et al. Jul 2007 B2
7248754 Cato Jul 2007 B2
7277187 Smith et al. Oct 2007 B2
7295119 Rappaport Nov 2007 B2
7373722 Cooper et al. May 2008 B2
7474389 Greenberg et al. Jan 2009 B2
7487595 Armstrong et al. Feb 2009 B2
7493336 Noonan Feb 2009 B2
7508794 Feather Mar 2009 B2
7527205 Zhu et al. May 2009 B2
7605817 Zhang et al. Oct 2009 B2
7647752 Magnell Jan 2010 B2
7693757 Zimmerman Apr 2010 B2
7726575 Wang et al. Jun 2010 B2
7751928 Antony et al. Jul 2010 B1
7783383 Eliuk et al. Aug 2010 B2
7839531 Sugiyama Nov 2010 B2
7845560 Emanuel et al. Dec 2010 B2
7885865 Benson et al. Feb 2011 B2
7925114 Mai et al. Apr 2011 B2
7957998 Riley et al. Jun 2011 B2
7996179 Lee Aug 2011 B2
8009864 Linaker et al. Aug 2011 B2
8049621 Egan Nov 2011 B1
8091782 Cato et al. Jan 2012 B2
8094902 Crandall et al. Jan 2012 B2
8094937 Teoh et al. Jan 2012 B2
8132728 Dwinell et al. Mar 2012 B2
8134717 Pangrazio et al. Mar 2012 B2
8189855 Opalach et al. May 2012 B2
8199977 Krishnaswamy et al. Jun 2012 B2
8207964 Meadow et al. Jun 2012 B1
8233055 Matsunaga et al. Jul 2012 B2
8265895 Willins et al. Sep 2012 B2
8277396 Scott Oct 2012 B2
8284988 Sones et al. Oct 2012 B2
8423431 Rouaix et al. Apr 2013 B1
8429004 Hamilton et al. Apr 2013 B2
8463079 Ackley et al. Jun 2013 B2
8479996 Barkan et al. Jul 2013 B2
8520067 Ersue Aug 2013 B2
8542252 Perez et al. Sep 2013 B2
8571314 Tao et al. Oct 2013 B2
8599303 Stettner Dec 2013 B2
8630924 Groenevelt et al. Jan 2014 B2
8660338 Ma et al. Feb 2014 B2
8743176 Stettner et al. Jun 2014 B2
8757479 Clark et al. Jun 2014 B2
8812226 Zeng Aug 2014 B2
8923893 Austin et al. Dec 2014 B2
8939369 Olmstead et al. Jan 2015 B2
8954188 Sullivan et al. Feb 2015 B2
8958911 Wong Feb 2015 B2
8971637 Rivard Mar 2015 B1
8989342 Liesenfelt et al. Mar 2015 B2
9007601 Steffey et al. Apr 2015 B2
9037287 Grauberger et al. May 2015 B1
9037396 Pack et al. May 2015 B2
9064394 Trundle Jun 2015 B1
9070285 Ramu et al. Jun 2015 B1
9129277 MacIntosh Sep 2015 B2
9135491 Morandi et al. Sep 2015 B2
9159047 Winkel Oct 2015 B2
9171442 Clements Oct 2015 B2
9247211 Zhang et al. Jan 2016 B2
9329269 Zeng May 2016 B2
9367831 Besehanic Jun 2016 B1
9380222 Clayton et al. Jun 2016 B2
9396554 Williams et al. Jul 2016 B2
9400170 Steffey Jul 2016 B2
9424482 Patel et al. Aug 2016 B2
9476964 Stroila Oct 2016 B2
9517767 Kentley et al. Dec 2016 B1
9542746 Wu et al. Jan 2017 B2
9549125 Goyal et al. Jan 2017 B1
9562971 Shenkar et al. Feb 2017 B2
9565400 Curlander et al. Feb 2017 B1
9589353 Mueller-Fischer et al. Mar 2017 B2
9600731 Yasunaga et al. Mar 2017 B2
9600892 Patel et al. Mar 2017 B2
9612123 Levinson et al. Apr 2017 B1
9639935 Douady-Pleven et al. May 2017 B1
9697429 Patel et al. Jul 2017 B2
9766074 Roumeliotis Sep 2017 B2
9778388 Connor Oct 2017 B1
9791862 Connor Oct 2017 B1
9805240 Zheng et al. Oct 2017 B1
9811754 Schwartz Nov 2017 B2
9827683 Hance et al. Nov 2017 B1
9880009 Bell Jan 2018 B2
9928708 Lin et al. Mar 2018 B2
9953420 Wolski et al. Apr 2018 B2
9980009 Jiang et al. May 2018 B2
9994339 Colson et al. Jun 2018 B2
10019803 Venable et al. Jul 2018 B2
10111646 Nycz Oct 2018 B2
10121072 Kekatpure Nov 2018 B1
10127438 Fisher et al. Nov 2018 B1
10197400 Jesudason Feb 2019 B2
10210603 Venable et al. Feb 2019 B2
10229386 Thomas Mar 2019 B2
10248653 Blassin et al. Apr 2019 B2
10265871 Hance et al. Apr 2019 B2
10289990 Rizzolo et al. May 2019 B2
10336543 Sills et al. Jul 2019 B1
10349031 DeLuca Jul 2019 B2
10352689 Brown Jul 2019 B2
10373116 Medina et al. Aug 2019 B2
10394244 Song et al. Aug 2019 B2
10573134 Zalewski Feb 2020 B1
10672080 Maurer Jun 2020 B1
10705528 Wierzynski Jul 2020 B2
20010031069 Kondo et al. Oct 2001 A1
20010041948 Ross et al. Nov 2001 A1
20020006231 Jayant et al. Jan 2002 A1
20020097439 Braica Jul 2002 A1
20020146170 Rom Oct 2002 A1
20020158453 Levine Oct 2002 A1
20020164236 Fukuhara et al. Nov 2002 A1
20030003925 Suzuki Jan 2003 A1
20030094494 Blanford et al. May 2003 A1
20030174891 Wenzel et al. Sep 2003 A1
20040021313 Gardner et al. Feb 2004 A1
20040131278 Imagawa et al. Jul 2004 A1
20040240754 Smith et al. Dec 2004 A1
20050016004 Armstrong et al. Jan 2005 A1
20050114059 Chang et al. May 2005 A1
20050213082 DiBernardo et al. Sep 2005 A1
20050213109 Schell et al. Sep 2005 A1
20060032915 Schwartz Feb 2006 A1
20060045325 Zavadsky et al. Mar 2006 A1
20060106742 Bochicchio et al. May 2006 A1
20060285486 Roberts et al. Dec 2006 A1
20070036398 Chen Feb 2007 A1
20070074410 Armstrong et al. Apr 2007 A1
20070168382 Tillberg et al. Jul 2007 A1
20070272732 Hindmon Nov 2007 A1
20080002866 Fujiwara Jan 2008 A1
20080025565 Zhang et al. Jan 2008 A1
20080027591 Lenser et al. Jan 2008 A1
20080077511 Zimmerman Mar 2008 A1
20080159634 Sharma et al. Jul 2008 A1
20080164310 Dupuy et al. Jul 2008 A1
20080175513 Lai et al. Jul 2008 A1
20080181529 Michel et al. Jul 2008 A1
20080238919 Pack Oct 2008 A1
20080294487 Nasser Nov 2008 A1
20090009123 Skaff Jan 2009 A1
20090024353 Lee et al. Jan 2009 A1
20090057411 Madej et al. Mar 2009 A1
20090059270 Opalach et al. Mar 2009 A1
20090060349 Linaker et al. Mar 2009 A1
20090063306 Fano et al. Mar 2009 A1
20090063307 Groenovelt et al. Mar 2009 A1
20090074303 Filimonova et al. Mar 2009 A1
20090088975 Sato et al. Apr 2009 A1
20090103773 Wheeler et al. Apr 2009 A1
20090125350 Lessing et al. May 2009 A1
20090125535 Basso et al. May 2009 A1
20090152391 McWhirk Jun 2009 A1
20090160975 Kwan Jun 2009 A1
20090192921 Hicks Jul 2009 A1
20090206161 Olmstead Aug 2009 A1
20090236155 Skaff Sep 2009 A1
20090252437 Li et al. Oct 2009 A1
20090287587 Bloebaum et al. Nov 2009 A1
20090323121 Valkenburg et al. Dec 2009 A1
20100026804 Tanizaki et al. Feb 2010 A1
20100070365 Siotia et al. Mar 2010 A1
20100082194 Yabushita et al. Apr 2010 A1
20100091094 Sekowski Apr 2010 A1
20100118116 Tomasz et al. May 2010 A1
20100131234 Stewart et al. May 2010 A1
20100141806 Uemura et al. Jun 2010 A1
20100171826 Hamilton et al. Jul 2010 A1
20100208039 Stettner Aug 2010 A1
20100214873 Somasundaram et al. Aug 2010 A1
20100241289 Sandberg Sep 2010 A1
20100295850 Katz et al. Nov 2010 A1
20100315412 Sinha et al. Dec 2010 A1
20100326939 Clark et al. Dec 2010 A1
20110047636 Stachon et al. Feb 2011 A1
20110052043 Hyung et al. Mar 2011 A1
20110093306 Nielsen et al. Apr 2011 A1
20110137527 Simon et al. Jun 2011 A1
20110168774 Magal Jul 2011 A1
20110172875 Gibbs Jul 2011 A1
20110216063 Hayes Sep 2011 A1
20110242286 Pace et al. Oct 2011 A1
20110254840 Halstead Oct 2011 A1
20110286007 Pangrazio et al. Nov 2011 A1
20110288816 Thierman Nov 2011 A1
20110310088 Adabala et al. Dec 2011 A1
20120019393 Wolinsky et al. Jan 2012 A1
20120022913 Volkmann et al. Jan 2012 A1
20120069051 Hagbi et al. Mar 2012 A1
20120075342 Choubassi et al. Mar 2012 A1
20120133639 Kopf et al. May 2012 A1
20120307108 Forutanpour Jun 2012 A1
20120169530 Padmanabhan et al. Jul 2012 A1
20120179621 Moir et al. Jul 2012 A1
20120185112 Sung et al. Jul 2012 A1
20120194644 Newcombe et al. Aug 2012 A1
20120197464 Wang et al. Aug 2012 A1
20120201466 Funayama et al. Aug 2012 A1
20120209553 Doytchinov et al. Aug 2012 A1
20120236119 Rhee et al. Sep 2012 A1
20120249802 Taylor Oct 2012 A1
20120250978 Taylor Oct 2012 A1
20120269383 Bobbitt et al. Oct 2012 A1
20120287249 Choo et al. Nov 2012 A1
20120323620 Hofman et al. Dec 2012 A1
20130030700 Miller et al. Jan 2013 A1
20130090881 Janardhanan et al. Apr 2013 A1
20130119138 Winkel May 2013 A1
20130132913 Fu et al. May 2013 A1
20130134178 Lu May 2013 A1
20130138246 Gutmann et al. May 2013 A1
20130142421 Silver et al. Jun 2013 A1
20130144565 Miller et al. Jun 2013 A1
20130154802 O'Haire et al. Jun 2013 A1
20130156292 Chang et al. Jun 2013 A1
20130162806 Ding et al. Jun 2013 A1
20130176398 Bonner et al. Jul 2013 A1
20130178227 Vartanian et al. Jul 2013 A1
20130182114 Zhang et al. Jul 2013 A1
20130226344 Wong et al. Aug 2013 A1
20130228620 Ahern et al. Sep 2013 A1
20130235165 Gharib et al. Sep 2013 A1
20130236089 Litvak et al. Sep 2013 A1
20130278631 Border et al. Oct 2013 A1
20130299306 Jiang et al. Nov 2013 A1
20130299313 Baek, IV et al. Nov 2013 A1
20130300729 Grimaud Nov 2013 A1
20130303193 Dharwada et al. Nov 2013 A1
20130321418 Kirk Dec 2013 A1
20130329013 Metois et al. Dec 2013 A1
20130341400 Lancaster-Larocque Dec 2013 A1
20140002597 Taguchi et al. Jan 2014 A1
20140003655 Gopalkrishnan et al. Jan 2014 A1
20140003727 Lortz et al. Jan 2014 A1
20140016832 Kong et al. Jan 2014 A1
20140019311 Tanaka Jan 2014 A1
20140025201 Ryu et al. Jan 2014 A1
20140028837 Gao et al. Jan 2014 A1
20140047342 Breternitz et al. Feb 2014 A1
20140049616 Stettner Feb 2014 A1
20140052555 MacIntosh Feb 2014 A1
20140086483 Zhang et al. Mar 2014 A1
20140098094 Neumann et al. Apr 2014 A1
20140100813 Showering Apr 2014 A1
20140104413 McCloskey et al. Apr 2014 A1
20140129027 Schnittman May 2014 A1
20140156133 Cullinane et al. Jun 2014 A1
20140161359 Magri et al. Jun 2014 A1
20140192050 Qiu et al. Jul 2014 A1
20140195374 Bassemir et al. Jul 2014 A1
20140214547 Signorelli et al. Jul 2014 A1
20140214600 Argue et al. Jul 2014 A1
20140267614 Ding et al. Sep 2014 A1
20140267688 Aich et al. Sep 2014 A1
20140277691 Jacobus et al. Sep 2014 A1
20140300637 Fan et al. Oct 2014 A1
20140344401 Varney et al. Nov 2014 A1
20140351073 Murphy et al. Nov 2014 A1
20140369607 Patel et al. Dec 2014 A1
20150015602 Beaudoin Jan 2015 A1
20150019391 Kumar et al. Jan 2015 A1
20150029339 Kobres et al. Jan 2015 A1
20150039458 Reid Feb 2015 A1
20150088618 Basir et al. Mar 2015 A1
20150088703 Yan Mar 2015 A1
20150092066 Geiss et al. Apr 2015 A1
20150106403 Haverinen et al. Apr 2015 A1
20150117788 Patel et al. Apr 2015 A1
20150139010 Jeong et al. May 2015 A1
20150154467 Feng et al. Jun 2015 A1
20150161793 Takahashi Jun 2015 A1
20150170256 Pettyjohn et al. Jun 2015 A1
20150181198 Baele et al. Jun 2015 A1
20150212521 Pack et al. Jul 2015 A1
20150245358 Schmidt Aug 2015 A1
20150262116 Katircioglu et al. Sep 2015 A1
20150279035 Wolski et al. Oct 2015 A1
20150298317 Wang et al. Oct 2015 A1
20150310601 Rodriguez et al. Oct 2015 A1
20150352721 Wicks et al. Dec 2015 A1
20150363625 Wu et al. Dec 2015 A1
20150363758 Wu et al. Dec 2015 A1
20150365660 Wu et al. Dec 2015 A1
20150379704 Chandrasekar et al. Dec 2015 A1
20160026253 Bradski et al. Jan 2016 A1
20160044862 Kocer Feb 2016 A1
20160061591 Pangrazio et al. Mar 2016 A1
20160070981 Sasaki et al. Mar 2016 A1
20160092943 Vigier et al. Mar 2016 A1
20160012588 Taguchi et al. Apr 2016 A1
20160104041 Bowers et al. Apr 2016 A1
20160107690 Oyama et al. Apr 2016 A1
20160112628 Super et al. Apr 2016 A1
20160114488 Mascorro Medina et al. Apr 2016 A1
20160129592 Saboo et al. May 2016 A1
20160132815 Itoko et al. May 2016 A1
20160150217 Popov May 2016 A1
20160156898 Ren et al. Jun 2016 A1
20160163067 Williams et al. Jun 2016 A1
20160171336 Schwartz Jun 2016 A1
20160171429 Schwartz Jun 2016 A1
20160171707 Schwartz Jun 2016 A1
20160185347 Lefevre Jun 2016 A1
20160191759 Somanath et al. Jun 2016 A1
20160224927 Pettersson Aug 2016 A1
20160253735 Scudillo et al. Sep 2016 A1
20160253844 Petrovskaya et al. Sep 2016 A1
20160271795 Vicenti Sep 2016 A1
20160313133 Zang et al. Oct 2016 A1
20160328618 Patel et al. Nov 2016 A1
20160353099 Thomson et al. Dec 2016 A1
20160364634 Davis et al. Dec 2016 A1
20170004649 Collet Romea et al. Jan 2017 A1
20170011281 Dijkman et al. Jan 2017 A1
20170041553 Cao et al. Feb 2017 A1
20170054965 Raab et al. Feb 2017 A1
20170066459 Singh Mar 2017 A1
20170074659 Giurgiu et al. Mar 2017 A1
20170150129 Pangrazio May 2017 A1
20170178060 Schwartz Jun 2017 A1
20170193434 Shah et al. Jul 2017 A1
20170219338 Brown et al. Aug 2017 A1
20170219353 Alesiani Aug 2017 A1
20170227645 Swope et al. Aug 2017 A1
20170227647 Baik Aug 2017 A1
20170228885 Baumgartner Aug 2017 A1
20170261993 Venable et al. Sep 2017 A1
20170262724 Wu et al. Sep 2017 A1
20170280125 Brown et al. Sep 2017 A1
20170286773 Skaff et al. Oct 2017 A1
20170286901 Skaff et al. Oct 2017 A1
20170323376 Glaser et al. Nov 2017 A1
20170337508 Bogolea et al. Nov 2017 A1
20180001481 Shah et al. Jan 2018 A1
20180005035 Bogolea et al. Jan 2018 A1
20180005176 Williams et al. Jan 2018 A1
20180020145 Kotfis et al. Jan 2018 A1
20180051991 Hong Feb 2018 A1
20180053091 Savvides et al. Feb 2018 A1
20180053305 Gu et al. Feb 2018 A1
20180101813 Paat et al. Apr 2018 A1
20180114183 Howell Apr 2018 A1
20180143003 Clayton et al. May 2018 A1
20180174325 Fu et al. Jun 2018 A1
20180204111 Zadeh et al. Jul 2018 A1
20180251253 Taira et al. Sep 2018 A1
20180281191 Sinyavskiy et al. Oct 2018 A1
20180293442 Fridental et al. Oct 2018 A1
20180313956 Rzeszutek et al. Nov 2018 A1
20180314260 Jen et al. Nov 2018 A1
20180314908 Lam Nov 2018 A1
20180315007 Kingsford et al. Nov 2018 A1
20180315065 Zhang et al. Nov 2018 A1
20180315173 Phan et al. Nov 2018 A1
20180315865 Haist et al. Nov 2018 A1
20190057588 Savvides et al. Feb 2019 A1
20190065861 Savvides et al. Feb 2019 A1
20190073554 Rzeszutek Mar 2019 A1
20190073559 Rzeszutek et al. Mar 2019 A1
20190087663 Yamazaki et al. Mar 2019 A1
20190108606 Komiyama Apr 2019 A1
20190180150 Taylor et al. Jun 2019 A1
20190197728 Yamao Jun 2019 A1
20190236530 Cantrell et al. Aug 2019 A1
20190304132 Yoda et al. Oct 2019 A1
20190392212 Sawhney et al. Dec 2019 A1
Foreign Referenced Citations (36)
Number Date Country
2835830 Nov 2012 CA
3028156 Jan 2018 CA
104200086 Dec 2014 CN
107067382 Aug 2017 CN
0766098 Apr 1997 EP
1311993 May 2007 EP
2309378 Apr 2011 EP
2439487 Apr 2012 EP
2472475 Jul 2012 EP
2562688 Feb 2013 EP
2662831 Nov 2013 EP
2693362 Feb 2014 EP
2323238 Sep 1998 GB
2330265 Apr 1999 GB
101234798 Jan 2009 KR
1020190031431 Mar 2019 KR
WO 9923600 May 1999 WO
2003002935 Jan 2003 WO
2003025805 Mar 2003 WO
2006136958 Dec 2006 WO
2007042251 Apr 2007 WO
2008057504 May 2008 WO
2008154611 Dec 2008 WO
WO 2012103199 Aug 2012 WO
WO 2012103202 Aug 2012 WO
WO 2012154801 Nov 2012 WO
2013165674 Nov 2013 WO
WO 2014066422 May 2014 WO
2014092552 Jun 2014 WO
2014181323 Nov 2014 WO
2015127503 Sep 2015 WO
2016020038 Feb 2016 WO
WO 2018018007 Jan 2018 WO
WO 2018204308 Nov 2018 WO
WO 2018204342 Nov 2018 WO
WO 2019023249 Jan 2019 WO
Non-Patent Literature Citations (98)
Entry
International Search Report and Written Opinion for International Patent Application No. PCT/US2013/053212 dated Dec. 1, 2014.
Duda, et al., “Use of the Hough Transformation to Detect Lines and Curves in Pictures”, Stanford Research Institute, Menlo Park, California, Graphics and Image Processing, Communications of the ACM, vol. 15, No. 1 (Jan. 1972).
Bohm, “Multi-Image Fusion for Occlusion-Free Facade Texturing”, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp. 867-872 (Jan. 2004).
Senthilkumaran, et al., “Edge Detection Techniques for Image Segmentation—A Survey of Soft Computing Approaches”, International Journal of Recent Trends in Engineering, vol. 1, No. 2 (May 2009).
Flores, et al., “Removing Pedestrians from Google Street View Images”, Computer Vision and Pattern Recognition Workshops, 2010 IEEE Computer Society Conference on, IEE, Piscataway, NJ, pp. 53-58 (Jun. 13, 2010).
Uchiyama, et al., “Removal of Moving Objects from a Street-View Image by Fusing Multiple Image Sequences”, Pattern Recognition, 2010, 20th International Conference on, IEEE, Piscataway, NJ, pp. 3456-3459 (Aug. 23, 2010).
Tseng, et al. “A Cloud Removal Approach for Aerial Image Visualization”, International Journal of Innovative Computing, Information & Control, vol. 9 No. 6, pp. 2421-2440 (Jun. 2013).
United Kingdom Intellectual Property Office, Combined Search and Examination Report dated Mar. 11, 2015 for GB Patent Application No. 1417218.3.
United Kingdom Intellectual Property Office, Examination Report dated Jan. 22, 2016 for GB Patent Application No. 1417218.3 (2 pages).
United Kingdom Intellectual Property Office, Combined Search and Examination Report dated Jan. 22, 2016 for GB Patent Application No. 1521272.3 (6 pages).
Notice of allowance for U.S. Appl. No. 13/568,175 dated Sep. 23, 2014.
Notice of allowance for U.S. Appl. No. 13/693,503 dated Mar. 11, 2016.
Notice of allowance for U.S. Appl. No. 14/068,495 dated Apr. 25, 2016.
Notice of allowance for U.S. Appl. No. 15/211,103 dated Apr. 5, 2017.
Notice of allowance for U.S. Appl. No. 14/518,091 dated Apr. 12, 2017.
U.S. Appl. No. 15/583,680, filed May 1, 2017.
U.S. Appl. No. 15/583,801, filed May 1, 2017.
U.S. Appl. No. 15/583,740, filed May 1, 2017.
U.S. Appl. No. 15/583,717, filed May 1, 2017.
U.S. Appl. No. 15/583,773, filed May 1, 2017.
U.S. Appl. No. 15/583,786, filed May 1, 2017.
International Patent Application Serial No. PCT/CN2017/083143 filed May 5, 2017.
“Fair Billing with Automatic Dimensioning” pp. 1-4, undated, Copyright Mettler-Toledo International Inc.
Schnabel et al. “Efficient RANSAC for Point-Cloud Shape Detection”, vol. 0, No. 0, pp. 1-12 (1981).
F.C.A. Groen et al., “The smallest box around a package,” Pattern Recognition, vol. 14, No. 1-6, Jan. 1, 1981, pp. 173-176, XP055237156, GB, ISSN: 0031-3203, DOI: 10.1016/0031-3203(81(90059-5 p. 176-p. 178.
Buenaposada et al. “Real-time tracking and estimation of plane pose” Proceedings of the ICPR (Aug. 2002) vol. II, IEEE pp. 697-700.
Tahir, Rabbani, et al., “Segmentation of point clouds using smoothness constraint,” International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 365 (Sep. 2006): 248-253.
Ajmal S. Mian et al., “Three-Dimensional Model Based Object Recognition and Segmentation in Cluttered Scenes”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, No. 10, Oct. 2006.
Lecking et al: “Localization in a wide range of industrial environments using relative 3D ceiling features”, IEEE, pp. 333-337 (Sep. 15, 2008).
Golovinskiy, Aleksey, et al. “Min-cut based segmentation of point clouds.” Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on. IEEE, 2009.
Datta, A., et al., “Accurate camera calibration using iterative refinement of control points,” in Computer Vision Workshops (ICCV Workshops), 2009.
N.D.F. Campbell et al. “Automatic 3D Object Segmentation in Multiple Views using Volumetric Graph-Cuts”, Journal of Image and Vision Computing, vol. 28, Issue 1, Jan. 2010, pp. 14-25.
Olson, Clark F., et al. “Wide-Baseline Stereo Vision for Terrain Mapping” in Machine Vision and Applications, Aug. 2010.
“Plane Detection in Point Cloud Data” dated Jan. 25, 2010 by Michael Ying Yang and Wolfgang Forstner, Technical Report 1, 2010, University of Bonn.
Douillard, Bertrand, et al. “On the segmentation of 3D LIDAR point clouds.” Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.
Puwein, J., et al., “Robust multi-view camera calibration for wide-baseline camera networks,” in IEEE Workshop on Applications of Computer Vision (WACV), Jan. 2011.
Lari, Z., et al., “An adaptive approach for segmentation of 3D laser point cloud.” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVIII-5/W12, 2011, ISPRS Calgary 2011 Workshop, Aug. 29-31, 2011, Calgary, Canada.
Federico Tombari et al. “Multimodal cue integration through Hypotheses Verification for RGB-D object recognition and 6DOF pose estimation”, IEEE International Conference on Robotics and Automation, Jan. 2013.
Carreira et al., “Enhanced PCA-based localization using depth maps with missing data,” IEEE, pp. 1-8, Apr. 24, 2013.
Dubois, M., et al., “A comparison of geometric and energy-based point cloud semantic segmentation methods,” European Conference on Mobile Robots (ECMR), vol., No., pp. 88-93, Sep. 25-27, 2013.
Ziang Xie et al., “Multimodal Blending for High-Accuracy Instance Recognition”, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2214-2221.
“Swift Dimension” Trademark Omniplanar, Copyright 2014.
Rusu, et al. “Spatial change detection on unorganized point cloud data,” PCL Library, retrieved from Internet on Aug. 19, 2016 [http://pointclouds.org/documentation/tutorials/octree_change.php].
Rusu, et al., “How to incrementally register pairs of clouds,” PCL Library, retrieved from the Internet on Aug. 22, 2016 from <http://pointclouds.org/documentation/tutorials/pairwise_incremental_registration.php>.
International Search Report and Written Opinion for corresponding International Patent Application No. PCT/US2016/064110 dated Mar. 20, 2017.
International Search Report and Written Opinion for International Patent Application No. PCT/US2017/024847 dated Jul. 7, 2017.
Hackel et al., “Contour Detection in unstructured 3D point clouds,”IEEE, 2016 Conference on Computer vision and Pattern recognition (CVPR), Jun. 27-30, 2016, pp. 1-9.
Hao et al., “Structure-based object detection from scene point clouds,” Science Direct, v191, pp. 148-160 (2016).
Hu et al., “An improved method of discrete point cloud filtering based on complex environment,” International Journal of Applied Mathematics and Statistics, v48, i18 (2013).
International Search Report and Written Opinion for International Application No. PCT/US2019/025859 dated Jul. 3, 2019.
International Search Report and Written Opinion from International Patent Application No. PCT/US2018/030345 dated Sep. 17, 2018.
International Search Report and Written Opinion from International Patent Application No. PCT/US2018/030360 dated Jul. 9, 2018.
International Search Report and Written Opinion from International Patent Application No. PCT/US2018/030363 dated Jul. 9, 2018.
International Search Report and Written Opinion from International Patent Application No. PCT/US2019/025849 dated Jul. 9, 2019.
International Search Report and Written Opinion from International Patent Application No. PCT/US2019/050370 dated Nov. 19, 2019.
International Search Report and Written Opinion from International Patent Application No. PCT/US2019/064020 dated Feb. 19, 2020.
International Search Report and Written Opinion for International Patent Application No. PCT/US2013/070996 dated Apr. 2, 2014.
International Search Report and Written Opinion for International Patent Application No. PCT/US2020/028133 dated Jul. 24, 2020.
International Search Report and Written Opinion from International Patent Application No. PCT/US2020/029134 dated Jul. 27, 2020.
International Search Report and Written Opinion from International Patent Application No. PCT/US2020/028183 dated Jul. 24, 2020.
International Search Report and Written Opinion for International Application No. PCT/US2019/025870 dated Jun. 21, 2019.
Jadhav et al. “Survey on Spatial Domain dynamic template matching technique for scanning linear barcode,” International Journal of scieve and research v 5 n 3, Mar. 2016)(Year: 2016).
Jian Fan et al: “Shelf detection via vanishing point and radial projection”, 2014 IEEE International Conference on image processing (ICIP), IEEE, (Oct. 27, 2014), pp. 1575-1578.
Kang et al., “Kinematic Path-Tracking of Mobile Robot Using Iterative learning Control”, Journal of Robotic Systems, 2005, pp. 111-121.
Kay et al. “Ray Tracing Complex Scenes.” ACM SIGGRAPH Computer Graphics, vol. 20, No. 4, ACM, pp. 269-278, 1986.
Kelly et al., “Reactive Nonholonomic Trajectory Generation via Parametric Optimal Control”, International Journal of Robotics Research, vol. 22, No. 7-8, pp. 583-601 (Jul. 30, 2013).
Kim, et al. “Robust approach to reconstructing transparent objects using a time-of-flight depth camera”, Optics Express, vol. 25, No. 3; Published Feb. 6, 2017.
Lee et al. “Statistically Optimized Sampling for Distributed Ray Tracing.” ACM SIGGRAPH Computer Graphics, vol. 19, No. 3, ACM, pp. 61-67, 1985.
Li et al., “An improved RANSAC for 3D Point cloud plane segmentation based on normal distribution transformation cells,” Remote sensing, V9: 433, pp. 1-16 (2017).
Likhachev, Maxim, and Dave Ferguson. “Planning Long dynamically feasible maneuvers for autonomous vehicles.” The international journal of Robotics Reasearch 28.8 (2009): 933-945. (Year:2009).
Marder-Eppstein et al., “The Office Marathon: robust navigation in an indoor office environment,” IEEE, 2010 International conference on robotics and automation, May 3-7, 2010, pp. 300-307.
McNaughton, Matthew, et al. “Motion planning for autonomous driving with a conformal spatiotemporal lattice.” Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011. (Year: 2011).
Mitra et al., "Estimating surface normals in noisy point cloud data," International Journal of Computational Geometry & Applications, Jun. 8-10, 2003, pp. 322-328.
Ni et al., “Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods,” Remote Sensing, V8 I9, pp. 1-20 (2016).
Norrlöf et al., "Experimental comparison of some classical iterative learning control algorithms", IEEE Transactions on Robotics and Automation, Jun. 2002, pp. 636-641.
Oriolo et al., "An iterative learning controller for nonholonomic mobile robots", The International Journal of Robotics Research, Aug. 1997, pp. 954-970.
Ostafew et al., “Visual Teach and Repeat, Repeat, Repeat: Iterative learning control to improve mobile robot path tracking in challenging outdoor environment”, IEEE/RSJ International Conference on Intelligent robots and Systems, Nov. 2013, pp. 176-181.
Park et al., "Autonomous mobile robot navigation using passive RFID in indoor environment," IEEE Transactions on Industrial Electronics, vol. 56, issue 7, pp. 2366-2373 (Jul. 2009).
Perveen et al., "An overview of template matching methodologies and its application," International Journal of Research in Computer and Communication Technology, v2, n10, Oct. 2013 (Year: 2013).
Pivtoraiko et al., "Differentially constrained mobile robot motion planning in state lattices", Journal of Field Robotics, vol. 26, No. 3, 2009, pp. 308-333.
Pratt, W. K., Ed.: "Digital Image Processing, 10-Image Enhancement, 17-Image Segmentation", Jan. 1, 2001, Digital Image Processing: PIKS Inside, New York: John Wiley & Sons, US, pp. 243-258, 551.
Szeliski, “Modified Hough Transform”, Computer Vision. Copyright 2011, pp. 251-254. Retrieved on Aug. 17, 2017 [http://szeliski.org/book/drafts/SzeliskiBook_20100903_draft.pdf].
Trevor et al., “Tables, Counters, and Shelves: Semantic Mapping of Surfaces in 3D,” Retrieved from Internet Jul. 3, 2018 @ http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.703.5365&rep=rep1&type=pdf, pp. 1-6.
United Kingdom Intellectual Property Office, "Combined Search and Examination Report" for GB Patent Application No. 1813580.6 dated Feb. 21, 2019.
United Kingdom Intellectual Property Office, "Combined Search and Examination Report" for GB Patent Application No. 1917864.9 dated May 13, 2020.
Varol Gul et al.: "Product placement detection based on image processing", 2014 22nd Signal Processing and Communication Applications Conference (SIU), IEEE, Apr. 23, 2014.
Varol Gul et al.: "Toward Retail product recognition on Grocery shelves", Visual Communications and Image Processing; Jan. 20, 2004; San Jose, (Mar. 4, 2015).
Weber et al., "Methods for Feature Detection in Point Clouds," Visualization of Large and Unstructured Data Sets - IRTG Workshop, pp. 90-99 (2010).
Zhao Zhou et al.: "An Image Contrast Enhancement Algorithm Using PLIP-Based Histogram Modification", 2017 3rd IEEE International Conference on Cybernetics (CYBCON), IEEE, Jun. 21, 2017.
Batalin et al., "Mobile robot navigation using a sensor network," IEEE, International Conference on Robotics and Automation, Apr. 26-May 1, 2004, pp. 636-641.
Bazazian et al., "Fast and Robust Edge Extraction in Unorganized Point Clouds," IEEE, 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Nov. 23-25, 2015, pp. 1-8.
Biswas et al., "Depth Camera Based Indoor Mobile Robot Localization and Navigation," Robotics and Automation (ICRA), 2012 IEEE International Conference on, IEEE, 2012.
Bristow et al., “A Survey of Iterative Learning Control”, IEEE Control Systems, Jun. 2006, pp. 96-114.
Chen et al., "Improving Octree-Based Occupancy Maps Using Environment Sparsity with Application to Aerial Robot Navigation," Robotics and Automation (ICRA), 2017 IEEE.
Cleveland Jonas et al: “Automated System for Semantic Object Labeling with Soft-Object Recognition and Dynamic Programming Segmentation”, IEEE Transactions on Automation Science and Engineering, IEEE Service Center, New York, NY (Apr. 1, 2017).
Cook et al., "Distributed Ray Tracing," ACM SIGGRAPH Computer Graphics, vol. 18, No. 3, ACM, pp. 137-145, 1984.
Deschaud et al., "A Fast and Accurate Plane Detection algorithm for large noisy point clouds using filtered normals and voxel growing," 3DPVT, May 2010, Paris, France. [hal-01097361].
Glassner, “Space Subdivision for Fast Ray Tracing.” IEEE Computer Graphics and Applications, 4.10, pp. 15-24, 1984.
Related Publications (1)
Number: 20180315007 A1; Date: Nov. 2018; Country: US
Provisional Applications (1)
Number: 62492670; Date: May 2017; Country: US