Autonomous robot systems can perform various tasks without human intervention.
Identifying when such tasks are completed, and the outcome of such tasks, can be a slow and error-prone process, particularly when the tasks relate to physical objects being removed and replaced.
Illustrative embodiments are shown by way of example in the accompanying drawings and should not be considered as a limitation of the present disclosure.
Described in detail herein is an automated marking system. An autonomous robot device can autonomously roam through a facility, and can be in selective communication with a computing system via a communications network. The autonomous robot device can include a controller, a drive motor, a dispensing/marking device, a reader and an image capturing device. The autonomous robot device can locate and identify one or more cases stored in at least one of a plurality of bins in a first location of the facility, wherein each case contains one or more physical objects (a set of like physical objects). For example, a case can contain several individually packaged items (e.g., a case of cereal boxes) or can form the packaging for an item (e.g., a case of dog food). The case can be formed of various materials based on its contents, including cardboard, plastic, paper, wood, and the like. A bin, as used herein, can refer to a specified location or slot on a shelf or a specified apparatus for storing cases. The autonomous robot device can extract and decode identifying information associated with at least one of the one or more cases and/or bins, and can transmit the identifying information of the at least one of the one or more cases and/or bins to the computing system via the network.
The computing system can receive the identifying information associated with the physical object contained by the at least one of the one or more cases and/or at the associated bin, and can query a data storage facility to retrieve information associated with a quantity of the physical objects disposed in a second location of the facility. The computing system can determine that the quantity is below a specified quantity, and can determine a priority for a specified quantity of the physical objects to be moved from the at least one of the one or more cases (in the first location) to the second location of the facility. Based on the specified quantity and/or the priority, the computing system can instruct the at least one autonomous robot device to mark the bin and/or at least one of the one or more cases with an identifying mark denoting the determined priority.
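By way of illustration only, the priority determination described above could be sketched as follows; the function name, the thresholds, and the "high"/"intermediate"/"low" labels are assumptions made for this example and are not prescribed by the embodiments.

```python
# Hypothetical sketch of the priority determination described above; the
# priority labels and the thresholds are illustrative assumptions.
from typing import Optional

def determine_priority(quantity_at_second_location: int,
                       specified_quantity: int) -> Optional[str]:
    """Return a priority label when the quantity at the second location falls
    below the specified quantity, otherwise None (no move is needed)."""
    if quantity_at_second_location >= specified_quantity:
        return None  # quantity is sufficient; no marking instruction is issued
    if quantity_at_second_location == 0:
        return "high"  # the physical objects are absent from the second location
    shortfall = specified_quantity - quantity_at_second_location
    if shortfall >= specified_quantity // 2:
        return "intermediate"
    return "low"

# Example: 3 objects remain at the second location and 10 are specified
print(determine_priority(3, 10))  # -> "intermediate"
```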
The autonomous robot device can receive the instructions to mark the bin and/or case, can locate and identify the bin and/or case, and can mark the case with the identifying mark.
In some embodiments, the autonomous robot device can retrieve the identifying information associated with physical objects contained in a case, can retrieve the quantity information for the physical objects at the second location, and can determine the priority independently, without input from the computing system.
In one embodiment, an autonomous marking system can include a computing system in communication with a data storage facility and autonomous robot devices in selective communication with the computing system via a communications network. The autonomous robot devices include a controller, a drive motor, a dispensing device, a reader and an image capturing device. An autonomous robot device can be configured to autonomously roam in a first location of a facility, and locate and identify one or more cases stored in at least one of a plurality of bins in the first location of the facility. Each case can contain a set of like physical objects. The autonomous robot device can be further configured to extract and decode identifying information associated with at least one of the one or more cases, and transmit the identifying information of the at least one of the one or more cases to the computing system.
The computing system can be programmed to receive the identifying information associated with the at least one of the one or more cases, query the data storage facility to retrieve information associated with a first set of like physical objects disposed within the at least one of the one or more cases, identify an identifying mark associated with a priority of the at least one of the one or more cases, generate a virtual element depicting the identifying mark, and associate at least one of the one or more cases with the virtual element depicting the identifying mark in the data storage facility.
The system can further include a portable electronic device including an image capturing device, a processing device, computer memory, and a display. The processing device of the portable electronic device can execute an application, and can be in communication with the computing system. The application when executed can be configured to control the operation of the image capturing device to contemporaneously and continuously image an area within a field of view of the image capturing device, render on the display the physical scene including the at least one of the one or more cases and the identifying information associated with the at least one of the one or more cases when the at least one of the one or more cases is in the area within the field of view of the image capturing device, parse the physical scene rendered on the display into discrete elements based on dimensions of items in the physical scene, extract and decode the identifying information associated with at least one of the one or more cases, transmit the identifying information of the at least one of the one or more cases to the computing system, and in response to receiving instructions from the computing system, augment the physical scene rendered on the display to superimpose the virtual element depicting the identifying mark on the at least one of the one or more cases.
In one embodiment, an autonomous marking system can include a computing system in communication with a data storage facility and autonomous robot devices in selective communication with the computing system via a communications network. Each of the autonomous robot devices can include a controller, a drive motor, a dispensing device, a reader and an image capturing device. At least one of the autonomous robot devices can be configured to autonomously roam in a first location of a facility, and locate and identify one or more cases stored in at least one of a plurality of bins in the first location of the facility. Each case can contain a set of like physical objects. The autonomous robot device can be further configured to extract and decode identifying information associated with at least one of the one or more cases, and transmit the identifying information of the at least one of the one or more cases to the computing system.
The computing system can be programmed to receive the identifying information associated with the at least one of the one or more cases, query the data storage facility to retrieve information associated with a first set of like physical objects disposed within the case, determine a priority for the at least one of the one or more cases, identify an identifying mark for the at least one of the one or more cases based on the priority, and instruct the at least one autonomous robot device to embed a sensing device in the at least one of the one or more cases.
The autonomous robot device can be configured to navigate to the at least one bin storing the at least one of the one or more cases, locate and identify the at least one of the one or more cases, embed the sensing device in the at least one of the one or more cases, and transmit an identifier encoded in the sensing device to the computing system.
The computing system can be configured to store and associate the identifier of the sensing device with the at least one of the one or more cases and the identified identifying mark. The system can further include a portable electronic device executing an application and including a processing device, computer memory, a reader, and a display. The portable electronic device can be in communication with the computing system. In response to execution of the application, the portable electronic device can be configured to scan, using the reader, the sensing device embedded in the at least one of the one or more cases, decode the identifier from the sensing device, transmit the identifier to the computing system, and, in response to receiving instructions from the computing system, render the identifying mark associated with the at least one of the one or more cases on the display.
The memory 206 can include any suitable, non-transitory computer-readable storage medium, e.g., read-only memory (ROM), erasable programmable ROM (EPROM), electrically-erasable programmable ROM (EEPROM), flash memory, and the like. In exemplary embodiments, an operating system 226 and applications 228 can be embodied as computer-readable/executable program code stored on the non-transitory computer-readable memory 206 and implemented using any suitable, high or low level computing language and/or platform, such as, e.g., Java, C, C++, C#, assembly code, machine readable language, and the like. In some embodiments, the applications 228 can include an assistance application configured to interact with the microphone, a web browser application, and a mobile application specifically coded to interface with a computing system. The computing system is described in further detail below.
The processing device 204 can include any suitable single- or multiple-core microprocessor of any suitable architecture that is capable of implementing and/or facilitating an operation of the portable electronic device 200, for example, performing an image capture operation, capturing a voice input of the user (e.g., via the microphone), transmitting messages including a captured image and/or a voice input, receiving messages from a computing system, and displaying data/information including GUIs of the user interface 210, captured images, voice input transcribed as text, and the like. The processing device 204 can be programmed and/or configured to execute the operating system 226 and applications 228 to implement one or more processes to perform an operation. The processing device 204 can retrieve information/data from and store information/data to the memory 206.
The RF transceiver 214 can be configured to transmit and/or receive wireless transmissions via an antenna 215. For example, the RF transceiver 214 can be configured to transmit data/information, such as input based on user interaction with the portable electronic device. The RF transceiver 214 can be configured to transmit and/or receive data/information at a specified frequency and/or according to a specified sequence and/or packet arrangement.
The touch-sensitive display 210 can render user interfaces, such as graphical user interfaces (GUIs), to a user, and in some embodiments can provide a mechanism that allows the user to interact with the GUIs. For example, a user may interact with the portable electronic device 200 through the touch-sensitive display 210, which may be implemented as a liquid crystal touch-screen (or haptic) display, a light emitting diode touch-screen display, and/or any other suitable display device, which may display one or more user interfaces (e.g., GUIs) that may be provided in accordance with exemplary embodiments.
The power source 212 can be implemented as a battery or capacitive elements configured to store an electric charge and power the portable electronic device 200. In exemplary embodiments, the power source 212 can be a rechargeable power source, such as a battery or one or more capacitive elements configured to be recharged via a connection to an external power supply.
A user can operate the portable electronic device 200 in a facility, and the graphical user interface can automatically be generated in response to executing an augment application on the portable electronic device 200. The augment application can be associated with the facility. The image capturing device 208 can be configured to capture still and moving images and can communicate with the executed application. The touch-sensitive display 210 can render the area of the facility viewable to the image capturing device 208. The portable electronic device can be positioned so that the bins and/or cases can be within a viewable area of the image capturing device 208. The graphical user interface can render the bins and/or cases with virtual elements superimposed on the bins and/or cases.
The controller 308a can be programmed to control an operation of the actuator 305 of the dispensing instrument 304, the image capturing device 306, the optical scanner 308b, the drive motor 308c, and the motive assemblies 302 (e.g., via the drive motor 308c), based on various inputs including inputs from the GPS receiver 308d, the accelerometer 308f, the gyroscope 308g, the image capturing device 306, the optical scanner 308b, and/or from a remote computing system. The drive motor 308c can control the operation of the motive assemblies 302 directly and/or through one or more drive trains (e.g., gear assemblies and/or belts). The power source can power the motive assemblies 302, the dispensing instrument 304, the actuator 305 coupled to the dispensing instrument 304, the image capturing device 306, the controller 308a, the optical scanner 308b, the drive motor 308c, the GPS receiver 308d, the RF transceiver 308e, the accelerometer 308f, and the gyroscope 308g.
In this non-limiting example, the motive assemblies 302 can be rotors and blades affixed to the edges of the autonomous robot device 300. Other examples of the motive assemblies 302 can be, but are not limited to, wheels, tracks, and propellers. The motive assemblies 302 can facilitate 360 degree movement for the autonomous robot device 300. The image capturing device 306 can be a still image camera or a moving image camera.
The GPS receiver 308d can be an L-band radio processor capable of solving the navigation equations to determine a position, velocity and precise time (PVT) of the autonomous robot device 300 by processing the signals broadcast by GPS satellites. The accelerometer 308f and gyroscope 308g can determine the direction, orientation, position, acceleration, velocity, tilt, pitch, yaw, and roll of the autonomous robot device 300. In exemplary embodiments, the controller can implement one or more algorithms, such as a Kalman filter, for determining a position of the autonomous robot device.
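As a hedged illustration of the Kalman-filter approach mentioned above, the following sketch fuses noisy GPS position fixes under a one-dimensional constant-velocity model; the noise parameters and the one-dimensional simplification are assumptions made for this example, not parameters of the described controller.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter sketch for smoothing GPS
# position fixes. Noise values are illustrative assumptions.
def kalman_position(gps_positions, dt=1.0, process_var=0.1, gps_var=4.0):
    x = np.array([[gps_positions[0]], [0.0]])   # state: [position, velocity]
    P = np.eye(2)                                # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity transition
    H = np.array([[1.0, 0.0]])                   # only position is observed
    Q = process_var * np.eye(2)                  # process noise
    R = np.array([[gps_var]])                    # GPS measurement noise
    estimates = []
    for z in gps_positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the GPS measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# Example: noisy GPS fixes along a roughly straight path
print(kalman_position([0.0, 1.2, 1.9, 3.1, 4.0]))
```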
Alternatively or in addition, the autonomous robot device 300 can navigate around the facility using beacon devices and triangulation. Beacon devices can be disposed in the facility. Each beacon device can emit a signal encoded with an identifier indicating a location within the facility. The RF transceiver 308e disposed on the autonomous robot device 300 can extract the identifier from the signal emitted by the beacon device, in response to the autonomous robot device 300 being within a specified distance of the beacon device. In response to extracting the identifier, the autonomous robot device 300 can determine its location within the facility.
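One way the beacon-based localization could be approximated is sketched below, where decoded beacon identifiers are mapped to known positions and weighted by received signal strength; the identifiers, coordinates, and weighting scheme are assumptions for this example rather than part of the described embodiments.

```python
# Illustrative beacon-based localization. Beacon identifiers, their known
# coordinates, and the RSSI weighting scheme are assumptions for this sketch.
BEACON_LOCATIONS = {
    "beacon-01": (2.0, 5.0),   # (x, y) coordinates in meters within the facility
    "beacon-02": (10.0, 5.0),
    "beacon-03": (6.0, 12.0),
}

def estimate_location(detections):
    """detections: list of (beacon_id, rssi_dbm) extracted by the RF transceiver."""
    weights, wx, wy = 0.0, 0.0, 0.0
    for beacon_id, rssi in detections:
        if beacon_id not in BEACON_LOCATIONS:
            continue  # ignore beacons not registered for this facility
        x, y = BEACON_LOCATIONS[beacon_id]
        w = 10 ** (rssi / 10.0)  # stronger (less negative) RSSI -> larger weight
        weights += w
        wx += w * x
        wy += w * y
    if weights == 0.0:
        return None  # no known beacons in range
    return (wx / weights, wy / weights)

# Example: the robot hears two beacons, one much closer than the other
print(estimate_location([("beacon-01", -45), ("beacon-02", -70)]))
```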
The autonomous robot device 300 can navigate around a specified location of a facility and scan cases 310a-b containing one or more physical objects. The cases 310a-b can be disposed in fixtures 320a-c, which, in this non-limiting example, can correspond to bins disposed in the back room of a facility. For example, the cases 310a-b can be stacked on top of one another within the bins 320a-c or can be stacked back-to-back or side-to-side. Each of the bins 320a-c can be identified by labels 322a-c including alphanumeric text and/or machine-readable elements 330a-c disposed on the bins 320a-c, respectively. The machine-readable elements 330a-c can be encoded with identifiers associated with the respective bins 320a-c. Labels 312 can be disposed on each of the cases 310. The labels 312 can include information associated with the physical objects disposed within the cases. The information can include name, type, color, size, quantity and/or a machine-readable element encoded with an identifier associated with the physical objects.
The autonomous robot device 300 can navigate through the specified location of the facility (e.g., the back room) using the motive assemblies 302 to the bins 320a-c. The autonomous robot device 300 can be programmed with a map of the facility and/or can generate a map of the facility using simultaneous localization and mapping (SLAM). The autonomous robot device 300 can navigate around the facility based on inputs from the motive assemblies 302, the GPS receiver 308d, the RF transceiver 308e, the accelerometer 308f, and the gyroscope 308g.
The autonomous robot device 300 can scan the labels 312 disposed on the cases 310 using the image capturing device 306. Using the image capturing device 306, the autonomous robot device 300 can extract and decode the information on the labels 322a-c on the bins 320a-c and/or the labels 312 on the cases 310, and can transmit the information to a computing system. The autonomous robot device 300 can use optical character recognition or machine-vision to extract and decode the information from the labels. In other embodiments, the autonomous robot device can capture an image of the labels 312 and transmit the image to the computing system. The computing system is described in further detail below.
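Purely as an example of the optical character recognition step, the following sketch uses the open-source OpenCV and pytesseract libraries to read a case or bin label and parse a hypothetical "KEY:VALUE" label layout; neither the libraries nor the label format is required by the embodiments described herein.

```python
# Illustrative OCR of a case/bin label. The use of OpenCV and pytesseract,
# as well as the label layout parsed below, are assumptions for this sketch.
import cv2
import pytesseract

def read_label(image_path: str) -> dict:
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Threshold to improve OCR accuracy on printed labels
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary)
    # Hypothetical label format: "ID:<identifier> QTY:<quantity> NAME:<name>"
    fields = {}
    for token in text.split():
        if ":" in token:
            key, _, value = token.partition(":")
            fields[key.strip().upper()] = value.strip()
    return fields

# Example (hypothetical file path):
# print(read_label("case_label.jpg"))
```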
The autonomous robot device 300 can receive instructions for identified cases at the bins 320a-c indicating a priority with which the cases are to be moved to a fixture in another location (e.g., the front room) in the facility. The autonomous robot device 300 can scan and locate the identified cases 310 within the bins 320 using the image capturing device. The actuator 305 can actuate the dispensing device 304 to mark the identified cases with a specified identifying mark, such as a dot, glyph, shape, or character, which can be rendered in one or more colors. In addition, or alternatively, the autonomous robot device 300 can mark the bin corresponding to the cases with the identifying mark. Different identifying marks can correspond to different actions or tasks to be performed with respect to the marked cases 310. In one embodiment, the dispensing device 304 can be a paint dispenser and the identifying mark can be a particular color of paint dispensed from the paint dispenser. For example, a green color can represent high priority, black can represent intermediate priority, and red can represent low priority for moving physical objects from the bins 320 to fixtures at another location. In one embodiment, the paint dispenser can dispense the paint in a particular shape, glyph, or character, and/or can mark the bins or cases with the quantity of physical objects to be moved.
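The color-coded priority scheme described above could be represented as simply as the following sketch; the colors, priority names, and the structure of the marking instruction are assumptions, and the actuator/dispensing-device control itself is hardware-specific and not shown.

```python
# Hypothetical mapping of priorities to paint colors, mirroring the example
# above (green = high, black = intermediate, red = low). The returned
# dictionary stands in for a marking instruction sent to the dispensing device.
PRIORITY_TO_COLOR = {"high": "green", "intermediate": "black", "low": "red"}

def mark_case(priority: str, quantity_to_move: int) -> dict:
    """Build a marking instruction for the dispensing device."""
    color = PRIORITY_TO_COLOR.get(priority)
    if color is None:
        raise ValueError(f"unknown priority: {priority}")
    return {
        "color": color,
        "shape": "dot",                   # could also be a glyph, character, etc.
        "label": str(quantity_to_move),   # quantity of physical objects to move
    }

# Example: a high-priority case with 12 objects to move -> green dot labeled "12"
print(mark_case("high", 12))
```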
In another embodiment, the dispensing device 304 can be a laser and the identifying mark can be an inscription indicating the priority and/or quantity. In another embodiment, the dispensing device 304 can dispense stickers marking the cases 310. The actuator 305 can be coupled to a compressed air device. In response to the actuator 305 being actuated, compressed air can be released to force a sticker out of the dispensing device 304 and onto the cases 310 and/or bins. The dispensing device 304 can also include a writing instrument (e.g., chalk, graphite, ink). The dispensing device 304 can write an identifying mark on the identified cases 310 and/or bins using the writing instrument.
In one embodiment, the autonomous robot device 300 can scan and decode the identifiers from the machine-readable elements 330a-c of the bins 320a-c. The autonomous robot device 300 can transmit the identifiers to the computing system. The autonomous robot device 300 can receive instructions to mark specified cases 310 within each of the bins 320a-c with an identifying mark. The autonomous robot device can search for and locate the specified cases 310 within the bins 320a-c and mark the specified cases 310 with a specified identifying mark. In the event the specified case is not visible to the autonomous robot device 300, the autonomous robot device 300 can mark the outside of the bin 320a-c with the specified identifying mark.
In one embodiment, the autonomous robot device 300 can extract and decode information disposed on the outside of a bin 320a-c using the image capturing device 306. Alternatively or in addition, the autonomous robot device 300 can capture an image using the image capturing device 306 and transmit an image of the information disposed on the outside of the bin 320a-c to the computing system. As described above, the autonomous robot device can use OCR and machine-vision to extract and decode the information. The information can include identifying information of cases disposed within the bins 320a-c (e.g., including a quantity of cases and/or a quantity of physical objects in the bins). The autonomous robot device 300 can transmit the extracted and decoded information to the computing system. The autonomous robot device 300 can receive instructions to mark specified cases 310 within each of the bins 320a-c with a specified identifying mark. The autonomous robot device can search for and locate the specified cases 310 within the bins 320a-c and mark the specified cases 310. In the event the specified case is not visible to the autonomous robot device 300, the autonomous robot device 300 can mark the outside of the bin 320a-c with the specified identifying mark. The autonomous robot device 300 can mark portions of the information disposed on the outside of the bins 320a-c to identify the priority determined for the cases.
In one embodiment, the cases 310a-b can be stacked on top of each other in the bins 320a-c. The autonomous robot device 300 can use a lidar sensor to locate and scan cases which are disposed underneath other cases. The sensor can be configured to illuminate the cases with pulsed laser light and measure the reflected pulses.
In one embodiment, the autonomous robot device 300 can transmit information associated with bins 320a-c and/or cases 310a-b which are disposed at facilities to a computing system. The information can include extracted text, images and/or identifiers of the bins 320a-c and/or cases 310a-b. The computing system can determine an identifying mark associated with the bins 320a-c and/or cases 310a-b. The computing system can convert the identifying mark into a virtual element, and can store and associate the virtual element with an identifier associated with the bin and/or case on which the virtual element is to be superimposed.
In one embodiment, the autonomous robot device 300 can embed a sensing device in the bins 320a-c and/or cases 310a-b. The sensing device can be encoded with an identifier. The autonomous robot device 300 can transmit information associated with the bins 320a-c and/or cases 310a-b which are disposed at facilities, and the identifier of the sensing device, to a computing system. The information can include extracted text, images and/or identifiers of the bins 320a-c and/or cases 310a-b. The computing system can determine an identifying mark associated with the bins 320a-c and/or cases 310a-b. The computing system can store and associate the identifier of the sensing device with the identifying mark and the respective bins 320a-c and/or cases 310a-b. As a non-limiting example, the sensing device can be one or more of an RFID tag, other electronic tag, pin, tack, or staple.
The sensing device can be scanned and/or detected by a portable electronic device (e.g., the portable electronic device 200 described above).
The autonomous robot device 300 can also place identifying marks 404a-c on cases 310a-c disposed within a bin 320b. For example, the identifying mark 404a can be placed on the case 310a, the identifying mark 404b can be placed on the case 310b, and the identifying mark 404c can be placed on the case 310c. Each of the identifying marks 404a-c can indicate a different level of priority of the physical objects disposed in the cases 310a-c to be placed on the shelving units in a different location in the facility.
In an exemplary embodiment, the portable electronic device 200 can execute the augment application to instruct the portable electronic device 200 to power on the image capturing device 208 and control the operation of the image capturing device 208. An exemplary embodiment of the augment application is described herein.
In one embodiment, in response to pointing the image capturing device 208 at a physical scene 520 for more than a specified amount of time (e.g., an amount of time during which the image capturing device captures the same scene, with minor variations/movement, that exceeds a specified threshold), the image capturing device 208 can detect attributes associated with the physical scene 520. For example, when the physical scene 520 includes the bin 320a or the cases 310a-c, the image capturing device 208 can detect attributes (e.g., shapes, sizes, dimensions, etc.) of a physical item in the physical scene 520, such as the bins 320a-b and the corresponding alphanumeric text and/or machine-readable elements 330a-b on the respective bins 320a-b, or the alphanumeric text and/or machine-readable elements 312a-c on the cases 310a-c. In some embodiments, the touch-sensitive display 210 can display a visual indicator each time a physical item is detected. For example, the visual indicator can be a box superimposed around the physical item. The portable electronic device 200 can correlate the detected bins 320a-b and/or cases 310a-c with the corresponding alphanumeric text and/or machine-readable elements 330a-b on the respective bins 320a-b or the alphanumeric text and/or machine-readable elements 312a-c on the cases 310a-c.
In one embodiment, the image capturing device 208 can transmit the detected alphanumeric text and/or machine-readable elements 330a-b on the respective bins 320a-b, or the alphanumeric text and/or machine-readable elements 312a-c on the cases 310a-c, to a computing system. In response to receiving instructions from the computing system, the portable electronic device 200 can augment the physical scene 520 by superimposing a virtual element such as an identifying mark 402 and/or 404a-c on the bin 320a or cases 310a-c. The portable electronic device 200 can determine the coordinates along the X and Y axes on the display screen of the location in the viewable area, to accurately position the virtual element on the bins 320a-b and/or cases 310a-c.
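One simple way to compute the on-screen X and Y coordinates for the superimposed virtual element, given a bounding box for a detected bin or case in the camera frame, is sketched below; the bounding-box source and the linear scaling between camera and display resolutions are assumptions for this example.

```python
# Illustrative computation of display (X, Y) coordinates for a superimposed
# virtual mark, given the bounding box of a detected bin/case in the camera
# frame. The scaling from camera resolution to display resolution is assumed.
def mark_position(bbox, camera_size, display_size):
    """bbox: (x, y, width, height) in camera pixels; returns display (x, y)."""
    x, y, w, h = bbox
    cam_w, cam_h = camera_size
    disp_w, disp_h = display_size
    # Center of the detected case in camera coordinates
    cx, cy = x + w / 2.0, y + h / 2.0
    # Scale into display coordinates so the mark lands on the rendered case
    return (cx * disp_w / cam_w, cy * disp_h / cam_h)

# Example: case detected at (400, 300, 200, 150) in a 1920x1080 camera frame,
# rendered on a 1280x720 display
print(mark_position((400, 300, 200, 150), (1920, 1080), (1280, 720)))
```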
Alternatively, or in addition, a writing instrument 610 can be disposed within the nozzle. The writing instrument 610 can be in a retracted position inside the nozzle 602. The writing instrument 610 can extend out of the nozzle 602. The writing instrument 610 can be chalk, a marker, a pen, and/or a pencil.
In an example embodiment, one or more portions of the communications network 715 can be an ad hoc network, a mesh network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless wide area network (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, a wireless network, a WiFi network, a WiMax network, any other type of network, or a combination of two or more such networks.
The server 710 includes one or more computers or processors configured to communicate with the computing system 700, the portable electronic devices 200, the autonomous robotic devices 300, sensing devices 765, and the databases 705, via the network 715. The server 710 hosts one or more applications configured to interact with one or more components of the computing system 700 and/or facilitate access to the content of the databases 705. The databases 705 may store information/data, as described herein. For example, the databases 705 can include a physical objects database 725, a bins database 735, and a cases database 740. The physical objects database 725 can store information associated with physical objects disposed at a facility and can be indexed via the decoded identifier retrieved by the identifier reader. The bins database 735 can store information associated with bins and the cases stored within the bins. The cases database 740 can store information associated with cases and the physical objects stored within the cases. The databases 705 can be located at one or more locations geographically distributed from the computing system 700. Alternatively, the databases 705 can be located at the same geographic location as the computing system 700.
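For concreteness only, the physical objects, bins, and cases databases could be organized along the lines of the following sketch, shown here with SQLite; the table and column names are assumptions rather than the actual schema of the databases 705.

```python
# Illustrative schema for the physical objects, bins, and cases databases.
# SQLite and the table/column names are assumptions made for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE physical_objects (
    object_id   TEXT PRIMARY KEY,   -- decoded identifier from the label
    name        TEXT,
    type        TEXT,
    color       TEXT,
    shelf_qty   INTEGER             -- quantity on shelving units (second location)
);
CREATE TABLE bins (
    bin_id      TEXT PRIMARY KEY,   -- identifier decoded from the bin label
    location    TEXT                -- e.g., back-room aisle/slot
);
CREATE TABLE cases (
    case_id     TEXT PRIMARY KEY,   -- identifier decoded from the case label
    bin_id      TEXT REFERENCES bins(bin_id),
    object_id   TEXT REFERENCES physical_objects(object_id),
    case_qty    INTEGER             -- quantity of like physical objects in the case
);
""")
conn.commit()
```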
In one embodiment, bins 760 housing cases 762 can be disposed in a facility. The bins 760 and cases 762 can embody, e.g., the bins 320a-c and cases 310a-b described above.
The control engine 720 can determine a priority for the physical objects disposed in one or more cases to be moved from the cases 762 and placed on the shelving units. The control engine 720 can instruct the autonomous robot device 300 to mark the identified one or more cases 762 with an identifying mark respective to the determined priority. As an example, the control engine 720 can determine that a case 762 contains a set of like physical objects and that a quantity of the same like physical objects disposed on the shelving units is lower than a threshold amount. The control engine 720 can determine that the case 762 containing the set of like physical objects should be marked with an identifying mark indicating high priority to move the physical objects from the case 762 and place them on the shelving units. The identifying mark can also indicate a date or time at which the products should be moved from the cases 762 to the shelving units.
In one embodiment, the identifying mark can change color, shape, and/or size over time to indicate a change in priority. For example, the control engine 720 can determine that a set of like physical objects will be absent from the shelving units four weeks from the present date. The identifying mark can change as the date approaches the fourth week and the physical objects are expected to be absent from the shelving unit.
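A minimal sketch of such a time-dependent mark, assuming the expected stock-out date is known and using hypothetical date bands and colors, is shown below.

```python
# Hypothetical time-dependent identifying mark: the color escalates as the
# expected stock-out date approaches. The date bands and colors are assumptions.
from datetime import date

def mark_color_for(stockout_date: date, today: date) -> str:
    days_left = (stockout_date - today).days
    if days_left <= 7:
        return "green"   # high priority: stock-out expected within a week
    if days_left <= 21:
        return "black"   # intermediate priority
    return "red"         # low priority: stock-out still several weeks away

# Example: stock-out expected four weeks from today -> low priority for now
print(mark_color_for(date(2019, 3, 7), date(2019, 2, 7)))
```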
In one embodiment, the computing system 700 can receive a decoded identifier associated with a bin 760 from an autonomous robot device 300. In another embodiment, the computing system 700 can receive an image of information disposed on the outside of a bin 760. The control engine 720 can use OCR and/or machine-vision to extract identifying information associated with the bin. The control engine 720 can query the bins database 735 using the identifier received from the autonomous robot device 300, to retrieve information associated with the cases 762 within the bin 760. The control engine 720 can query the cases database 740 using the information associated with the cases 762, to retrieve information associated with physical objects disposed in the cases 762. The control engine 720 can query the physical objects database 725 to retrieve information associated with the physical objects stored in the cases 762.
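Continuing the illustrative schema sketched above, the chained lookup performed by the control engine 720 could resemble the following; the join structure and column names remain assumptions for this example.

```python
# Illustrative chained lookup: bin identifier -> cases in the bin -> physical
# objects in those cases. Table/column names follow the assumed schema above;
# `conn` is a sqlite3 connection such as the one created in the earlier sketch.
def lookup_bin_contents(conn, bin_id):
    rows = conn.execute("""
        SELECT c.case_id, p.object_id, p.name, c.case_qty, p.shelf_qty
        FROM cases AS c
        JOIN physical_objects AS p ON p.object_id = c.object_id
        WHERE c.bin_id = ?
    """, (bin_id,)).fetchall()
    return [
        {"case_id": case_id, "object_id": object_id, "name": name,
         "case_qty": case_qty, "shelf_qty": shelf_qty}
        for case_id, object_id, name, case_qty, shelf_qty in rows
    ]
```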
The control engine 720 can determine a priority for the physical objects disposed in one or more cases 762 to be moved from the cases and placed on the shelving units. The control engine 720 can instruct the autonomous robot device 300 to mark the bins in which the identified one or more cases are disposed with a specified identifying mark. The identifying mark can include information associated with the one or more cases and the priority for each of the cases 762.
In one embodiment, identifying marks can be embodied as virtual elements to be superimposed on the bins 760 and/or cases 762 in a virtual scene. As stated above, the autonomous robot device 300 can transmit information associated with cases 762 disposed in bins 760 which are disposed at facilities to the computing system 700. The computing system 700 can execute the control engine 720 in response to receiving the information associated with the cases. The information can include extracted text, images and/or identifiers of the bins 760 and/or cases 762. The control engine 720 can query the bins database 735 and/or cases database 740 using the information received from the autonomous robot device 300, to retrieve information associated with physical objects stored in the cases 762. The control engine 720 can query the physical objects database 725 to retrieve information associated with the physical objects stored in the cases. The information can include name, type, color, quantity of physical objects in the cases, and a quantity of physical objects disposed on shelving units in a different location in the facility.
The control engine 720 can determine a priority and/or urgency for the physical objects disposed in one or more cases to be moved from the cases 762 and placed on the shelving units. For example, the control engine 720 can determine that the physical objects are absent from the shelving units and immediately need to be moved from the cases 762 to the shelving units. The control engine 720 can determine an identifying mark associated with the determined priority. The control engine 720 can convert the identifying mark into a virtual element and store the virtual element in the bins database 735 and/or cases database 740 and associate the virtual element with an identifier associated with a bin 760 or case 762 on which the virtual element is to be superimposed.
The portable electronic device 200 can execute an augment application 745. In response to pointing the image capturing device 208 of the portable electronic device 200 at a physical scene 520 including the bins 760 and/or cases 762, the image capturing device 208 can detect attributes (e.g., shapes, sizes, dimensions, etc.) of a physical item in the physical scene, such as the bins 760 and/or cases 762 and the corresponding alphanumeric text and/or machine-readable elements on the respective bins 760 or the alphanumeric text and/or machine-readable elements on the cases 762. In some embodiments, the touch-sensitive display 210 can display a visual indicator each time a physical item is detected. For example, the visual indicator can be a box superimposed around the image of the physical item rendered on the display. The portable electronic device 200 can correlate the detected bins 760 and/or cases 762 with the corresponding alphanumeric text and/or machine-readable elements on the respective bins 760 or the alphanumeric text and/or machine-readable elements on the cases 762.
The portable electronic device 200, via the augment application 745, can transmit the detected alphanumeric text and/or machine-readable elements on the respective bins 760 or alphanumeric text and/or machine-readable elements on the cases 762 to the computing system 700. The control engine 720 can query the bins database 735 and/or the cases database 740 using the received identifier(s) decoded from the alphanumeric text and/or machine-readable elements on the respective bins 760 or alphanumeric text and/or machine-readable elements on the cases 762, to retrieve the respective virtual element associated with the identifier. In one embodiment, the augment application 745 can decode the identifiers from the alphanumeric text and/or machine-readable elements. Alternatively, the control engine 720 can decode the identifiers from the alphanumeric text and/or machine-readable elements. The control engine 720 can transmit instructions to the portable electronic device 200 to augment the display of the physical scene rendered on the touch-sensitive display 210 by superimposing the retrieved virtual element corresponding to each identifier. In response to receiving instructions from the computing system, the augment application 745 of the portable electronic device 200 can augment the physical scene by superimposing a virtual element such as an identifying mark on the bin 760 and/or cases 762.
In one embodiment, the autonomous robot device 300 can embed a sensing device 765 in the bins 760 and/or cases 762. The sensing device 765 can be encoded with an identifier. The autonomous robot device 300 can transmit information associated with bins 760 and/or cases 762 which are disposed at facilities, and the identifier of the sensing device, to the computing system 700. The information can include extracted text, images and/or identifiers of the bins 760 and/or cases 762. The control engine 720 can determine an identifying mark associated with the bins 760 and/or cases 762. The control engine 720 can store and associate the identifier of the sensing device with the identifying mark and the respective bins 760 and/or cases 762 in the bins database 735 and/or cases database 740.
The sensing device 765 can be scanned and/or detected by a portable electronic device 200. In response to the sensing device being scanned or detected by a portable electronic device 200, the portable electronic device 200 can transmit a decoded identifier of the sensing device to the computing system 700. The control engine 720 can query the bins database 735 and/or cases database 740 using the identifier to retrieve the identifying mark associated with the identifier of the sensing device and respective bin 760 or case 762. The control engine 720 can instruct the portable electronic device 200 to render the identifying mark associated with the identifier of the sensing device and respective bin 760 or case 762 on the touch-sensitive display 210.
As a non-limiting example, the automated robotic marking system 750 can be implemented in a retail store. Products can be disposed on shelving units on the sales floor. Products can also be disposed in cases 762 disposed in bins 760 located in a storage/stocking room. As an example, a retail store may have a rule to restock shelving units after a specified amount of products remain on the shelves. The products can be moved from the cases 762 in the stock/storage room to the shelving units. The automated robotic marking system 750 can determine a timeframe and/or priority at which products should be restocked on the shelving units. As an example, the control engine 720 can use on-hand data and rate of sales data retrieved from a POS system in the retail store to determine whether the product has been put on the shelves.
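As an example of how on-hand and rate-of-sales data could drive that determination, the sketch below estimates the remaining days of supply and flags a product for restocking; the days-of-supply threshold and the shape of the POS data are assumptions for this example.

```python
# Illustrative restock decision from on-hand and rate-of-sales data (e.g., as
# retrieved from a POS system). The days-of-supply threshold is an assumption.
def needs_restock(on_hand_units: int, units_sold_per_day: float,
                  min_days_of_supply: float = 2.0) -> bool:
    if units_sold_per_day <= 0:
        return on_hand_units == 0  # no sales history: restock only if empty
    days_of_supply = on_hand_units / units_sold_per_day
    return days_of_supply < min_days_of_supply

# Example: 5 units on the shelf selling 4 per day -> 1.25 days of supply -> restock
print(needs_restock(5, 4.0))
```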
The computing system 700 can execute the control engine 720 in response to receiving the information associated with the cases from the autonomous robot device 300. The control engine 720 can query the cases database 740 using the information received from the autonomous robot device 300, to retrieve information associated with physical objects stored in the cases. The control engine 720 can query the physical objects database 725 to retrieve information associated with the products stored in the cases 762. The information can include name, type, color, quantity of products in the cases, and a quantity of products disposed on shelving units on the sales floor.
The control engine 720 can determine a priority for the products to be re-stocked from the storage/stock room to the shelving units on the sales floor. The control engine 720 can instruct the autonomous robot device 300 to mark the identified one or more cases 762 with an identifying mark respective to the determined priority. As an example, the control engine 720 can determine that a case contains bottles of Pepsi®. The control engine 720 can also determine that the stock of Pepsi® bottles on the shelving units is lower than a threshold amount. The control engine 720 can determine that the case containing the set of like physical objects should be marked with an identifying mark indicating high priority to move the Pepsi® bottles from the case and place them on the shelving units.
As described above, in one embodiment the sensing device 765 can be embedded into the bins 760 and/or cases 762. The sensing device 765 can include a location module configured to determine the location of the sensing device 765. The sensing device 765 can periodically provide its location to the computing system 700. The control engine 720 can track the location of the bins 760 and/or cases 762 based on the location information received from the sensing device 765. The control engine 720 can determine whether the items in the cases which need to be stocked have been stocked on the shelving units based on the location information of the sensing devices 765.
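Assuming the sensing device 765 reports planar coordinates and the sales-floor region of the facility is known, the stocking check described above could look like the following sketch; the region boundaries and the report format are hypothetical.

```python
# Hypothetical check that a case has been moved to the sales floor, based on
# periodic location reports from its embedded sensing device. The sales-floor
# boundary coordinates and the report format are assumptions.
SALES_FLOOR = {"x_min": 0.0, "x_max": 50.0, "y_min": 0.0, "y_max": 30.0}

def is_stocked(last_report: dict) -> bool:
    """last_report: {'x': float, 'y': float} from the sensing device."""
    return (SALES_FLOOR["x_min"] <= last_report["x"] <= SALES_FLOOR["x_max"]
            and SALES_FLOOR["y_min"] <= last_report["y"] <= SALES_FLOOR["y_max"])

# Example: the sensing device last reported a position inside the sales floor
print(is_stocked({"x": 12.5, "y": 8.0}))
```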
Virtualization may be employed in the computing device 800 so that infrastructure and resources in the computing device 800 may be shared dynamically. A virtual machine 812 may be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines may also be used with one processor.
Memory 806 may include a computer system memory or random access memory, such as DRAM, SRAM, EDO RAM, and the like. Memory 806 may include other types of memory as well, or combinations thereof.
A user may interact with the computing device 800 through a visual display device 814, such as a computer monitor, which may display one or more graphical user interfaces 816, as well as through a multi-touch interface 820, a pointing device 818, a scanner 836, and a reader 832. The scanner 836 and reader 832 can be configured to read sensitive data.
The computing device 800 may also include one or more storage devices 826, such as a hard-drive, CD-ROM, or other computer readable media, for storing data and computer-readable instructions and/or software that implement exemplary embodiments (e.g., applications, such as the control engine 720). For example, the exemplary storage device 826 can include one or more databases 828 for storing information regarding physical objects, cases and bins. The databases 828 may be updated manually or automatically at any suitable time to add, delete, and/or update one or more data items in the databases.
The computing device 800 can include a network interface 808 configured to interface via one or more network devices 824 with one or more networks, for example, Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56 kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above. In exemplary embodiments, the computing system can include one or more antennas 822 to facilitate wireless communication (e.g., via the network interface) between the computing device 800 and a network and/or between the computing device 800 and other computing devices. The network interface 808 may include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing device 800 to any type of network capable of communication and performing the operations described herein.
The computing device 800 may run operating system 810, such as versions of the Microsoft® Windows® operating systems, different releases of the Unix and Linux operating systems, versions of the MacOS® for Macintosh computers, embedded operating systems, real-time operating systems, open source operating systems, proprietary operating systems, or other operating systems capable of running on the computing device 800 and performing the operations described herein. In exemplary embodiments, the operating system 810 may be run in native mode or emulated mode. In an exemplary embodiment, the operating system 810 may be run on one or more cloud machine instances.
In operation 1110, the computing system can query the data storage facility (e.g., the physical objects database 725, the bins database 735 and the cases database 740, described above) to retrieve information associated with a quantity of the physical objects disposed in a second location of the facility.
In operation 1226, the computing system can store and associate the identifier of the sensing device with the at least one of the one or more cases and the identified identifying mark. In operation 1228, a portable electronic device (e.g., the portable electronic device 200 described above) can scan the sensing device embedded in the at least one of the one or more cases, decode the identifier from the sensing device, transmit the identifier to the computing system, and, in response to receiving instructions from the computing system, render the identifying mark associated with the at least one of the one or more cases on the display.
In describing exemplary embodiments, specific terminology is used for the sake of clarity. For purposes of description, each specific term is intended to at least include all technical and functional equivalents that operate in a similar manner to accomplish a similar purpose. Additionally, in some instances where a particular exemplary embodiment includes multiple system elements, device components or method steps, those elements, components or steps may be replaced with a single element, component or step. Likewise, a single element, component or step may be replaced with multiple elements, components or steps that serve the same purpose. Moreover, while exemplary embodiments have been shown and described with references to particular embodiments thereof, those of ordinary skill in the art will understand that various substitutions and alterations in form and detail may be made therein without departing from the scope of the present disclosure. Further still, other aspects, functions and advantages are also within the scope of the present disclosure.
Exemplary flowcharts are provided herein for illustrative purposes and are non-limiting examples of methods. One of ordinary skill in the art will recognize that exemplary methods may include more or fewer steps than those illustrated in the exemplary flowcharts, and that the steps in the exemplary flowcharts may be performed in a different order than the order shown in the illustrative flowcharts.
This application claims priority to U.S. Provisional Application No. 62/632,548 filed on Feb. 20, 2018 and U.S. Provisional Application No. 62/802,543 filed on Feb. 7, 2019, the contents of each of which are hereby incorporated by reference in their entirety.