Embodiments described herein relate generally to an information processing device, a sorting system, and a recording medium.
A system that uses a robot arm to sort packages such as letters, documents, and parcels by destination or other criteria has been used in postal and parcel delivery services and similar operations.
In such a system, processing for recognizing the shape of a package may be performed when the robot arm performs a holding operation on the package.
In that case, if the shape of the package is irregular or if multiple packages are stacked randomly, it is difficult to accurately recognize the shape of each package, and the holding operation may not be performed appropriately.
An information processing device according to an embodiment includes a connection device and a hardware processor connected to the connection device. The connection device is configured to connect to an imaging device. The imaging device images a region on which an object to be handled by using a load handling device is placed. The hardware processor is configured to acquire image information about the region via the connection device. The hardware processor is configured to detect, from the image information, an identification region in which identification information attached to the object is included. The hardware processor is configured to recognize the identification information in the identification region. The hardware processor is configured to output region information related to the identification region and also output the identification information.
The box 2 is a container that contains the packages 3 to be sorted. Note that the box 2 is an example of means for storing the packages 3. A load-carrying platform, a pallet, a cage carriage, and so forth may be used instead of the box 2. A sorting pocket 4 is a sorting destination of the package 3 and is partitioned on the basis of a predetermined criterion (for example, a destination of the package 3).
The sorting system 1 includes a robot arm 11 (load handling device), a camera 12 (imaging device), and a control device 13 (information processing device). The robot arm 11, the camera 12, and the control device 13 are configured to be able to communicate with each other via a network 14. Although the specific configuration of the network 14 is not limited, the network 14 may be, for example, a local area network (LAN) or a cloud system.
The configuration of the robot arm 11 will be described. The robot arm 11 is an apparatus that holds the package 3 stored in the box 2, lifts the held package 3, and moves the package 3 to the sorting pocket 4 which is a sorting destination. The robot arm 11 includes a holding mechanism 21, an arm mechanism 22, a contact sensor 23, and a drive control mechanism 24.
The holding mechanism 21 is a mechanism that holds the package 3. The holding mechanism 21 according to the present embodiment includes a suction pad that sucks the package 3. The suction pad sucks and holds the package 3 by creating negative pressure in its internal space while in contact with the surface of the package 3. The suction pad is controlled by a control signal from the drive control mechanism 24. Note that the configuration of the holding mechanism 21 is not limited thereto, and may be, for example, a configuration using a gripper that grips the package 3 by pinching it with finger-like members.
The arm mechanism 22 is a mechanism that moves the holding mechanism 21. The arm mechanism 22 includes arms and a joint mechanism that connects the arms. The joint mechanism incorporates an actuator which is controlled by a control signal from the drive control mechanism 24.
The contact sensor 23 is a sensor that detects stress applied to the holding mechanism 21. The contact sensor 23 detects, for example, stress applied to the holding mechanism 21 in the vertical direction. The contact sensor 23 transmits a detection result to the drive control mechanism 24. Note that the contact sensor 23 may also detect stress applied to the arm mechanism 22.
The drive control mechanism 24 controls operations of the holding mechanism 21 and the arm mechanism 22 on the basis of the operation information output by the control device 13. The drive control mechanism 24 is configured by using, for example, a microprocessor, a memory, an application specific integrated circuit (ASIC), and so forth. The drive control mechanism 24 generates a control signal for controlling the holding mechanism 21 and the arm mechanism 22 in accordance with operation information supplied by the control device 13. The operation information includes: information for implementing a holding operation of holding the package 3 by the holding mechanism 21, and information for implementing a sorting operation of moving the held package 3 toward a sorting destination. The drive control mechanism 24 may be configured as a sequencer.
Next, the configuration of the camera 12 will be described. The camera 12 is an apparatus that acquires image information about the package 3 stored in the box 2. The camera 12 outputs the image information to the control device 13 via the network 14.
The camera 12 is, for example, a monocular camera configured by using a lens and an imaging element which converts light formed by the lens into an electric signal. Such a configuration allows the camera 12 to acquire image information constituting a raster image in which coordinates (pixels) having color information are two-dimensionally arranged. Note that the raster image may be a color image or a monochrome image.
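By way of a non-limiting illustration, the raster-image structure described above can be sketched as follows. The sketch assumes the OpenCV and NumPy libraries, and the image file standing in for a frame captured by the camera 12 is hypothetical.

```python
# Illustrative sketch only (not part of the embodiment), assuming OpenCV/NumPy.
import cv2

# Hypothetical file standing in for a frame captured by the camera 12.
box_image = cv2.imread("box_image.png")  # H x W x 3 BGR raster image
if box_image is None:
    raise FileNotFoundError("box_image.png not found")

height, width, channels = box_image.shape            # pixels arranged two-dimensionally
gray = cv2.cvtColor(box_image, cv2.COLOR_BGR2GRAY)   # monochrome variant of the raster image
print(f"{width}x{height} raster image, {channels} color channels")
```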
The angle of view of the lens of the camera 12 is adjusted so as to image a region including the box 2 in which the package 3 is stored. For example, the optical axis of the lens of the camera 12 is adjusted to face the bottom surface of the box 2, in other words, adjusted to be parallel to the vertical direction. The camera 12 images a predetermined range including the box 2 in a direction facing the bottom surface of the box 2 and then acquires a raster image. An image in a predetermined range including the box 2 is hereinafter referred to as a box image.
The configuration of the control device 13 will now be described. The control device 13 creates operation information for controlling the operation of the robot arm 11 on the basis of the image information (box image 51) acquired by the camera 12. The control device 13 outputs the operation information to the drive control mechanism 24.
The CPU 31 performs predetermined control arithmetic processing using the RAM 32 as a working area in accordance with computer programs stored in the ROM 33 and the auxiliary storage device 34. The auxiliary storage device 34 is a nonvolatile memory or the like. The auxiliary storage device 34 stores various kinds of data necessary for the CPU 31 to execute processing. The communication I/F 35 is a device that enables transmission and reception of information to and from external devices (the camera 12, the drive control mechanism 24, and so forth) via an appropriate computer network (the network 14 or the like). The user I/F 36 is a device that enables input and output of information between the control device 13 and a user, and is, for example, a keyboard, a mouse, a touch panel mechanism, a microphone, a display, a speaker, or the like. The CPU 31, the RAM 32, the ROM 33, the auxiliary storage device 34, the communication I/F 35, and the user I/F 36 are connected via the bus 37 so that they can communicate with each other.
The recognition control unit 100 acquires the image information (box image 51) acquired by the camera 12 via the connection unit (communication I/F 35 and so forth) connecting the control device 13 and the camera 12. The recognition control unit 100 detects, from the acquired image information, an identification region in which the identification information 10 attached to the package 3 is included, and generates region information related to the identification region. The recognition control unit 100 recognizes the identification information 10 in the detected identification region. The identification information 10 is information corresponding to a sorting destination of the package 3. The identification information 10 may be, for example, a character string, a barcode, or a two-dimensional code, each indicating a destination (a postal code, an address, or the like) or a transport destination (the sorting pocket 4 and so forth) of the package 3.
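As a non-limiting sketch of this detection and recognition flow for the two-dimensional-code case, the following assumes the OpenCV library; the function name is hypothetical and merely stands in for part of the recognition control unit 100.

```python
import cv2

def detect_identification(box_image):
    """Detect a two-dimensional code in the box image and return the decoded
    identification information together with the corner coordinates of the
    identification region, or (None, None) if nothing is found."""
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(box_image)
    if not data:
        return None, None
    # points holds the code's four corner coordinates in the image.
    return data, points.reshape(4, 2)
```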
The output unit 101 outputs the identification information 10 recognized by the recognition control unit 100 to the placement control unit 102. The output unit 101 outputs, to the holding control unit 103, the region information related to the identification region detected by the recognition control unit 100.
The placement control unit 102 sets, on the basis of the identification information 10 output from the output unit 101, a sorting destination on which the package 3 held by the robot arm 11 is to be placed.
The holding control unit 103 sets, within the identification region, a holding position at which the holding mechanism 21 of the robot arm 11 holds the package 3, on the basis of the region information output from the output unit 101.
The operation information generation unit 104 generates operation information for controlling the operation of the robot arm 11, on the basis of the sorting destination information indicating the sorting destination set by the placement control unit 102 and the holding position information indicating the holding position set by the holding control unit 103.
As described above, according to the present embodiment, the holding position is set on the basis of the position of the identification information 10. Thus, an appropriate holding operation can be implemented without requiring information that is difficult to recognize, such as the shape of the package 3 itself or the position of the package 3 in three-dimensional space.
The OCR processing unit 111 converts image data of a character string into text data (character codes). The OCR processing unit 111 acquires, from the box image 51, text data indicating a destination and so forth written on the package 3. The OCR processing unit 111 performs image recognition processing by using a given parameter set. The parameter set is a set of parameters used in one or more processes included in the image recognition processing. The parameter set includes, for example, a threshold value for binarization processing and a threshold value for determining success or failure of character recognition. The parameter set may also include, for example, a threshold value used for processing red-green-blue (RGB) information when a color image is converted into a binary image. Threshold values used for processing the RGB information include a threshold value for recognizing a frame line (background other than the characters in an address entry field) colored red, green, or the like on the basis of the RGB information included in the image, and a threshold value for processing (erasing) the recognized frame line. The parameter set further includes a threshold value for determining a contrast to be adjusted in accordance with a change in density of a printed or written character, a threshold value for determining the label size of an individual character included in the image (the region in which an individual character in a character string is recognized), and so forth. The threshold value for determining the label size is adjusted so that a character is appropriately recognized in a case where the character is blurred (the label size is large) or a part of the character has disappeared (the label size is small).
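A minimal sketch of such a parameter set and of the binarization step it controls might look as follows, assuming OpenCV; the class and field names are illustrative assumptions rather than the embodiment's actual parameters.

```python
from dataclasses import dataclass

import cv2

@dataclass
class OcrParameterSet:
    """Hypothetical container for the parameter set described above."""
    binarization_threshold: int = 128  # threshold for binarization processing
    min_confidence: float = 0.80       # threshold for judging recognition success/failure
    min_label_size: int = 8            # smallest region accepted as one character
    max_label_size: int = 64           # largest region accepted as one character

def binarize(gray_image, params: OcrParameterSet):
    """Binarization step of the image recognition processing."""
    _, binary = cv2.threshold(
        gray_image, params.binarization_threshold, 255, cv2.THRESH_BINARY)
    return binary
```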
The barcode reading unit 112 reads a barcode attached to the package 3 from the box image 51 and acquires barcode data indicating the configuration of the barcode.
The two-dimensional code reading unit 113 reads a two-dimensional code attached to the package 3 from the box image 51 and acquires two-dimensional code data indicating the configuration of the two-dimensional code.
The identification information acquisition unit 114 outputs, to the output unit 101, the text data acquired by the OCR processing unit 111, the barcode data acquired by the barcode reading unit 112, or the two-dimensional code data acquired by the two-dimensional code reading unit 113.
The region information acquisition unit 115 acquires region information related to the identification region including the identification information 10, from the OCR processing unit 111, the barcode reading unit 112, or the two-dimensional code reading unit 113. The region information includes information indicating a position of an identification region in a region in which the package 3 is placed (in the present embodiment, a region inside the box 2). The region information may be acquired on the basis of, for example, coordinate information indicating a region (reading region) in the box image 51 read when the identification information 10 is detected. The region information acquisition unit 115 outputs the region information to the output unit 101.
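For the barcode case, the following sketch shows one way region information could be derived from the reading region; it assumes the third-party pyzbar library, and the function name is hypothetical.

```python
from pyzbar.pyzbar import decode

def acquire_region_info(box_image):
    """Return (barcode_data, region_info) pairs; region_info locates each
    identification region within the box image in pixel coordinates."""
    results = []
    for symbol in decode(box_image):            # each detected barcode symbol
        left, top, width, height = symbol.rect  # reading region in the box image
        region_info = {"x": left, "y": top, "width": width, "height": height}
        results.append((symbol.data.decode("utf-8"), region_info))
    return results
```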
Note that, although the above description gives an example in which the recognition control unit 100 includes the OCR processing unit 111, the barcode reading unit 112, and the two-dimensional code reading unit 113, the configuration of the recognition control unit 100 is not limited thereto. The recognition control unit 100 may include at least one of these units in accordance with the type of the identification information 10 to be used.
On the basis of the identification information 10 (text data, barcode data, or two-dimensional code data) output by the output unit 101, the acquisition unit 121 acquires sorting destination information indicating the sorting destination of the package 3 to which the identification information 10 has been attached. The acquisition unit 121 outputs the sorting destination information to the operation information generation unit 104.
The destination DB 122 holds data in which the identification information 10 and the sorting destination are associated with each other (for example, a table in which an address and a tray of the sorting pocket 4 are associated with each other). The acquisition unit 121 searches the destination DB 122 to acquire sorting destination information indicating the sorting destination corresponding to the identification information 10 acquired from the output unit 101.
The external search unit 123 is a search engine capable of performing a search using text data as a keyword and extracting predetermined information. In a case where the identification information 10 is text data, the acquisition unit 121 extracts predetermined information from the external search unit 123 by using the text data acquired from the output unit 101 as a keyword, and treats the extracted information as sorting destination information.
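The lookup with a fallback to keyword search can be sketched as follows; the dictionary and the callable stand in for the destination DB 122 and the external search unit 123, and all names are illustrative assumptions.

```python
def acquire_sorting_destination(identification, destination_db, external_search):
    """Look up the sorting destination for recognized identification information.

    destination_db  : dict mapping identification information to a sorting pocket
                      (stands in for the destination DB 122)
    external_search : callable used as a fallback when the DB has no entry
                      (stands in for the external search unit 123)
    """
    destination = destination_db.get(identification)
    if destination is None and isinstance(identification, str):
        destination = external_search(identification)  # keyword search on text data
    return destination

# Example usage with hypothetical data:
db = {"100-0001": "pocket-4A", "530-0001": "pocket-7C"}
print(acquire_sorting_destination("100-0001", db, lambda keyword: None))  # pocket-4A
```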
The holding position setting unit 131 sets a holding position within the identification region on the basis of the region information acquired from the output unit 101.
Note that the method of setting the identification region 52 is not limited to the above-described example. The identification region 52 may be formed in, for example, a polygonal shape other than a rectangle, a circular shape, a free-form curve, or the like.
The holding position setting unit 131 sets a holding position 53 (a position at which the holding mechanism 21 holds the package 3) within the identification region 52 as described above. The holding position setting unit 131 sets, for example, the approximate area centroid or the approximate center of the identification region 52 as the holding position 53. The holding position setting unit 131 outputs holding position information indicating the holding position 53 to the operation information generation unit 104.
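One way to compute such an approximate area centroid, valid for the rectangular and other polygonal identification regions 52 mentioned above, is the shoelace-based centroid; the sketch below is illustrative only.

```python
def holding_position(corners):
    """Approximate area centroid of a polygonal identification region.

    corners: list of (x, y) vertices in order, e.g. the four corners of a
    rectangular identification region 52.
    """
    area = 0.0
    cx = cy = 0.0
    n = len(corners)
    for i in range(n):
        x0, y0 = corners[i]
        x1, y1 = corners[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # shoelace term for this edge
        area += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    if abs(area) < 1e-9:  # degenerate polygon: fall back to the vertex mean
        return (sum(x for x, _ in corners) / n, sum(y for _, y in corners) / n)
    area *= 0.5
    return (cx / (6.0 * area), cy / (6.0 * area))

# For a rectangle the result is its center:
print(holding_position([(0, 0), (4, 0), (4, 2), (0, 2)]))  # (2.0, 1.0)
```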
As described above, the holding position 53 is set within the identification region 52, so that the package 3 can be reliably held even when the shape of the package 3 itself cannot be recognized.
The holding operation planning unit 141 plans a holding operation, that is, the sequence of operations performed until the robot arm 11 holds the package 3, on the basis of the holding position information acquired from the holding control unit 103. The holding operation includes an operation of moving the arm mechanism 22 so as to move the holding mechanism 21 to the holding position 53, an operation of holding the package 3 by the holding mechanism 21 that has reached the holding position 53 (by creating negative pressure inside the suction pad), and so forth.
The sorting operation planning unit 142 plans a sorting operation, which is an operation of moving the package 3 to the sorting destination by the robot arm 11, on the basis of the sorting destination information acquired from the placement control unit 102. The sorting operation includes an operation of moving the arm mechanism 22 so as to move the holding mechanism 21 holding the package 3 from the holding position 53 to a sorting destination (for instance, a specific tray in the sorting pocket 4), an operation of causing the holding mechanism 21 that has reached the sorting destination to release the package 3 (by releasing the negative pressure in the suction pad), and so forth.
The integration unit 143 integrates the holding operation planned by the holding operation planning unit 141 and the sorting operation planned by the sorting operation planning unit 142, and then generates operation information indicating an overall operation plan of the robot arm 11. The integration unit 143 outputs the operation information to the robot arm 11 (drive control mechanism 24).
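The integration of the two planned operations can be sketched as follows; the step granularity and names are assumptions and do not reflect the robot arm 11's actual command set.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Step:
    action: str                  # "move", "grip", "release", ...
    target: Tuple[float, float]  # position the holding mechanism moves to

def plan_operations(holding_position, sorting_destination) -> List[Step]:
    """Integrate a holding operation and a sorting operation into one plan,
    mirroring the role of the integration unit 143 (illustrative only)."""
    holding_op = [
        Step("move", holding_position),       # move holding mechanism to holding position 53
        Step("grip", holding_position),       # create negative pressure in the suction pad
    ]
    sorting_op = [
        Step("move", sorting_destination),    # carry the package to the sorting destination
        Step("release", sorting_destination), # release the negative pressure in the suction pad
    ]
    return holding_op + sorting_op            # overall operation plan
```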
A computer program for implementing the various functions in the control device 13 according to the above-described embodiment can be provided by being recorded in a computer-readable recording medium such as a compact disc read-only memory (CD-ROM), a flexible disk (FD), a CD-recordable (CD-R), or a digital versatile disc (DVD) as a file in an installable format or an executable format. The computer program may also be provided or distributed via a network such as the Internet.
Note that the above embodiment exemplifies a configuration in which the control device 13 and the drive control mechanism 24 are provided separately. However, the control device 13 and the drive control mechanism 24 may be integrated into one body.
Moreover, the above-described embodiment exemplifies a configuration in which one robot arm 11 is controlled by one control device 13, but the configuration is not limited thereto. For example, a plurality of robot arms 11 may be controlled by one control device 13.
As described above, according to the present embodiment, the holding position 53 is set on the basis of the position of the identification information 10, so that an appropriate holding operation can be implemented regardless of the shape of the package 3 itself. Thus, it is possible to provide the sorting system 1 with high reliability.
While the embodiments and modifications of the present invention have been described above, the above-described embodiments and modifications have been presented by way of example only, and are not intended to limit the scope of the invention. The above-described novel embodiments and modifications may be implemented in various other forms, and various omissions, substitutions and changes may be made without departing from the gist of the invention. The above-described embodiments and modifications are included in the scope and gist of the invention, and are also included in the invention described in the claims and the scope equivalent thereto.
Number | Date | Country | Kind
---|---|---|---
2020-112017 | Jun. 29, 2020 | JP | national
This application is national stage application of International Application No. PCT/JP2021/023807, filed Jun. 23, 2021, which designates the United States, incorporated herein by reference, and which claims the benefit of priority from Japanese Patent Application No. 2020-112017, filed Jun. 29, 2020, the entire contents of which are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2021/023807 | Jun. 23, 2021 | WO |