This application claims priority based on Japanese Patent Application No. 2021-134391 filed in Japan on Aug. 19, 2021, the entire disclosure of which is incorporated herein by reference.
The present disclosure relates to a holding mode determination device for a robot, a holding mode determination method for a robot, and a robot control system.
A method for appropriately holding an object having a certain shape by a robot hand is known (see, for example, Patent Literature 1).
A holding mode determination device for a robot according to an embodiment of the present disclosure includes a controller and an interface. The controller is configured to output, to a display device, at least one holding mode of a robot for a target object together with an image of the target object, the at least one holding mode being inferred by classifying the target object into at least one of multiple inferable holding categories. The interface is configured to acquire, via the display device, a user's selection of the at least one holding mode.
A holding mode determination method for a robot according to an embodiment of the present disclosure includes outputting, to a display device, at least one holding mode of a robot for a target object together with an image of the target object, the at least one holding mode being inferred by classifying the target object into at least one of multiple inferable holding categories. The holding mode determination method includes acquiring, via the display device, a user's selection of the at least one holding mode.
A robot control system according to an embodiment of the present disclosure includes a robot, a robot control device configured to control the robot, a holding mode determination device configured to determine a mode in which the robot holds a target object, and a display device. The holding mode determination device is configured to output, to the display device, at least one holding mode of the robot for the target object together with an image of the target object, the at least one holding mode being inferred by classifying the target object into at least one of multiple inferable holding categories. The display device is configured to display the image of the target object and the at least one holding mode for the target object. The display device is configured to receive an input of a user's selection of the at least one holding mode, and output the user's selection to the holding mode determination device. The holding mode determination device is configured to acquire, from the display device, the user's selection of the at least one holding mode.
When a trained model receives an image of an object and outputs a holding mode for the object, a user is unable to control the output of the trained model and is therefore unable to control the holding mode for the object. There is a demand for increasing the convenience of a trained model that outputs a holding mode for an object. A holding mode determination device for a robot, a holding mode determination method for a robot, and a robot control system according to an embodiment of the present disclosure can increase the convenience of such a trained model.
As illustrated in the drawings, a robot control system 100 according to an embodiment includes a robot 2, a robot control device 110 configured to control the robot 2, a holding mode determination device 10, a camera 4, and a display device 20. The robot 2 holds a holding target object 80 with an end effector 2B.
The robot 2 includes an arm 2A and the end effector 2B. The arm 2A may be, for example, a six-axis or seven-axis vertical articulated robot. The arm 2A may be a three-axis or four-axis horizontal articulated robot or a SCARA robot. The arm 2A may be a two-axis or three-axis orthogonal robot. The arm 2A may be a parallel link robot or the like. The number of axes constituting the arm 2A is not limited to that illustrated. In other words, the robot 2 includes the arm 2A connected by multiple joints, and is operated by driving the joints.
The end effector 2B may include, for example, a grasping hand configured to grasp the holding target object 80. The grasping hand may include multiple fingers. The number of fingers of the grasping hand may be two or more. Each finger of the grasping hand may include one or more joints. The end effector 2B may include a suction hand configured to suck and hold the holding target object 80. The end effector 2B may include a scooping hand configured to scoop and hold the holding target object 80. The end effector 2B is also referred to as a holding portion that holds the holding target object 80. The end effector 2B is not limited to these examples, and may be configured to perform other various operations.
The robot 2 is capable of controlling the position of the end effector 2B by operating the arm 2A. The end effector 2B may include a shaft serving as a reference for the direction in which the end effector 2B acts on the holding target object 80. When the end effector 2B includes a shaft, the robot 2 is capable of controlling the direction of the shaft of the end effector 2B by operating the arm 2A. The robot 2 controls the start and end of an operation in which the end effector 2B acts on the holding target object 80. The robot 2 controls the operation of the end effector 2B while controlling the position of the end effector 2B or the direction of the shaft of the end effector 2B, thereby being capable of moving or processing the holding target object 80.
The robot control system 100 further includes a sensor. The sensor detects physical information of the robot 2. The physical information of the robot 2 may include information regarding the actual positions or postures of individual components of the robot 2 or the speeds or accelerations of the individual components of the robot 2. The physical information of the robot 2 may include information regarding forces that act on the individual components of the robot 2. The physical information of the robot 2 may include information regarding a current that flows through a motor that drives the individual components of the robot 2 or a torque of the motor. The physical information of the robot 2 represents a result of an actual operation of the robot 2. That is, by acquiring the physical information of the robot 2, the robot control system 100 is capable of grasping a result of an actual operation of the robot 2.
The sensor may include a force sensor or a tactile sensor that detects, as physical information of the robot 2, a force, a distributed pressure, a slip, or the like acting on the robot 2. The sensor may include a motion sensor that detects, as the physical information of the robot 2, the position or posture of the robot 2 or the speed or acceleration of the robot 2. The sensor may include a current sensor that detects, as the physical information of the robot 2, a current flowing through a motor that drives the robot 2. The sensor may include a torque sensor that detects, as the physical information of the robot 2, a torque of a motor that drives the robot 2.
The sensor may be installed at a joint of the robot 2 or a joint driving unit that drives the joint. The sensor may be installed at the arm 2A or the end effector 2B of the robot 2.
The sensor outputs detected physical information of the robot 2 to the robot control device 110. The sensor detects and outputs the physical information of the robot 2 at a predetermined timing. The sensor outputs the physical information of the robot 2 as time-series data.
In the example configuration illustrated in the drawings, the camera 4 is attached to the end effector 2B and photographs the holding target object 80. An image captured by the camera 4 and used for determining the holding mode is also referred to as a holding target image.
The camera 4 is not limited to the configuration of being attached to the end effector 2B, and may be provided at any position at which the camera 4 is capable of photographing the holding target object 80. In a configuration in which the camera 4 is attached to a structure other than the end effector 2B, the above-described holding target image may be generated based on an image captured by the camera 4 attached to the structure. The holding target image may be generated by performing image conversion based on the relative position and relative posture of the end effector 2B with respect to the attachment position and attachment posture of the camera 4. Alternatively, the holding target image may be generated using computer-aided design (CAD) and drawing data.
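For reference, a simplified Python sketch of such an image conversion step is given below. It only computes the relative position and relative posture of the end effector 2B with respect to the camera 4 from homogeneous transforms expressed in the robot base frame; the function names, pose values, and the assumption that this relative pose parameterizes the image conversion are illustrative and are not part of the embodiment.

```python
# Illustrative sketch: relative position and posture of the end effector 2B
# with respect to the camera 4, which could be used to convert an image
# captured by an externally mounted camera into a holding target image.
# Poses are 4x4 homogeneous transforms in the robot base frame; all names
# and numerical values are hypothetical.
import numpy as np

def pose_to_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def relative_pose(T_base_camera, T_base_end_effector):
    """Pose of the end effector 2B expressed in the camera 4 frame."""
    return np.linalg.inv(T_base_camera) @ T_base_end_effector

# Example: camera 0.5 m above the base, end effector 0.3 m in front and 0.2 m up.
T_base_camera = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.5]))
T_base_ee = pose_to_matrix(np.eye(3), np.array([0.3, 0.0, 0.2]))
T_camera_ee = relative_pose(T_base_camera, T_base_ee)
# T_camera_ee could then parameterize the image conversion (for example, a
# planar homography) that re-renders the captured image from the viewpoint
# of the end effector 2B.
```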
As illustrated in the drawings, the holding mode determination device 10 includes a controller 12 and an interface (I/F) 14.
The controller 12 may include at least one processor to provide control and processing capabilities for executing various functions. The processor may execute a program for implementing the various functions of the controller 12. The processor may be implemented as a single integrated circuit. The integrated circuit is also referred to as an IC. The processor may be implemented as multiple communicably connected integrated circuits and discrete circuits. The processor may be implemented based on other various known techniques.
The controller 12 may include a storage unit. The storage unit may include an electromagnetic storage medium such as a magnetic disk, or may include a memory such as a semiconductor memory or a magnetic memory. The storage unit stores various pieces of information. The storage unit stores a program or the like to be executed by the controller 12. The storage unit may be configured as a non-transitory readable medium. The storage unit may function as a work memory of the controller 12. At least a part of the storage unit may be configured separately from the controller 12.
The interface 14 may include a communication device configured to be capable of wired or wireless communication. The communication device may be configured to be capable of communication in a communication scheme based on various communication standards. The communication device can be configured using a known communication technique.
The interface 14 may include an input device that receives an input of information, data, or the like from a user. The input device may include, for example, a touch panel or a touch sensor, or a pointing device such as a mouse. The input device may include a physical key. The input device may include an audio input device such as a microphone. The interface 14 may be configured to be connectable to an external input device. The interface 14 may be configured to acquire information input to the external input device from the external input device.
The interface 14 may include an output device that outputs information, data, or the like to a user. The output device may include, for example, an audio output device such as a speaker that outputs auditory information such as voice. The output device may include a vibration device that vibrates to provide tactile information to a user. The output device is not limited to these examples, and may include other various devices. The interface 14 may be configured to be connectable to an external output device. The interface 14 may output information to the external output device so that the external output device outputs the information to a user. The interface 14 may be configured to be connectable to the display device 20, which will be described below, as the external output device.
The display device 20 displays information, data, or the like to a user. The display device 20 may include a controller that controls display. The display device 20 may include, for example, a liquid crystal display (LCD), an organic electro-luminescence (EL) display, an inorganic EL display, a plasma display panel (PDP), or the like. The display device 20 is not limited to these displays, and may include other various types of displays. The display device 20 may include a light-emitting device such as a light-emitting diode (LED) or a laser diode (LD). The display device 20 may include other various devices.
The display device 20 may include an input device that receives an input of information, data, or the like from a user. The input device may include, for example, a touch panel or a touch sensor, or a pointing device such as a mouse. The input device may include a physical key. The input device may include an audio input device such as a microphone. The input device may be connected to, for example, the interface 14.
The robot control device 110 acquires information specifying a holding mode from the holding mode determination device 10 and controls the robot 2 such that the robot 2 holds the holding target object 80 in the holding mode determined by the holding mode determination device 10.
The robot control device 110 may include at least one processor to provide control and processing capabilities for executing various functions. The individual components of the robot control device 110 may include at least one processor. Multiple components among the components of the robot control device 110 may be implemented by one processor. The entire robot control device 110 may be implemented by one processor. The processor can execute a program for implementing the various functions of the robot control device 110. The processor may be configured to be the same as or similar to the processor used in the holding mode determination device 10.
The robot control device 110 may include a storage unit. The storage unit may be configured to be the same as or similar to the storage unit used in the holding mode determination device 10.
The robot control device 110 may include the holding mode determination device 10. The robot control device 110 and the holding mode determination device 10 may be configured as separate bodies.
The controller 12 of the holding mode determination device 10 determines, based on a trained model, a holding mode for the holding target object 80. The trained model is configured to receive, as an input, an image captured by photographing the holding target object 80, and output an inference result of a holding mode for the holding target object 80. The controller 12 generates the trained model. The controller 12 acquires an image captured by photographing the holding target object 80 from the camera 4, inputs the image to the trained model, and acquires an inference result of a holding mode for the holding target object 80 from the trained model. The controller 12 determines the holding mode based on the inference result and outputs the determined holding mode to the robot control device 110. The robot control device 110 controls the robot 2 such that the robot 2 holds the holding target object 80 in the holding mode acquired from the holding mode determination device 10.
The controller 12 performs learning by using, as learning data, an image captured by photographing the holding target object 80 or an image generated from CAD data or the like of the holding target object 80, and generates a trained model for inferring a holding mode for the holding target object 80. The learning data may include teacher data used in so-called supervised learning. The learning data may include data that is used in so-called unsupervised learning and that is generated by a device that performs learning. An image captured by photographing the holding target object 80 or an image generated as an image of the holding target object 80 is collectively referred to as a target object image. As illustrated in the drawings, the trained model includes a class inference model 40 and a holding mode inference model 50.
The class inference model 40 receives a target object image as an input. The class inference model 40 infers, based on the input target object image, a holding category into which the holding target object 80 is to be classified. That is, the class inference model 40 classifies the target object into any one of multiple inferable holding categories. The multiple holding categories that can be inferred by the class inference model 40 are categories outputtable by the class inference model 40. A holding category is a category indicating a difference in the shape of the holding target object 80, and is also referred to as a class.
The class inference model 40 classifies an input target object image into a predetermined class. The class inference model 40 outputs, based on the target object image, a classification result obtained by classifying the holding target object 80 into a predetermined class. In other words, the class inference model 40 infers a class to which the holding target object 80 belongs when the holding target object 80 is classified into a class.
The class inference model 40 outputs a class inference result as class information. The class that can be inferred by the class inference model 40 may be determined based on the shape of the holding target object 80. The class that can be inferred by the class inference model 40 may be determined based on various characteristics such as the surface state, the material, or the hardness of the holding target object 80. In the present embodiment, the number of classes that can be inferred by the class inference model 40 is four. The four classes are referred to as a first class, a second class, a third class, and a fourth class, respectively. The number of classes may be three or less or may be five or more.
The holding mode inference model 50 receives, as an input, a target object image and class information output by the class inference model 40. The holding mode inference model 50 infers, based on the received target object image and class information, a holding mode for the holding target object 80, and outputs a holding mode inference result.
The class inference model 40 and the holding mode inference model 50 are each configured as, for example, a convolutional neural network (CNN) including multiple layers. The layers of the class inference model 40 are represented as processing layers 42. The layers of the holding mode inference model 50 are represented as processing layers 52. Information input to the class inference model 40 and the holding mode inference model 50 is subjected to convolution processing based on a predetermined weighting coefficient in each layer of the CNN. In the learning of the class inference model 40 and the holding mode inference model 50, the weighting coefficient is updated. The class inference model 40 and the holding mode inference model 50 may be constituted by VGG16 or ResNet50. The class inference model 40 and the holding mode inference model 50 are not limited to these examples, and may be configured as other various models.
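For reference, a minimal Python (PyTorch) sketch of a CNN-based class inference model along the lines of the class inference model 40 is given below. The layer sizes, the input resolution, and the softmax output are illustrative assumptions; as described above, a backbone such as VGG16 or ResNet50 may be used instead.

```python
# Illustrative sketch of a CNN-based class inference model: a target object
# image is mapped to class information, i.e. a probability for each of the
# four holding categories (classes). Layer sizes and input resolution are
# arbitrary placeholders.
import torch
import torch.nn as nn

class ClassInferenceModel(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(              # processing layers 42
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        x = self.features(image).flatten(1)
        # Class information: one probability per holding category.
        return torch.softmax(self.classifier(x), dim=1)

# Example: a single 128x128 RGB target object image.
class_info = ClassInferenceModel()(torch.rand(1, 3, 128, 128))
print(class_info.shape)  # torch.Size([1, 4])
```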
The holding mode inference model 50 includes a multiplier 54 that receives an input of class information. When the number of classes is four, the holding mode inference model 50 branches the processing of the holding target object 80 included in an input target object image to the processing layers 52 corresponding to the four classes, as illustrated in the drawings. The multiplier 54 includes a first multiplier 541, a second multiplier 542, a third multiplier 543, and a fourth multiplier 544 that multiply the outputs of the processing layers 52 corresponding to the first class, the second class, the third class, and the fourth class, respectively, by weighting coefficients. The holding mode inference model 50 further includes an adder 56 that adds the outputs of the first multiplier 541, the second multiplier 542, the third multiplier 543, and the fourth multiplier 544, and outputs a result of the addition as an inference result of the holding mode.
The weighting coefficient by which each of the first multiplier 541, the second multiplier 542, the third multiplier 543, and the fourth multiplier 544 multiplies the output of the processing layer 52 is determined based on the class information input to the multiplier 54. The class information indicates which of the four classes the holding target object 80 has been classified into. It is assumed that, when the holding target object 80 is classified into the first class, the weighting coefficient of the first multiplier 541 is set to 1, and the weighting coefficients of the second multiplier 542, the third multiplier 543, and the fourth multiplier 544 are set to 0. In this case, the output of the processing layer 52 corresponding to the first class is output from the adder 56.
The class information may represent a probability that the holding target object 80 is classified into each of the four classes. For example, probabilities that the holding target object 80 is classified into the first class, the second class, the third class, and the fourth class may be represented by X1, X2, X3, and X4, respectively. In this case, the adder 56 adds an output obtained by multiplying the output of the processing layer 52 corresponding to the first class by X1, an output obtained by multiplying the output of the processing layer 52 corresponding to the second class by X2, an output obtained by multiplying the output of the processing layer 52 corresponding to the third class by X3, and an output obtained by multiplying the output of the processing layer 52 corresponding to the fourth class by X4, and outputs a result of the addition.
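For reference, the following Python (PyTorch) sketch illustrates the branch, multiplier, and adder structure described above: each class-specific branch of processing layers 52 produces an output, each output is multiplied by the corresponding entry of the class information, and the weighted outputs are added. The branch architecture and the per-pixel output are illustrative assumptions, not the actual holding mode inference model 50.

```python
# Illustrative sketch of the branch/multiplier/adder structure: each
# class-specific branch produces an output, each output is multiplied by the
# corresponding entry of the class information (multipliers), and the weighted
# outputs are summed (adder). Branch architecture and output shape are placeholders.
import torch
import torch.nn as nn

class HoldingModeInferenceModel(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # One branch of processing layers per holding category.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 1, kernel_size=1),   # per-pixel holding position score
            )
            for _ in range(num_classes)
        ])

    def forward(self, image: torch.Tensor, class_info: torch.Tensor) -> torch.Tensor:
        # Stack the outputs of all branches: (num_classes, batch, 1, H, W).
        branch_outputs = torch.stack([branch(image) for branch in self.branches])
        # Multiply each branch output by its class weight (one-hot value 0/1,
        # or probability X1..X4), then sum the weighted outputs.
        weights = class_info.T[:, :, None, None, None]
        return (weights * branch_outputs).sum(dim=0)

model = HoldingModeInferenceModel()
image = torch.rand(1, 3, 128, 128)
one_hot = torch.tensor([[0.0, 1.0, 0.0, 0.0]])   # class information "0100"
holding_map = model(image, one_hot)              # only the second branch contributes
```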
The controller 12 performs learning by using a target object image as learning data, thereby generating the class inference model 40 and the holding mode inference model 50 as a trained model. The controller 12 requires correct answer data to generate the trained model. For example, to generate the class inference model 40, the controller 12 requires learning data associated with information indicating a correct answer about a class into which an image used as learning data is classified. In addition, to generate the holding mode inference model 50, the controller 12 requires learning data associated with information indicating a correct answer about a grasping position of a target object appearing in an image used as learning data.
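For reference, the following Python (PyTorch) sketch shows a single supervised learning step in which learning data are associated with correct answer data (here, class labels) and the weighting coefficients are updated from the resulting error. The stand-in model, optimizer, and loss function are illustrative assumptions.

```python
# Illustrative sketch of one supervised learning step for a class inference
# model: learning images paired with correct answer classes are used to
# update the weighting coefficients. The linear stand-in model, optimizer,
# and loss function are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 128 * 128, 4))  # stand-in classifier
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, correct_classes):
    optimizer.zero_grad()
    logits = model(images)                      # class scores for each image
    loss = loss_fn(logits, correct_classes)     # compare with correct answer data
    loss.backward()                             # compute gradients of the weighting coefficients
    optimizer.step()                            # update the weighting coefficients
    return loss.item()

# Example batch: two learning images labelled with the first and third classes.
loss_value = training_step(torch.rand(2, 3, 128, 128), torch.tensor([0, 2]))
```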
The trained model used in the robot control system 100 according to the present embodiment may include a model that has learned the class or the grasping position of the target object in a target object image through learning, or may include a model that has not learned the class or the grasping position. Even when the trained model has not learned the class or the grasping position, the controller 12 may infer the class and the grasping position of the target object with reference to the class and the grasping position of a learning object similar to the target object. Thus, in the robot control system 100 according to the present embodiment, the controller 12 may or may not learn the target object itself to generate a trained model.
As illustrated in the drawings, a conversion unit 60 may be provided between the class inference model 40 and the holding mode inference model 50 so that the class information output from the class inference model 40 is converted before being input to the holding mode inference model 50.
The class information output from the class inference model 40 indicates the class to which the holding target object 80 included in the input target object image is classified among the first to fourth classes. Specifically, it is assumed that the class inference model 40 is configured to output “1000” as the class information when the holding category (class) into which the holding target object 80 is to be classified is inferred as the first class. It is assumed that the class inference model 40 is configured to output “0100” as the class information when the holding category (class) into which the holding target object 80 is to be classified is inferred as the second class. It is assumed that the class inference model 40 is configured to output “0010” as the class information when the holding category (class) into which the holding target object 80 is to be classified is inferred as the third class. It is assumed that the class inference model 40 is configured to output “0001” as the class information when the holding category (class) into which the holding target object 80 is to be classified is inferred as the fourth class.
The conversion unit 60 converts the class information received from the class inference model 40 and inputs the converted class information to the multiplier 54 of the holding mode inference model 50. For example, the conversion unit 60 may be configured to, when the class information received from the class inference model 40 is “1000”, convert the class information into another character string such as “0100” and output the converted class information to the holding mode inference model 50. The conversion unit 60 may be configured to output the class information “1000” received from the class inference model 40 to the holding mode inference model 50 as it is. The conversion unit 60 may have a table for specifying a rule for converting the class information and may be configured to convert the class information based on the table. The conversion unit 60 may be configured to convert the class information by a matrix, a mathematical expression, or the like.
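For reference, a simplified Python sketch of such a table-based conversion is given below. The class information is assumed to be encoded as a one-hot character string as in the example above, and the class and method names are illustrative.

```python
# Illustrative sketch of a table-based conversion of class information, with
# class information encoded as one-hot character strings. Class and method
# names are hypothetical.
class ConversionUnit:
    def __init__(self, table=None):
        self.table = table or {}      # conversion rule set by the controller 12

    def set_rule(self, table):
        self.table = table

    def convert(self, class_info):
        # Class information absent from the table is passed through unchanged.
        return self.table.get(class_info, class_info)

unit = ConversionUnit()
assert unit.convert("1000") == "1000"      # no rule: output as it is

unit.set_rule({"1000": "0100"})            # example rule from the description
assert unit.convert("1000") == "0100"
```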
As illustrated in the drawings, a case in which the robot 2 holds a screw as the holding target object 80 will be described below as an example. In the present embodiment, it is assumed that shape images 81 each representing the shape of the holding target object 80, such as an O-shape, an I-shape, or a J-shape obtained by combining the I-shape and the O-shape, are input to the trained model as target object images.
In response to inferring that the holding target object 80 included in the shape image 81 is classified into the first class, the class inference model 40 outputs “1000” as class information. In response to acquiring “1000” as class information, the holding mode inference model 50 infers, as a holding position 82, an inner side of the O-shaped holding target object 80 in the shape image 81. In response to inferring that the holding target object 80 included in the shape image 81 is classified into the second class, the class inference model 40 outputs “0100” as class information. In response to acquiring “0100” as class information, the holding mode inference model 50 infers, as holding positions 82, positions on both sides of the I-shape near the center of the I-shape in the shape image 81. In response to inferring that the holding target object 80 included in the shape image 81 is classified into the third class, the class inference model 40 outputs “0010” as class information. The holding mode inference model 50 infers the holding position(s) 82 of the holding target object 80 included in the shape image 81 in accordance with the class into which the holding target object 80 included in the shape image 81 has been classified. For example, in response to acquiring “0010” as class information, the holding mode inference model 50 infers, as the holding positions 82, positions on both sides of the J-shape near an end of the J-shape in the shape image 81. In other words, the holding mode inference model 50 infers, as the holding positions 82, positions on both sides of the I-shape near an end of the I-shape far from the O-shape in the shape obtained by combining the I-shape and the O-shape.
The controller 12 is capable of setting a rule for converting class information in the conversion unit 60. The controller 12 receives an input from a user via the interface 14 and sets a rule based on the input from the user.
The controller 12 inputs a target object image to the trained model and causes the trained model to output holding modes inferred when the holding target object 80 is classified into each of multiple classes. Specifically, as illustrated in the drawings, the controller 12 inputs, to the holding mode inference model 50, class information corresponding to each of the first to fourth classes, and causes the holding mode inference model 50 to infer the holding position(s) 82 for each class.
The controller 12 presents to a user results of inferring the holding position(s) 82 in the shape image 81 when the trained model classifies the holding target object 80 included in the shape image 81 into each class. Specifically, the controller 12 causes the display device 20 to display, as candidates for the holding position 82 for holding the holding target object 80, the images in which the holding positions 82 are superimposed on the shape image 81.
When causing the robot 2 to hold the screw as the holding target object 80, the controller 12 causes the holding mode inference model 50 to infer the holding position(s) 82 of the class determined to be appropriate by the user. Specifically, the controller 12 is capable of controlling the holding position(s) 82 to be inferred by the holding mode inference model 50 by controlling the class information input to the holding mode inference model 50.
For example, when the holding target object 80 included in a target object image is classified into a class corresponding to a screw (for example, the second class), the controller 12 can cause the holding mode inference model 50 to perform inference to output the holding positions 82 as an inference result on the assumption that the holding target object 80 is classified into the third class. In this case, the controller 12 inputs, to the holding mode inference model 50, “0010” indicating the third class as the class information to which the holding target object 80 belongs. Even when the class information output by the class inference model 40 is originally “0100” indicating the second class, the controller 12 inputs “0010” as the class information to the holding mode inference model 50. In other words, although the holding mode for the holding target object 80 classified into the second class is inferred as the holding mode corresponding to the second class in a normal case, the holding mode inference model 50 is caused to infer the holding mode on the assumption that the holding target object 80 has been classified into the third class. The controller 12 sets a conversion rule in the conversion unit 60 such that “0010” is input as class information to the holding mode inference model 50 when the class inference model 40 outputs “0100” as the class information of the holding target object 80.
The controller 12 can cause the class into which the holding target object 80 is classified to be converted into another class and cause the holding mode inference model 50 to perform inference to output an inference result. For example, the controller 12 can cause the holding mode inference model 50 to perform inference such that even when the holding target object 80 is classified into the second class, the holding positions 82 are output as an inference result on the assumption that the holding target object 80 is classified into the third class. The controller 12 can cause the holding mode inference model 50 to perform inference such that even when the holding target object 80 is classified into the third class, the holding positions 82 are output as an inference result on the assumption that the holding target object 80 is classified into the second class. In this case, the controller 12 sets the conversion rule in the conversion unit 60 such that when the class information from the class inference model 40 is “0100”, the class information is converted into “0010” and input to the holding mode inference model 50, and that when the class information from the class inference model 40 is “0010”, the class information is converted into “0100” and input to the holding mode inference model 50.
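For reference, the conversion rule of this example can be sketched in Python as follows; the dictionary-based table and the function name are illustrative assumptions.

```python
# Illustrative sketch of the conversion rule of this example: the second and
# third classes are swapped in both directions; other classes pass through.
swap_rule = {"0100": "0010", "0010": "0100"}

def convert(class_info):
    return swap_rule.get(class_info, class_info)

assert convert("0100") == "0010"   # second class held in the third-class mode
assert convert("0010") == "0100"   # third class held in the second-class mode
assert convert("1000") == "1000"   # other classes unchanged
```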
As described above, the controller 12 is configured to be capable of setting a conversion rule. Thus, for example, when multiple target objects are processed consecutively, the need for the user to select a holding mode for every target object can be reduced. In this case, for example, a conversion rule may be set such that, once a holding mode has been determined by the user, the determined holding mode is applied to the multiple target objects.
In the class inference model 40, a class having no holding mode may be set as a holding category. In this case, the holding mode inference model 50 may infer, in response to input of class information indicating that class, that there is no holding mode. For example, when multiple target objects are processed consecutively, the controller 12 may determine that a target object classified into a class other than the classes assumed by the user is a foreign object. By converting the class of the target object determined to be a foreign object into the class having no holding mode according to the set conversion rule, the controller 12 is capable of performing control so that the robot 2 does not hold the foreign object.
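For reference, the following Python sketch illustrates this foreign-object handling, assuming for the purpose of the example that the fourth class "0001" has been set as the class having no holding mode and that the user assumes only the second class "0100".

```python
# Illustrative sketch of the foreign-object handling, assuming that the fourth
# class "0001" has been set as the class having no holding mode and that the
# user assumes only the second class "0100".
EXPECTED_CLASSES = {"0100"}
NO_HOLDING_CLASS = "0001"

def convert_for_foreign_objects(class_info):
    # A class the user did not assume is treated as a foreign object and
    # mapped to the class having no holding mode, so the robot does not hold it.
    return class_info if class_info in EXPECTED_CLASSES else NO_HOLDING_CLASS

assert convert_for_foreign_objects("0100") == "0100"
assert convert_for_foreign_objects("0010") == NO_HOLDING_CLASS   # foreign object
```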
The controller 12 of the holding mode determination device 10 may execute a holding mode determination method including the procedure of the flowchart illustrated in the drawings.
The controller 12 inputs a target object image to a trained model (step S1).
The controller 12 causes the class inference model 40 to classify the holding target object 80 included in the target object image into a class (step S2). Specifically, the class inference model 40 infers a class into which the holding target object 80 is classified, and outputs an inference result as class information.
The controller 12 infers holding modes for the holding target object 80 for cases where the holding target object 80 is classified into the class inferred by the class inference model 40 (for example, the third class) or other classes (for example, the first, second, and fourth classes), generates an inference result as candidates for the holding mode, and causes the display device 20 to display the inference result (step S3). Specifically, the controller 12 displays images in which the holding position(s) 82 is (are) superimposed on the target object image. At this time, the images may be displayed such that the class inferred by the class inference model 40 can be identified.
The controller 12 causes the user to select a holding mode from among the candidates (step S4). Specifically, the controller 12 causes the user to input, via the I/F 14, which holding mode is to be selected from among the candidates for the holding mode, and acquires information specifying the holding mode selected by the user. In other words, when the user thinks that the holding mode for the holding target object 80 in the class inferred by the class inference model 40 is not appropriate, the user is allowed to select another holding mode.
The controller 12 causes the trained model to infer a holding mode such that the holding target object 80 is held in the holding mode selected by the user, and outputs the inferred holding mode to the robot control device 110 (step S5). After executing the procedure of step S5, the controller 12 ends the execution of the procedure of the flowchart.
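For reference, the following Python sketch summarizes steps S1 to S5 at a high level. The trained model, display device, and robot control device interfaces (class_model, holding_model, display_device.show, display_device.wait_for_selection, robot_control_device.send) are hypothetical placeholders introduced for the example, not the actual implementation.

```python
# Illustrative sketch of steps S1 to S5. The trained model, display device,
# and robot control device interfaces are hypothetical placeholders.
def one_hot(index, num_classes):
    return "".join("1" if i == index else "0" for i in range(num_classes))

def determine_holding_mode(target_object_image, class_model, holding_model,
                           display_device, robot_control_device, num_classes=4):
    # S1/S2: input the target object image and classify it into a class.
    inferred_class_info = class_model(target_object_image)

    # S3: infer a holding mode candidate for every class and display the
    # candidates together with the target object image.
    candidates = {}
    for class_index in range(num_classes):
        class_info = one_hot(class_index, num_classes)           # e.g. "0100"
        candidates[class_info] = holding_model(target_object_image, class_info)
    display_device.show(target_object_image, candidates,
                        highlighted=inferred_class_info)

    # S4: acquire the user's selection of a holding mode via the display device.
    selected_class_info = display_device.wait_for_selection()

    # S5: infer the holding mode for the selected class and output it.
    holding_mode = holding_model(target_object_image, selected_class_info)
    robot_control_device.send(holding_mode)
    return holding_mode
```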
As described above, the holding mode determination device 10 according to the present embodiment is capable of causing a user to select a holding mode inferred by a trained model and controlling the robot 2 in the holding mode selected by the user. Accordingly, even when the user is unable to control an inference result of the holding mode made by the trained model, the user is able to control the holding mode of the robot 2. As a result, the convenience of the trained model that outputs the holding mode for the holding target object 80 is increased.
Other embodiments will be described below.
In the above embodiment, the configuration in which the holding mode determination device 10 determines the holding position 82 as a holding mode has been described. The holding mode determination device 10 is capable of determining not only the holding position(s) 82 but also another mode as the holding mode.
For example, the holding mode determination device 10 may determine, as the holding mode, a force to be applied by the robot 2 to hold the holding target object 80. In this case, the holding mode inference model 50 infers and outputs a force to be applied to hold the holding target object 80.
For example, the holding mode determination device 10 may determine, as the holding mode, the type of a hand by which the robot 2 holds the holding target object 80. In this case, the holding mode inference model 50 infers and outputs the type of a hand used to hold the holding target object 80.
The holding mode determination device 10 may cause the display device 20 to display the holding mode inferred by the holding mode inference model 50 such that the holding mode is superimposed on the target object image. When the robot 2 includes at least two fingers for pinching the holding target object 80, the holding mode determination device 10 may cause the positions at which the holding target object 80 is pinched by the fingers to be displayed as the holding positions 82. The holding mode determination device 10 may cause the display device 20 to display a position within a predetermined range from the positions at which the holding target object 80 is pinched.
An embodiment of the holding mode determination device 10 and the robot control system 100 has been described above. Embodiments of the present disclosure can also be implemented as a method or a program for implementing the system or the device, or as a storage medium (for example, an optical disc, a magneto-optical disk, a CD-ROM, a CD-R, a CD-RW, a magnetic tape, a hard disk, or a memory card) storing the program.
The implementation form of the program is not limited to an application program such as an object code compiled by a compiler or a program code executed by an interpreter, and may be a form such as a program module incorporated in an operating system. The program may or may not be configured such that all processing is performed only in a CPU on a control board. The program may be configured such that a part or the entirety of the program is executed by another processing unit mounted on an expansion board or an expansion unit added to the board as necessary.
The embodiment according to the present disclosure has been described based on the drawings and examples. Note that a person skilled in the art could make various variations or changes based on the present disclosure, and that such variations or changes are therefore included in the scope of the present disclosure. For example, the functions or the like included in the individual components or the like can be reconfigured without logical inconsistency. Multiple components or the like can be combined into one component or can be divided.
All the structural elements described in the present disclosure and/or all the disclosed methods or all the steps of a process may be combined in any combination except for combinations in which these features are mutually exclusive. Each of the features described in the present disclosure may be replaced with an alternative feature serving for an identical, equivalent, or similar purpose, unless explicitly denied. Thus, unless explicitly denied, each of the disclosed features is merely one example of a comprehensive series of identical or equivalent features.
Furthermore, the embodiment according to the present disclosure is not limited to any specific configuration of the above-described embodiment. The embodiment according to the present disclosure may be extended to all novel features described in the present disclosure, or any combination thereof, or all novel methods described, or processing steps, or any combination thereof.
In the present disclosure, descriptions such as “first” and “second” are identifiers for distinguishing corresponding elements from each other. In the present disclosure, the elements distinguished by “first”, “second”, and the like may have ordinal numbers exchanged with each other. For example, “first” and “second” serving as identifiers may be exchanged between the first class and the second class. The exchange of the identifiers is performed simultaneously. Even after the exchange of the identifiers, the elements are distinguished from each other. The identifiers may be deleted. The elements whose identifiers have been deleted are distinguished from each other by reference signs. The identifiers such as “first” and “second” in the present disclosure alone are not to be used as a basis for interpreting the order of corresponding elements or the existence of identifiers with smaller numbers.
Number | Date | Country | Kind
---|---|---|---
2021-134391 | Aug. 19, 2021 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2022/031456 | Aug. 19, 2022 | WO |