TEACHING DEVICE FOR ROBOT

Information

  • Patent Application
  • Publication Number
    20200353614
  • Date Filed
    July 27, 2020
  • Date Published
    November 12, 2020
Abstract
A teaching device for a robot, which includes a microphone, voice recognition circuitry, specific-word extraction circuitry configured to extract a specific word matching with a word recognized by the voice recognition circuitry, and operation command generation circuitry configured to generate an operation command for the robot based on operational information associated with the specific word extracted by the specific-word extraction circuitry. The specific word includes a first word for specifying a pitch corresponding to a moving distance when the robot is moved in a given direction from a specified location, and a second word for specifying a moving direction of the robot. When the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word, the operation command generation circuitry generates the operation command so that the robot is moved in the specified direction at the specified pitch.
Description
TECHNICAL FIELD

The present disclosure relates to a teaching device for a robot.


BACKGROUND ART

Generally, there is a direct teaching method as a method of teaching a robot an operation. In this direct teaching method, an operator holds a tip end of the robot, moves the robot to a desired location or makes the robot take a desired posture, and causes the robot to store this location or posture to teach the robot the operation. The operator needs to operate a teaching pendant in addition to the direct movement of the robot to the instructed location. The teaching pendant has an inching mode in which the robot is moved in a desired direction by a remote operation. In the inching mode, the operator must watch the teaching pendant while watching the robot because he/she uses operation keys of the teaching pendant.


In recent years, in order to reduce the operator's burden, teaching devices for robots which perform the inching by an audio input have been proposed (see Patent Documents 1 and 2). However, the devices disclosed in these documents leave room for improvement in the accuracy of the speech recognition.


PATENT DOCUMENTS



  • [Patent Document 1] JP2003-080482A

  • [Patent Document 2] JP2005-088114A



DESCRIPTION OF THE DISCLOSURE

The present disclosure improves a speech-recognition accuracy of an audio input in an inching mode of a robot.


SUMMARY OF THE DISCLOSURE

In order to solve the described issue, a teaching device for a robot according to one aspect of the present disclosure is provided, which is configured to teach the robot an operation and includes a voice input part configured to accept an input of voice of an operator, a voice recognition part configured to recognize the voice of the operator inputted into the voice input part, a memory part configured to store beforehand a specific word associated with operation of the robot, a specific-word extraction part configured to extract from the memory part the specific word matching with a word recognized by the voice recognition part, and an operation command generation part configured to generate an operation command for the robot based on operational information associated with the specific word extracted by the specific-word extraction part. The specific word includes a first word for specifying a pitch corresponding to a moving distance when the robot is moved in a given direction from a specified location, and a second word for specifying a moving direction of the robot. When the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word, the operation command generation part generates the operation command so that the robot is moved in the specified direction at the specified pitch.


According to this configuration, when the operator speaks into the voice input part (e.g., microphone) in an inching mode, the voice recognition part recognizes the voice of the operator inputted into the voice input part, and the specific-word extraction part extracts from the memory part the specific word matching with the word recognized by the voice recognition part. The operation command generation part generates the operation command for the robot based on the operational information (specify command) associated with the specific word extracted by the specific-word extraction part. Here, the specific word includes a word for specifying a pitch when the robot is moved in a given direction from a specified location by a direct teaching etc., and a word for specifying the moving direction of the robot. Thus, when the specific-word extraction part extracts the second word following the first word, the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word. In this case, the operation command is generated so that the robot is moved in the specified direction at the specified pitch. Therefore, the moving pitch can be specified by the audio input (e.g., the operator utters “PITCH 2 MM”), and then the moving direction of the robot can be specified (e.g., the operator utters “RIGHT, RIGHT, RIGHT, DOWN”). Since each utterance by the operator becomes shorter, the precision of the speech recognition improves.
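The pitch-then-direction flow described above can be sketched as a small interpreter. This is a hypothetical illustration; the disclosure does not publish source code, so all names here are assumptions made for the example.

```python
import re

# Hypothetical sketch: a pitch utterance is stored once, and each subsequent
# direction word reuses the stored pitch to produce a move.
DIRECTIONS = {"RIGHT": (1, 0), "LEFT": (-1, 0), "UP": (0, 1), "DOWN": (0, -1)}

class InchingInterpreter:
    def __init__(self):
        self.pitch_mm = None  # pitch specified by the first word, kept until updated

    def handle(self, utterance):
        """Return a (dx, dy) move in mm, or None if no motion results."""
        m = re.fullmatch(r"PITCH (\d+(?:\.\d+)?) MM", utterance)
        if m:
            self.pitch_mm = float(m.group(1))  # update the pitch only; no motion
            return None
        if utterance in DIRECTIONS and self.pitch_mm is not None:
            dx, dy = DIRECTIONS[utterance]
            return (dx * self.pitch_mm, dy * self.pitch_mm)
        return None  # unrecognized word: no operation command is generated

interp = InchingInterpreter()
interp.handle("PITCH 2 MM")
moves = [interp.handle(w) for w in ("RIGHT", "RIGHT", "RIGHT", "DOWN")]
total = (sum(m[0] for m in moves), sum(m[1] for m in moves))
print(total)  # net displacement: 6 mm right, 2 mm down -> (6.0, -2.0)
```

Because the pitch persists until updated, each direction utterance can stay a single short word, which is the source of the recognition-accuracy benefit.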


The specific word may further include a third word, which is easy to utter, for moving the robot forward or backward in the specified direction at the specified pitch. When the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word, the operation command generation part may generate the operation command so that the robot is moved forward or backward by the third word in the specified direction at the specified pitch.


According to this configuration, since the specific word includes the third word, which is easy to utter, the robot can be moved forward or backward in the specified direction at the specified pitch by this easy-to-utter word after the moving direction of the robot is specified. For example, suppose that “2 MM” is specified as the pitch and “30 DEGREES UPPER RIGHT” is specified as the direction. Then, without the pitch and the direction being updated, when the operator utters “GO” once, the robot moves upward and rightward by 2 mm. Further, without the pitch and the direction being updated, when the operator utters “BACK” once, the robot moves upward and rightward by −2 mm; that is, it moves 30 degrees downward and leftward by 2 mm. Further, without the pitch and the direction being updated, when the operator utters “GO, GO, GO,” the robot moves upward and rightward by a total of 6 mm. Thus, since the robot is moved forward or backward with the easy-to-utter word, the precision of the speech recognition improves.
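The “GO”/“BACK” behavior can be sketched as simple vector arithmetic, assuming the specified direction is held as a unit vector and “GO”/“BACK” move the robot by +pitch/−pitch along it. The names below are illustrative assumptions, not from the disclosure.

```python
import math

# Hedged sketch: "30 DEGREES UPPER RIGHT" is modeled as a unit vector at 30
# degrees; "GO" advances +pitch along it, "BACK" advances -pitch.
pitch_mm = 2.0
angle_deg = 30.0  # "30 DEGREES UPPER RIGHT"
direction = (math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg)))

def step(word):
    sign = {"GO": 1.0, "BACK": -1.0}[word]  # "BACK" reverses the direction
    return (sign * pitch_mm * direction[0], sign * pitch_mm * direction[1])

pos = (0.0, 0.0)
for w in ("GO", "GO", "GO"):  # three short, easily recognized utterances
    dx, dy = step(w)
    pos = (pos[0] + dx, pos[1] + dy)
dist = math.hypot(pos[0], pos[1])
print(round(dist, 6))  # total travel along the direction: 3 x 2 mm = 6.0
```

A single “BACK” after a “GO” would return the robot to its previous position, matching the −2 mm example in the text.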


The first word may include an ambiguous expression for specifying a pitch when the robot is moved in a given direction from a specified location. The memory part may store beforehand a numeric value and a unit thereof corresponding to the ambiguous expression so as to be associated with the ambiguous expression. When the moving direction specified by the second word is not updated and the pitch of the robot is specified by the numeric value and the unit corresponding to the ambiguous expression, the operation command generation part may generate the operation command so that the robot is moved in the specified moving direction at the specified pitch.


The robot may have a plurality of robotic arms. The specific word may further include a word for specifying a robotic arm to be used as a target of movement among the plurality of robotic arms.


According to this configuration, one robotic arm to be used as the target of movement among the plurality of robotic arms can be specified by the utterance (e.g., “UPPER ARM,” “LOWER ARM,” “BOTH ARMS”).


The teaching device may teach the robot the operation by the direct teaching.


Effects of the Disclosure

The present disclosure has the configuration described above so that a speech-recognition accuracy of an audio input in an inching mode of a robot can be improved.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a block diagram illustrating a configuration of a teaching device for a robot according to one embodiment.



FIG. 2 is a table illustrating one example of specific words extracted by a speech recognition.



FIG. 3 is a view illustrating a situation of a teaching work of the robot.



FIG. 4 is a schematic diagram illustrating flows of signals in the teaching work of the robot.



FIG. 5 is a table illustrating other examples of the specific words extracted by the speech recognition.





MODE FOR CARRYING OUT THE DISCLOSURE

Hereinafter, a desirable embodiment is described with reference to the drawings. Note that, below, the same reference characters are assigned to the same or corresponding components throughout the drawings to omit redundant description. Moreover, the drawings schematically illustrate the components to facilitate understanding.



FIG. 1 is a block diagram illustrating a configuration of a teaching device for a robot according to one embodiment of the present disclosure. A teaching device 1 is a teaching device for teaching a robot 3 an operation by a direct teaching. In this embodiment, the teaching device 1 includes a speech recognition function, and is comprised of a teaching terminal communicably connected to a control device 4 of the robot. The teaching device 1 is connected to a microphone (voice input part) 2 which collects voice or other sound uttered by an operator 50, whether local or remote. The microphone 2 is desirably provided with a noise shield which reduces an input of sound other than the voice uttered by the operator 50. Such a noise shield may include a reflecting plate which reflects the voice uttered from the mouth of the operator 50 and collects the voice in the microphone, and an insulating member which interrupts the sound from a direction other than the mouth of the operator 50. Further, it is also desirable to use a directional microphone having high sensitivity for voices from the direction of the mouth of the operator 50. Alternatively or additionally, a bone conduction microphone which collects bone-conducted sound of the operator 50 is suitably used. Such microphones suppress noise at a location where the robot 3 is installed, such as a production line of a factory, and clearly collect the voice of the operator 50.


The teaching device 1 includes a voice recognition part or circuitry 11 which recognizes the voice of the operator inputted into the microphone 2, a memory or memory part 12 which stores beforehand specific word(s) associated with operation of the robot 3, a specific-word extraction part or circuitry 13 which extracts from the memory part 12 the specific word which matches with the word recognized by the voice recognition part 11, and an operation command generation part or circuitry 14 which generates an operation command for the robot based on operational information (specify command) associated with the specific word extracted by the specific-word extraction part 13. Here, the teaching device 1 is a computer provided with, for example, one or more processors, a memory, an input/output interface, and a communication interface. The voice recognition part 11, the specific-word extraction part 13, and the operation command generation part 14 are functional blocks implemented by the processor executing a given program.


The memory part 12 stores beforehand the specific words to be extracted by a speech recognition, various data, such as the commands, related to the robot operation associated with the respective specific words, and other speech recognition processing programs. FIG. 2 is a table illustrating examples of the specific words to be extracted by the speech recognition. As illustrated in FIG. 2, each specific word is associated with operation of the robot 3. For example, a specific word “RECORD” is associated with “a command for creating a move command to a current position” in a teaching mode. The specific word “TRANSMIT” is associated with “a program transfer command of the robot” generated in the teaching mode. The specific word “START” is associated with “a program execution command” in a playback mode. The specific word “PITCH ∘∘ MM” is associated with “a pitch specify command” in an inching mode. This is a word which specifies a pitch when moving the robot in a given direction from the location specified by the direct teaching. In this context, the pitch may be considered a distance, for example. ∘∘ corresponds to a numeric value. The specific word “RIGHT (OR LEFT)” is associated with “a specify command to the right or to the left in the moving direction” of the robot in the inching mode. The specific word “UP (OR DOWN)” is associated with “a specify command to upward or downward in the moving direction” of the robot in the inching mode. The specific word “FRONT (OR REAR)” is associated with “a specify command to forward or rearward in the moving direction” of the robot in the inching mode. The specific words “UPPER ARM (OR LOWER ARM)” and “BOTH ARMS” are associated with “a command for specifying a target arm” in the inching mode of a dual-arm robot.
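The word-to-command association of FIG. 2 can be sketched as a simple lookup, with a pattern for words that carry a numeric slot such as “PITCH ∘∘ MM”. The command identifiers below are illustrative assumptions, not names from the disclosure.

```python
import re

# Hypothetical sketch of the memory part's word-to-command table (FIG. 2).
EXACT_WORDS = {
    "RECORD": "CREATE_MOVE_COMMAND_TO_CURRENT_POSITION",
    "TRANSMIT": "TRANSFER_PROGRAM",
    "START": "EXECUTE_PROGRAM",
    "RIGHT": "SPECIFY_DIRECTION_RIGHT",
    "LEFT": "SPECIFY_DIRECTION_LEFT",
    "UP": "SPECIFY_DIRECTION_UP",
    "DOWN": "SPECIFY_DIRECTION_DOWN",
    "FRONT": "SPECIFY_DIRECTION_FRONT",
    "REAR": "SPECIFY_DIRECTION_REAR",
    "UPPER ARM": "SPECIFY_TARGET_UPPER_ARM",
    "LOWER ARM": "SPECIFY_TARGET_LOWER_ARM",
    "BOTH ARMS": "SPECIFY_TARGET_BOTH_ARMS",
}
PITCH_PATTERN = re.compile(r"PITCH (\d+(?:\.\d+)?) MM")  # the numeric slot

def extract(recognized):
    """Return (command, argument) for a recognized word, or None if no match."""
    if recognized in EXACT_WORDS:
        return (EXACT_WORDS[recognized], None)
    m = PITCH_PATTERN.fullmatch(recognized)
    if m:
        return ("SPECIFY_PITCH", float(m.group(1)))
    return None

print(extract("RECORD"))      # exact word lookup
print(extract("PITCH 2 MM"))  # pattern lookup with a numeric value
```

The specific-word extraction part would perform a lookup of this kind against the recognizer's output before passing the associated command to the operation command generation part.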


The specific-word extraction part 13 extracts from the memory part 12 the specific word which matches with the word recognized by the voice recognition part 11. The operation command generation part 14 generates the operation command for the robot based on the operational information (specify command) associated with the specific word extracted by the specific-word extraction part 13, and outputs it to the control device 4.


The robot 3 operates in one of the teaching mode, in which the operation is taught based on the operation command from the control device 4, and the playback mode, in which the taught operation is performed automatically. The robot 3 is connected to the control device 4. The control device 4 is a robot controller provided with, for example, an arithmetic processor, a servo amplifier, a memory, an input/output interface, and a communication interface. The control device 4 includes a direct teaching part or circuitry 21, an operation command generation part or circuitry 22, and a memory part or memory 23. The direct teaching part 21 detects a location or a posture of the robot which is led by the operator 50 in the teaching mode, and stores teaching information, such as a teaching point, in the memory part 23. The operation command generation part 22 transmits to the robot 3 a signal commanding operation of the robot 3 based on the operation command from the teaching device 1 in the teaching mode. In the playback mode, it transmits to the robot 3 a signal commanding operation of the robot 3 in order to perform a playback of the taught operation program etc. The memory part 23 stores teaching information, such as the operation program generated in the teaching mode.


Next, the direct teaching of the robot 3 by the teaching device 1 is described. FIG. 3 is a view illustrating a situation of a teaching work of the robot 3. As illustrated in FIG. 3, the robot 3 is the dual-arm robot having a pair of robotic arms 31 supported by a base 30. The robot 3 is installed in a limited space corresponding to the size of one person (e.g., 610 mm×620 mm). In the teaching mode, the location and posture of the robotic arm 31 (angles of each joint and a tip end of the arm) are manually operated by the operator 50. Moreover, in this embodiment, since the operator 50 wears the microphone 2, which collects the voice uttered by the operator 50 hands-free, he/she can operate the robot 3 by the audio input.


The operator 50 grabs the tip end of the robotic arm 31, and moves the robotic arm 31 to a desired location or changes it to take a desired posture, and this location or posture is stored. Here, when the operator 50 utters the word “RECORD,” the teaching device 1 recognizes the voice of the operator 50 inputted into the microphone 2 by the voice recognition part 11. The specific-word extraction part 13 extracts from the memory part 12 the specific word which matches with the word recognized by the voice recognition part 11 (see FIG. 2). Here, the specific-word extraction part 13 extracts from the memory part 12 “the command for creating the move command to the current position” associated with the specific word “RECORD,” and outputs it to the operation command generation part 14. The operation command generation part 14 generates the operation command for the robot based on “the command for creating the move command to the current position,” and outputs it to the control device 4. The control device 4 receives the operation command, and the current position or posture of the robot is stored in the memory part 23.


Moreover, the teaching device 1 of this embodiment can perform, by the audio input, the inching mode in which the robot is moved in a given direction from the location and posture of the robot which are registered by the direct teaching. The memory part 12 stores a word which specifies a pitch when moving the robot in the given direction from the location specified by the direct teaching, and a word which specifies the moving direction of the robot. As illustrated in FIG. 2, the specific word “PITCH ∘∘ MM” is associated with “the pitch specify command” in the inching mode. The specific word “RIGHT (OR LEFT)” is associated with “the specify command to the right or to the left in the moving direction” of the robot in the inching mode.



FIG. 4 is a schematic diagram illustrating flows of signals in robot operation by the audio input. As illustrated in FIG. 4, when the operator 50 utters the words “PITCH 2 MM,” the teaching device 1 recognizes the voice of the operator 50 inputted into the microphone 2 by the voice recognition part 11. The specific-word extraction part 13 extracts from the memory part 12 the specific words which match with the words recognized by the voice recognition part 11 (see FIG. 2). Here, the specific-word extraction part 13 extracts from the memory part 12 “the pitch specify command” associated with the specific words “PITCH ∘∘ MM,” and outputs it to the operation command generation part 14. The operation command generation part 14 generates the operation command for the robot based on “the pitch specify command,” and outputs it to the control device 4. The control device 4 receives the operation command, and the pitch in the inching mode is thereby specified. The specified pitch is stored in the memory part 23.


Next, when the operator 50 utters the words “RIGHT,” “RIGHT,” “RIGHT,” and “DOWN,” the teaching device 1 recognizes the voice of the operator 50 inputted into the microphone 2 by the voice recognition part 11. The specific-word extraction part 13 extracts from the memory part 12 the specific word which matches with the word recognized by the voice recognition part 11 (see FIG. 2). Here, after the specific-word extraction part 13 extracts three times from the memory part 12 “the specify command to the right in the moving direction” associated with the specific word “RIGHT,” it extracts once “the specify command to downward in the moving direction” associated with the specific word “DOWN,” and outputs them to the operation command generation part 14.


The operation command generation part 14 generates the operation command for the robot based on the three “specify commands to the right in the moving direction” and one “specify command to downward in the moving direction,” and outputs it to the control device 4. The control device 4 receives the operation command, refers to the pitch (2 mm) stored in the memory part 23 based on the received operation command, and moves the robot 6 mm to the right and 2 mm downward in the inching mode. Thus, when the moving direction of the robot is specified without the specified pitch (2 mm) being updated, the operation command generation part 14 generates the operation command so that the robot is moved in the specified direction at the specified pitch.
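The division of labor in this flow can be sketched as follows: the teaching device emits specify commands, and the control device stores the pitch in its memory and converts each direction command into a motion at that pitch. All class and command names are illustrative assumptions for this sketch.

```python
# Hypothetical sketch of the control-device side: the pitch specify command
# is stored (standing in for memory part 23), and each later direction
# command refers to that stored pitch to move the robot.
class ControlDevice:
    def __init__(self):
        self.memory = {}            # stands in for memory part 23
        self.position = [0.0, 0.0]  # robot tip position in mm (x, y)

    def receive(self, command, arg=None):
        if command == "SPECIFY_PITCH":
            self.memory["pitch_mm"] = arg        # store the specified pitch
        elif command.startswith("SPECIFY_DIRECTION_"):
            pitch = self.memory["pitch_mm"]      # refer to the stored pitch
            dx, dy = {"RIGHT": (1, 0), "LEFT": (-1, 0),
                      "UP": (0, 1), "DOWN": (0, -1)}[command.rsplit("_", 1)[1]]
            self.position[0] += dx * pitch
            self.position[1] += dy * pitch

ctrl = ControlDevice()
ctrl.receive("SPECIFY_PITCH", 2.0)  # "PITCH 2 MM"
for cmd in ["SPECIFY_DIRECTION_RIGHT"] * 3 + ["SPECIFY_DIRECTION_DOWN"]:
    ctrl.receive(cmd)
print(ctrl.position)  # 6 mm right and 2 mm down: [6.0, -2.0]
```

Keeping the pitch in the control device's memory is what allows a bare direction word to produce a complete motion command.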


Therefore, according to this embodiment, since each utterance by the operator 50 becomes shorter in the inching mode using the audio input, the precision of the speech recognition improves.


Note that, in this embodiment, once the pitch is specified, when only the word indicative of the direction is uttered without the pitch being updated, the robot is moved in the uttered direction by a distance corresponding to the pitch. However, it is sometimes difficult to repeatedly utter the word indicative of the direction. For example, “RIGHT” is not so difficult to utter repeatedly, but the words “30 DEGREES UPPER RIGHT” are difficult to utter repeatedly.


Thus, the specific word further includes a word easy to utter for moving the robot forward or backward in the specified direction at the specified pitch, and after the moving direction of the robot is specified, the robot may be moved in the specified direction at the given pitch by the easy-to-utter word.



FIG. 5 is a table illustrating other examples of the specific words extracted by the speech recognition. As illustrated in FIG. 5, the easy-to-utter words for moving the robot forward or backward are “GO” and “BACK.” Since each of these words has one vowel, its utterance is more easily recognized by the speech recognition. Further, a word with fewer syllables is easier to utter than a word with more syllables. For example, suppose that 2 mm is specified as the pitch and “30 DEGREES UPPER RIGHT” is specified as the direction. Then, without the pitch and the direction being updated, when the operator utters “GO” once, the robot moves upward and rightward by 2 mm. Further, without the pitch and the direction being updated, when the operator utters “BACK” once, the robot moves upward and rightward by −2 mm (i.e., it moves 30 degrees downward and leftward by 2 mm). Further, without the pitch and the direction being updated, when the operator utters “GO, GO, GO,” the robot moves upward and rightward by a total of 6 mm.


That is, when the operator utters the word “GO” and the teaching device recognizes it, the operation command for moving the robot in the direction already specified by a distance corresponding to the pitch already specified is generated. Moreover, when the operator utters the word “BACK” and the teaching device 1 (voice recognition part 11) recognizes it, the operation command for moving the robot in the direction opposite to the direction already specified by a distance corresponding to the pitch already specified is generated. Thus, since the robot is moved forward or backward with the easy-to-utter word, the precision of the speech recognition improves.


Note that although in this embodiment the robot is moved in the specified direction at the specified pitch when the moving direction of the robot is specified without the specified pitch being updated, the robot may be moved in the specified moving direction at the specified pitch when the pitch is specified without the specified moving direction being updated. For example, when the right is specified as the moving direction and the operator utters the words “2 MM” without the moving direction being updated, the operation command for moving the robot rightward by 2 mm may be generated. Further, when the operator utters the words “0.2 MM” without the moving direction being changed, the operation command for moving the robot rightward by 0.2 mm may be generated.


The word for specifying the pitch may be “2 MM” or “0.2 MM,” which include a concrete numeric value, or, as illustrated in FIG. 5, it may be “A BIT,” “ONLY A BIT,” “LITTLE MORE,” or “JUST LITTLE MORE,” which may ordinarily be considered ambiguous expressions. These words are registered (stored), and a relation of the concrete pitch (a numeric value and its unit) corresponding to each word is also registered (stored). For example, the words “A BIT” and “ONLY A BIT” are registered (stored), and 2 mm and 0.2 mm are associated with them as the pitches corresponding to the words. That is, the word “A BIT” is associated with the pitch of 2 mm and the word “ONLY A BIT” is associated with the pitch of 0.2 mm. For example, when the right is already specified as the moving direction and the operator utters the words “A BIT” without the moving direction being updated, the operation command for moving the robot rightward by 2 mm is generated. Further, when the operator utters the words “ONLY A BIT” without the moving direction being changed, the operation command for moving the robot rightward by 0.2 mm is generated. Further, when the operator utters “A BIT, ONLY A BIT, ONLY A BIT” without the moving direction being changed, the robot moves rightward by a total of 2.4 mm. In ordinary human conversation, various degrees are expressed using ambiguous expressions in many cases. Therefore, if the word includes an ambiguous expression rather than a concrete numeric value, the operator can specify the pitch more easily and naturally in his/her intuitive sense.
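The ambiguous-expression mapping above can be sketched as a lookup from each registered word to its concrete pitch, with the already-specified direction reused for each utterance. The values follow the example in the text; the variable names are illustrative assumptions.

```python
# Hypothetical sketch: each ambiguous expression is registered together with
# a concrete pitch (numeric value and unit), per the example in the text.
AMBIGUOUS_PITCH_MM = {"A BIT": 2.0, "ONLY A BIT": 0.2}

direction = (1.0, 0.0)  # "RIGHT" already specified as the moving direction
pos = (0.0, 0.0)
for word in ("A BIT", "ONLY A BIT", "ONLY A BIT"):
    pitch = AMBIGUOUS_PITCH_MM[word]  # look up the concrete numeric value
    pos = (pos[0] + direction[0] * pitch, pos[1] + direction[1] * pitch)
print(round(pos[0], 2))  # total rightward movement: 2 + 0.2 + 0.2 = 2.4 mm
```

The lookup table is the "relation of the concrete pitch corresponding to each word" that the memory part stores beforehand; the recognizer never has to handle a number.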


Note that since the robot 3 is the dual-arm robot having the two robotic arms 31, one robotic arm may be specified by the audio input in the inching mode as a moving target. For example, the operator 50 can specify the robotic arm 31 which becomes the inching target by uttering “UPPER ARM,” “LOWER ARM,” or “BOTH ARMS.”


Note that although in this embodiment the teaching device 1 teaches the robot the operation by the direct teaching, it may teach the robot the operation by another teaching method.


The functionality of the elements disclosed herein may be implemented using circuitry or processing circuitry which includes general purpose processors, special purpose processors, integrated circuits, ASICs (“Application Specific Integrated Circuits”), conventional circuitry and/or combinations thereof which are configured or programmed to perform the disclosed functionality. Processors are considered processing circuitry or circuitry as they include transistors and other circuitry therein. In the disclosure, the circuitry, units, or means are hardware that carry out or are programmed to perform the recited functionality. The hardware may be any hardware disclosed herein or otherwise known which is programmed or configured to carry out the recited functionality. When the hardware is a processor which may be considered a type of circuitry, the circuitry, means, or units are a combination of hardware and software, the software being used to configure the hardware and/or processor.


It is apparent to a person skilled in the art that many improvements and other embodiments of the present disclosure are possible from the above description. Therefore, the above description is to be interpreted only as illustration, and it is provided in order to teach a person skilled in the art the best mode that implements the present disclosure. The details of the configuration and/or the function may be changed substantially without departing from the spirit of the present disclosure.


INDUSTRIAL APPLICABILITY

The present disclosure is useful for teaching robots.


DESCRIPTION OF REFERENCE CHARACTERS




  • 1 Teaching Device


  • 2 Microphone (Voice Input Part)


  • 3 Robot


  • 4 Control Device (Robot Controller)


  • 11 Voice Recognition Part


  • 12 Memory Part


  • 13 Specific-Word Extraction Part


  • 14 Operation Command Generation Part


  • 21 Direct Teaching Part


  • 22 Operation Command Generation Part


  • 23 Memory Part


  • 31 Robotic Arm


  • 50 Operator


Claims
  • 1. A teaching device for a robot to teach the robot an operation, comprising: a microphone to input a voice of an operator; voice recognition circuitry configured to recognize the voice of the operator inputted into the microphone; a memory to store a specific word associated with operation of the robot; specific-word extraction circuitry configured to extract from the memory the specific word matching a word recognized by the voice recognition circuitry; and operation command generation circuitry configured to generate an operation command for the robot based on operational information associated with the specific word extracted by the specific-word extraction circuitry, wherein the specific word includes a first word for specifying a pitch corresponding to a moving distance when the robot is moved in a given direction from a specified location, and a second word for specifying a moving direction of the robot, and wherein when the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word, the operation command generation circuitry generates the operation command so that the robot is moved in the specified direction at the specified pitch.
  • 2. The teaching device of claim 1, wherein the specific word further includes a third word for moving the robot forward or backward in the specified moving direction at the specified pitch, and wherein when the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word, the operation command generation circuitry generates the operation command so that the robot is moved forward or backward by the third word in the specified direction at the specified pitch.
  • 3. The teaching device of claim 1, wherein: the first word includes an ambiguous expression for specifying a pitch when the robot is moved in a given direction from a specified location, the memory stores a numeric value and a unit thereof corresponding to the ambiguous expression so as to be associated with the ambiguous expression, and when the moving direction specified by the second word is not updated and the pitch of the robot is specified by the numeric value and the unit corresponding to the ambiguous expression, the operation command generation circuitry generates the operation command so that the robot is moved in the specified moving direction at the specified pitch.
  • 4. The teaching device of claim 1, wherein: the robot has a plurality of robotic arms, and the specific word further includes a word for specifying one of the robotic arms to be used as a target of movement.
  • 5. The teaching device of claim 1, wherein the teaching device teaches the robot the operation by a direct teaching.
  • 6. A method, comprising: inputting a voice of an operator; recognizing the voice of the operator; extracting from a memory a specific word matching a word recognized by the recognizing; and generating an operation command for a robot based on operational information associated with the specific word extracted by the extracting, wherein the specific word includes a first word for specifying a pitch corresponding to a moving distance when the robot is moved in a given direction from a specified location, and a second word for specifying a moving direction of the robot, and wherein when the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word, the generating generates the operation command so that the robot is moved in the specified direction at the specified pitch.
  • 7. The method of claim 6, wherein the specific word further includes a third word for moving the robot forward or backward in the specified moving direction at the specified pitch, and wherein when the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word, the generating generates the operation command so that the robot is moved forward or backward by the third word in the specified direction at the specified pitch.
  • 8. The method of claim 6, wherein: the first word includes an ambiguous expression for specifying a pitch when the robot is moved in a given direction from a specified location, and when the moving direction specified by the second word is not updated and the pitch of the robot is specified by a numeric value and a unit which are stored in a memory and correspond to the ambiguous expression, the generating generates the operation command so that the robot is moved in the specified moving direction at the specified pitch.
  • 9. The method of claim 6, wherein: the robot has a plurality of robotic arms, and the specific word further includes a word for specifying one of the robotic arms to be used as a target of movement.
  • 10. The method of claim 6, wherein the robot is taught the operation by a direct teaching.
  • 11. A non-transitory computer readable medium including computer instructions which, when executed, cause a computer to perform a method, comprising: inputting a voice of an operator; recognizing the voice of the operator; extracting from a memory a specific word matching a word recognized by the recognizing; and generating an operation command for a robot based on operational information associated with the specific word extracted by the extracting, wherein the specific word includes a first word for specifying a pitch corresponding to a moving distance when the robot is moved in a given direction from a specified location, and a second word for specifying a moving direction of the robot, and wherein when the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word, the generating generates the operation command so that the robot is moved in the specified direction at the specified pitch.
  • 12. The non-transitory computer readable medium of claim 11, wherein the specific word further includes a third word for moving the robot forward or backward in the specified moving direction at the specified pitch, and wherein when the pitch specified by the first word is not updated and the moving direction of the robot is specified by the second word, the generating generates the operation command so that the robot is moved forward or backward by the third word in the specified direction at the specified pitch.
  • 13. The non-transitory computer readable medium of claim 11, wherein: the first word includes an ambiguous expression for specifying a pitch when the robot is moved in a given direction from a specified location, and when the moving direction specified by the second word is not updated and the pitch of the robot is specified by a numeric value and a unit which are stored in a memory and correspond to the ambiguous expression, the generating generates the operation command so that the robot is moved in the specified moving direction at the specified pitch.
  • 14. The non-transitory computer readable medium of claim 11, wherein: the robot has a plurality of robotic arms, and the specific word further includes a word for specifying one of the robotic arms to be used as a target of movement.
  • 15. The non-transitory computer readable medium of claim 11, wherein the robot is taught the operation by a direct teaching.
Priority Claims (2)
Number Date Country Kind
2018-010445 Jan 2018 JP national
2018-097118 May 2018 JP national
Continuations (1)
Number Date Country
Parent PCT/JP2019/001825 Jan 2019 US
Child 16939496 US