This application claims priority to Chinese Patent Application No. 201710861591.5 filed on Sep. 21, 2017, the contents of which are incorporated by reference herein.
The subject matter herein generally relates to the field of data processing, and particularly to a robot, a system, and a method with configurable service contents.
In the prior art, a robot's hardware, software, and service content are tightly bound to one another, which makes it inconvenient to modify or change the software and service content of a prototype robot. Thus, such a robot is largely inflexible.
Implementations of the present disclosure will now be described, by way of example only, with reference to the attached figures.
It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant features being described. Also, the description is not to be considered as limiting the scope of the embodiments described herein. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure.
The present disclosure, including the accompanying drawings, is illustrated by way of examples and not by way of limitation. Several definitions that apply throughout this disclosure will now be presented. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one”.
The term “module”, as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language such as Java, C, or assembly. One or more software instructions in the modules can be embedded in firmware, such as in an EPROM. The modules described herein can be implemented as software and/or hardware modules and can be stored in any type of non-transitory computer-readable medium or other storage device. Some non-limiting examples of non-transitory computer-readable media include CDs, DVDs, BLU-RAY, flash memory, and hard disk drives. The term “comprising” means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in a so-described combination, group, series, and the like.
Exemplary embodiments of the present disclosure will be described in relation to the accompanying drawings.
The pressure sensor 104 is used to detect pressure information of the robot 1 when a user presses the robot 1 and transmit the pressure information to the processor 114. The infrared sensor 105 is used to detect temperature information around the robot 1 and transmit the information to the processor 114. The positioning unit 106 is used to acquire position information of the robot 1 and transmit the position information of the robot 1 to the processor 114. The touch unit 107 is used to receive touch information of the robot 1 and transmit the touch information to the processor 114. In at least one exemplary embodiment, the touch unit 107 can be a touch screen.
The voice output unit 108 is used to output voice information under the control of the processor 114. In at least one exemplary embodiment, the voice output unit 108 can be a loudspeaker 108. The expression output unit 109 is used to output visual and vocal expressions under the control of the processor 114. In at least one exemplary embodiment, the expression output unit 109 includes an eye and a mouth. The eye and the mouth can be opened or closed. The expression output unit 109 controls the eye or the mouth to open and close under the control of the processor 114. The display unit 110 is used to display information of the robot 1 under the control of the processor 114. For example, the display unit 110 can display text, picture, or video information under the control of the processor 114. In another embodiment, the display unit 110 is used to display an image of an expression. For example, the expression image can express happiness, misery, or another mood. In at least one exemplary embodiment, the touch unit 107 and the display unit 110 can be a touch screen.
The motion output unit 111 controls the robot 1 to move under the control of the processor 114. In at least one exemplary embodiment, the motion output unit 111 includes a first shaft 1111, two second shafts 1112, and a third shaft 1113.
The robot 1 communicates with the server 2 through the communication unit 112. In at least one exemplary embodiment, the communication unit 112 can be a WIFI communication module, a ZIGBEE communication module, or a BLUETOOTH module. In another embodiment, the robot 1 can communicate with a household appliance through the communication unit 112. For example, the household appliance can be an air conditioner, a light, or a TV, and the communication unit 112 can be an infrared communication module.
The storage device 113 stores data and programs of the robot 1. For example, the storage device 113 can store the system 100 with configurable service contents, preset face images, and preset voices. In at least one exemplary embodiment, the storage device 113 can include various types of non-transitory computer-readable storage mediums. For example, the storage device 113 can be an internal storage system of the robot 1, such as a flash memory, a random access memory (RAM) for temporary storage of information, and/or a read-only memory (ROM) for permanent storage of information. The storage device 113 can also be an external storage system, such as a hard disk, a storage card, or a data storage medium. In at least one exemplary embodiment, the processor 114 can be a central processing unit (CPU), a microprocessor, or another data processor chip that performs functions of the system 100 with configurable service contents.
The content editing module 210 provides at least one editing interface 300 for a user to edit service content of the robot 1.
The storing module 220 stores the service content edited by the content editing module 210.
The control module 230 controls the robot 1 to execute the service content.
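The division of labor among the content editing module 210, the storing module 220, and the control module 230 can be sketched as follows. This is a minimal illustrative sketch only; the Python class and method names (ContentEditingModule, StoringModule, ControlModule, ServiceContent, and so on) are assumptions and not part of the disclosed implementation.

```python
# Minimal sketch of the three-module split described above.
# All class, method, and field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ServiceContent:
    """Container for one edited service content item."""
    name: str
    steps: list = field(default_factory=list)   # e.g. display, conversation, motion steps

class ContentEditingModule:
    """Counterpart of the content editing module 210: builds service content from user edits."""
    def edit(self, name, steps):
        return ServiceContent(name=name, steps=steps)

class StoringModule:
    """Counterpart of the storing module 220: keeps edited service content."""
    def __init__(self):
        self._store = {}
    def save(self, content: ServiceContent):
        self._store[content.name] = content
    def load(self, name):
        return self._store[name]

class ControlModule:
    """Counterpart of the control module 230: executes each step of a service content."""
    def execute(self, content: ServiceContent):
        for step in content.steps:
            step()   # each step is a callable that drives an output unit

# Example flow: edit, store, then execute a trivial "greet" service.
editing, storing, control = ContentEditingModule(), StoringModule(), ControlModule()
greet = editing.edit("greet", [lambda: print("voice output unit: hello")])
storing.save(greet)
control.execute(storing.load("greet"))
```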
In at least one exemplary embodiment, the content editing module 210 includes a display content editing sub-module 211. The display content editing sub-module 211 provides the display content editing interface 310. The display content editing interface 310 enables a user to edit display content of the robot 1. For example, the user can edit an expression image of the robot 1 through the display content editing interface 310. The expression image can be a smile and blink expression image of the robot 1, a cute expression image of the robot 1, and so on. The expression image can also be a dynamic expression image that expresses an emotion such as happiness, irritability, joy, or depression. In another embodiment, the display content editing interface 310 can edit text or video information. The video information can be in formats such as SWF, GIF, AVOX, PNG, and the like.
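A small sketch of how edited display content might be represented follows. The enumeration values, field names, and the example file path are assumptions for illustration; the disclosure only states that expression images, text, and video can be edited.

```python
# Sketch of a possible representation for edited display content.
from dataclasses import dataclass
from enum import Enum

class DisplayKind(Enum):
    EXPRESSION_IMAGE = "expression_image"   # e.g. smile-and-blink, cute, or dynamic emotion image
    TEXT = "text"
    VIDEO = "video"                          # e.g. SWF, GIF, or PNG animation

@dataclass
class DisplayContent:
    kind: DisplayKind
    payload: str           # file path of the image/video, or the text itself
    file_format: str = ""  # only meaningful for video/animation content

# Example: a dynamic "happiness" expression edited through interface 310 (hypothetical path).
happy_face = DisplayContent(DisplayKind.EXPRESSION_IMAGE, "expressions/happiness.gif", "GIF")
print(happy_face)
```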
In at least one exemplary embodiment, the content editing module 210 includes a conversation content editing sub-module 212. The conversation content editing sub-module 212 provides the conversation editing interface 320. The conversation editing interface 320 enables a user to edit conversation content of the robot 1. In at least one exemplary embodiment, the conversation content of the robot 1 includes user conversation content and robot conversation content. The conversation editing interface 320 acquires user conversation content and robot conversation content through the voice acquiring unit 102, and establishes a relationship table T1 between the acquired user conversation content and the corresponding robot conversation content.
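The relationship table T1 between user conversation content and robot conversation content can be sketched as a simple lookup structure. The dictionary-based implementation and the method names below are assumptions; the disclosure only states that the two kinds of content are associated in a relationship table.

```python
# Sketch of the relationship table T1: user conversation content -> robot conversation content.
class ConversationTable:
    def __init__(self):
        self._t1 = {}

    def add_pair(self, user_text: str, robot_text: str):
        """Called by the conversation editing interface 320 after both phrases are acquired."""
        self._t1[user_text.strip().lower()] = robot_text

    def reply_for(self, user_text: str):
        """Used at run time: return the robot's edited reply, or None if no pair was edited."""
        return self._t1.get(user_text.strip().lower())

table = ConversationTable()
table.add_pair("delivery meal to first table", "OK, first table")
print(table.reply_for("Delivery meal to first table"))   # -> "OK, first table"
```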
In at least one exemplary embodiment, the content editing module 210 includes a positioning and navigation content editing sub-module 213. The positioning and navigation content editing sub-module 213 provides the positioning and navigation editing interface 330. The positioning and navigation editing interface 330 is used to edit positioning and navigation content of the robot 1. In at least one exemplary embodiment, the positioning and navigation editing interface 330 acquires the location of the robot 1 through the positioning unit 106 and marks the acquired location of the robot 1 in an electronic map; thus, the robot 1 is positioned. In at least one exemplary embodiment, the electronic map is stored in the storage device 113, and the positioning and navigation editing interface 330 acquires the electronic map from the storage device 113. In another embodiment, the electronic map is stored in the server 2, and the positioning and navigation editing interface 330 acquires the electronic map from the server 2.
In at least one exemplary embodiment, the positioning and navigation content editing sub-module 213 also acquires a destination location input by the user. For example, the positioning and navigation content editing sub-module 213 acquires the destination location through the voice acquiring unit 102 by acquiring the user's voice. The positioning and navigation content editing sub-module 213 can further mark the acquired destination location in the electronic map and generate a route from the location of the robot 1 to the destination location.
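The positioning-and-navigation step (mark the current location and a destination on the electronic map, then generate a route between them) can be sketched as below. The grid-style map and the breadth-first search are assumptions; the disclosure does not specify a map format or a routing algorithm.

```python
# Sketch of route generation between a marked robot location and a marked destination.
from collections import deque

def plan_route(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, avoiding obstacle cells (value 1)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            route = []
            while cell is not None:           # walk back from goal to start
                route.append(cell)
                cell = came_from[cell]
            return route[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return []   # no route found

# Example: robot marked at (0, 0), destination ("first table") marked at (2, 3).
floor_map = [[0, 0, 0, 0],
             [0, 1, 1, 0],
             [0, 0, 0, 0]]
print(plan_route(floor_map, (0, 0), (2, 3)))
```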
In at least one exemplary embodiment, the content editing module 210 includes a motion control content editing sub-module 214. The motion control content editing sub-module 214 provides the motion control interface 340. The motion control interface 340 enables a user to edit motion control content of the robot 1. In at least one exemplary embodiment, the motion control content of the robot 1 includes a controlled object and a control parameter corresponding to the controlled object. The controlled object can be the head 120, the couple of arms 124, or the wheelpair 125. The control parameter can be a motion parameter corresponding to the head 120, the couple of arms 124, or the wheelpair 125. In at least one exemplary embodiment, the motion parameter corresponding to the head 120 of the robot 1 is a rotation angle, the motion parameter corresponding to the arm 123 of the robot 1 is a swing amplitude, and the motion parameter corresponding to the wheelpair 125 of the robot 1 is a number of rotations. The motion control interface 340 controls the first shaft 1111 connected to the head 120 to rotate according to the rotation angle. Thus, the head 120 is controlled to move by the motion control interface 340. The motion control interface 340 controls the second shaft 1112 connected to the arm 123 to swing according to the swing amplitude. Thus, the arm 123 is controlled to move by the motion control interface 340. The motion control interface 340 controls the third shaft 1113 connected to the wheelpair 125 to rotate according to the number of rotations. Thus, the wheelpair 125 is controlled to move by the motion control interface 340.
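A sketch of dispatching an edited motion control entry (controlled object plus its control parameter) to the corresponding shaft follows. The shaft driver functions are hypothetical stand-ins for the hardware interface; only the mapping of head, arm, and wheelpair to the first, second, and third shafts is taken from the description above.

```python
# Sketch: map each controlled object to its shaft driver and motion parameter.
def rotate_first_shaft(angle_deg):        # drives the head 120
    print(f"first shaft: rotate {angle_deg} degrees")

def swing_second_shaft(amplitude_deg):    # drives an arm
    print(f"second shaft: swing with amplitude {amplitude_deg} degrees")

def rotate_third_shaft(turns):            # drives the wheelpair 125
    print(f"third shaft: rotate {turns} turns")

# controlled object -> (shaft driver, name of its motion parameter)
DISPATCH = {
    "head": (rotate_first_shaft, "rotation angle"),
    "arm": (swing_second_shaft, "swing amplitude"),
    "wheelpair": (rotate_third_shaft, "number of rotations"),
}

def execute_motion(controlled_object, parameter_value):
    driver, parameter_name = DISPATCH[controlled_object]
    print(f"{controlled_object}: applying {parameter_name} = {parameter_value}")
    driver(parameter_value)

execute_motion("head", 30)        # head turns 30 degrees
execute_motion("wheelpair", 5)    # wheelpair makes 5 rotations
```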
In at least one exemplary embodiment, a target service content can be edited by the display content editing sub-module 211, the conversation content editing sub-module 212, the positioning and navigation content editing sub-module 213, and the motion control content editing sub-module 214. In at least one exemplary embodiment, the target service content can be meal delivery service content. For example, the display content editing interface 310 provided by the display content editing sub-module 211 can edit a smile and blink expression image of the robot 1. Then, the conversation editing interface 320 provided by the conversation content editing sub-module 212 can edit “delivery meal to first table” as the user conversation content, edit “OK, first table” as the responsive conversation content of the robot 1, and establish the edited user conversation content and the conversation content of the robot 1 in the relationship table T1. The positioning and navigation editing interface 330 provided by the positioning and navigation content editing sub-module 213 acquires the location of the robot 1 and marks the location of the robot 1 in the electronic map. The positioning and navigation editing interface 330 further acquires the “first table” as the destination location through the voice acquiring unit 102, and generates a route from the location of the robot 1 to the destination location. Finally, the motion control interface 340 provided by the motion control content editing sub-module 214 controls the wheelpair 125 of the robot 1 to rotate so that the robot 1 moves along the route.
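The composition of the meal-delivery target service content from the four editing steps above can be sketched as follows. The step functions, the starting location "kitchen", and the example route are simplified assumptions standing in for the display, conversation, navigation, and motion outputs described in the disclosure.

```python
# Sketch of composing the meal-delivery target service content from the four edited pieces.
def show_expression(name):
    print(f"display unit: showing '{name}' expression image")

def answer(user_text, table):
    reply = table.get(user_text)
    print(f"voice output unit: saying '{reply}'")

def navigate(start, destination, route):
    print(f"navigating from {start} to {destination} along {route}")

def drive_wheelpair(rotations):
    print(f"wheelpair: rotating {rotations} times")

def meal_delivery_service(user_text):
    # 1. Display content edited through interface 310.
    show_expression("smile and blink")
    # 2. Conversation content edited into relationship table T1 through interface 320.
    t1 = {"delivery meal to first table": "OK, first table"}
    answer(user_text, t1)
    # 3. Positioning and navigation content edited through interface 330 (route is hypothetical).
    navigate(start="kitchen", destination="first table", route=["kitchen", "aisle", "first table"])
    # 4. Motion control content edited through interface 340.
    drive_wheelpair(rotations=5)

meal_delivery_service("delivery meal to first table")
```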
In at least one exemplary embodiment, the content editing module 210 further includes an identifying content editing sub-module 215. The identifying content editing sub-module 215 provides the identification interface 350. The identification interface 350 enables a user to edit identifying content of the robot 1. In at least one exemplary embodiment, the identifying content of the robot 1 includes human face identification. For example, the identification interface 350 acquires a human face image through the camera unit 101 and compares the acquired human face image with preset user face images to identify the acquired human face image. In at least one exemplary embodiment, the identifying content of the robot 1 includes human body identification. For example, the identification interface 350 identifies a human body around the robot 1 through the infrared sensor 105. In another embodiment, the identifying content of the robot 1 includes smoke identification. For example, the identification interface 350 identifies smoke around the robot 1 through the smoke sensor 103. In yet another embodiment, the identifying content of the robot 1 includes pressure identification. For example, the identification interface 350 identifies the pressure applied to the robot 1 through the pressure sensor 104.
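The four identification checks can be sketched as below. The sensor-reading functions, thresholds, and the string-comparison face match are hypothetical placeholders for the camera unit 101, infrared sensor 105, smoke sensor 103, and pressure sensor 104; real face matching would use an image-comparison algorithm.

```python
# Sketch of the identification checks editable through the identification interface 350.
def identify_face(captured_image, preset_images):
    """Return True if the captured face matches a preset user face image (placeholder match)."""
    return captured_image in preset_images

def identify_human_body(infrared_temperature_c):
    """A reading near human body temperature suggests a person is nearby (assumed range)."""
    return 30.0 <= infrared_temperature_c <= 40.0

def identify_smoke(smoke_level, threshold=0.2):
    return smoke_level > threshold            # threshold is an assumption

def identify_pressure(pressure_reading, threshold=1.0):
    return pressure_reading > threshold       # threshold is an assumption

print(identify_face("user_a.png", ["user_a.png", "user_b.png"]))  # True
print(identify_human_body(36.5))                                  # True
```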
In at least one exemplary embodiment, the content editing module 210 further includes a function content editing sub-module 216. The function content editing sub-module 216 provides the function editing interface 360. The function editing interface 360 enables a user to edit function content of the robot 1. In at least one exemplary embodiment, the function content of the robot 1 includes intelligent home control content. For example, the function editing interface 360 receives a control command input by a user. The control command includes a second controlled object and a control operation corresponding to the second controlled object. In at least one exemplary embodiment, the second controlled object includes, but is not limited to, an air conditioner, a TV, a light, and a refrigerator. The control operation includes, but is not limited to, turning on or turning off such a device. In at least one exemplary embodiment, the function editing interface 360 receives the control command through the voice acquiring unit 102, sends the control command to the second controlled object included in the control command, and controls the second controlled object according to the control operation included in the control command. Thus, the function editing interface 360 edits the intelligent home control content.
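A sketch of such a control command (second controlled object plus control operation) follows. The keyword parser and the send_command stub are assumptions for illustration; the disclosure only states that the command is received by voice and forwarded to the appliance through the communication unit 112.

```python
# Sketch of parsing and forwarding an intelligent-home control command.
KNOWN_OBJECTS = ("air conditioner", "tv", "light", "refrigerator")
KNOWN_OPERATIONS = ("turn on", "turn off")

def parse_control_command(spoken_text):
    """Very small keyword parser: 'turn on the light' -> ('light', 'turn on')."""
    text = spoken_text.lower()
    obj = next((o for o in KNOWN_OBJECTS if o in text), None)
    op = next((o for o in KNOWN_OPERATIONS if o in text), None)
    if obj is None or op is None:
        raise ValueError(f"unrecognized control command: {spoken_text!r}")
    return obj, op

def send_command(controlled_object, operation):
    """Stand-in for transmitting the operation to the appliance via the communication unit."""
    print(f"sending '{operation}' to {controlled_object}")

obj, op = parse_control_command("please turn on the light")
send_command(obj, op)   # -> sending 'turn on' to light
```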
In another embodiment, the function content of the robot 1 includes payment content. For example, the function editing interface 360 communicates with a fee payment center through the communication unit 112. The function editing interface 360 also provides a payment interface to receive the payment amount information and payment verification information input by the user, and sends the received payment amount information and payment verification information to the fee payment center to complete the payment. Thus, the function editing interface 360 edits the payment content.
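A minimal sketch of the payment step follows. The endpoint URL, JSON field names, and the use of HTTP are purely assumptions; the disclosure only states that the amount and verification information are sent to the fee payment center through the communication unit 112.

```python
# Sketch of forwarding payment amount and verification information to a fee payment center.
import json
from urllib import request

FEE_PAYMENT_CENTER_URL = "https://payment.example.com/pay"   # hypothetical endpoint

def submit_payment(amount, verification_code):
    payload = json.dumps({"amount": amount, "verification": verification_code}).encode("utf-8")
    req = request.Request(FEE_PAYMENT_CENTER_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:          # returns the center's confirmation
        return resp.read().decode("utf-8")

# Example (requires a reachable payment endpoint):
# print(submit_payment(25.50, "123456"))
```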
In at least one exemplary embodiment, the system 100 with configurable service contents further includes a simulation module 240 and a compiling and packaging module 250. The simulation module 240 is used to simulate the edited service content. The simulation module 240 further provides a simulation interface (not shown) to display the simulation result. The compiling and packaging module 250 is used to compile the edited service content and package the edited service content to create an application or program.
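The simulate-then-package flow can be sketched as below. Dry-running the steps and bundling a manifest into a zip archive are assumptions standing in for the simulation module 240 and the compiling and packaging module 250; the disclosure does not specify a package format.

```python
# Sketch of simulating edited service content and packaging it into an application bundle.
import json
import zipfile
from pathlib import Path

def simulate(service_content):
    """Dry-run each step and report what would happen (stand-in for module 240)."""
    for i, step in enumerate(service_content.get("steps", []), start=1):
        print(f"[simulation] step {i}: {step}")

def package(service_content, out_path="service_app.zip"):
    """Write the edited content as a manifest inside an archive (stand-in for module 250)."""
    manifest = json.dumps(service_content, indent=2)
    with zipfile.ZipFile(out_path, "w") as zf:
        zf.writestr("manifest.json", manifest)
    return Path(out_path)

content = {"name": "meal delivery",
           "steps": ["show expression", "answer user", "navigate", "drive wheelpair"]}
simulate(content)
print("packaged into", package(content))
```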
At block 701, a robot provides at least one editing interface for a user to edit service content of the robot. The editing interface includes a display content editing interface, a conversation editing interface, a positioning and navigation editing interface, a motion control interface, an identification interface, and a function editing interface.
The robot stores the service content edited by the user.
The robot executes the service content.
In at least one exemplary embodiment, the robot provides the display content editing interface. The display content editing interface enables a user to edit display content of the robot. For example, the user can edit an expression image of the robot through the display content editing interface. The expression image can be a smile and blink expression image of the robot, a cute expression image of the robot, and so on. The expression image can also be a dynamic expression image that expresses an emotion such as happiness, irritability, joy, or depression. In another embodiment, the display content editing interface can edit text or video information. The video information can be in formats such as SWF, GIF, AVOX, PNG, and the like.
In at least one exemplary embodiment, the robot provides the conversation editing interface. The conversation editing interface enables a user to edit conversation content of the robot. In at least one exemplary embodiment, the conversation content of the robot includes user conversation content and robot conversation content. The conversation editing interface acquires user conversation content and robot conversation content through the voice acquiring unit, and establishes a relationship table between the acquired user conversation content and the corresponding robot conversation content.
In at least one exemplary embodiment, the robot provides the positioning and navigation editing interface. The positioning and navigation editing interface is used to edit positioning and navigation content of the robot. In at least one exemplary embodiment, the positioning and navigation editing interface acquires the location of the robot through a positioning unit and marks the acquired location of the robot in an electronic map; thus, the robot is positioned. In at least one exemplary embodiment, the electronic map is stored in the storage device, and the positioning and navigation editing interface acquires the electronic map from the storage device. In another embodiment, the electronic map is stored in the server, and the positioning and navigation editing interface acquires the electronic map from the server.
In at least one exemplary embodiment, the robot also acquires a destination location input by the user. For example, the robot acquires the destination location through the voice acquiring unit by acquiring the user's voice. The robot can further mark the acquired destination location in the electronic map and generate a route from the location of the robot to the destination location.
In at least one exemplary embodiment, the robot provides the motion control interface. The motion control interface enables a user to edit motion control content of the robot. In at least one exemplary embodiment, the motion control content of the robot includes a controlled object and a control parameter corresponding to the controlled object. The controlled object can be the head, the couple of arms, or the wheelpair. The control parameter can be a motion parameter corresponding to the head, the couple of arms, or the wheelpair. In at least one exemplary embodiment, the motion parameter corresponding to the head of the robot is a rotation angle, the motion parameter corresponding to the arm of the robot is a swing amplitude, and the motion parameter corresponding to the wheelpair of the robot is a number of rotations. The robot controls the first shaft connected to the head to rotate according to the rotation angle. Thus, the head is controlled to move by the robot. The robot controls the second shaft connected to the arm to swing according to the swing amplitude. Thus, the arm is controlled to move by the robot. The robot controls the third shaft connected to the wheelpair to rotate according to a certain number of rotations. Thus, the wheelpair is controlled to move by the robot.
In at least one exemplary embodiment, a target service content can be edited by the robot. In at least one exemplary embodiment, the target service content can be meal delivery service content. For example, the display content editing interface edits a smile and blink expression image of the robot. Then, the conversation editing interface edits “delivery meal to first table” as the user conversation content, edits “OK, first table” as the conversation content of the robot, and establishes the edited user conversation content and the conversation content of the robot in the relationship table. The positioning and navigation editing interface acquires the location of the robot and marks the location of the robot in the electronic map. The positioning and navigation editing interface further acquires the “first table” as the destination location through the voice acquiring unit, and generates a route from the location of the robot to the destination location. Finally, the motion control interface controls the wheelpair of the robot to rotate to drive the robot to move along the route.
In at least one exemplary embodiment, the robot provides the identification interface. The identification interface enables a user to edit identifying content of the robot. In at least one exemplary embodiment, the identifying content of the robot includes human face identification. For example, the identification interface acquires a human face image through the camera unit and compares the acquired human face image with a preset user face image to identify the acquired human face image. In at least one exemplary embodiment, the identifying content of the robot includes human body identification. For example, the identification interface identifies a human body around the robot through the infrared sensor. In another embodiment, the identifying content of the robot includes smoke identification. For example, the identification interface identifies smoke around the robot through the smoke sensor. In yet another embodiment, the identifying content of the robot includes pressure identification. For example, the identification interface identifies the pressure applied to the robot through the pressure sensor.
In at least one exemplary embodiment, the robot provides the function editing interface. The function editing interface enables a user to edit function content of the robot. In at least one exemplary embodiment, the function content of the robot includes intelligent home control content. For example, the function editing interface receives a control command input by a user. The control command includes a second controlled object and a control operation corresponding to the second controlled object. In at least one exemplary embodiment, the second controlled object includes, but is not limited to, an air conditioner, a TV, a light, and a refrigerator. The control operation includes, but is not limited to, turning on or turning off such a device. In at least one exemplary embodiment, the function editing interface receives the control command through the voice acquiring unit, sends the control command to the second controlled object included in the control command, and controls the second controlled object according to the control operation included in the control command. Thus, the function editing interface edits the intelligent home control content.
In another embodiment, the function content of the robot includes payment content. For example, the function editing interface communicates with a fee payment center through the communication unit. The function editing interface also provides a payment interface to receive the payment amount information and payment verification information input by the user, and sends the received payment amount information and payment verification information to the fee payment center to complete the payment. Thus, the function editing interface edits the payment content.
In at least one exemplary embodiment, the method further includes: simulating the edited service content; compiling the edited service content; and packaging the edited service content to create an application.
The exemplary embodiments shown and described above are only examples. Even though numerous characteristics and advantages of the present disclosure have been set forth in the foregoing description, together with details of the structure and function of the present disclosure, the disclosure is illustrative only, and changes may be made in the detail, including in matters of shape, size and arrangement of the parts within the principles of the present disclosure up to, and including, the full extent established by the broad general meaning of the terms used in the claims.