The embodiments herein generally relate to a hyper configurable humanoid robot, and, more particularly, to a system and method for operating and controlling a hyper configurable humanoid robot to perform multiple applications in various work environments.
A robot is an automated machine. It can accept human commands, run pre-programmed procedures, or act according to programs developed on the principles of artificial intelligence. Its mission is to assist with or replace human work in tasks such as production, construction, or dangerous work.
In recent years, humanoid robots have become a major field of robotics research. Compared to other types of robots, the humanoid robot has unmatched advantages: it integrates easily into daily life and work environments to help humans accomplish specific tasks. A single platform that can be customized for a wide variety of applications is therefore of prime importance. However, a humanoid robot is a complex system that must make effective use of multi-sensor information to sense changes in the external environment and in its own state, and adjust the motion of its actuators accordingly; its control system must therefore be highly reliable and operate in real time. The design must be highly flexible in both hardware and software so that the robot can accomplish tasks of any nature in various work environments, handle unforeseen situations, and provide customization according to user requirements.
Accordingly, there is a need for an improved humanoid design that is adaptable, configurable, and capable of morphological changes, enabling the robot to perform one or more applications. There remains a need for a system that allows the humanoid robot to perform a list of tasks efficiently under various working environmental conditions and across one or more applications.
In view of the foregoing, an embodiment herein provides a system for controlling and operating a hyper configurable humanoid robot. The system includes a master control unit. The master control unit includes a memory and a processor. The memory stores data locally or through a cloud, along with a set of modules, and obtains the data from a perception unit. The processor executes the set of modules. The set of modules includes a work environment accessing module, a communication module, a vision system and LIDAR (Light Detection and Ranging) module, a feedback analyzing module, an input module, a brain machine interface module, a myoelectric signal detection module, and a finger impression identification module. The work environment accessing module, executed by the processor, is configured to (i) obtain data from the perception unit to analyze work conditions, and (ii) perform a list of tasks for the humanoid robot based on one or more sensors. The communication module, executed by the processor, is configured to provide communication between (i) the humanoid robot and a cloud server, and (ii) the cloud server and one or more robots, to perform the list of tasks based on the one or more sensors. The vision system and LIDAR module, executed by the processor, is configured to acquire image and distance information about a working environmental condition or one or more applications to create a map of the working environmental condition or the one or more applications for navigation. The feedback analyzing module, executed by the processor, is configured to provide feedback and control information to the humanoid robot. The input module, executed by the processor, is configured to provide an input to the humanoid robot based on (i) an output of the one or more sensors, or (ii) user devices or the user. The brain machine interface module, executed by the processor, is configured to receive an Electroencephalogram (EEG) signal from electrical activity of a human brain of the user to control the humanoid robot. The myoelectric signal detection module, executed by the processor, is configured to detect an Electromyogram (EMG) signal from a changing muscle condition of the user to control the humanoid robot. The finger impression identification module, executed by the processor, is configured to identify a fingerprint of the user for security purposes.
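For illustration only, the arrangement above can be sketched as a memory that stores perception data and a processor that dispatches registered modules. The Python sketch below is a hypothetical editorial aid, not the disclosed implementation; the class, method, and module names (MasterControlUnit, register, store, execute, work_environment_accessing) are illustrative assumptions.

```python
# Minimal sketch of the master control unit, assuming a simple module registry.
# All names are hypothetical illustrations, not the actual design.
from typing import Callable, Dict


class MasterControlUnit:
    """Stores perception data in memory and executes a set of named modules."""

    def __init__(self) -> None:
        self.memory: Dict[str, object] = {}  # data held locally (or mirrored to a cloud)
        self.modules: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, module: Callable[[dict], dict]) -> None:
        # Each registered module is executed by the processor on demand.
        self.modules[name] = module

    def store(self, perception_data: dict) -> None:
        # The memory obtains its data from the perception unit.
        self.memory.update(perception_data)

    def execute(self, name: str) -> dict:
        return self.modules[name](self.memory)


mcu = MasterControlUnit()
mcu.register("work_environment_accessing", lambda mem: {"tasks": ["inspect", "map"]})
mcu.store({"lidar_range_m": 4.2, "temperature_c": 31.0})
print(mcu.execute("work_environment_accessing"))  # {'tasks': ['inspect', 'map']}
```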
In one embodiment, the system further includes a perception unit that is configured to provide input/data to the humanoid robot to perform a necessary action according to the working environmental condition or the one or more applications, based on the one or more sensors or the user input. The humanoid robot further includes a navigation and control unit, and a monitoring and safety unit. The navigation and control unit is configured to receive multiple responses from the processor and execute the multiple responses on the humanoid robot for navigation. The humanoid robot acts individually or as a swarm. The monitoring and safety unit is configured to (i) check the right commands given by the user in an operational environment, and (ii) check commands executed during autonomous operation. In another embodiment, the navigation and control unit tracks/maps the working environmental condition or the one or more applications for navigation of the humanoid robot and controls an actuator of the humanoid robot. The working environmental condition or the one or more applications are selected from at least one of, but not limited to, (i) an agriculture application, (ii) an industrial application, (iii) a medical application, (iv) a military application, (v) weather monitoring, (vi) disaster management, and (vii) a domestic application. The humanoid robot includes different types of chassis. The different types of chassis are selected from at least one of, but not limited to, (i) a biped chassis, (ii) a tracked chassis, (iii) a hexapod chassis, and (iv) a differential drive chassis, based on the working environmental condition or the one or more applications.
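The chassis selection just described lends itself to a simple configuration table. The Python sketch below is a hypothetical illustration, assuming an application-to-chassis mapping that the actual system would be free to configure differently.

```python
# Hedged sketch of chassis selection by application; the mapping is illustrative.
from enum import Enum


class Chassis(Enum):
    BIPED = "biped"
    TRACKED = "tracked"
    HEXAPOD = "hexapod"
    DIFFERENTIAL_DRIVE = "differential drive"


# Hypothetical application-to-chassis table; any pairing could be configured.
CHASSIS_BY_APPLICATION = {
    "domestic": Chassis.BIPED,
    "military": Chassis.TRACKED,
    "disaster_management": Chassis.HEXAPOD,
    "industries": Chassis.DIFFERENTIAL_DRIVE,
}


def select_chassis(application: str) -> Chassis:
    # Fall back to the biped chassis when the application is not tabulated.
    return CHASSIS_BY_APPLICATION.get(application, Chassis.BIPED)


print(select_chassis("disaster_management"))  # Chassis.HEXAPOD
```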
In another aspect, a processor implemented method for operating and controlling a humanoid robot is provided. The method includes the following steps: (i) obtaining, using a work environment accessing module, data from a perception unit to analyze working environmental conditions, (ii) providing, using a communication module, communication between (a) the humanoid robot and a cloud server, and (b) the cloud server and one or more robots, (iii) detecting, using a vision system and LIDAR module, image and distance information about the working environmental condition or one or more applications to create a map of the working environmental condition for navigation, (iv) providing, using a feedback analyzing module, feedback and control information to the humanoid robot, and (v) providing, using an input module, an input to the humanoid robot based on the one or more sensors, the user devices, or the user to perform a necessary action for the working environmental condition or the one or more applications.
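For illustration, the five steps above can be read as one iteration of a control loop. The Python sketch below is a minimal editorial aid with hypothetical stub functions standing in for the respective modules; none of the function names or values come from the disclosure.

```python
# One illustrative pass through steps (i)-(v); every stub is hypothetical.
def obtain_perception_data() -> dict:            # step (i): work environment accessing
    return {"image": "frame-0", "distance_m": 2.5}


def communicate(data: dict) -> None:             # step (ii): communication module
    print(f"sync with cloud server: {data}")


def build_map(data: dict) -> dict:               # step (iii): vision system and LIDAR
    return {"cell": (0, 0), "occupied": data["distance_m"] < 1.0}


def send_feedback(robot_state: dict) -> None:    # step (iv): feedback analyzing module
    print(f"feedback to robot: {robot_state}")


def provide_input(sensor_out, user_cmd=None):    # step (v): input module
    # User input, when present, takes precedence over the sensor output.
    return user_cmd if user_cmd is not None else sensor_out


data = obtain_perception_data()
communicate(data)
send_feedback({"map": build_map(data)})
print("action:", provide_input(sensor_out="move_forward"))
```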
In one embodiment, the method further includes the following steps: (i) receiving, using a brain machine interface module, an Electroencephalogram (EEG) signal from electrical activity of a human brain of the user, (ii) detecting, using a myoelectric signal detection module, an EMG signal from a changing muscle condition of the user, (iii) controlling the humanoid robot based on the data, the EEG signal, and the EMG signal, (iv) identifying, using a finger impression identification module, a fingerprint of the user for security purposes of the humanoid robot, (v) receiving, using a navigation and control unit, multiple responses from the processor to execute the multiple responses on the humanoid robot, (vi) tracking/mapping, using the navigation and control unit, the working environmental condition or the one or more applications for navigating the humanoid robot, and (vii) checking, using a monitoring and safety unit, the right commands given by the user in an operational environment and the commands executed during autonomous operation. In another embodiment, the working environmental condition or the one or more applications are selected from at least one of, but not limited to, (i) an agriculture application, (ii) an industrial application, (iii) a medical application, (iv) a military application, (v) weather monitoring, (vi) disaster management, and (vii) a domestic application. In yet another embodiment, the humanoid robot has different types of chassis. In yet another embodiment, the different types of chassis are selected from at least one of, but not limited to, (i) a biped chassis, (ii) a tracked chassis, (iii) a hexapod chassis, and (iv) a differential drive chassis, based on the working environmental condition or the one or more applications.
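Steps (i) to (iii) amount to mapping biosignals onto robot commands. The Python sketch below is a deliberately simplified illustration, assuming fixed amplitude thresholds; a real brain machine interface would use trained classifiers, and all thresholds, units, and command names here are hypothetical.

```python
# Toy fusion of EEG and EMG readings into a command; thresholds are hypothetical.
def classify_eeg(eeg_amplitude_uv: float) -> str:
    # A fixed threshold stands in for a trained EEG classifier.
    return "move" if eeg_amplitude_uv > 50.0 else "idle"


def classify_emg(emg_rms_mv: float) -> str:
    # Higher muscle activation is read as a grip command in this toy mapping.
    return "grip" if emg_rms_mv > 1.5 else "release"


def control_command(eeg_amplitude_uv: float, emg_rms_mv: float) -> dict:
    return {
        "locomotion": classify_eeg(eeg_amplitude_uv),
        "manipulator": classify_emg(emg_rms_mv),
    }


print(control_command(eeg_amplitude_uv=62.0, emg_rms_mv=2.1))
# {'locomotion': 'move', 'manipulator': 'grip'}
```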
In yet another aspect, a humanoid robot is provided. The humanoid robot includes a perception unit, a master control unit, a monitoring and safety unit, and a navigation and control unit. The perception unit is configured to provide input/data to the humanoid robot to perform a necessary action for a working environmental condition or one or more applications, based on one or more sensors or a user input. The perception unit includes a brain machine interface unit, a myo band and inertial measurement unit, a vision and LIDAR system, a biometrics and voice receptor, and a fire and explosive detection unit. The brain machine interface unit is interfaced with a human brain through a biosensor for obtaining an EEG signal from the human brain. The EEG signal is transmitted to a microcontroller of the humanoid robot to perform spontaneous and predefined logics. The myo band and inertial measurement unit is configured to detect an EMG signal from a muscle of the user to control the humanoid robot. The vision and LIDAR (Light Detection and Ranging) system is configured to provide vision and distance information about the working environmental conditions or the one or more applications, enabling creation of a map of the working environmental conditions for navigating the humanoid robot. The biometrics and voice receptor is configured to (i) identify a fingerprint of the user for security purposes of the humanoid robot, (ii) check the fingerprint in secured places, and (iii) provide voice commands to the humanoid robot for controlling the movement and/or actions of the humanoid robot. The fire and explosive detection unit is configured to detect a fire accident in the working environmental conditions or the one or more applications. The master control unit includes a memory and a processor. The memory stores data locally or through a cloud, along with a set of modules, and obtains the data from the perception unit. The processor executes the set of modules. The set of modules includes a work environment accessing module, a communication module, a vision system and LIDAR module, a feedback analyzing module, an input module, a brain machine interface module, a myoelectric signal detection module, and a finger impression identification module. The work environment accessing module, executed by the processor, is configured to (i) obtain data from the perception unit to analyze work conditions, and (ii) perform a list of tasks for the humanoid robot based on the one or more sensors. The communication module, executed by the processor, is configured to provide communication between (i) the humanoid robot and a cloud server, and (ii) the cloud server and one or more robots, to perform the list of tasks based on the one or more sensors. The vision system and LIDAR module, executed by the processor, is configured to acquire image and distance information about the working environmental condition or the one or more applications to create the map of the working environmental condition or the one or more applications for navigation. The feedback analyzing module, executed by the processor, is configured to provide feedback and control information to the humanoid robot. The input module, executed by the processor, is configured to provide an input to the humanoid robot based on (i) an output of the one or more sensors, or (ii) user devices or the user.
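One common way to turn distance information into a navigation map, as the vision and LIDAR system does, is an occupancy grid. The Python sketch below is illustrative only, assuming a 10 x 10 grid with 0.5 m cells centered on the robot; the grid size and resolution are arbitrary choices, not disclosed parameters.

```python
# Hedged occupancy-grid sketch: each LIDAR return marks one cell as occupied.
import math

GRID = 10     # 10 x 10 cells (arbitrary)
CELL_M = 0.5  # each cell covers 0.5 m (arbitrary)


def occupancy_map(scans):
    """scans: iterable of (angle_rad, range_m) pairs measured from the robot."""
    grid = [[0] * GRID for _ in range(GRID)]
    for angle, rng in scans:
        x = rng * math.cos(angle)
        y = rng * math.sin(angle)
        col = int(x / CELL_M) + GRID // 2  # shift so the robot sits mid-grid
        row = int(y / CELL_M) + GRID // 2
        if 0 <= row < GRID and 0 <= col < GRID:
            grid[row][col] = 1
    return grid


demo = occupancy_map([(0.0, 1.2), (math.pi / 2, 2.0)])
print(sum(map(sum, demo)), "occupied cells")  # 2 occupied cells
```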
The brain machine interface module, executed by the processor, is configured to receive an Electroencephalogram (EEG) signal from electrical activity of a human brain of the user to control the humanoid robot. The myoelectric signal detection module, executed by the processor, is configured to detect an EMG signal from a changing muscle condition of the user to control the humanoid robot wirelessly. The finger impression identification module, executed by the processor, is configured to identify a fingerprint of the user for security purposes of the humanoid robot. The monitoring and safety unit is configured to (i) check the right commands given by the user in an operational environment, and (ii) check commands executed during autonomous operation. The navigation and control unit is configured to receive multiple responses from the processor and execute the multiple responses on the humanoid robot. The humanoid robot acts individually or as a swarm.
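The two checks performed by the monitoring and safety unit can be sketched as validation against a set of permitted commands. The Python sketch below is a hypothetical illustration; the command names and whitelist are assumptions, not disclosed behavior.

```python
# Toy safety checks: (i) validate user commands, (ii) audit autonomous commands.
ALLOWED_COMMANDS = {"move_forward", "stop", "grip", "release"}  # illustrative


def check_user_command(command: str) -> bool:
    # (i) confirm the user gave a right command for the operational environment.
    return command in ALLOWED_COMMANDS


def check_autonomous(commands):
    # (ii) flag any command executed during autonomous operation that is unknown.
    return [c for c in commands if c not in ALLOWED_COMMANDS]


assert check_user_command("stop")
print(check_autonomous(["move_forward", "self_destruct"]))  # ['self_destruct']
```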
These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
As mentioned, there remains a need for a system for a humanoid robot that can perform a list of tasks efficiently under various working environmental conditions or one or more applications. The embodiments herein achieve this by providing a humanoid robot that automatically interacts with the working environmental condition or the one or more applications to perform the list of tasks using a cloud server and a user, acting autonomously or by manual operation. Referring now to the drawings, and more particularly to
Digital content may also be stored in the memory 1102 for future processing or consumption. The memory 1102 may also store program specific information and/or service information (PSI/SI), including information about digital content (e.g., the detected information bits) available in the future or stored from the past. A user of the personal communication device may view this stored information on the display 1106 and select an item for viewing, listening, or other uses via input, which may take the form of a keypad, scroll, or other input device(s) or combinations thereof. When digital content is selected, the processor 1110 may pass information. The content and PSI/SI may be passed among functions within the personal communication device using the bus 1104.
The techniques provided by the embodiments herein may be implemented on an integrated circuit chip (not shown). The chip design is created in a graphical computer programming language, and stored in a computer storage medium (such as a disk, tape, physical hard drive, or virtual hard drive such as in a storage access network). If the designer does not fabricate chips or the photolithographic masks used to fabricate chips, the designer transmits the resulting design by physical means (e.g., by providing a copy of the storage medium storing the design) or electronically (e.g., through the Internet) to such entities, directly or indirectly.
The stored design is then converted into the appropriate format (e.g., GDSII) for the fabrication of photolithographic masks, which typically include multiple copies of the chip design in question that are to be formed on a wafer. The photolithographic masks are utilized to define areas of the wafer (and/or the layers thereon) to be etched or otherwise processed.
The resulting integrated circuit chips can be distributed by the fabricator in raw wafer form (that is, as a single wafer that has multiple unpackaged chips), as a bare die, or in a packaged form. In the latter case the chip is mounted in a single chip package (such as a plastic carrier, with leads that are affixed to a motherboard or other higher level carrier) or in a multichip package (such as a ceramic carrier that has either or both surface interconnections or buried interconnections). In any case the chip is then integrated with other chips, discrete circuit elements, and/or other signal processing devices as part of either (a) an intermediate product, such as a motherboard, or (b) an end product. The end product can be any product that includes integrated circuit chips, ranging from toys and other low-end applications to advanced computer products having a display, a keyboard or other input device, and a central processor.
The embodiments herein can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment including both hardware and software elements. The embodiments that are implemented in software include, but are not limited to, firmware, resident software, microcode, etc. Furthermore, the embodiments herein can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. Software may be provided for drag-and-drop programming, and a specific operating system may be provided; this may also include a cloud-based service for virtual software processing/teleprocessing. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, remote controls, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
A representative hardware environment for practicing the embodiments herein is depicted in
The system further includes a user interface adapter 19 that may connect a keyboard 15, mouse 17, speaker 24, microphone 22, and/or other user interface devices, such as a touch screen device (not shown) or a remote control, to the bus 12 to gather user input. Additionally, a communication adapter 20 connects the bus 12 to a data processing network 25, and a display adapter 21 connects the bus 12 to a display device 23, which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
The humanoid robot 102 design is a common platform that can be automated and customized based on the specified task, providing greater flexibility to support military applications such as land mine detection and mapping of safe paths for soldiers and vehicles, to aid agriculture in deciding and applying the right amount of fertilizers and irrigation solutions, to assist rescue missions in locating humans, to support industrial safety monitoring in factories, and to help the disabled and elderly. The architecture for operation and control of the humanoid robot 102 can be used for, but is not limited to, autonomous cars, exoskeletons, prosthetics, drones, autonomous material handling systems, co-working robots, general autonomous machinery, and heavy vehicles and machines for logistics.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
7012/CHE/2015 | Dec 2015 | IN | national |
This application claims priority from PCT Patent Application number PCT/IN2016/050458 filed on Dec. 26, 2016, the complete disclosure of which, in its entirety, is herein incorporated by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/IN2016/050458 | 12/26/2016 | WO | 00 |