This application is related to co-owned and co-pending U.S. patent application Ser. No. 14/208,709 filed on Mar. 13, 2014 and entitled “TRAINABLE MODULAR ROBOTIC APPARATUS AND METHODS”, and co-owned and co-pending U.S. patent application Ser. No. 14/209,578 filed on Mar. 13, 2014 also entitled “TRAINABLE MODULAR ROBOTIC APPARATUS AND METHODS”, each incorporated herein by reference in its entirety. This application is also related to co-pending U.S. patent application Ser. No. 13/829,919, entitled “INTELLIGENT MODULAR ROBOTIC APPARATUS AND METHODS”, filed on Mar. 14, 2013, co-owned and co-pending U.S. patent application Ser. No. 13/830,398, entitled “NEURAL NETWORK LEARNING AND COLLABORATION APPARATUS AND METHODS”, filed on Mar. 14, 2013, co-owned and co-pending U.S. patent application Ser. No. 14/102,410, entitled “APPARATUS AND METHODS FOR HAPTIC TRAINING OF ROBOTS”, filed on Dec. 10, 2013, co-owned U.S. patent application Ser. No. 13/623,820, entitled “APPARATUS AND METHODS FOR ENCODING OF SENSORY DATA USING ARTIFICIAL SPIKING NEURONS”, filed Sep. 20, 2012 and issued as U.S. Pat. No. 9,047,568 on Jun. 2, 2015, co-owned U.S. patent application Ser. No. 13/540,429, entitled “SENSORY PROCESSING APPARATUS AND METHODS”, filed Jul. 2, 2012 and issued as U.S. Pat. No. 9,014,416 on Apr. 21, 2015, co-owned U.S. patent application Ser. No. 13/548,071, entitled “SPIKING NEURON NETWORK SENSORY PROCESSING APPARATUS AND METHODS”, filed Jul. 12, 2012 and issued as U.S. Pat. No. 8,977,582 on Mar. 10, 2015, co-owned and co-pending U.S. patent application Ser. No. 13/660,982, entitled “SPIKING NEURON SENSORY PROCESSING APPARATUS AND METHODS FOR SALIENCY DETECTION”, filed Oct. 25, 2012, co-owned and co-pending U.S. patent application Ser. No. 13/842,530, entitled “ADAPTIVE PREDICTOR APPARATUS AND METHODS”, filed Mar. 15, 2013, co-owned and co-pending U.S. patent application Ser. No. 13/918,338, entitled “ROBOTIC TRAINING APPARATUS AND METHODS”, filed Jun. 14, 2013, co-owned and co-pending U.S. patent application Ser. No. 13/918,298, entitled “HIERARCHICAL ROBOTIC CONTROLLER APPARATUS AND METHODS”, filed Jun. 14, 2013, Ser. No. 13/918,620, entitled “PREDICTIVE ROBOTIC CONTROLLER APPARATUS AND METHODS”, filed Jun. 14, 2013, co-owned and co-pending U.S. patent application Ser. No. 13/953,595, entitled “APPARATUS AND METHODS FOR CONTROLLING OF ROBOTIC DEVICES”, filed Jul. 29, 2013, Ser. No. 14/040,520, entitled “APPARATUS AND METHODS FOR TRAINING OF ROBOTIC CONTROL ARBITRATION”, filed Sep. 27, 2013, co-owned and co-pending U.S. patent application Ser. No. 14/088,258, entitled “DISCREPANCY DETECTION APPARATUS AND METHODS FOR MACHINE LEARNING”, filed Nov. 22, 2013, co-owned and co-pending U.S. patent application Ser. No. 14/070,114, entitled “APPARATUS AND METHODS FOR ONLINE TRAINING OF ROBOTS”, filed Nov. 1, 2013, co-owned and co-pending U.S. patent application Ser. No. 14/070,239, entitled “REDUCED DEGREE OF FREEDOM ROBOTIC CONTROLLER APPARATUS AND METHODS”, filed Nov. 1, 2013, co-owned and co-pending U.S. patent application Ser. No. 14/070,269, entitled “APPARATUS AND METHODS FOR OPERATING ROBOTIC DEVICES USING SELECTIVE STATE SPACE TRAINING”, filed Nov. 1, 2013, and co-owned U.S. patent application Ser. No. 13/841,980 entitled “ROBOTIC TRAINING APPARATUS AND METHODS”, filed on Mar. 15, 2013 and issued as U.S. Pat. No. 8,996,177 on Mar. 31, 2015, each of the foregoing being incorporated herein by reference in its entirety.
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
1. Field
The present disclosure relates to trainable modular robotic devices.
2. Description of the Related Art
Existing robotic devices may comprise a robotic platform (e.g., a body, a remote control (RC) car, a rover, and/or other platforms), one or more actuators embodied within the robotic platform, and an electronics module configured to control operation of the robotic device. The electronics module may be a printed circuit board with onboard integrated circuits (processors, flash memory, random access memory (RAM), and/or other), connectors for power, sensors, actuator interfaces, and/or data input/output.
One example of a robotic device is the Rover App-Controlled Wireless Spy Tank by Brookstone®, a popular mobile rover comprising a controller, a movement motor, a microphone, camera(s), a wireless interface, a battery, and other components embedded within the rover body. A user desiring different rover body functionality and/or a different body shape may be required to purchase a completely new rover (e.g., the Rover 2.0 App-Controlled Wireless Spy Tank). Embedding costly components (e.g., electronics, sensors, actuators, radios, and/or other components) within the robot's body may deter users from obtaining additional robotic bodies and reduce the reuse of costly components in another robot.
Thus, there is a salient need for improved robotic apparatus wherein high-cost components may be packaged in a module that may be interfaced to multiple robotic bodies. Ideally such improved apparatus and methods would also incorporate a highly modular and interchangeable architecture.
The present disclosure satisfies the foregoing needs by disclosing, inter alia, robotic apparatus and methods.
In one aspect of the present disclosure, a robotic apparatus is disclosed. In one embodiment, the robotic apparatus is operable to conduct one or more assigned tasks and comprises a control module configured to mate to an otherwise inoperable robotic body having one or more degrees of freedom, the control module further configured to produce one or more inputs and communicate the one or more inputs to the robotic body to enable it to conduct the one or more assigned tasks using at least one of the one or more degrees of freedom.
In one variant, the control module comprises a learning apparatus capable of being trained to conduct the one or more assigned tasks via at least feedback; wherein the robotic apparatus is configured to train the learning apparatus to conduct the one or more assigned tasks via the at least feedback.
In another variant, the robotic apparatus comprises one or more motive sources, and the one or more inputs comprise one or more mechanical force inputs.
In yet another variant, the robotic apparatus comprises a processor configured to operate an adaptive learning process in order to conduct the one or more assigned tasks, the adaptive learning process being characterized by a plurality of trials; an interface configured to provide at least one actuation output to the robotic body, the at least one actuation output comprising first and second portions configured to effectuate movement of first and second controllable elements of the robotic body, respectively; and a first actuator and a second actuator each in operable communication with the processor, the first and the second actuators being configured to provide the first and the second portions of the at least one actuation output, respectively; wherein the adaptive learning process is configured to determine, during a trial of the plurality of trials, the at least one actuation output, the at least one actuation output having a first trajectory associated therewith; and the adaptive learning process is further configured to determine, during a subsequent trial of the plurality of trials, another actuation output having a second trajectory associated therewith, the second trajectory being closer to a target trajectory of the one or more assigned tasks than the first trajectory.
In one variant, the first controllable element is configured to effectuate movement of the robotic body in a first degree of freedom (DOF); the second controllable element is configured to effectuate movement of the robotic body in a second DOF independent from the first DOF; and the first and the second portions of the at least one actuation output are configured based on one or more instructions from the processor.
In one variant, the operation of the adaptive learning process by the processor is configured based on one or more computer-executable instructions; and the processor is configured to upload the one or more computer-executable instructions to a computer readable medium disposed external to an enclosure of the control module.
In another variant, the movement in the first DOF of the first controllable element and the movement in the second DOF of the second controllable element cooperate to effectuate conduction of the one or more assigned tasks by the robotic body; the adaptive learning process comprises a haptic learning process characterized by at least a teaching input provided by a trainer; and the teaching input is configured based on an adjustment of the first trajectory via a physical contact of the trainer with the robotic body.
In one variant, the adjustment of the first trajectory is configured based at least on an observation of a discrepancy between the first trajectory and the target trajectory during the trial; and the adjustment of the first trajectory is configured to cause a modification of the learning process so as to determine a second control input configured to transition the first trajectory towards the target trajectory during another trial subsequent to the trial.
In one variant, the modification of the learning process is characterized by a determination of one or more values by the processor; the robotic apparatus is further configured to provide the another actuation output to another robotic body, the another actuation output being configured to effectuate movement of the another body in a first DOF; and the another actuation output is configured based on the one or more values.
In another variant, a detachable enclosure is configured to house a camera adapted to provide sensory input to the processor, the sensory input being used for determination of the at least one actuation output in accordance with a target task, and the one or more instructions are configured based on at least the sensory input.
In yet another variant, the processor is configured to receive an audio input, the audio input being used for determining the at least one actuation output in accordance with a target task, and the one or more instructions are configured based on at least the audio input.
In one variant, a detachable enclosure is configured to house a sound receiving module configured to effectuate provision of the audio input to the processor; and the audio input is configured based at least on a command of a trainer.
In a further variant, a detachable enclosure is configured to house one or more inertial sensors configured to provide information related to a movement characteristic of the robotic body to the processor; and the one or more instructions are configured based at least on the information.
In another variant, the processor is configured to determine a displacement of a first joint and a second joint associated, respectively, with the movement of the first controllable element in a first degree of freedom (DOF) and the movement of the second controllable element in a second DOF.
In one variant, the determination of the displacement is configured based at least on feedback information provided from the first and second actuators to the processor, and the feedback information comprises one or more of actuator displacement, actuator torque, and/or actuator current draw.
In yet another variant, the robotic body comprises an identifier configured to convey information related to a configuration of the robotic body; and the adaptive learning process is configured to adapt a parameter based on receipt of the information, the adaptation of the parameter being configured to enable the adaptive learning process to adapt the at least one actuation output consistent with the configuration of the robotic body.
In one variant, the configuration information comprises a number of joints of the robotic body; the information comprises a number and identification of degrees of freedom of the joints of the robotic body; the actuation output configured consistent with the configuration of the robotic body comprises an output configured to operate one or more joints of the robotic body in a respective degree of freedom; and the adaptation of the parameter is configured based on one or more instructions executed by the processor, the one or more instructions being related to two or more of the degrees of freedom of the joints of the robotic body.
In a further variant, the robotic apparatus further comprises an interface, the interface comprising a first interface portion comprising a shape; and a second interface portion particularly adapted to interface only with other interface portions comprising the shape; wherein a mating of the first and second interface portions is configured to animate the robotic body via at least a mechanical force transferred over the mated first and second interface portions.
In one variant, the first interface portion comprises a substantially male feature, and the second interface portion comprises a substantially female feature, the substantially male and substantially female features being configured to rigidly but separably attach to one another.
In another variant, the mated first and second interface portions comprise at least one mechanical interface configured to transfer a force, and at least one electrical interface configured to transfer electrical signals or power across the mated first and second interface portions; and the shape comprises a substantially male feature.
Further features and various advantages will be apparent from the accompanying drawings and the following detailed description.
All Figures disclosed herein are © Copyright 2014 Brain Corporation. All rights reserved.
Implementations of the present disclosure will now be described in detail with reference to the drawings, which are provided as illustrative examples so as to enable those skilled in the art to practice the principles and architectures described herein. Notably, the figures and examples below are not meant to limit the scope of the present disclosure to a single embodiment or implementation, but other embodiments and implementations are possible by way of interchange of or combination with some or all of the described or illustrated elements. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to same or like parts.
Where certain elements of these implementations can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the present disclosure will be described, and detailed descriptions of other portions of such known components will be omitted so as not to obscure the principles and architectures described herein.
In the present specification, an embodiment or implementation showing a singular component should not be considered limiting; rather, the disclosure is intended to encompass other embodiments or implementations including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein.
Further, the present disclosure encompasses present and future known equivalents to the components referred to herein by way of illustration.
As used herein, the term “bus” is meant generally to denote all types of interconnection or communication architecture that are used to access the synaptic and neuron memory. The “bus” could be optical, wireless, infrared or another type of communication medium. The exact topology of the bus could be for example a standard “bus”, hierarchical bus, network-on-chip, address-event-representation (AER) connection, or other type of communication topology used for accessing e.g., different memories in a pulse-based system.
As used herein, the terms “computer”, “computing device”, and “computerized device”, include, but are not limited to, personal computers (PCs) and minicomputers, whether desktop, laptop, or otherwise, mainframe computers, workstations, servers, personal digital assistants (PDAs), handheld computers, embedded computers, programmable logic devices, personal communicators, tablet computers, portable navigation aids, cellular telephones, smart phones, personal integrated communication or entertainment devices, or any other devices capable of executing a set of instructions and processing an incoming data signal.
As used herein, the term “program”, “computer program” or “software” is meant to include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, C#, Fortran, COBOL, MATLAB™, PASCAL, Python, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, and/or other), Binary Runtime Environment (e.g., BREW), and the like.
As used herein, the term “memory” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation: ROM, PROM, EEPROM, DRAM, Mobile DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), memristor memory, and PSRAM.
As used herein, the terms “microprocessor” and “digital processor” are meant generally to include all types of digital processing devices including, without limitation: digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microcontrollers, microprocessors, gate arrays (e.g., field programmable gate arrays (FPGAs)), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs). Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.
As used herein, the term “network interface” refers to any signal, data, or software interface with a component, network or process including, without limitation: those of the IEEE Std. 1394 (e.g., FW400, FW800, and/or other), USB (e.g., USB2), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), Thunderbolt™, 10-Gig-E, and/or other), Wi-Fi (802.11), WiMAX (802.16), PAN (e.g., 802.15), cellular (e.g., 3G, LTE/LTE-A/TD-LTE, GSM, and/or other) or IrDA families.
As used herein, the term “Wi-Fi” refers to, without limitation: any of the variants of IEEE-Std. 802.11 or related standards including 802.11 a/b/g/n/s/v.
As used herein, the term “wireless” means any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, and/or other), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/TD-LTE, analog cellular, CDPD, satellite systems, millimeter wave or microwave systems, acoustic, and infrared (i.e., IrDA).
Overview
Existing robotic device systems often have limited modularity, in part because most cost-bearing components are tightly integrated with the body of the robot. Barebones control boards, while offering flexibility in structure, may require significant engineering skill on the part of the user in order to install and integrate the board within a robot.
It will be apparent in light of the present disclosure that the aforementioned problem may be addressed by a modular robotic device architecture configured to separate all or most high cost components into one or more module(s) that is separate from the rest of the robotic body. By way of illustration, an autonomy module for a robotic toy stuffed animal (e.g., a teddy bear), configured to control head and limb actuators, sensors, a communication interface, and/or other components, may be configured to interface with the body of the robotic toy stuffed bear. In one embodiment, the autonomy module may comprise linear actuators with sensory feedback that may be connected to tendons within the bear limbs. In one or more implementations, the tendons may comprise one or more of a rope, a string, elastic, a rubber cord, a movable plastic connector, a spring, a metal and/or plastic wire, and/or other connective structure. During training and/or operation, the controller may position limbs of the toy in a target position. A user may utilize a haptic training approach (e.g., as described for example, in U.S. patent application Ser. No. 14/102,410, entitled “APPARATUS AND METHODS FOR HAPTIC TRAINING OF ROBOTS”, filed on Dec. 10, 2013, incorporated supra) in order to enable the robotic toy to perform one or more target action(s). During training, the user may apply corrections to the state of the robotic body (e.g., limb position) using physical contact (also referred to as the haptic action). The controller within the autonomy module may utilize the sensory feedback in order to determine user interference and/or infer a teaching signal associated therewith. The modular configuration of the disclosure enables users to replace one toy body (e.g., the bear) with another (e.g., a giraffe) while using the same hardware provided by the autonomy module.
Consolidation of high cost components (e.g., one or more processing modules, power conditioning and supply modules, motors with mechanical outputs, sensors, communication modules, and/or other components) within the autonomy module alleviates the need to provide high cost components within the body of a robot, thereby enabling robot manufacturers to reduce the cost of robotic bodies. Users may elect to purchase a single autonomy module (also referred to throughout as an AM) with, e.g., two or more inanimate robotic bodies, vehicles, and/or other bodies, thereby reducing overall cost of ownership and/or improving user experience. Each AM may interface with existing electro-mechanical appliances, thus enabling users to extract additional value from their purchase of an AM.
Detailed Description of the Exemplary Implementations
Exemplary implementations of the various facets of the disclosure are now described in detail. It will be appreciated that while described substantially in the context of modular robotic devices, the present disclosure is in no way so limited, the foregoing merely being but one possible approach. The principles and architectures described herein are contemplated for use with any number of different artificial intelligence, robotic or automated control systems.
The autonomy module (AM) 140 provides sensory, motor, and learning functionality associated with performing one or more tasks by the robotic toy 100 (e.g., dance, bend, turn, and/or other). The AM may comprise one or more actuators configured to operate the tendons of the figurine 110. The AM may further comprise a processing module configured to execute an adaptive control application (also referred to as a controller) in order to manipulate the actuator interfaces 142, 144. In some implementations, the actuators of the AM 140 may provide a mechanical activation signal, e.g., rotational and/or translational motion, via the interfaces 142, 144 to the controllable elements of the body 110. In one or more implementations, the actuators of the AM 140 may comprise one or more solenoids, and actuator operation may comprise application of electromagnetic energy. The controller may be programmable and/or teachable, for example through standard machine learning algorithms such as supervised learning, unsupervised learning, and reinforcement learning. The controller may be trained by the manufacturer of the robot and/or by the end user of the robot. In some implementations, the AM may house all motor, sensory, power, and processing components needed to operate one or more robotic bodies.
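By way of illustration only, the following Python sketch shows one possible organization of such an adaptive control application; the class and method names are hypothetical and do not correspond to any particular implementation of the AM 140:

    import numpy as np

    class AdaptiveController:
        """Minimal sketch of an AM control application: a linear mapping from
        sensory features to actuator commands, adapted online from a teaching signal."""

        def __init__(self, n_sensors, n_actuators, learning_rate=0.01):
            self.W = np.zeros((n_actuators, n_sensors))  # learned sensory-motor weights
            self.learning_rate = learning_rate

        def compute_output(self, sensory_input):
            """Map sensory input (e.g., camera features, joint feedback) to actuator commands."""
            return self.W @ np.asarray(sensory_input, dtype=float)

        def update(self, sensory_input, teaching_signal):
            """Supervised update: move the commanded output toward the teaching signal
            (e.g., an output inferred from a trainer's haptic correction)."""
            x = np.asarray(sensory_input, dtype=float)
            error = np.asarray(teaching_signal, dtype=float) - self.compute_output(x)
            self.W += self.learning_rate * np.outer(error, x)

In this sketch, supervised, unsupervised, and reinforcement learning would differ only in how the teaching signal (or reward) is obtained; the interface to the actuators 142, 144 remains the same.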
The figurine base 120 may be adapted to interface tendons to actuators within the AM 140, as shown by broken line 122 in
In some implementations, the AM 140 may comprise one or more sound wave and/or electromagnetic wave (e.g., radio frequency (RF)) sensing modules (not shown). An audio interface may be utilized in order to receive user generated auditory input during training and/or operation. The AM 140 may comprise one or more inertial motion sensors (e.g., 1, 2, 3 axis gyroscopes, accelerometers (e.g., micro-electrical mechanical systems (MEMS)), ultrasonic proximity sensors, and/or other sensors) that may be useful for determining the motion of the robot's body.
In implementations targeted at cost-conscious consumers, the robotic toy body (e.g., the figurine 110) may be available without sensors, actuators, processing, and/or power modules. In some implementations, wherein additional costs may be acceptable to users, the robotic body may be outfitted with additional sensors and/or motor actuators that may interface to the AM via one or more connectors (e.g., as shown and described in greater detail hereinafter).
In one or more implementations, an autonomy module (e.g., AM 140 in
The body 300 and head 322 may comprise one or more cameras and/or optical interfaces (not shown) configured to provide sensory input to the autonomy module 302. The sensory input may be used to e.g., implement stereo vision, object recognition (e.g., face of the user), control the head 322 to track an object (e.g., user's face), and/or other applications.
Various implementations may enable different types of bodies with the same AM module (for example, body 110 of
In one or more implementations, individual ones of the plurality of robotic bodies that may interface to a given AM module (e.g., the module 260 of
In some implementations, the AM described herein may be utilized in order to upgrade (retrofit) a remote controlled (RC) aerial vehicle (e.g., a plane, a blimp) wherein the original RC receiver may be augmented and/or replaced with the learning AM thereby turning the RC plane into an autonomous trainable aerial vehicle.
In another embodiment,
An autonomy module (e.g., AM 140, AM 200, AM 240, AM 260) may be configured to provide a mechanical output to one or more robotic bodies (e.g., giraffe figurine 110, stuffed bear 300, toy plane 340). In one or more implementations, the mechanical output may be characterized by one or more of a rotational momentum, a linear velocity, an angular velocity, a pressure, a linear force, a torque, and/or other parameter. The coupling mechanism between an AM and the body may comprise a mechanical, electrical, and/or electromechanical interface optimized for transmission of relevant parameters. In some variants, the coupling may be proprietary or otherwise specific to the application; in other variants, the coupling may be generic. In some implementations, portions of the coupling interface between the AM and the body may be configured to be mechanically adjusted relative to one another (e.g., via a slide-rotate, extend/contract, and/or other motion) in order to provide the target coupling.
In some implementations, a user and/or a manufacturer may modify (and/or altogether remove) the AM enclosure in order to attain target performance of the robot. By way of an illustration, the user may elect to remove the enclosure 272 of
Robotic bodies may comprise an identification means (e.g., element 150 of
The autonomy module 420 of
The AM 402 may comprise one or more coupling elements 432 denoted by open circles in
The one or more modules 512, 516 may provide sensory input. The sensory input may be used to e.g., implement stereo vision, perform object recognition (e.g., face of the user), control the robot's body in order to, for example, track an object (e.g., user's face), and/or other applications.
In some implementations, the sensing module (e.g., 516) may be coupled to an optical interface 506 (e.g., a waveguide, one or more mirrors, a lens, a light-pipe, a periscope, and/or other means). The interface 506 may conduct ambient light 522 (with respect to the enclosure 502) in a direction shown by arrow 524 to the sensing module 516. The AM 500 may comprise one or more light emitting modules, e.g., 518. In some implementations, the module 512 may comprise a light emitting diode (LED), a laser, and/or other light source. Output of the module 518 may be communicated via optical waveguide 508 (as shown by arrow 510) to a target location 520 (e.g., eye) within the robotic body. In one or more implementations, the light emitting and receiving sensors may be combined into a single module that may share one or more components, e.g., the lens and/or the input/output waveguide. In some implementations, the optical waveguide functionality may be implemented using one or more reflective surfaces (e.g., optical mirrors), transparent or translucent media, and/or other means. The AM 500 may be employed with a toy robot comprising eyes and/or light indicators. Use of a camera module 512, 516 may enable visual sensory functionality in the robot. The AM may acquire and form a hierarchy of sensory (e.g., visual or multi-sensory) features based on the observed spatio-temporal patterns in its sensory input, with or without external supervision. Sensory processing may be implemented using e.g., spiking neuron networks described, for example, in U.S. patent application Ser. No. 13/623,820, entitled “APPARATUS AND METHODS FOR ENCODING OF SENSORY DATA USING ARTIFICIAL SPIKING NEURONS”, filed Sep. 20, 2012, Ser. No. 13/540,429, entitled “SENSORY PROCESSING APPARATUS AND METHODS”, filed Jul. 2, 2012, Ser. No. 13/548,071, entitled “SPIKING NEURON NETWORK SENSORY PROCESSING APPARATUS AND METHODS”, filed Jul. 12, 2012, and Ser. No. 13/660,982, entitled “APPARATUS AND METHODS FOR ACTIVITY-BASED PLASTICITY IN A SPIKING NEURON NETWORK”, filed Oct. 25, 2012, each of the foregoing being incorporated herein by reference in its entirety.
The AM 500 may utilize a predictive capacity for the sensory and/or sensory-motor features. Predictive-based vision, attention, and feature selection based on relevance may provide context used to determine motor activation and/or task execution and planning. Some implementations of sensory context for training and predicting motor actions are described in U.S. patent application Ser. No. 13/842,530, entitled “ADAPTIVE PREDICTOR APPARATUS AND METHODS”, filed Mar. 15, 2013, Ser. No. 13/918,338, entitled “ROBOTIC TRAINING APPARATUS AND METHODS”, filed Jun. 14, 2013, Ser. No. 13/918,298, entitled “HIERARCHICAL ROBOTIC CONTROLLER APPARATUS AND METHODS”, filed Jun. 14, 2013, Ser. No. 13/918,620 entitled “PREDICTIVE ROBOTIC CONTROLLER APPARATUS AND METHODS”, filed Jun. 14, 2013, Ser. No. 13/953,595 entitled “APPARATUS AND METHODS FOR TRAINING AND CONTROL OF ROBOTIC DEVICES”, filed Jul. 29, 2013, each of the foregoing being incorporated herein by reference in its entirety.
In some implementations, the AM 500 may comprise one or more inertial motion sensors (e.g., 1, 2, 3 axis gyroscopes, accelerometers (e.g., MEMS), ultrasonic proximity sensors, and/or other sensors) that may be useful for determining motion of the robot's body.
Various methodologies may be employed in order to broaden functionality of the robotic bodies for a given AM.
The arm 600 may comprise a sensing module 630. In some implementations, the element 630 may comprise a video camera configured to provide visual input to a processor of the autonomy module. The arm 600 may be controlled to position the module 630 to implement e.g., face tracking. In one or more implementations, the sensing module 630 may comprise a radio frequency (RF) antenna configured to track an object.
In one or more implementations, another arm, characterized by a kinematic chain that may differ from the configuration of the arm 600 (e.g., comprising multiple articulated joints) may interface to the actuator 610. Additional actuators may be utilized with the arm in order to control additional DOF.
In some implementations, the robotic brain 712 interfaces with the mechanical components 718, sensory components 720, electrical components 722, power components 724, and network interface 726 via one or more driver interfaces and software abstraction layers. In one or more implementations, the power components 724 may comprise one or more of a direct current source, an alternating current source, a mechanical coupling, an energy accumulator (e.g., an electrical capacitor) and/or a mechanical one (e.g., a flywheel, a wind-up module), a wireless charger, a radioisotope thermoelectric generator, a thermocouple, a piezo-generator, a dynamo generator, a fuel cell, an internal or external combustion engine, a pneumatic, a hydraulic, and/or other energy source. In some implementations, the power components 724 may be built into the AM. In one or more implementations, the power components 724 may comprise a module that may be removed and/or replaced without necessitating disconnecting of the actuator interfaces (e.g., 142, 144 in
Additional processing and memory capacity may be used to support these processes. However, it will be appreciated that these components may be fully controlled by the robotic brain. The memory and processing capacity may also aid in management of the autonomy module (e.g., loading executable code (e.g., a computational brain image), replacing the code, executing operations during startup, and/or other operations). As used herein, a “computational brain image” may comprise executable code (e.g., binary image files), object code, bytecode, an array of weights for an artificial neuron network (ANN), and/or other computer formats.
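A minimal sketch of one possible representation of such a computational brain image is given below; the container layout and on-disk format (an .npz archive with a JSON header) are illustrative assumptions only, not a specification of any actual image format:

    import json
    import numpy as np

    class BrainImage:
        """Illustrative container for a computational brain image: learned ANN
        weights plus metadata describing the configuration it was trained for."""

        def __init__(self, weights, metadata):
            self.weights = weights      # dict: layer name -> numpy array of weights
            self.metadata = metadata    # e.g., {"dof": 2, "sensors": ["camera"], "version": 1}

        def save(self, path):
            """Persist the image, e.g., before uploading it to a cloud repository."""
            np.savez(path, header=json.dumps(self.metadata), **self.weights)

        @classmethod
        def load(cls, path):
            """Reload an image, e.g., when provisioning a freshly acquired AM."""
            data = np.load(path)
            metadata = json.loads(str(data["header"]))
            weights = {k: data[k] for k in data.files if k != "header"}
            return cls(weights, metadata)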
Consistent with the present disclosure, the various components of the device may be remotely disposed from one another, and/or aggregated. For example, robotic brain software may be executed on a server apparatus, and control the mechanical components of an autonomy module via a network or a radio connection. Further, multiple mechanical, sensory, or electrical units may be controlled by a single robotic brain via network/radio connectivity.
The mechanical components 718 may include virtually any type of component capable of motion (e.g., to move the robotic apparatus 700, and/or other.) or configured to perform a desired function or task. These may include, without limitation: motors, servos, pumps, hydraulics, pneumatics, stepper motors, rotational plates, micro-electro-mechanical devices (MEMS), electro-active polymers, and/or other motive components. The components interface with the robotic brain and enable physical interaction and manipulation of the device.
The sensory components 720 allow the robotic device to accept stimulus from external entities. These may include, without limitation: video, audio, haptic, capacitive, radio, accelerometer, ultrasonic, infrared, thermal, radar, lidar, sonar, and/or other sensing components.
The electrical components 722 include virtually any electrical component for interaction and manipulation of the outside world. These may include, without limitation: light/radiation generating components (e.g. light emitting diodes (LEDs), infrared (IR) sources, incandescent light sources, and/or other.), audio components, monitors/displays, switches, heating elements, cooling elements, ultrasound transducers, lasers, and/or other. Such components enable a wide array of potential applications in industry, personal hobbyist, building management, medicine, military/intelligence, and other fields (as discussed below).
The network interface includes one or more connections configured to interact with external computerized devices to allow for, inter alia, management and/or control of the robotic device. The connections may include any of the wireless or wireline interfaces discussed above, and further may include customized or proprietary connections for specific applications.
The power system 724 is configured to support various use scenarios of the device. For example, for a mobile robot, a wireless power solution (e.g. battery, solar cell, inductive (contactless) power source, rectification, and/or other.) may be appropriate. However, for fixed location applications which consume significant power (e.g., to move heavy loads, and/or other.), a wall power supply may be a better fit. In addition, in some implementations, the power system and/or power consumption may be configured in conjunction with the training of the robotic apparatus 700. Thus, the robot may improve its efficiency (e.g., to consider power consumption efficiency) through learned management techniques specifically tailored to the tasks performed by the robotic apparatus.
The apparatus 800 of
The coupling 826, 822 may provide one or more electrical signals (e.g., current, voltage, and/or other.), mechanical inputs (e.g., rotational momentum, linear velocity, angular velocity, pressure, force, torque, and/or other.), electromagnetic signals (e.g., light, radiation, and/or other.), mass transfer (e.g., pneumatic gas flow, hydraulic liquid flow, and/or other.) from one module (e.g., 820) of the apparatus 800 to another module (830) or vice versa, as denoted by arrows 824. For example, the AM 830 may receive feedback from the body 820 via the interface 822, 826. Thereafter, the processing module of the AM (e.g., 716 in
In one exemplary embodiment, the AM 830 may be configured to provide a mechanical output 824 to the robotic body 820. In one or more implementations, the mechanical output 824 may be characterized by one or more of a rotational momentum, a linear velocity, an angular velocity, a pressure, a linear force, a torque, and/or other parameter. The coupling mechanism between the AM and the body (e.g., 822, 826) may comprise a proprietary mechanical, electrical, and/or electromechanical interface optimized for transmission of relevant parameters. In some implementations, the interface portion 822 and/or 826 may be configured to be mechanically adjusted relative to the complementary interface portion (e.g., via a slide-rotate motion, extend/contract, and/or other.) in order to provide secure coupling.
By way of a non-limiting example,
It will be recognized by those skilled in the arts that coupling configurations shown and described above with respect to
In some implementations, two (or more) trained AM may exchange brain images with one another, e.g., as shown by arrow 918 in
By way of a non-limiting example, personnel of a hobby store may pre-train a given robot (e.g., the bear 300 of
In one or more applications that may require computational power in excess of that which may be provided by a processing module of the AM 910_2, the local computerized interface device 904 may be used to perform computations associated with training and/or operation of the robotic body coupled to the AM 910_2. The local computerized interface device 904 may comprise a variety of computing devices including, for example, a desktop PC, a laptop, a notebook, a tablet, a phablet, a smartphone (e.g., an iPhone®), a printed circuit board and/or a system on a chip (SOC) comprising one or more of a graphics processing unit (GPU), a field programmable gate array (FPGA), a multi-core central processing unit (CPU), an application specific integrated circuit (ASIC), and/or other computational hardware (e.g., a bitcoin mining card BitForce®).
In one exemplary embodiment, the configuration shown in
Robotic devices comprising an autonomy module of the present disclosure may be trained using online robot training methodologies described herein, so as to perform a target task in accordance with a target trajectory.
Training may be implemented using a variety of approaches including those described in U.S. patent application Ser. No. 14/040,520 entitled “APPARATUS AND METHODS FOR TRAINING OF ROBOTIC CONTROL ARBITRATION”, filed Sep. 27, 2013, Ser. No. 14/088,258 entitled “APPARATUS AND METHODS FOR TRAINING OF NOVELTY DETECTION IN ROBOTIC CONTROLLERS”, filed Nov. 22, 2013, Ser. No. 14/070,114 entitled “APPARATUS AND METHODS FOR ONLINE TRAINING OF ROBOTS”, filed Nov. 1, 2013, Ser. No. 14/070,239 entitled “REDUCED DEGREE OF FREEDOM ROBOTIC CONTROLLER APPARATUS AND METHODS”, filed Nov. 1, 2013, Ser. No. 14/070,269 entitled “APPARATUS AND METHODS FOR OPERATING ROBOTIC DEVICES USING SELECTIVE STATE SPACE TRAINING”, filed Nov. 1, 2013, and Ser. No. 14/102,410 entitled “APPARATUS AND METHODS FOR HAPTIC TRAINING OF ROBOTS”, filed Dec. 10, 2013, each of the foregoing being incorporated herein by reference in their entireties.
The training entity may comprise a human user and/or a computerized agent. During a given trial, the training entity may observe an actual trajectory of the robot, e.g., the trajectory 1142 during the trial 1124 in
In another example (not shown), the human user can train a manipulator arm based on haptic input. The haptic input may comprise the trainer grabbing and moving the arm along a target trajectory. The arm may be equipped with a force/torque sensor. Based on the sensor readings (from the force/torque vectors generated by the trainer), the controller infers the appropriate control commands that are configured to repeat the motion of the arm.
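One way such an inference might be sketched (in Python, with hypothetical gains and a six-axis force/torque reading; this is an illustrative admittance-style mapping rather than the specific method of the referenced applications) is to convert the measured wrench into a corrected pose that serves as the teaching signal:

    import numpy as np

    # Hypothetical gains; in practice these would be tuned to the arm in question.
    FORCE_GAIN = 0.02    # meters per Newton of applied force
    TORQUE_GAIN = 0.05   # radians per Newton-meter of applied torque

    def infer_teaching_pose(commanded_pose, wrench):
        """Infer the trainer-intended pose from a haptic correction.

        commanded_pose: 6-vector [x, y, z, roll, pitch, yaw] currently commanded.
        wrench: 6-vector [Fx, Fy, Fz, Tx, Ty, Tz] from the force/torque sensor; a
        non-zero wrench indicates the trainer is pushing the arm off the commanded
        pose toward the desired one. Returns the inferred pose, usable as a
        teaching signal for the controller's update step.
        """
        wrench = np.asarray(wrench, dtype=float)
        correction = np.concatenate([FORCE_GAIN * wrench[:3], TORQUE_GAIN * wrench[3:]])
        return np.asarray(commanded_pose, dtype=float) + correction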
Referring back to
Based on the training input associated with the state adjustment 1148, the controller of the robot infers the appropriate behavior of the robot. In some instances, the controller may further adjust its learning process to take into account the teaching input. For example, based on the adjusted learning process, robot action during a subsequent trial (e.g., 1126) may be characterized by the trajectory 1152 of the robot being closer to the target trajectory 1130 (e.g., the discrepancy 1150 for the trial 1126 being smaller than the discrepancy 1148 for the trial 1124).
Various approaches may be utilized in order to determine a discrepancy between the current trajectory and the target trajectory. In one or more implementations, the discrepancy may be represented as a measured distance, a normalized distance (“norm”), a maximum absolute deviation, a signed/unsigned difference, a correlation, a point-wise comparison, and/or a function of an n-dimensional distance (e.g., a mean squared error). In one or more implementations, the distance D between the actual state x and the predicted state xp may be determined as follows:
D = D(xp − x),  (Eqn. 1)
D = D(sign(xp) − sign(x)),  (Eqn. 2)
D = D(sign(xp − x)),  (Eqn. 3)
where D denotes an n-dimensional norm operation.
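As an illustrative sketch, the three variants of Eqn. 1-Eqn. 3 may be evaluated as follows, here taking the Euclidean norm as one possible choice for the n-dimensional norm operation D(·):

    import numpy as np

    def discrepancy(x_actual, x_predicted, variant=1):
        """Trajectory discrepancy per Eqn. 1-3, using the Euclidean norm for D(.)."""
        x = np.asarray(x_actual, dtype=float)
        xp = np.asarray(x_predicted, dtype=float)
        if variant == 1:      # Eqn. 1: norm of the state difference
            return np.linalg.norm(xp - x)
        if variant == 2:      # Eqn. 2: norm of the difference of the state signs
            return np.linalg.norm(np.sign(xp) - np.sign(x))
        if variant == 3:      # Eqn. 3: norm of the sign of the state difference
            return np.linalg.norm(np.sign(xp - x))
        raise ValueError("variant must be 1, 2, or 3")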
Exemplary Methods
In some implementations, methods 1200, 1300 may be implemented using one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of methods 1200, 1300 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of methods 1200, 1300.
At operation 1202 an autonomy module may be adapted to comprise two or more actuators, a controller, and/or a sensor. The controller may be configured to operate individual actuators so as to control two or more degrees of freedom distinct from one another. In some implementations, the two or more DOF may comprise motion with respect to two or more orthogonal axes, two or more motions of different kinematics (e.g., translation and rotation), and/or other.
At operation 1204 the AM may be coupled to a first robotic body. In some implementations, the first body may comprise, e.g., a robotic toy (e.g., a giraffe), a plane, a car and/or other. Coupling may be effectuated using a dedicated interface (e.g., a combination of proprietary locking male/female connectors). The first body may comprise two or more elements configured to be operated in first and second DOF that are distinct kinematically from one another.
At operation 1206 the controller of the AM may be trained to operate the first body. The operation may comprise manipulating the two or more elements of the first body in the first and the second DOF to accomplish a task (e.g., manipulating a two joint arm to touch a target).
At operation 1208 the AM may be coupled to a second robotic body. In some implementations, the second body may comprise, e.g., a robotic toy (e.g., a giraffe), a plane, a car, and/or other. Coupling may be effectuated using a dedicated interface (e.g., a combination of proprietary locking male/female connectors). The second body may be characterized by a kinematic chain that is configured differently from the kinematic chain of the first body, e.g., two single-joint arms vs. one arm with two individually controlled joints, a four-limbed animal (e.g., the bear 300) vs. a plane 340, and/or other configurations. The second body may comprise two or more elements configured to be operated in at least two of the first, the second, and a third DOF that are distinct kinematically from one another.
At operation 1210 the controller of the AM may be trained to operate the second body. The operation may comprise manipulating the two or more elements of the second body in two DOF to accomplish a task (e.g., manipulating two single-joint arms to touch a target).
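The following Python sketch illustrates operations 1204-1210 under the assumption of hypothetical body descriptors conveying the number and identity of controllable DOF (e.g., as might be reported by an identifier on the body); it is not a specification of the actual AM interface:

    import numpy as np

    # Hypothetical body descriptors; an identifier on the body could convey this
    # information (number and identity of controllable DOF) to the AM.
    GIRAFFE_BODY = {"name": "giraffe", "dof": 2, "actuators": ["neck", "tail"]}
    PLANE_BODY = {"name": "plane", "dof": 3, "actuators": ["throttle", "elevator", "rudder"]}

    class AutonomyModule:
        """Sketch of an AM that re-sizes its control mapping whenever a body with a
        (possibly different) kinematic configuration is attached."""

        def __init__(self, n_sensors):
            self.n_sensors = n_sensors
            self.weights = None   # sensory-motor mapping, sized to the attached body
            self.body = None

        def attach_body(self, body_descriptor):
            """Allocate one output channel per controllable DOF of the new body."""
            self.body = body_descriptor
            self.weights = np.zeros((body_descriptor["dof"], self.n_sensors))

        def step(self, sensory_input):
            """Map sensory input to per-actuator commands for the attached body."""
            commands = self.weights @ np.asarray(sensory_input, dtype=float)
            return dict(zip(self.body["actuators"], commands))

    am = AutonomyModule(n_sensors=4)
    am.attach_body(GIRAFFE_BODY)   # operation 1204; training trials (operation 1206) follow
    am.attach_body(PLANE_BODY)     # operation 1208; training trials (operation 1210) follow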
At operation 1301 of method 1300, a context is determined. In some implementations, the context may comprise one or more aspects of sensory input and/or feedback that may be provided by the robot platform to the controller. In one or more implementations, the sensory aspects may include: detection of an object, a location of an object, an object characteristic (color/shape), a sequence of movements (e.g., a turn), a sensed characteristic of an environment (e.g., an apparent motion of a wall and/or other surroundings during a turn and/or approach) responsive to a movement, and/or other. In some implementations, the sensory input may be collected while performing one or more training trials of the robotic apparatus.
At operation 1302 of method 1300, the robot is operated in accordance with an output determined by a learning process of the robot based on the context. For example, referring back to
At operation 1304 of method 1300, the state of the robot is observed by the trainer. In one implementation, the state may represent the position of a rover along a trajectory (for example, in
At operation 1306 of method 1300, a teaching input is provided to the robot when the trainer modifies the robot's state via e.g., physical contact with the robot platform. In some implementations, the physical contact comprises a haptic action which may include one or more of a push, a pull, a movement (e.g., pick up and move, move forward, backwards, rotate, reach for an object, pick up, grasp, manipulate, release, and/or other movements), a bump, moving the robot or a portion thereof along a target trajectory, holding the robot in place, and/or other physical interaction of the trainer with the robot. In manipulator arm embodiments, training with haptic input may comprise the trainer grabbing and moving the arm along the target trajectory.
At operation 1308 of method 1300, the learning process of the robot is updated based on the training input (e.g., haptic input). In one or more implementations, the learning process may comprise a supervised learning process configured based on the teaching signal. In some embodiments, the teaching signal may be inferred from a comparison of the robot's actual state with a predicted state (for example, based on Eqn. 1-Eqn. 3, and/or other). During subsequent time instances, the robot may be operated in accordance with the output of the updated learning process (for example, as previously discussed with respect to
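A single trial of this training flow may be sketched as follows; the controller object and the three callables are hypothetical stand-ins for platform-specific interfaces, and the teaching output is assumed to be inferred from the haptic adjustment (e.g., as in the force/torque sketch above):

    import numpy as np

    def run_training_trial(controller, get_context, execute, observe_correction):
        """One trial of method 1300 (operations 1301-1308), sketched.

        controller: object exposing compute_output(context) and update(context, target).
        get_context: returns the current sensory context (operation 1301).
        execute: applies the commanded output to the platform and returns the
            resulting state (operation 1302).
        observe_correction: returns a teaching output inferred from the trainer's
            haptic adjustment, or None if no correction occurred (operations 1304-1306).
        """
        context = get_context()
        commanded = controller.compute_output(context)   # operation 1302
        actual_state = execute(commanded)
        teaching_output = observe_correction()           # operations 1304-1306
        if teaching_output is not None:
            # Operation 1308: adapt the learning process toward the trainer's correction.
            controller.update(context, np.asarray(teaching_output, dtype=float))
        return actual_state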
Exemplary Uses and Applications
In some implementations, the autonomy module (AM) may house motor, sensory, power, and processing components needed to operate one or more robotic bodies. The robotic bodies may comprise one or more swappable limbs, and/or other body parts configured to interface to the AM. In one or more implementations, the AM may be adapted to interface to existing robotic bodies (e.g., for retro-fitting of existing robotic bodies with newer AMs). In such implementations, the AM may provide power, processing, and/or learning capabilities to existing non-learning robots.
A variety of connectivity options may be employed in order to couple an AM to a body of a robot including, for example, screws, bolts, rivets, solder, glue, epoxy, zip-ties, thread, wire, friction, pressure, suction, and/or other means for attachment. The modular architecture described herein may be utilized with a variety of robotic devices such as e.g., inanimate toys, robotic manipulators, appliances, and/or vehicles.
A variety of business methods may be utilized in connection with trainable modular robotic devices. In some implementations, a supplier (e.g., Brain Corporation) may develop, build, and provide complete AM modules (e.g., 200, 240, 260) to one or more clients (e.g., original equipment manufacturers (OEMs) such as a toy manufacturing company), and/or to resellers and/or distributors (e.g., Avnet, Arrow, Amazon). The client may install the AM within one or more trainable robotic toys. An agreement between the supplier and the client may comprise a provision for recurring maintenance and updates of the AM software (e.g., drivers for new sensors and actuators, updates to processing and/or learning code, and/or other).
In one or more implementations, the supplier may provide a client (e.g., the OEM) with a bare-bones AM kit (e.g., a chipset with processing, memory, and software) while the client may (under a license) add sensors, actuators, a power source, and/or an enclosure.
In one or more implementations, the supplier may provide an ASIC, a software library, and/or a service to develop a custom hardware and/or software solution for the client (e.g., provide a demo mode for a robotic toy to enable customers to evaluate the toy in a retail environment).
Cloud Management—
Various implementations of the present disclosure may utilize cloud based network architecture for managing of controller code (e.g., computational brain images). As individual users (or groups of users) begin creating computational brain images through the training process, different tasks related to computational brain image management (e.g., storage, backup, sharing, purchasing, merging, and/or other operations) are performed. User experience with respect to these tasks is at least partly dependent on the ease with which they are performed, and the efficacy of the systems provided for their completion. Cloud-based architectures allow a user to protect and share their work easily, because computational brain images are automatically remotely stored and are easily retrieved from any networked location. The remote storage instantly creates a spatially diverse backup copy of a computational brain image. This decreases the chance of lost work. In various implementations, a computational brain image stored on a server is also available in any location in which a user has access to an internet connection. As used herein, the term cloud architecture is used to generally refer to any network server managed/involved system (generally provided by a 3rd party service). This may refer to connecting to a single static server or to a collection of servers (potentially interchangeable) with dynamic storage locations for user content.
It will be appreciated that while the term “user” as discussed herein is primarily contemplated to be a human being, it is also contemplated that users may include artificially intelligent apparatus themselves. For instance, in one exemplary training paradigm of the disclosure, a human being trains a first learning controller of an AM apparatus (or group of apparatus), the latter of which are then used to train other “untrained” controllers, thereby in effect leveraging the training model so as to permit much more rapid and pervasive training of a large number of controller apparatus such as e.g., robots (i.e., the training process then goes “viral”).
Referring back to
For shared applications, a user may designate computational brain images to upload and download from the cloud server. To designate computational brain images for download, the user browses the computational brain image content of the cloud server via the interface device 904 or via a browser application on another mobile device or computer. The user then selects one or more computational brain images. The computational brain images may be transmitted for local storage on the AM of a robotic device, user interface device, portable storage medium (e.g., a flash memory), and/or other computerized storage.
The computational brain images displayed in the browser may be filtered to aid in browsing and/or selection of the appropriate computational brain image. Text or other searches may be used to locate computational brain images with certain attributes. These attributes may be identified for example via metadata (e.g. keywords, descriptions, titles, tags, user reviews/comments, trained behaviors, popularities, or other metadata) associated with the computational brain image file. Further, in some implementations, computational brain images may be filtered for compatibility with the hardware of the AM and/or robotic platform (e.g. processor configuration, memory, on board sensors, cameras, servos, microphones, or any other device on the robotic apparatus). In various ones of these implementations, the cloud server connects to the AM apparatus (or otherwise accesses information about the apparatus, such as from a network server, cloud database, or other user device) to collect hardware information and other data needed to determine compatibility. In some implementations, the interface device 904 collects and sends this information. In some implementations, the user inputs this information via the browser. Thus, the user (or administrator of the cloud server 906) may control which computational brain images are displayed during browsing. Hardware (and software) compatibility may be judged in a binary fashion (i.e. any hardware mismatch is deemed incompatible), or may be listed on a scale based on the severity of the mismatch. For example, a computational brain image with training only to identify red balls is not useful without a color sensing capability. However, a computational brain image that controls legs but not sound sensors may still be used for a device with legs and a sound sensor. The cloud process (or user interface device) may also be configured to assist the user in “fixing” the incompatibilities; e.g., links or other resources to identify a compatible computational brain image.
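A simplified sketch of such binary compatibility filtering is given below; the metadata fields shown are hypothetical and serve only to illustrate the check:

    def is_compatible(image_metadata, am_hardware):
        """Binary compatibility check between a brain image and an AM/robotic platform.

        Example (hypothetical) metadata:
          image_metadata = {"requires": {"camera", "servo"}, "min_memory_mb": 64}
          am_hardware    = {"devices": {"camera", "servo", "microphone"}, "memory_mb": 128}
        """
        missing = set(image_metadata.get("requires", ())) - set(am_hardware.get("devices", ()))
        enough_memory = am_hardware.get("memory_mb", 0) >= image_metadata.get("min_memory_mb", 0)
        return not missing and enough_memory

    def filter_images(images, am_hardware):
        """Return only the brain images the browser should display for this AM."""
        return [img for img in images if is_compatible(img["metadata"], am_hardware)]

A graded variant would replace the boolean result with a mismatch score reflecting the severity of each incompatibility, consistent with the scaled approach described above.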
In some implementations, the cloud server may aid in the improvement of “brain” operation. In an exemplary implementation, the cloud server receives network operating performance information from a brain, and determines how to improve brain performance by adapting the brain's current network image. This may be achieved via e.g., an optimization done in the cloud, or the cloud server may retrieve the optimization algorithms for the local hardware, and provide it to the customer's own computer. In some implementations, the cloud server may optimize performance by providing a new image to the brain that has improved performance in similar situations. The cloud may act as a repository of computational brain images, and select which image(s) is/are appropriate for a particular robot in a particular situation. Such optimization may be provided as a paid service, and/or under one or more other paradigms such as an incentive, on-demand model, or even under a barter system (e.g., in trade for another brain or optimization). In some implementations, users pay a one-time fee to receive an optimized image. In various implementations, users may subscribe to an optimization service and receive periodic updates. In some implementations, a subscription user may be given an assurance that for any given task, the cloud server provides the most optimized image currently known/available.
In various implementations, the performance metrics may be supplied by routines running on the brain or related hardware. For example, a brain may be trained to perform a specific action, and to determine its speed/efficiency in performing the action. These data may be sent to the cloud server for evaluation. In some implementations, an isolated set of routines (running on the same or separate hardware) monitors brain function. Such separated routines may be able to determine performance even in the case in which the brain itself is malfunctioning (rather than just having limited performance). Further, the user of the brain may use search terms based on performance metrics to find candidate/suggested brains meeting certain criteria. For example, the user may wish to find a computational brain image capable of doing a specific task twice as fast/efficiently as a currently loaded image.
To this end, in the exemplary implementations, computational brain images may be uploaded/stored as full or partial images. Full images may be loaded onto an autonomy module (AM) and run as a self-sufficient control application. Partial images may lack the full set of functions necessary to run certain features of the robotic device. Thus, partial images may be used to augment or upgrade (or downgrade) a pre-loaded computational brain image or a stored computational brain image. It will be appreciated that a full computational brain image for a first device may serve as a partial computational brain image for a second device that has all of the functionality of the first plus additional features. In some implementations, two or more partial computational brain images may be combined to form a full computational brain image.
Brain merges using the methods discussed above may also be used for combining computational brain images with conflicting or overlapping traits. In various implementations, these merge techniques may also be used to form full computational brain images from partial computational brain images.
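The following toy sketch shows one way such a merge could combine two partial images. Representing an image as a mapping from module name to a weight vector, and averaging the weights of overlapping modules, are assumptions made purely for illustration; the merge methods referenced above are not limited to this scheme.

```python
def merge_partial_images(image_a: dict, image_b: dict) -> dict:
    """Combine two partial images; overlapping modules are blended, while
    non-overlapping modules are copied from whichever image provides them."""
    merged = {}
    for module in set(image_a) | set(image_b):
        if module in image_a and module in image_b:
            # Conflicting/overlapping trait: blend the two weight vectors.
            merged[module] = [(wa + wb) / 2.0
                              for wa, wb in zip(image_a[module], image_b[module])]
        else:
            merged[module] = list(image_a.get(module, image_b.get(module)))
    return merged


# Example: a "legs" controller plus a "sound" module yield a fuller image.
full_image = merge_partial_images({"legs": [0.1, 0.4]}, {"sound": [0.7, 0.2]})
```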
In some embodiments, user accounts are linked to registered AM apparatus and a registered user (or users). During registration, the user provides personally identifiable information, and for access to purchasable content, financial account information may be required. Various embodiments may additionally incorporate authentication and security features using a number of tools known to those of skill in the art, given the contents of the present disclosure. For example, secure socket layer (SSL) or transport layer security (TLS) connections may be used to protect personal data during transfer. Further, cryptographic hashes may be used to protect data stored on the cloud servers. Such hashing may further be used to protect purchasable or proprietary computational brain images (or other content) from theft.
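The hashing portion of the above may be illustrated with the Python standard library as shown below; where the digest is stored and how it is compared against the transferred file are assumptions of this example rather than requirements of the disclosure.

```python
import hashlib

def image_digest(path: str) -> str:
    """Compute a SHA-256 digest of a stored computational brain image file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path: str, expected_digest: str) -> bool:
    """Reject images whose recorded hash no longer matches the file contents
    (indicating tampering or corruption)."""
    return image_digest(path) == expected_digest
```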
For shared and purchasable content, the network validates computational brain images to ensure that malicious, corrupted, or otherwise non-compliant images are not passed between users via the cloud system. In one implementation, an application running on the cloud server extracts the synaptic weight values from the computational brain image, and creates a new file. Thus, corrupted code in auxiliary portions of a computational brain image is lost. A variety of methodologies may be utilized in order to determine whether the computational brain image is compliant, including, e.g., hash value computation (e.g., a checksum), credentials verification, and/or other methods. In some implementations, various checksums are used to verify the integrity of the user-uploaded images. Various implementations further require the AM apparatus to have internet connectivity for uploading computational brain images. Thus, the cloud server may create computational brain images directly from the AM apparatus for sharing purposes. In such cases, the cloud server may require that the AM apparatus meet certain requirements for connectivity (e.g., updated firmware, no third-party code or hardware, and/or other requirements).
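One possible form of the weight-extraction step is sketched below. The JSON container format and the "weights"/"checksum" keys are assumptions of this example; the point is only that a fresh file is created from the weight values, so any auxiliary content of the upload is discarded, and that a checksum guards the payload.

```python
import json
import zlib

def sanitize_image(uploaded_path: str, clean_path: str) -> bool:
    """Copy only the synaptic weight values into a freshly created file and
    verify a checksum over the payload; return False if the check fails."""
    with open(uploaded_path, "r") as f:
        uploaded = json.load(f)
    weights = uploaded["weights"]
    payload = json.dumps(weights, sort_keys=True).encode()
    if zlib.crc32(payload) != uploaded.get("checksum"):
        return False  # corrupted or non-compliant upload
    with open(clean_path, "w") as f:
        json.dump({"weights": weights, "checksum": zlib.crc32(payload)}, f)
    return True
```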
The exemplary cloud server may also provide computational assistance to a brain to expand the size of the neural network that a given brain may simulate. For example, if a brain is tasked with an operation it has failed to complete with its current computing resources or current computational brain image, it may request assistance from the cloud server. In some implementations, the cloud server may suggest/initiate the assistance. In implementations in which the cloud server monitors the performance of the brain (or is otherwise privy to performance metrics), the cloud server may identify that the image necessary to perform a given task is beyond the hardware capabilities of a given brain. Once the deficiency is identified, the cloud server may provide a new image and the computational resources needed to run the image. In some implementations, the cloud computing expansion may be initiated by a request for improved performance rather than by a deficiency that precludes operation. A cloud server operator may provide the expanded computing functionality as a paid service (examples of paid services include usage-based, subscription, one-time payment, or other payment models).
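The deficiency check and the resulting request may be pictured as follows; the resource fields and the returned plan are placeholders rather than a defined API, and a deployed system would negotiate the expanded (possibly paid) resources with the cloud server.

```python
def needs_cloud_assistance(image_req: dict, hardware: dict) -> bool:
    """True when the image needed for a task exceeds the local hardware's
    neuron count or memory budget."""
    return (image_req["neurons"] > hardware["max_neurons"]
            or image_req["memory_mb"] > hardware["memory_mb"])

def plan_execution(image_req: dict, hardware: dict) -> dict:
    """Run locally when possible; otherwise flag the task for cloud-assisted
    execution so the server can supply the image and computing resources."""
    if needs_cloud_assistance(image_req, hardware):
        return {"mode": "cloud_assisted", "request": image_req}
    return {"mode": "local", "request": image_req}
```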
In various implementations, cloud computing power may be provided by ad hoc distributed computing environments such as those based on the Berkeley Open Infrastructure for Network Computing (BOINC) platform. Myriad distributed implementations for brains may be used, such as those described in U.S. Provisional Patent Application Ser. No. 61/671,434, filed on Jul. 13, 2012, entitled “INTELLIGENT MODULAR ROBOTIC APPARATUS AND METHODS”, now U.S. patent application Ser. No. 13/829,919 filed on Mar. 14, 2013, entitled “INTELLIGENT MODULAR ROBOTIC APPARATUS AND METHODS” and/or U.S. patent application Ser. No. 13/830,398, entitled “NEURAL NETWORK LEARNING AND COLLABORATION APPARATUS AND METHODS”, filed on Mar. 14, 2013, each of the foregoing previously incorporated herein in its entirety.
In some implementations, the trainable modular robotic device architecture described herein may afford development and use of robots via social interaction. For example, with reference to
In some implementations, a storefront is provided as a user interface to the cloud. From the storefront, users may access purchasable content (e.g., computational brain images, upgrades, alternate firmware packages). Purchasable content allows users to conveniently obtain quality content to enhance their user experience; the quality may be controlled under any number of different mechanisms, such as peer review, user rating systems, functionality testing before the image is made accessible, etc. In some cases, users may prefer different starting points in training. Some users prefer to begin with a clean slate, or to use only their own computational brain images as starting points. Other users may prefer not to have to redo training that has already been (properly or suitably) performed. Thus, these users appreciate having easy access to quality-controlled purchasable content.
The cloud may act as an intermediary that links images with tasks, and users with images, to facilitate exchange of computational brain images/training routines. For example, a robot of a user may have difficulty performing a certain task. A developer may have an image well suited for the task, but may not have access to individual robots/users. A cloud service may notify the user about the relevant images suited to the task. In some implementations, the users may request assistance with the task. In various implementations, the cloud server may be configured to identify users training brains for specific tasks (via one or more monitoring functions), and alert users that help may be available. The notification may be based on one or more parameters. Examples of parameters may include the hardware/software configuration of the brain, functional modules installed on the robot, sensors available for use, kinetic configuration (how the robot moves), geographical location (e.g., proximity of user to developer), keywords, or other parameters. Further, in the case of training routines, the developer may wish to develop images suitable for a variety of robot configurations. Thus, the developer may be particularly interested in sharing a training routine in exchange for a copy of the user's computational brain image once the training routine is complete. The developer then has an expanded library of pre-trained image offerings to service future requests. In various implementations, one or more of the developer and/or trainer(s) for a given hardware configuration may receive compensation for their contributions.
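A rough sketch of the matching step is given below. The parameter names mirror the examples in the text (hardware, sensors, keywords, task), but the weighting and threshold are arbitrary choices made only for illustration.

```python
def match_score(user_robot: dict, developer_image: dict) -> int:
    """Score how well a developer's image matches a user's robot configuration."""
    score = 0
    score += 2 * (user_robot["hardware"] == developer_image["hardware"])
    score += len(set(user_robot["sensors"]) & set(developer_image["sensors"]))
    score += len(set(user_robot["keywords"]) & set(developer_image["keywords"]))
    return score

def users_to_notify(users: list, developer_image: dict, min_score: int = 2) -> list:
    """Users currently training the matching task receive an alert that a
    suitable image (or assistance) may be available."""
    return [u for u in users
            if u["training_task"] == developer_image["task"]
            and match_score(u["robot"], developer_image) >= min_score]
```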
In some approaches a subscription model may be used for access to content. In various implementations, a user gains access to content based on a periodic payment to the administrator of the networked service. A hybrid model may also be used. An initial/periodic subscription fee allows access to general material, but premium content requires a specific payment.
Other users who develop skill in training, or those who develop popular computational brain images, may wish to monetize their creations. The exemplary storefront implementation provides a platform to enable such enterprises. Operators of storefronts may desire to encourage such enterprise both for revenue generation and for enhanced user experience. For example, in one such model, the storefront operator may institute competitions with prizes for the most popular/optimized computational brain images, modifications, and/or media. Consequently, users are motivated to create higher-quality content. The operator may also (or in lieu of a contest) institute a system of revenue and/or profit sharing for purchasable content. Thus, hobbyists and casual developers may see a reasonable return on their efforts. Such a system may also attract professional developers. Users as a whole may benefit from a wider array of content offerings from more skilled developers. Further, such revenue or profit sharing may be complemented or replaced with a system of internal credits for developers. Thus, contributors have expanded access to paid or otherwise limited-distribution materials.
In various implementations, the cloud model may offer access to competing provider systems of computational brain images. A user may be able to reprogram/reconfigure the software elements of the system to connect to different management systems. Thus, competing image provision systems may spur innovation. For example, image provision systems may offer users more comprehensive packages ensuring access to computational brain images optimized for a wide variety of tasks to attract users to their particular provision network, and (potentially) expand their revenue base.
Various aspects of the present disclosure may advantageously be applied to, inter alia, the design and operation of reconfigurable and/or modular robotic devices.
By way of an illustration, a user may purchase multiple robotic bodies (e.g., a giraffe, a lion, a dinosaur, and/or other bodies) for use with a given AM. Upon training the giraffe to perform a particular task (e.g., a dance), the user may swap the giraffe body for the lion body. An app store may enable the user to search for code for an already-trained learning controller for the lion body that is compatible with the user's AM. The user may purchase, trade, and/or otherwise obtain the trained controller in order to utilize it with the new robotic body.
It is noteworthy that different robotic bodies (giraffe, lion) and/or different configurations of a given body (e.g., arm with a tendon attached at a variety of locations as shown in
In some implementations of the modular robotic device architecture described herein, two or more entities may provide individual components of a modular robot. A primary entity, for example, Brain Corporation, may provide the AM and/or the associated computational brain images. One or more other entities (e.g., third parties specializing in toy, plane, or appliance manufacturing) may provide robotic bodies that may be compatible with a given AM. The one or more third parties may obtain a license from Brain Corporation in order to interface robotic bodies to the AM. In some implementations, the licensing agreement may include access to a proprietary AM-body interface.
Training of robotic devices outfitted with a learning autonomy module may be facilitated using various interactions of user and robot. By way of an illustration, the user may utilize voice commands (e.g., approach, avoid), gestures, audible signals (e.g., a whistle or clap), light pointers, RF transmitters (e.g., a clicker described in U.S. patent application Ser. No. 13/841,980, entitled “ROBOTIC TRAINING APPARATUS AND METHODS”, filed on Mar. 15, 2013), and/or other signals. A robot trained to avoid red objects and approach green objects may initiate execution of the respective task upon determining a given context (e.g., a green ball in one or more images provided by the robot's camera). The task execution may commence absent an explicit command by the user.
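The context-triggered behavior may be illustrated with the toy detector below; the color heuristic and the returned action labels are stand-ins, and a deployed robot would use its own camera pipeline and trained classifier.

```python
def dominant_color(pixels):
    """pixels: iterable of (r, g, b) tuples; return 'red', 'green', or None."""
    reds = sum(1 for r, g, b in pixels if r > 200 and g < 100 and b < 100)
    greens = sum(1 for r, g, b in pixels if g > 200 and r < 100 and b < 100)
    if greens > reds and greens > 0:
        return "green"
    if reds > 0:
        return "red"
    return None

def act_on_context(pixels) -> str:
    """Approach green objects and avoid red ones without an explicit command."""
    color = dominant_color(pixels)
    if color == "green":
        return "approach"
    if color == "red":
        return "avoid"
    return "idle"
```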
It will be recognized that while certain aspects of the disclosure are described in terms of a specific sequence of steps of a method, these descriptions are only illustrative of the broader methods of the disclosure, and may be modified as required by the particular application. Certain steps may be rendered unnecessary or optional under certain circumstances. Additionally, certain steps or functionality may be added to the disclosed implementations, or the order of performance of two or more steps permuted. All such variations are considered to be encompassed within the disclosure presented herein.
While the above detailed description has shown, described, and pointed out novel features of the disclosure as applied to various implementations, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the disclosure. The foregoing description is of the best mode presently contemplated of carrying out the principles and architectures described herein. This description is in no way meant to be limiting, but rather should be taken as illustrative of the general principles of the disclosure. The scope of the disclosure should be determined with reference to the claims.
This application is a divisional of and claims the benefit of priority to co-owned and co-pending U.S. patent application Ser. No. 14/209,826 filed on Mar. 13, 2014, and entitled “TRAINABLE MODULAR ROBOTIC APPARATUS”, which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4600355 | Johnson | Jul 1986 | A |
4687457 | Milner | Aug 1987 | A |
4762455 | Coughlan et al. | Aug 1988 | A |
4820233 | Weiner | Apr 1989 | A |
4853771 | Witriol et al. | Aug 1989 | A |
4889027 | Yokoi | Dec 1989 | A |
5042807 | Sasakawa et al. | Aug 1991 | A |
5063603 | Burt | Nov 1991 | A |
5083803 | Blake et al. | Jan 1992 | A |
5355435 | Deyong et al. | Oct 1994 | A |
5369497 | Allen et al. | Nov 1994 | A |
5378188 | Clark | Jan 1995 | A |
5638359 | Peltola et al. | Jun 1997 | A |
5652594 | Costas | Jul 1997 | A |
5673367 | Buckley | Sep 1997 | A |
5673387 | Chen et al. | Sep 1997 | A |
5875108 | Hoffberg et al. | Feb 1999 | A |
6009418 | Cooper | Dec 1999 | A |
6014653 | Thaler | Jan 2000 | A |
6061088 | Khosravi et al. | May 2000 | A |
6084373 | Goldenberg et al. | Jul 2000 | A |
6124541 | Lu | Sep 2000 | A |
6253058 | Murasaki et al. | Jun 2001 | B1 |
6259988 | Galkowski et al. | Jul 2001 | B1 |
6338013 | Ruffner | Jan 2002 | B1 |
6411055 | Fujita et al. | Jun 2002 | B1 |
6429291 | Turley et al. | Aug 2002 | B1 |
6435936 | Rehkemper et al. | Aug 2002 | B1 |
6458157 | Suaning | Oct 2002 | B1 |
6504610 | Bauer et al. | Jan 2003 | B1 |
6545705 | Sigel et al. | Apr 2003 | B1 |
6545708 | Tamayama et al. | Apr 2003 | B1 |
6546291 | Merfeld et al. | Apr 2003 | B2 |
6547631 | Randall | Apr 2003 | B1 |
6560511 | Yokoo | May 2003 | B1 |
6565407 | Woolington et al. | May 2003 | B1 |
6570608 | Tserng | May 2003 | B1 |
6581046 | Ahissar | Jun 2003 | B1 |
6615108 | Peless et al. | Sep 2003 | B1 |
6633232 | Trajkovic et al. | Oct 2003 | B2 |
6682392 | Chan | Jan 2004 | B2 |
6697711 | Yokono | Feb 2004 | B2 |
6760645 | Kaplan | Jul 2004 | B2 |
6774908 | Bates et al. | Aug 2004 | B2 |
6780042 | Badescu et al. | Aug 2004 | B1 |
7023833 | Aiello et al. | Apr 2006 | B1 |
7054850 | Matsugu | May 2006 | B2 |
7235013 | Kobayashi | Jun 2007 | B2 |
7418320 | Bodin et al. | Aug 2008 | B1 |
7565203 | Greenberg et al. | Jul 2009 | B2 |
7742625 | Pilu | Jun 2010 | B2 |
7765029 | Fleischer et al. | Jul 2010 | B2 |
7849030 | Ellingsworth | Dec 2010 | B2 |
8015130 | Matsugu et al. | Sep 2011 | B2 |
8015785 | Walker et al. | Sep 2011 | B2 |
8145355 | Danko | Mar 2012 | B2 |
8145492 | Fujita | Mar 2012 | B2 |
8154436 | Szajnowski | Apr 2012 | B2 |
8157612 | Rehkemper et al. | Apr 2012 | B2 |
8281997 | Moran et al. | Oct 2012 | B2 |
8295955 | Dibernardo et al. | Oct 2012 | B2 |
8315305 | Petre et al. | Nov 2012 | B2 |
8346692 | Rouat et al. | Jan 2013 | B2 |
8401242 | Newcombe et al. | Mar 2013 | B2 |
8467623 | Izhikevich et al. | Jun 2013 | B2 |
8467823 | Seki et al. | Jun 2013 | B2 |
8515160 | Khosla et al. | Aug 2013 | B1 |
8527094 | Kumar et al. | Sep 2013 | B2 |
8542872 | Gornick et al. | Sep 2013 | B2 |
8571261 | Gagvani et al. | Oct 2013 | B2 |
8578810 | Donhowe | Nov 2013 | B2 |
8583286 | Fleischer et al. | Nov 2013 | B2 |
8712939 | Szatmary et al. | Apr 2014 | B2 |
8712941 | Izhikevich et al. | Apr 2014 | B2 |
8719199 | Izhikevich et al. | May 2014 | B2 |
8725658 | Izhikevich et al. | May 2014 | B2 |
8725662 | Izhikevich et al. | May 2014 | B2 |
8731295 | Schepelmann et al. | May 2014 | B2 |
8756183 | Daily et al. | Jun 2014 | B1 |
8775341 | Commons | Jul 2014 | B1 |
8793205 | Fisher et al. | Jul 2014 | B1 |
8880222 | Kawamoto et al. | Nov 2014 | B2 |
8943008 | Ponulak et al. | Jan 2015 | B2 |
8954193 | Sandin et al. | Feb 2015 | B2 |
8972315 | Szatmary et al. | Mar 2015 | B2 |
8977582 | Richert | Mar 2015 | B2 |
8983216 | Izhikevich et al. | Mar 2015 | B2 |
8990133 | Ponulak et al. | Mar 2015 | B1 |
8996177 | Coenen | Mar 2015 | B2 |
9002511 | Hickerson et al. | Apr 2015 | B1 |
9043952 | Sandin et al. | Jun 2015 | B2 |
9508235 | Suessemilch et al. | Nov 2016 | B2 |
20010020944 | Brown | Sep 2001 | A1 |
20010045809 | Mukai | Nov 2001 | A1 |
20020038294 | Matsugu | Mar 2002 | A1 |
20020072293 | Beyo et al. | Jun 2002 | A1 |
20020081937 | Yamada et al. | Jun 2002 | A1 |
20020156556 | Ruffner | Oct 2002 | A1 |
20020158599 | Fujita | Oct 2002 | A1 |
20020183895 | Kaplan et al. | Dec 2002 | A1 |
20020198854 | Berenji | Dec 2002 | A1 |
20030050903 | Liaw et al. | Mar 2003 | A1 |
20030222987 | Karazuba | Dec 2003 | A1 |
20030232568 | Engel et al. | Dec 2003 | A1 |
20040016638 | Laconti et al. | Jan 2004 | A1 |
20040100563 | Sablak et al. | May 2004 | A1 |
20040153211 | Kamoto et al. | Aug 2004 | A1 |
20040158358 | Anezaki et al. | Aug 2004 | A1 |
20040162638 | Solomon | Aug 2004 | A1 |
20040193670 | Langan et al. | Sep 2004 | A1 |
20040204792 | Taylor et al. | Oct 2004 | A1 |
20040212148 | Losey et al. | Oct 2004 | A1 |
20040220082 | Surmeier et al. | Nov 2004 | A1 |
20040244138 | Taylor et al. | Dec 2004 | A1 |
20050010331 | Taylor et al. | Jan 2005 | A1 |
20050012830 | Pilu | Jan 2005 | A1 |
20050015351 | Nugent | Jan 2005 | A1 |
20050022751 | Nelson | Feb 2005 | A1 |
20050036649 | Yokono et al. | Feb 2005 | A1 |
20050049749 | Watanabe et al. | Mar 2005 | A1 |
20050065651 | Ayers et al. | Mar 2005 | A1 |
20050209749 | Ito et al. | Sep 2005 | A1 |
20050240412 | Fujita | Oct 2005 | A1 |
20050283450 | Matsugu et al. | Dec 2005 | A1 |
20060069448 | Yasui | Mar 2006 | A1 |
20060161218 | Danilov | Jul 2006 | A1 |
20070008405 | Benosman et al. | Jan 2007 | A1 |
20070037475 | Spear | Feb 2007 | A1 |
20070176643 | Nugent | Aug 2007 | A1 |
20070208678 | Matsugu | Sep 2007 | A1 |
20070239315 | Sato et al. | Oct 2007 | A1 |
20070244610 | Ozick et al. | Oct 2007 | A1 |
20070258329 | Winey | Nov 2007 | A1 |
20080039974 | Sandin et al. | Feb 2008 | A1 |
20080170130 | Ollila et al. | Jul 2008 | A1 |
20080201282 | Garcia et al. | Aug 2008 | A1 |
20080294074 | Tong et al. | Nov 2008 | A1 |
20090014402 | Wolf et al. | Jan 2009 | A1 |
20090043722 | Nugent | Feb 2009 | A1 |
20090066790 | Hammadou | Mar 2009 | A1 |
20090118890 | Lin et al. | May 2009 | A1 |
20090141939 | Chambers et al. | Jun 2009 | A1 |
20090153499 | Kim et al. | Jun 2009 | A1 |
20090161981 | Allen | Jun 2009 | A1 |
20090287624 | Rouat et al. | Nov 2009 | A1 |
20090310862 | Tu et al. | Dec 2009 | A1 |
20100036780 | Angelov | Feb 2010 | A1 |
20100086171 | Lapstun | Apr 2010 | A1 |
20100091286 | Dahlgren | Apr 2010 | A1 |
20100152894 | Ha | Jun 2010 | A1 |
20100166320 | Paquier | Jul 2010 | A1 |
20100228418 | Whitlow et al. | Sep 2010 | A1 |
20100250022 | Hines et al. | Sep 2010 | A1 |
20100283853 | Acree | Nov 2010 | A1 |
20100286824 | Solomon | Nov 2010 | A1 |
20100290710 | Gagvani et al. | Nov 2010 | A1 |
20100292835 | Sugiura et al. | Nov 2010 | A1 |
20100316257 | Xu et al. | Dec 2010 | A1 |
20110016071 | Guillen et al. | Jan 2011 | A1 |
20110078717 | Drummond et al. | Mar 2011 | A1 |
20110119214 | Breitwisch et al. | May 2011 | A1 |
20110119215 | Elmegreen et al. | May 2011 | A1 |
20110134245 | Khizhnichenko | Jun 2011 | A1 |
20110178658 | Kotaba et al. | Jul 2011 | A1 |
20110184556 | Seth et al. | Jul 2011 | A1 |
20110222832 | Aizawa | Sep 2011 | A1 |
20110228742 | Honkasalo et al. | Sep 2011 | A1 |
20110235698 | Petre et al. | Sep 2011 | A1 |
20110245974 | Kawamoto et al. | Oct 2011 | A1 |
20120011090 | Tang et al. | Jan 2012 | A1 |
20120063736 | Simmons et al. | Mar 2012 | A1 |
20120081552 | Sablak et al. | Apr 2012 | A1 |
20120083982 | Bonefas et al. | Apr 2012 | A1 |
20120098933 | Robinson et al. | Apr 2012 | A1 |
20120109866 | Modha | May 2012 | A1 |
20120109886 | Ko | May 2012 | A1 |
20120117012 | Szatmary et al. | May 2012 | A1 |
20120143495 | Dantu | Jun 2012 | A1 |
20120173021 | Tsusaka | Jul 2012 | A1 |
20120185092 | Ku | Jul 2012 | A1 |
20120196679 | Newcombe et al. | Aug 2012 | A1 |
20120209428 | Mizutani | Aug 2012 | A1 |
20120209432 | Fleischer et al. | Aug 2012 | A1 |
20120211923 | Garner et al. | Aug 2012 | A1 |
20120215348 | Skrinde | Aug 2012 | A1 |
20120303091 | Izhikevich | Nov 2012 | A1 |
20120308076 | Piekniewski et al. | Dec 2012 | A1 |
20120308136 | Izhikevich | Dec 2012 | A1 |
20120330872 | Esser et al. | Dec 2012 | A1 |
20130046716 | Chan et al. | Feb 2013 | A1 |
20130073491 | Izhikevich et al. | Mar 2013 | A1 |
20130073496 | Szatmary et al. | Mar 2013 | A1 |
20130073500 | Szatmary et al. | Mar 2013 | A1 |
20130077597 | Nukala et al. | Mar 2013 | A1 |
20130103626 | Hunzinger | Apr 2013 | A1 |
20130116827 | Inazumi | May 2013 | A1 |
20130117212 | Hunzinger et al. | May 2013 | A1 |
20130151450 | Ponulak | Jun 2013 | A1 |
20130176423 | Rischmuller et al. | Jul 2013 | A1 |
20130204814 | Hunzinger et al. | Aug 2013 | A1 |
20130204820 | Hunzinger et al. | Aug 2013 | A1 |
20130216144 | Robinson et al. | Aug 2013 | A1 |
20130218821 | Szatmary et al. | Aug 2013 | A1 |
20130226342 | Green et al. | Aug 2013 | A1 |
20130245937 | Dibernardo et al. | Sep 2013 | A1 |
20130251278 | Izhikevich et al. | Sep 2013 | A1 |
20130314502 | Urbach et al. | Nov 2013 | A1 |
20130325768 | Sinyavskiy et al. | Dec 2013 | A1 |
20130325773 | Sinyavskiy et al. | Dec 2013 | A1 |
20130325774 | Sinyavskiy et al. | Dec 2013 | A1 |
20130325775 | Sinyavskiy et al. | Dec 2013 | A1 |
20140008496 | Ye et al. | Jan 2014 | A1 |
20140016858 | Richert | Jan 2014 | A1 |
20140032021 | Metzler et al. | Jan 2014 | A1 |
20140078343 | Dai et al. | Mar 2014 | A1 |
20140085545 | Tu et al. | Mar 2014 | A1 |
20140089232 | Buibas et al. | Mar 2014 | A1 |
20140175267 | Thiel et al. | Jun 2014 | A1 |
20140198838 | Andrysco et al. | Jul 2014 | A1 |
20140240492 | Lee et al. | Aug 2014 | A1 |
20140247325 | Wu et al. | Sep 2014 | A1 |
20140276951 | Hourtash | Sep 2014 | A1 |
20140277718 | Izhikevich et al. | Sep 2014 | A1 |
20140313032 | Sager et al. | Oct 2014 | A1 |
20140320668 | Kalevo et al. | Oct 2014 | A1 |
20140350722 | Skrinde | Nov 2014 | A1 |
20150042485 | Suessemilch et al. | Feb 2015 | A1 |
20150157182 | Noh et al. | Jun 2015 | A1 |
20150168954 | Hickerson et al. | Jun 2015 | A1 |
20150234385 | Sandin et al. | Aug 2015 | A1 |
20150362919 | Bernstein et al. | Dec 2015 | A1 |
20160104044 | Noh et al. | Apr 2016 | A1 |
20160179096 | Bradlow et al. | Jun 2016 | A1 |
Number | Date | Country |
---|---|---|
102226740 | Oct 2011 | CN |
H0487423 | Mar 1992 | JP |
2108612 | Apr 1998 | RU |
WO-2008083335 | Jul 2008 | WO |
WO-2010136961 | Dec 2010 | WO |
Entry |
---|
Mircea Badescu and Constantinos Mavroidis, Novel Smart Connector for Modular Robotics, Aug. 7, 2002, Advanced Intelligent Mechatronics, 2001. Proceedings. 2001 IEEE/ASME International Conference on. |
Jain, Learning Trajectory Preferences for Manipulators via Iterative Improvement, Jun. 2013. |
PR2 User Manual, Oct. 5, 2012. |
Alexandros Bouganis and Murray Shanahan, “Training a Spiking Neural Network to Control a 4-DoF Robotic Arm based on Spike Timing-Dependent Plasticity”, Proceedings of WCCI 2010 IEEE World Congress on Computational Intelligence, CCIB, Barcelona, Spain, Jul. 18-23, 2010, pp. 4104-4111. |
Asensio et al., “Robot Learning Control Based on Neural Network Prediction”, ASME 8th Annual Dynamic Systems and Control Conference joint with the JSME 11th Motion and Vibration Conference 2012 [Retrieved on: Jun. 24, 2014]. Retrieved from the Internet: http://msc.berkeley.edu/wjchen/publications/DSC12.sub.--8726.sub.--FI-.pdf. |
Bill Steele, The Human Touch Makes Robots Defter, Nov. 6, 2013, Cornell Chronicle. http://www.news.cornell.edu/stories/2013/11/human-touch-makes-robots-defter. |
Bohte, ‘Spiking Neural Networks’ Doctorate at the University of Leiden, Holland, Mar. 5, 2003, pp. 1-133 [retrieved on Nov. 14, 2012]. Retrieved from the Internet: http://homepages.cwi.nl/˜sbohte/publication/phdthesis.pdf. |
Brette et al., Brian: a simple and flexible simulator for spiking neural networks, The Neuromorphic Engineer, Jul. 1, 2009, pp. 1-4, doi: 10.2417/1200906.1659. |
Cuntz et al., ‘One Rule to Grow Them All: A General Theory of Neuronal Branching and Its Paractical Application’ PLOS Computational Biology, 6 (8), Published Aug. 5, 2010. |
Davison et al., PyNN: a common interface for neuronal network simulators, Frontiers in Neuroinformatics, Jan. 2009, pp. 1-10, vol. 2, Article 11. |
Djurfeldt, Mikael, The Connection-set Algebra: a formalism for the representation of connectivity structure in neuronal network models, implementations in Python and C++, and their use in simulators BMC Neuroscience Jul. 18, 2011 p. 1 12(Suppl 1):P80. |
Fidjeland, et al., “Accelerated Simulation of Spiking Neural Networks Using GPUs,” WCCI 2010 IEEE World Congress on Computational Intelligence, Jul. 18-23, 2010—CCIB, Barcelona, Spain, pp. 536-543, [retrieved on Nov. 14, 2012]. Retrieved from the Internet: URL:http://www.doc.ic.ac.ukl-mpsha/IJCNN10b.pdf. |
Floreano et al., ‘Neuroevolution: from architectures to learning’ Evol. Intel. Jan. 2008 1:47-62, [retrieved Dec. 30, 2013]. Retrieved online from URL:http://infoscience.epfl.ch/record/112676/files/FloreanoDuerrMattiussi2008.pdf. |
Gewaltig et al., ‘NEST (Neural Simulation Tool)’, Scholarpedia, 2007, pp. 1-15, 2(4): 1430, doi: 10.4249/scholarpedia.1430. |
Gleeson et al., NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail, PLoS Computational Biology, Jun. 2010, pp. 1-19 vol. 6 Issue 6. |
Goodman et al., Brian: a simulator for spiking neural networks in Python, Frontiers in Neuroinformatics, Nov. 2008, pp. 1-10, vol. 2, Article 5. |
Gorchetchnikov et al., NineML: declarative, mathematically-explicit descriptions of spiking neuronal networks, Frontiers in Neuroinformatics, Conference Abstract: 4th INCF Congress of Neuroinformatics, doi: 10.3389/conf.fninf.2011.08.00098. |
Graham, Lyle J., The Surf-Hippo Reference Manual, http://www.neurophys.biomedicale.univparis5. fr/graham/surf-hippo-files/Surf-Hippo%20Reference%20Manual.pdf, Mar. 2002. pp. 1-128. |
Hardware and Software Platform for Mobile Manipulation R&D, 2012, https://web.archive.org/web/20120128031010/http://www.willowgarage.com/pages/pr2/design. |
Huh et al., “Generalized Power Law for Curve Movements” 2011. |
Huh et al., “Real-Time Motor Control Using Recurrent Neural Networks” IEEE Apr. 2009. |
Huh, “Rethinking Optimal Control of Human Movements” Thesis 2012. |
Ishii K., et al., Designing Laser Gesture Interface for Robot Control, Springer Berlin Heidelberg, Proceedings, Part II 12th IFIP TC 13 International Conference, Uppsala, Sweden, Aug. 24-28, 2009, Proceedings, pp. 479-492. |
Izhikevich E.M. (2006) Polychronization: Computation With Spikes. Neural Computation, 18:245-282. |
Izhikevich et al., ‘Relating STDP to BCM’, Neural Computation (2003) 15, 1511-1523. |
Izhikevich, ‘Simple Model of Spiking Neurons’, IEEE Transactions on Neural Networks, vol. 14, No. 6, Nov. 2003, pp. 1569-1572. |
Jain, Learning Trajectory Preferences for Manipulators via Iterative Improvement, 2013, Advances in Neural Information Processing Systems 26 (NIPS 2013). |
Karbowski et al., ‘Multispikes and Synchronization in a Large Neural Network with Temporal Delays’, Neural Computation 12. 1573-1606 (2000). |
Kasabov, “Evolving Spiking Neural Networks for Spatio-and Spectro-Temporal Pattern Recognition”, IEEE 6th International Conference Intelligent Systems 2012 [Retrieved on Jun. 24, 2014]. Retrieved from the Internet: http://ncs.ethz.ch/projects/evospike/publications/evolving-spiking-neural-networks-for-spatio-and-spectro-temporal-pattern-recognition-plenary-talk-ieee-is/view. |
Khotanzad, ‘Classification of invariant image representations using a neural network’ IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, No. 6, Jun. 1990, pp. 1028-1038 [online], [retrieved on Dec. 10, 2013]. Retrieved from the Internet <URL: http://www-ee.uta.edu/eeweb/IP/Courses/SPR/Reference/Khotanzad.pdf. |
Laurent, ‘Issue 1—nnql Refactor Nucleus into its own file—Neural Network Query Language’ [retrieved on Nov. 12, 2013]. Retrieved from the Internet: URL:https://code.google.com/p/nnql/issues/detail?id=1. |
Laurent, ‘The Neural Network Query Language (NNQL) Reference’ [retrieved on Nov. 12, 2013]. Retrieved from the Internet: <URL: https://code.google.com/p/nnql/issues/detail?id=1>. |
Mordatch et al., “Discovery of Complex Behaviors through Contact-Invariant Optimization” ACM Transactions on Graphics (TOG)—SIGGRAPH 2012 Conference. |
Nichols, A Reconfigurable Computing Architecture for Implementing Artificial Neural Networks on FPGA, Master's Thesis, The University of Guelph, 2003, pp. 1-235. |
Paugam-Moisy et al., “Computing with spiking neuron networks” G. Rozenberg T. Back, J. Kok (Eds.), Handbook of Natural Computing, Springer-Verlag (2010) [retrieved Dec. 30, 2013], [retrieved online from link.springer.com ]. |
Pavlidis et al., Spiking neural network training using evolutionary algorithms. In: Proceedings 2005 IEEE International Joint Conference on Neural Networks, 2005. IJCNN'05, vol. 4, pp. 2190-2194, Publication Date Jul. 31, 2005 [online] [Retrieved on Dec. 10, 2013]. Retrieved from the Internet URL: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.5.4346&rep=rep1&type=pdf. |
Pierre-Philippe Coupard, An Availabot-like computer-controlled push puppet for Linux, https://web.archive.org/web/20081106161941/http://myspace.voo.be/pcoupard/push_puppet_to_y/, 2008. |
Schaal et al., An Example Application of Policy Improvement with Path Integrals (PI.sup.2), Jun. 9, 2010. |
Schemmel, J., Grübl, A., Meier, K., Mueller, E.: Implementing synaptic plasticity in a VLSI spiking neural network model. In: Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN'06), IEEE Press (2006) Jul. 16-21, 2006, pp. 1-6 [online], [retrieved on Aug. 24, 2012]. Retrieved from the Internet URL: http://www.kip.uniheidelberg.de/veroeffentlichungen/download.cgi/4620/ps/1774.pdf> Introduction. |
Simulink.RTM. model [online], [Retrieved on Dec. 10, 2013]. Retrieved from <URL: http://www.mathworks.com/products/simulink/index.html>. |
Sinyavskiy et al. ‘Reinforcement learning of a spiking neural network in the task of control of an agent in a virtual discrete environment’ Rus, J. Nonlin. Dyn., 2011, vol. 7, No. 4 (Mobile Robots), pp. 859-875, chapters 1-8 (Russian Article with English Abstract). |
Sjostrom et al., ‘Spike-Timing Dependent Plasticity’ Scholarpedia, 5(2):1362 (2010), pp. 1-18. |
Suzuki et al., Operation Direction to a Mobile Robot by Projection Lights, 2005 IEEE Workshop on Advanced Robotics and its Social Impacts, Jun. 12-15, 2005, pp. 160-165. |
Szatmary et al., “Spike-timing Theory of Working Memory” PLoS Computational Biology, vol. 6, Issue 8, Aug. 19, 2010 [retrieved on Dec. 30, 2013]. Retrieved from the Internet: URL: http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000879. |
Tank D.W., et al., “Neural Computation by Concentrating Information in Time,” Proceedings of the National Academy of Sciences of the United States of America, 1987, vol. 84 (7), pp. 1896-1900. |
Todorov E., “Direct Cortical Control of Muscle Activation in Voluntary Arm Movements: a Model.,” Nature Neuroscience, 2000, vol. 3 (4), pp. 391-398. |
Baluja S., et al., “Expectation-based Selective Attention for Visual Monitoring and Control of a Robot Vehicle,” Robotics and Autonomous Systems, 1997, pp. 329-344. |
Brette, et al., “Simulation of Networks of Spiking Neurons: A Review of Tools and Strategies”, Received Nov. 29, 2006, Revised Apr. 2, 2007, Accepted Apr. 12, 2007, Springer Science, 50 pages. |
Chistiakova, Marina, et al., “Heterosynaptic plasticity in the neocortex.” Experimental brain research 199.3-4 (2009): 377-390. |
Daniel Bush, “STDP, Rate-coded Hebbian Learning and Auto-Associative Network Models of the Hippocampus”, Sep. 2008, University of Sussex, pp. 1-109. |
Fidjeland et al., Accelerated Simulation of Spiking Neural Networks Using GPUs [online], 2010 [retrieved on Jun. 15, 2013]. Retrieved from the Internet: URL:http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5596678&tag=1. |
Fletcher, L., et al., “Correlating Driver Gaze with the Road Scene for Driver Assistance Systems,” Robotics and Autonomous Systems, 2005, pp. 71-84. |
Glackin, C. et al., Feature Extraction from Spectro-temporal Signals using Dynamic Synapses, recurrency, and lateral inhibition, Neural Networks (IJCNN), The 2010 International Joint Conference on DOI: 10.1109/IJCNN.2010.5596818 Publication Year: 2010, pp. 1-6. |
International Search Report and Written Opinion for Application No. PCT/US2014/026738, dated Jul. 21, 2014, 10 pages. |
International Search Report for Application No. PCT/US2014/026738, dated Jul. 21, 2014, 2 pages. |
International Search Report for Application No. PCT/US2014/026685, dated Oct. 3, 2014, 4 pages. |
Itti, Laurent, et al., “Computational Modelling of Visual Attention”, Nature Reviews—Neuroscience 2.3 (2001): 194-203. |
Izhikevich, E.M. (2007) Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, The MIT Press, 2007. |
Izhikevich E.M., “Neural Excitability, Spiking and Bursting”, Neurosciences Institute, Received Jun. 9, 1999, Revised Oct. 25, 1999, 1171-1266, 96 pages. |
Judd, T., et al., “Learning to Predict where Humans look,” 12th International Conference on Computer Vision, 2009, 8 pages. |
Kazantsev, et al., “Active Spike Transmission in the Neuron Model With a Winding Threshold Manifold”, 01/03112, 205-211, 7 pages. |
Kienzle, W. et al., “How to find interesting locations in video: a spatiotemporal point detector learned from human eye movements.” Joint Pattern Recognition Symposium. Springer Berlin Heidelberg (2007) 10 pp. |
Kling-Petersen, PhD, “Sun and HPC: From Systems to PetaScale” Sun Microsystems, no date, 31 pages. |
Knoblauch A., et al., “Memory Capacities for Synaptic and Structural Plasticity,” Neural Computation, 2010, vol. 22 (2), pp. 289-341. |
Leydesdorff L., et al., “Classification and Powerlaws: The Logarithmic Transformation, Journal of the American Society for Information Science and Technology (forthcoming)”, 2006. |
Markram, Henry, et al. “Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs.” Science 275.5297 (1997): 213-215. |
Martinez-Perez, et al., “Automatic Activity Estimation Based on Object Behavior Signature”, 2010, 10 pages. |
Matsugu, et al., “Convolutional Spiking Neural Network for Robust Object Detection with Population Code Using Structured Pulse Packets”, 2004, 39-55, 17 pages. |
Medin I.C., et al., Modeling Cerebellar Granular layer Excitability and Combinatorial Computation with Spikes, Bio-Inspired Computing: Theories and Applications (BIC-TA), 2010 IEEE Fifth International Conference on DOI: 10.1109/BICTA.2010.5645274, Publication Year: 2010, pp. 1495-1503. |
Meinhardt, et al., “Pattern formation by local self-activation and lateral inhibition.” Bioessays 22.8 (2000): 753-760. |
Munn, S., et al., “Fixation-identification in Dynamic Scenes: Comparing an Automated Algorithm to Manual Coding,” Proceedings of the 5th symposium on Applied Perception in Graphics and Visualization, 2008, pp. 33-42. |
Niv, et al., Evolution of Reinforcement Learning in Uncertain Environments: A Simple Explanation for Complex Foraging Behaviors, International Society for Adaptive Behavior, 2002, vol. 10(1), pp. 5-24. |
Ostojic, Srdjan, Nicolas Brunel, From Spiking Neuron Models to Linear-Nonlinear Models, Jan. 2011, vol. 7 (1), e1001056. |
Paugam-Moisy, et al., “Computing with Spiking Neuron Networks” Handbook of Natural Computing, 40 pages Springer, Heidelberg (2009). |
Pham et al., “Affine Invariance of Human Hand Movements: a direct test” 2012. |
Ramachandran, et al., “The Perception of Phantom Limbs”, The D.O. Hebb Lecture, Center for Brain and Cognition, University of California, 1998, 121, 1603-1630,28 pages. |
Ruan, Chengmei, et al., Competitive behaviors of a spiking neural network with spike timing dependent plasticity, Biomedical Engineering and Informatics (BMEI), 2012 5th International Conference on DOI: 10.1109/BMEI.2012.6513088 Publication Year: 2012 , pp. 1015-1019. |
Stringer, et al., “Invariant Object Recognition in the Visual System with Novel Views of 3D Objects”, 2002, 2585-2596, 12 pages. |
Swiercz, Waldemar, et al. “A new synaptic plasticity rule for networks of spiking neurons.” Neural Networks, IEEE Transactions on 17.1 (2006): 94-105. |
Thorpe, S.J., et al. (2001), Spike-based strategies for rapid processing. Neural Networks 14, pp. 715-725. |
Thorpe, S.J., et al. (2004), SpikeNet: real-time visual processing with one spike per neuron, Neurocomputing, 58-60, pp. 857-864. |
Victor, T., et al., “Sensitivity of Eye-movement Measurements to in-vehicle Task Difficulty,” Transportation Research Part F: Traffic Psychology and Behavior, 2005, pp. 167-190. |
Voutsas K., et al., A Biologically Inspired Spiking Neural Network for Sound Source Lateralization Neural Networks, IEEE Transactions on vol. 18, Issue: 6 DOI: 10.11 09/TNN.2007.899623, Publication Year: 2007, pp. 1785-1799. |
Wade, J.J. , et al., SWAT: A Spiking Neural Network Training Algorithm for Classification Problems, Neural Networks, IEEE Transactions on vol. 21 , Issue: 11 001: 10.1109/TNN.2010.2074212 Publication Year: 2010 , pp. 1817-1830. |
Wennekers, T., Analysis of Spatio-temporal Patterns in Associative Networks of Spiking Neurons, Artificial Neural Networks, 1999. ICANN 99. Ninth International Conference on (Conf. Publ. No. 470) vol. 1 DOI:10.1049/cp:19991116 Publication Year: 1999, vol. 1, pp. 245-250. |
Won, W.J., et al., “Implementation of Road Traffic Signs Detection based on Saliency Map Model,” IEEE Intelligent Vehicles Symposium, 2008, pp. 542-547. |
Wu, QingXiang, et al., Edge Detection Based on Spiking Neural Network Model, ICIC 2007, LNAI 4682, pp. 26-34,2007, Springer-Verlag, Berlin Heidelberg. |
Wu, QingXiang, et al. “Remembering Key Features of Visual Images based on Spike Timing Dependent Plasticity of Spiking Neurons.” Image and Signal Processing, 2009. CISP'09. 2nd International Congress on. IEEE, 2009. |
Schemmel et al., Implementing synaptic plasticity in a VLSI spiking neural network model in Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN'06), IEEE Press (2006) Jul. 16-21, 2006, pp. 1-6 [online], [retrieved on Dec. 10, 2013]. Retrieved from the Internet< URL: http://www.kip.uniheidelberg.de/veroeffentlichungen/download.cgi/4620/ps/1774.pdf>. |
Number | Date | Country | |
---|---|---|---|
20160075018 A1 | Mar 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14209826 | Mar 2014 | US |
Child | 14946589 | US |