This application is related to co-owned and co-pending U.S. patent application Ser. No. 14/070,239, filed contemporaneously herewith on Nov. 1, 2013 and entitled “REDUCED DEGREE OF FREEDOM ROBOTIC CONTROLLER APPARATUS AND METHODS”, and co-owned and co-pending U.S. patent application Ser. No. 14/070,269, filed contemporaneously herewith on Nov. 1, 2013 and entitled “APPARATUS AND METHODS FOR OPERATING ROBOTIC DEVICES USING SELECTIVE STATE SPACE TRAINING”; each of the foregoing being incorporated herein by reference in its entirety.
This application is also related to co-pending and co-owned U.S. patent application Ser. No. 14/040,520, entitled “APPARATUS AND METHODS FOR TRAINING OF ROBOTIC CONTROL ARBITRATION”, filed Sep. 27, 2013; co-pending and co-owned U.S. patent application Ser. No. 14/040,498, entitled “ROBOTIC CONTROL ARBITRATION APPARATUS AND METHODS”, filed Sep. 27, 2013; co-owned U.S. patent application Ser. No. 13/953,595 entitled “APPARATUS AND METHODS FOR CONTROLLING OF ROBOTIC DEVICES”, filed Jul. 29, 2013; co-pending and co-owned U.S. patent application Ser. No. 13/918,338 entitled “ROBOTIC TRAINING APPARATUS AND METHODS”, filed Jun. 14, 2013; co-pending and co-owned U.S. patent application Ser. No. 13/918,298 entitled “HIERARCHICAL ROBOTIC CONTROLLER APPARATUS AND METHODS”, filed Jun. 14, 2013; co-pending and co-owned U.S. patent application Ser. No. 13/918,620 entitled “PREDICTIVE ROBOTIC CONTROLLER APPARATUS AND METHODS”, filed Jun. 14, 2013; co-pending and co-owned U.S. patent application Ser. No. 13/907,734 entitled “ADAPTIVE ROBOTIC INTERFACE APPARATUS AND METHODS”, filed May 31, 2013; co-pending and co-owned U.S. patent application Ser. No. 13/842,530 entitled “ADAPTIVE PREDICTOR APPARATUS AND METHODS”, filed Mar. 15, 2013; co-owned U.S. patent application Ser. No. 13/842,562 entitled “ADAPTIVE PREDICTOR APPARATUS AND METHODS FOR ROBOTIC CONTROL”, filed Mar. 15, 2013; co-owned U.S. patent application Ser. No. 13/842,616 entitled “ROBOTIC APPARATUS AND METHODS FOR DEVELOPING A HIERARCHY OF MOTOR PRIMITIVES”, filed Mar. 15, 2013; co-owned U.S. patent application Ser. No. 13/842,647 entitled “MULTICHANNEL ROBOTIC CONTROLLER APPARATUS AND METHODS”, filed Mar. 15, 2013; and co-owned U.S. patent application Ser. No. 13/842,583 entitled “APPARATUS AND METHODS FOR TRAINING OF ROBOTIC DEVICES”, filed Mar. 15, 2013; each of the foregoing being incorporated herein by reference in its entirety.
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
1. Technological Field
The present disclosure relates to machine learning and training of robotic devices.
2. Background
Robotic devices may be used in a variety of applications, such as manufacturing, medical, safety, military, exploration, and/or other applications. Some existing robotic devices (e.g., manufacturing assembly and/or packaging robots) may be programmed in order to perform various desired functions. Some robotic devices (e.g., surgical robots) may be remotely controlled by humans. Some robotic devices may learn to operate via exploration.
Programming robots may be costly and remote control may require a human operator. Furthermore, changes in the robot model and/or environment may require changes in the programming code. Remote control typically relies on user experience and/or agility that may be inadequate when dynamics of the control system and/or environment (e.g., an unexpected obstacle appears in path of a remotely controlled vehicle) change rapidly.
One aspect of the disclosure relates to a robotic apparatus comprising a controllable actuator, a sensor module, and an adaptive controller. The sensor module may be configured to provide information related to an environment surrounding the robotic apparatus. The adaptive controller may be configured to produce a control instruction for the controllable actuator in accordance with the information provided by the sensor module. The control instruction may be configured to cause the robotic apparatus to execute a target task. Execution of the target task may be characterized by the robotic apparatus traversing one of a first trajectory or a second trajectory. The first trajectory and the second trajectory may each have at least one different parameter associated with the environment. The adaptive controller may be operable in accordance with a supervised learning process configured based on a teaching input and a plurality of trials. At a given trial of the plurality of trials, the control instruction may be configured to cause the robotic apparatus to traverse one of the first trajectory or the second trajectory. The teaching input may be configured based on the control instruction. The teaching input may be configured to strengthen a trajectory selection by the controller such that, based on the first trajectory being selected for a first trial, the first trajectory is more likely to be selected during one or more trials subsequent to the first trial.
Another aspect of the disclosure relates to a processor-implemented method of operating a robot. The method may be performed by one or more processors configured to execute computer program instructions. The method may comprise: operating, using one or more processors, a robot to perform a task, the task performance including traversing a first trajectory or a second trajectory; and based on a selection of the first trajectory by the robot, providing a teaching signal. The task may be associated with an object within the robot's environment. The robot may be configured to receive sensory input characterizing the object. The first trajectory selection may be configured based on a predicted control output configured in accordance with the characterization of the object. The teaching signal may be configured to confirm selection of the first trajectory over the second trajectory by the robot.
In some implementations, the selection strengthening may be characterized by an increased probability of the robot selecting the first trajectory compared to a probability of the robot selecting the first trajectory in an absence of the teaching input.
Yet another aspect of the disclosure relates to an adaptive controller apparatus comprising one or more processors configured to execute computer program instructions that, when executed, cause a robot to perform a target task. The target task may be performed at least by: at a first time instance, causing the robot to execute a first action in accordance with sensory context; and at a second time instance subsequent to the first time instance, causing the robot to execute the first action based on the sensory context and a teaching signal. Performing the target task may be based on an execution of the first action or the second action. The teaching signal may be based on the robot executing the first action at the first time instance and may be configured to assist execution of the first action at the second time instance.
In some implementations, at a given time instance, the robot may be configured to execute one of the first action or the second action. The execution of the first action at the first time instance may bias the robot to execute the first action at a subsequent time instance.
In some implementations, the bias may be characterized by a probability of execution of the first action at the second time instance being greater than a probability of execution of the second action at the second time instance.
In some implementations, the teaching signal may be configured to reduce a probability of a composite action being executed at the second time instance. The composite action may be configured based on a combination of the first action and the second action.
In some implementations, the execution of the first action at the first time instance may be based on an output of a random number generator.
In some implementations, the controller apparatus may be operable in accordance with a supervised learning process configured based on the teaching input. The first action execution at the first time instance and the second time instance may be configured based on a first control signal and a second control signal, respectively, provided by the learning process. The teaching input may be configured to provide an association between the sensory context and the first action so as to reduce a time associated with the provision of the second control signal compared to provision of the signal in an absence of the teaching input.
In some implementations, the learning process may be configured based on a neuron network comprising a plurality of neurons communicating via a plurality of connections. Individual connections providing an input into a given one of the plurality of neurons may be characterized by a connection efficacy configured to affect operation of the given neuron. The association development may comprise adjustment of the connection efficacy based on the teaching input and the first control signal.
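A minimal sketch of such an efficacy adjustment, assuming a simple delta-rule update (the function name, learning rate, and single-neuron readout below are illustrative assumptions, not details specified by the disclosure):

```python
import numpy as np

def adjust_efficacy(weights, context, output, teaching, rate=0.1):
    """Delta-rule sketch: move connection efficacy toward agreement
    between the control output and the teaching input (illustrative)."""
    error = teaching - output            # mismatch with the teaching input
    return weights + rate * error * context

# Toy single-neuron readout over a three-element sensory context.
w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])            # sensory context
for _ in range(10):
    y = float(np.dot(w, x) > 0.5)        # current control output (0 or 1)
    w = adjust_efficacy(w, x, y, teaching=1.0)
```

Note that the efficacy only changes while the control output disagrees with the teaching input; once the readout matches the taught action, the error term vanishes and the weights stabilize.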
In some implementations, the first action and the second action may be characterized by different values of a state parameter associated with the environment. The state parameter may be selected from the group consisting of a spatial coordinate, the robot's velocity, the robot's orientation, and the robot's position.
In some implementations, the controller apparatus may be embodied in the robot. Responsive to the sensory context comprising a representation of an obstacle, the target task may comprise an avoidance maneuver executed by the robot. Responsive to the sensory context comprising a representation of a target, the target task may comprise an approach maneuver executed by the robot.
In some implementations, the first action execution may be configured based on a control signal. The control signal may be updated at time intervals shorter than one second. The first time instance and the second time instance may be separated by an interval that is no shorter than one second. The teaching signal may be provided via a wireless remote control device.
In some implementations, the training input may be provided by a computerized entity via a wireless interface.
In some implementations, the robot may comprise an autonomous platform. The controller apparatus may be embodied on the platform. The training input may be provided by a computerized module comprising a proximity indication configured to generate a signal based on an object being within a given range from the platform.
In some implementations, the controller apparatus may be operable in accordance with a learning process configured based on the teaching signal. The context may comprise information indicative of an object within robot's environment. The first action execution may be based on a first predicted control output of the learning process configured in accordance with the context. The second action execution may be based on a second predicted control output of the learning process configured in accordance with the context and the teaching signal.
In some implementations, the first and the second predicted control output may be determined based on output of an adaptive predictor module operable in accordance with supervised learning process configured based on a teaching input. The supervised learning process may be configured to combine the teaching signal with the first control signal at the first time instance to produce a combined signal. The teaching input at the second time instance may be configured based on the combined signal.
In some implementations, the supervised learning process may be configured based on a backward propagation of an error. The combined signal may be determined based on a transform function configured based on a union operation.
In some implementations, the combined signal may be determined based on a transform function configured based on one or more operations including an additive operation characterized by a first weight and a second weight. The first weight may be applied to a predictor output. The second weight may be applied to a teaching input.
In some implementations, a value of the first weight at the first time instance may be greater than the value of the first weight at the second time instance. A value of the second weight at the first time instance may be lower than the value of the second weight at the second time instance.
In some implementations, the robot may comprise a mobile platform. The controller apparatus may be embodied on the platform. The sensory context may be based on a visual input provided by a camera disposed on the platform.
Yet another aspect of the disclosure relates to a method of increasing a probability of action execution by a robotic apparatus. In one embodiment, the method includes: receiving a sensory context from a sensor; at a first time instance, executing a first action with the robotic apparatus in accordance with the sensory context; at a second time instance subsequent to the first time instance, determining with an adaptive controller whether to execute the first action based on the sensory context received from the sensor and a teaching input received from a user interface during the first time instance; and executing the first action with the robotic apparatus in accordance with the determination of the adaptive controller. In one variant thereof, a target task comprises at least the first action; and increasing or decreasing a probability of execution of the first action is based on the teaching input, the teaching input having an effectiveness value determined by the adaptive controller from the execution of the first action at one or more time instances, where the effectiveness value is reduced after a threshold number of the one or more time instances.
In one variant, the determining whether to execute the first action further comprises determining whether to execute a second action by the adaptive controller; and the method further comprises executing the second action with the robotic apparatus in accordance with the determination of whether to execute the second action.
These and other objects, features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
All Figures disclosed herein are © Copyright 2013 Brain Corporation. All rights reserved.
Implementations of the present technology will now be described in detail with reference to the drawings, which are provided as illustrative examples so as to enable those skilled in the art to practice the technology. Notably, the figures and examples below are not meant to limit the scope of the present disclosure to a single implementation, but other implementations are possible by way of interchange of, or combination with, some or all of the described or illustrated elements. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to same or like parts.
Where certain elements of these implementations can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the present disclosure will be described, and detailed descriptions of other portions of such known components will be omitted so as not to obscure the disclosure.
In the present specification, an implementation showing a singular component should not be considered limiting; rather, the disclosure is intended to encompass other implementations including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein.
Further, the present disclosure encompasses present and future known equivalents to the components referred to herein by way of illustration.
As used herein, the term “bus” is meant generally to denote all types of interconnection or communication architecture that is used to access the synaptic and neuron memory. The “bus” may be electrical, optical, wireless, infrared, and/or another type of communication medium. The exact topology of the bus could be, for example, a standard “bus”, hierarchical bus, network-on-chip, address-event-representation (AER) connection, and/or other type of communication topology used for accessing, e.g., different memories in a pulse-based system.
As used herein, the terms “computer”, “computing device”, and “computerized device” may include one or more of personal computers (PCs) and/or minicomputers (e.g., desktop, laptop, and/or other PCs), mainframe computers, workstations, servers, personal digital assistants (PDAs), handheld computers, embedded computers, programmable logic devices, personal communicators, tablet computers, portable navigation aids, J2ME equipped devices, cellular telephones, smart phones, personal integrated communication and/or entertainment devices, and/or any other device capable of executing a set of instructions and processing an incoming data signal.
As used herein, the term “computer program” or “software” may include any sequence of human and/or machine cognizable steps which perform a function. Such program may be rendered in a programming language and/or environment including one or more of C/C++, C#, Fortran, COBOL, MATLAB®, PASCAL, Python®, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), object-oriented environments (e.g., Common Object Request Broker Architecture (CORBA)), Java® (e.g., J2ME®, Java Beans), Binary Runtime Environment (e.g., BREW), and/or other programming languages and/or environments.
As used herein, the terms “connection”, “link”, “transmission channel”, “delay line”, and “wireless” may include a causal link between any two or more entities (whether physical or logical/virtual), which may enable information exchange between the entities.
As used herein, the term “memory” may include an integrated circuit and/or other storage device adapted for storing digital data. By way of non-limiting example, memory may include one or more of ROM, PROM, EEPROM, DRAM, Mobile DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), memristor memory, PSRAM, and/or other types of memory.
As used herein, the terms “integrated circuit”, “chip”, and “IC” are meant to refer to an electronic circuit manufactured by the patterned diffusion of elements in or on to the surface of a thin substrate. By way of non-limiting example, integrated circuits may include field programmable gate arrays (e.g., FPGAs), a programmable logic device (PLD), reconfigurable computer fabrics (RCFs), application-specific integrated circuits (ASICs), printed circuits, organic circuits, and/or other types of computational circuits.
As used herein, the terms “microprocessor” and “digital processor” are meant generally to include digital processing devices. By way of non-limiting example, digital processing devices may include one or more of digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., field programmable gate arrays (FPGAs)), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, application-specific integrated circuits (ASICs), and/or other digital processing devices. Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.
As used herein, the term “network interface” refers to any signal, data, and/or software interface with a component, network, and/or process. By way of non-limiting example, a network interface may include one or more of FireWire (e.g., FW400, FW800, and/or other), USB (e.g., USB2), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, and/or other), MoCA, Coaxsys (e.g., TVnet™), radio frequency tuner (e.g., in-band or OOB, cable modem, and/or other), Wi-Fi (802.11), WiMAX (802.16), PAN (e.g., 802.15), cellular (e.g., 3G, LTE/LTE-A/TD-LTE, GSM, and/or other), IrDA families, and/or other network interfaces.
As used herein, the terms “node”, “neuron”, and “neuronal node” are meant to refer, without limitation, to a network unit (e.g., a spiking neuron and a set of synapses configured to provide input signals to the neuron) having parameters that are subject to adaptation in accordance with a model.
As used herein, the terms “state” and “node state” are meant generally to denote a full (or partial) set of dynamic variables used to describe the state of a node.
As used herein, the terms “synaptic channel”, “connection”, “link”, “transmission channel”, “delay line”, and “communications channel” include a link between any two or more entities (whether physical (wired or wireless), or logical/virtual) which enables information exchange between the entities, and may be characterized by one or more variables affecting the information exchange.
As used herein, the term “Wi-Fi” includes one or more of IEEE-Std. 802.11, variants of IEEE-Std. 802.11, standards related to IEEE-Std. 802.11 (e.g., 802.11a/b/g/n/s/v), and/or other wireless standards.
As used herein, the term “wireless” means any wireless signal, data, communication, and/or other wireless interface. By way of non-limiting example, a wireless interface may include one or more of Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, and/or other), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/TD-LTE, analog cellular, CDPD, satellite systems, millimeter wave or microwave systems, acoustic, infrared (i.e., IrDA), and/or other wireless interfaces.
Apparatus and methods for online training of robotic devices are disclosed herein. Robotic devices may be trained to perform a target task (e.g., recognize an object, approach a target, avoid an obstacle, and/or other tasks). In some implementations, performing the task may be achieved by the robot by following one of two or more spatial trajectories. By way of an illustration, a robotic vacuum apparatus may avoid a chair by passing it on the left or on the right. A training entity may assist the robot in selecting a target trajectory out of two or more available trajectories. In one or more implementations, the training entity may comprise a human user and/or a computerized controller device.
The robot may comprise an adaptive controller configured to generate control commands based on one or more of the teaching signal, sensory input, a performance measure associated with the task, and/or other information. Training may comprise a plurality of trials. During one or more first trials, the trainer may observe operation of the robot and may refrain from providing the teaching signal to the robot. The robot may select one of the two trajectories (e.g., initialize a maneuver to the left of the chair). Upon observing the trajectory choice by the robot, the trainer may provide a teaching input configured to indicate to the robot a target trajectory. In some implementations, such teaching input may comprise a left turn control command issued by the trainer via a remote interface device (e.g., a joystick). The teaching input may be configured to affect the robot's trajectory during subsequent trials so that the probability of the robot selecting the same trajectory (e.g., passing the obstacle on the left) may be increased, compared to a random trajectory selection and/or trajectory selection by the robot in the absence of the teaching input. Upon completing a sufficient number of trials, the robot may be capable of consistently navigating the selected trajectory in the absence of the teaching input.
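The trial-by-trial strengthening described above can be sketched as a simple probability update; the update rule, learning rate, and function name below are illustrative assumptions rather than the specific mechanism of the disclosure:

```python
import random

def run_training(n_trials=50, rate=0.3, seed=0):
    """Sketch of the trial sequence: the robot picks a LEFT or RIGHT
    trajectory around an obstacle; when the trainer's teaching input
    confirms LEFT, the probability of choosing LEFT again on
    subsequent trials increases (illustrative update rule)."""
    rng = random.Random(seed)
    p_left = 0.5                          # initially a coin flip
    for _ in range(n_trials):
        choice = 'left' if rng.random() < p_left else 'right'
        if choice == 'left':
            # Teaching input (e.g., a joystick left-turn command)
            # strengthens the selected trajectory.
            p_left += rate * (1.0 - p_left)
    return p_left
```

After a sufficient number of confirmed trials the selection probability approaches one, mirroring the robot's ability to navigate the chosen trajectory without further teaching input.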
The online robot training methodology described herein may enable more reliable decision making and may reduce confusion when robotic controllers are operated to perform a target task via two or more trajectories.
As shown in
During one of the trials (e.g., trial A in
Returning now to
The training configuration shown and described with respect to
In one or more implementations, the action selection by the robotic controller may be based on operation of an adaptive predictor apparatus configured to select an action based on sensory input (e.g., position of an object and/or an obstacle) as described, e.g., in U.S. patent application Ser. No. 13/842,530 filed Mar. 15, 2013 and entitled “ADAPTIVE PREDICTOR APPARATUS AND METHODS”, and/or U.S. patent application Ser. No. 13/842,583 filed Mar. 15, 2013 and entitled “APPARATUS AND METHODS FOR TRAINING OF ROBOTIC DEVICES”, each of the foregoing being incorporated herein by reference in its entirety. The predictor may be operable in accordance with a supervised learning process, a reinforcement learning process, and/or a combination thereof. As described, e.g., in the '583 application referenced above, the training input may be combined with the predictor output by a combiner characterized by a transfer function. In one or more implementations, the combiner transfer function may comprise a union and/or additive (e.g., a weighted sum) operation. The robot platform may be operable using output of the combiner.
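The two combiner transfer functions mentioned above can be sketched as follows; the function names and default weights are illustrative assumptions, and the union-style variant is modeled here as a teaching-input override for discrete commands:

```python
def combine_additive(predicted, teaching, w_p=0.5, w_t=0.5):
    """Weighted-sum combiner for continuous control signals
    (illustrative weights)."""
    return w_p * predicted + w_t * teaching

def combine_union(predicted, teaching):
    """Union-style combiner for discrete commands: the teaching
    input, when present, takes precedence over the prediction."""
    return teaching if teaching is not None else predicted
```

The robot platform would then be operable using the combiner output: for example, `combine_union('left', None)` passes the predictor's command through unchanged, while `combine_union('left', 'right')` defers to the trainer.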
In one or more implementations, the learning process of the robot may be configured to assign an increased weight to the action indicated by the training input compared to weight assigned to the action being selected by the predictor during beginning of training (e.g., duration of first 10%-20% of trials). Such weight configuration may reduce probability of selecting action 1 based on the predictor output as shown by the open circle 228 in
During the latter portion of the training (e.g., subsequent to duration of the first 10%-20% of trials) the learning process of the robot may be configured to assign a reduced weight to the action indicated by the training input and an increased weight to the action being selected by the predictor. Such weight configuration may reduce a probability of selecting action 1 based on the trainer input (e.g., 236 in
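The weight schedule described in the two preceding paragraphs might be sketched as a simple two-phase function; the 15% cutoff and the 0.8/0.2 values are illustrative assumptions, not values taken from the disclosure:

```python
def training_weights(trial, n_trials, early_frac=0.15):
    """Return (teacher_weight, predictor_weight) for a given trial.

    During roughly the first 10%-20% of trials the action indicated
    by the training input is weighted more heavily; afterwards the
    predictor-selected action dominates (illustrative values).
    """
    if trial < early_frac * n_trials:
        return 0.8, 0.2   # early training: follow the trainer
    return 0.2, 0.8       # later training: increased autonomy
```

The crossover point reflects the controller's growing measure of autonomy: as the predictor's associations strengthen, the trainer's input is weighted less and the robot relies on its own action selection.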
In some implementations of the robot learning process, based on a teaching input that is inconsistent (e.g., as shown by the teaching input 236 to select action 1 in
In one or more implementations of the robot learning process, the inconsistent teaching input may cause the robot to execute the action indicated by the teaching input (e.g., the action 1 associated with the input 236 in
During time interval 300 in
Subsequently, during the interval 310, the controller learning process may be adapted based on the training input 302 and sensory input received during the interval 300. The sensory input may be provided by a camera (e.g., 966 in
The training input and controller trajectories obtained during the interval 300 may comprise a portion of first trajectories (e.g., 212 in
During an interval 320 subsequent to interval 310, the controller may be configured to operate the robot in order to perform the target task based on controller output 324. The controller output may be configured based on the adapted state of the learning process obtained during the preceding interval 310. In one or more implementations, the controller output may be combined with the teaching input 322 during the interval 320. During the interval 320, the learning configuration of the controller (e.g., connection efficacy) may remain unchanged so that no adaptation takes place. Controller output during the interval 320 may be configured based on the training input 322 and controller output obtained using the control process configuration determined during the interval 310. The teaching input 322 and the context obtained during the interval 320 may be stored for use during subsequent controller adaptations. Based on the adaptation performed during the interval 310, the controller measure of autonomy may increase from level 306 prior to adaptation to the level 326.
Subsequently, during the interval 330, the controller learning process may be adapted based on the training input 322 and sensory input received during the interval 320. The training input and controller trajectories obtained during the interval 320 may comprise a portion of first trajectories (e.g., 212 in
The offline training process described with respect to
During individual training trials illustrated and described with respect to
Symbols ‘X’ in
Training configurations such as illustrated with respect to
The apparatus 500 may comprise a processing module 516 configured to receive sensory input from sensory block 520 (e.g., camera 966 in
The apparatus 500 may comprise memory 514 configured to store executable instructions (e.g., operating system and/or application code, raw and/or processed data such as raw image frames and/or object views, teaching input, information related to one or more detected objects, and/or other information).
In some implementations, the processing module 516 may interface with one or more of the mechanical 518, sensory 520, electrical 522, power components 524, communications interface 526, and/or other components via driver interfaces, software abstraction layers, and/or other interfacing techniques. Thus, additional processing and memory capacity may be used to support these processes. However, it will be appreciated that these components may be fully controlled by the processing module. The memory and processing capacity may aid in processing code management for the apparatus 500 (e.g. loading, replacement, initial startup and/or other operations). Consistent with the present disclosure, the various components of the device may be remotely disposed from one another, and/or aggregated. For example, the instructions operating the online learning process may be executed on a server apparatus that may control the mechanical components via network or radio connection. In some implementations, multiple mechanical, sensory, electrical units, and/or other components may be controlled by a single robotic controller via network/radio connectivity.
The mechanical components 518 may include virtually any type of device capable of motion and/or performance of a desired function or task. Examples of such devices may include one or more of motors, servos, pumps, hydraulics, pneumatics, stepper motors, rotational plates, micro-electro-mechanical devices (MEMS), electroactive polymers, SMA (shape memory alloy) activation, and/or other devices. The sensor devices may interface with the processing module, and/or enable physical interaction and/or manipulation of the device.
The sensory devices 520 may enable the controller apparatus 500 to accept stimulus from external entities. Examples of such external entities may include one or more of video, audio, haptic, capacitive, radio, vibrational, ultrasonic, infrared, motion, and temperature sensors, radar, lidar, and/or sonar, and/or other external entities. The module 516 may implement logic configured to process user queries (e.g., voice input “are these my keys”) and/or provide responses and/or instructions to the user. The processing associated with sensory information is discussed with respect to
The electrical components 522 may include virtually any electrical device for interaction and manipulation of the outside world. Examples of such electrical devices may include one or more of light/radiation generating devices (e.g. LEDs, IR sources, light bulbs, and/or other), audio devices, monitors/displays, switches, heaters, coolers, ultrasound transducers, lasers, and/or other electrical devices. These devices may enable a wide array of applications for the apparatus 500 in industrial, hobbyist, building management, medical device, military/intelligence, and/or other fields.
The communications interface may include one or more connections to external computerized devices to allow for, inter alia, management of the apparatus 500. The connections may include one or more of the wireless or wireline interfaces discussed above, and may include customized or proprietary connections for specific applications. The communications interface may be configured to receive sensory input from an external camera, a user interface (e.g., a headset microphone, a button, a touchpad and/or other user interface), and/or provide sensory output (e.g., voice commands to a headset, visual feedback).
The power system 524 may be tailored to the needs of the application of the device. For example, for a small hobbyist robot or aid device, a wireless power solution (e.g. battery, solar cell, inductive (contactless) power source, rectification, and/or other wireless power solution) may be appropriate. However, for building management applications, battery backup/direct wall power may be superior, in some implementations. In addition, in some implementations, the power system may be adaptable with respect to the training of the apparatus 500. Thus, the apparatus 500 may improve its efficiency (to include power consumption efficiency) through learned management techniques specifically tailored to the tasks performed by the apparatus 500.
In some implementations, methods 600, 700 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of methods 600, 700 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of methods 600, 700.
At operation 602 of method 600, a context may be determined. In some implementations, the context may comprise one or more aspects of sensory input (e.g., 806 in
At operation 604, predicted control output may be determined consistent with the context. In one or more implementations, the context may comprise location of an obstacle (e.g., 134) relative to a target (e.g., 142) and the control output may comprise one or more motor commands configured to navigate one of the trajectories (e.g., left/right 132/136, respectively in
At operation 606, teaching input may be determined. In one or more implementations, the teaching input may be configured based on observing trajectory selection by the robotic controller. The teaching input may comprise, e.g., a motor control command (e.g., turn left) configured to cause the robot to follow the selected trajectory (e.g., 132 in
At operation 608, combined control output may be determined. In one or more implementations, the combined output may comprise a combination of the predicted control output (e.g., 224 in
At operation 610, the trajectory may be navigated in accordance with the combined control output determined at operation 608.
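Operations 602-610 above can be sketched in simplified form. Everything in this sketch — the function names, the additive weighted combination rule, and the 0.5 weight — is an illustrative assumption, not the claimed method:

```python
# Hypothetical sketch of the method-600 loop: determine context, predict,
# accept teaching input, combine, and act.

def combine(predicted, teaching, w=0.5):
    """Blend predicted control output with a teacher correction.
    An additive weighted combiner is assumed here; other rules
    (e.g., teacher override) are equally possible."""
    if teaching is None:
        return predicted
    return (1.0 - w) * predicted + w * teaching

def control_step(predictor, sensors, teacher=None):
    context = sensors()                               # operation 602: determine context
    predicted = predictor(context)                    # operation 604: predicted output
    teaching = teacher(context) if teacher else None  # operation 606: teaching input
    command = combine(predicted, teaching)            # operation 608: combined output
    return command                                    # operation 610: navigate with command

# Example: the predictor suggests a 10-degree right turn, the trainer corrects to 20.
cmd = control_step(lambda ctx: 10.0, lambda: "obstacle-left", teacher=lambda ctx: 20.0)
```

With the assumed equal weighting, the combined command lands midway between the predicted and teacher values.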
At operation 702, the robot may be trained to execute two or more actions (e.g., turn left/right). The action execution may be based on a sensory context (e.g., an image of an object in a video frame), in one or more implementations.
At operation 704, the trainer may observe action selection by the robot. The action selection may be based on appearance of an obstacle (e.g., 134) in the robot's sensory input. In one or more implementations, actions 1, 2 may comprise selection of the trajectories 132, 136, respectively, in
At operation 706, the trainer may determine whether the action selected by the robot at operation 704 matches a target action. In one or more implementations, the target action may comprise a previously selected action (e.g., the action selection 230 in
Responsive to a determination at operation 706 that the action selected by the robot at operation 704 does not match the target action, the method may proceed to operation 708 wherein a training input may be provided to the robot. In some implementations, the teaching input may be configured based on the trainer observing trajectory navigation by the robot associated with executing the action selected at operation 704. In one or more implementations, the teaching input may correspond to the signal 226 configured to indicate to a controller of the robot the target trajectory (e.g., a right turn of 20° versus the right turn of 10° or a left turn selected by the robot at operation 704). The learning process of the robot controller may be updated based on the selected action and the training input using any applicable methodologies, including those described in U.S. patent application Ser. No. 13/842,530 filed Mar. 15, 2013 and entitled “ADAPTIVE PREDICTOR APPARATUS AND METHODS”, and/or U.S. patent application Ser. No. 13/842,583 filed Mar. 15, 2013 and entitled “APPARATUS AND METHODS FOR TRAINING OF ROBOTIC DEVICES”, incorporated supra. In one or more implementations, the adaptation may comprise adjusting efficacy of network connections. In some implementations of learning configured based on a look-up table (LUT), the learning adaptation may comprise updating one or more LUT entries. Based on the adaptation, updated control output (e.g., the output 224 in
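A look-up-table adaptation of the kind mentioned above can be illustrated minimally as follows; the context key, the learning rate, and the incremental update rule are assumptions introduced for illustration:

```python
# Minimal LUT-learning sketch: the entry for the current sensory context
# is nudged toward the teacher-supplied control value.

def lut_update(lut, context_key, teaching, lr=0.3):
    """Move the stored control output for this context a fraction lr
    of the way toward the teaching input."""
    current = lut.get(context_key, 0.0)
    lut[context_key] = current + lr * (teaching - current)
    return lut[context_key]

lut = {}
# Three repeated corrections toward a 20-degree right turn for one context:
for _ in range(3):
    out = lut_update(lut, "obstacle-left", 20.0)
```

Each repetition moves the stored entry closer to the teaching value, so consistent training converges the LUT entry toward the target command.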
Responsive to a determination at operation 706 that the action selected by the robot at operation 704 matches the target action, the method may proceed to operation 710 wherein the robot may navigate a trajectory based on the updated learning process, selected action and the teaching input. The trajectory navigation may be based on a predicted control output. In one or more implementations, the predicted control output may comprise output of an adaptive predictor operable in accordance with reinforcement and/or supervised learning process configured based on the sensory context. The predicted output may correspond to the signal 224 configured to cause the robot to select one of the two trajectories (actions).
At operation 712, a determination may be made as to whether a performance associated with the action execution by the robot matches a target level. The performance may be determined based on a consistency measure of the action selected at operation 704. In some implementations, the consistency may be determined based on a probability of the target action selection, a number of matches determined at operation 706, a number of mis-matches between a preceding selected action and the current selected action, and/or other metrics.
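One possible form of the consistency measure described above is the fraction of recent trials in which the selected action matched the target action; the window contents and the 0.8 target level below are illustrative assumptions:

```python
# Hedged sketch of the operation-712 performance check: estimate the
# probability of target-action selection over a window of trials.

def consistency(selected, target):
    """Fraction of trials where the robot's selection matched the target."""
    matches = sum(1 for s, t in zip(selected, target) if s == t)
    return matches / len(selected)

sel = ["left", "right", "right", "left", "right"]
tgt = ["left", "right", "left",  "left", "right"]
score = consistency(sel, tgt)    # 4 of 5 selections match
reached_target = score >= 0.8    # compare against a target performance level
```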
Online learning methodology described herein may be utilized for implementing adaptive controllers of robotic devices.
The controller 802 may be operable in accordance with a supervised learning process. In one or more implementations, the controller 802 may optimize performance (e.g., performance of the system 800 of
The adaptive controller 802 may comprise a parallel network of multiple interconnected neurons. Individual neurons may be operable independently of one another, thereby enabling parallel computations. Neurons may communicate with one another within the network using a variety of methods. In some implementations, the neurons may be configured to facilitate a rate-based process. Data may be encoded into a scalar and/or a vector neuron output. In one or more implementations, the network (e.g., of the adaptive controller 802) may comprise spiking neurons, e.g., as described in the '533 application referenced above.
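The rate-based process mentioned above can be sketched as a layer of independent neurons, each emitting a scalar rate; the weights, sigmoid activation, and layer layout are assumptions for illustration, not the controller's actual network:

```python
# Illustrative rate-based neuron layer: each neuron computes its scalar
# output rate independently of the others, so the per-neuron loop is
# trivially parallelizable.

import math

def rate_neuron(inputs, weights, bias=0.0):
    """Scalar output rate of one neuron: sigmoid of the weighted input sum."""
    drive = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-drive))   # rate confined to (0, 1)

def layer(inputs, weight_rows):
    # Each neuron depends only on the shared input vector, not on its peers.
    return [rate_neuron(inputs, w) for w in weight_rows]

rates = layer([1.0, -0.5], [[2.0, 0.0], [0.0, 2.0]])
```

The output here is a vector of rates, matching the scalar-and/or-vector encoding described in the text.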
One or more objects (e.g., a floor 970, a stationary object 974, a moving object 976, and/or other objects) may be present in the camera field of view. The motion of the objects may result in a displacement of pixels representing the objects within successive frames, such as described in U.S. patent application Ser. No. 13/689,717 filed on Nov. 30, 2012 and entitled “APPARATUS AND METHODS FOR OBJECT DETECTION VIA OPTICAL FLOW CANCELLATION”, incorporated supra.
When the robotic apparatus 960 is in motion, such as shown by arrow 964 in
One approach to object recognition and/or obstacle avoidance may comprise processing of optical flow using a spiking neural network apparatus comprising for example the self-motion cancellation mechanism, such as described, for example, in U.S. patent application Ser. No. 13/689,717 filed on Nov. 30, 2012 and entitled “APPARATUS AND METHODS FOR OBJECT DETECTION VIA OPTICAL FLOW CANCELLATION”, the foregoing being incorporated herein by reference in its entirety.
The apparatus 1000 may comprise an encoder 1010 configured to transform (e.g., encode) the input signal 1002 into an encoded signal 1026. In some implementations, the encoded signal may comprise a plurality of pulses (also referred to as a group of pulses) configured to represent the optical flow due to one or more objects in the vicinity of the robotic device.
The encoder 1010 may receive signal 1004 representing motion of the robotic device. In one or more implementations, the input 1004 may comprise an output of an inertial sensor module. The inertial sensor module may comprise one or more acceleration sensors and/or acceleration rate of change (i.e., rate) sensors. In one or more implementations, the inertial sensor module may comprise a 3-axis accelerometer, 3-axis gyroscope, and/or other inertial sensor. It will be appreciated by those skilled in the arts that various other motion sensors may be used to characterize motion of a robotic platform, such as, for example, radial encoders, range sensors, global positioning system (GPS) receivers, RADAR, SONAR, LIDAR, and/or other sensors.
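One common way to characterize platform motion from the accelerometer and gyroscope mentioned above is a complementary filter; the blend gain, axes, and all names below are assumptions illustrating the idea, not part of the disclosed apparatus:

```python
# Hypothetical complementary filter fusing a gyroscope pitch rate
# (smooth but drifting) with an accelerometer tilt estimate
# (noisy but drift-free) into one pitch angle.

import math

def complementary_pitch(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One filter step: trust the integrated gyro mostly, and correct
    slowly toward the accelerometer-derived pitch."""
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular rate
    accel_pitch = math.atan2(accel_x, accel_z)   # tilt from gravity vector
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# A level, stationary platform: zero rate, gravity along z.
p = 0.0
for _ in range(10):
    p = complementary_pitch(p, gyro_rate=0.0, accel_x=0.0, accel_z=9.81, dt=0.01)
```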
The encoder 1010 may comprise one or more spiking neurons. One or more of the spiking neurons of the module 1010 may be configured to encode motion input 1004. One or more of the spiking neurons of the module 1010 may be configured to encode input 1002 into optical flow, as described in U.S. patent application Ser. No. 13/689,717 filed on Nov. 30, 2012 and entitled “APPARATUS AND METHODS FOR OBJECT DETECTION VIA OPTICAL FLOW CANCELLATION”, incorporated supra.
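The pulse encoding performed by the spiking neurons of module 1010 can be illustrated with a simple integrate-and-fire scheme; the threshold, time step, and reset-by-subtraction rule are assumptions, and the referenced applications describe the actual mechanisms:

```python
# Simple sketch of encoding a sampled scalar signal into a pulse (spike)
# train: an accumulator integrates the input and emits a spike each time
# it crosses a threshold, then resets by subtracting the threshold.

def encode_spikes(signal, threshold=1.0, dt=0.1):
    """Return the spike times produced by a sampled input signal."""
    acc, spikes = 0.0, []
    for i, value in enumerate(signal):
        acc += value * dt
        if acc >= threshold:      # threshold crossing -> emit a pulse
            spikes.append(i * dt)
            acc -= threshold      # reset by subtraction
    return spikes

# A constant input of 2.0 integrates 0.2 per step, so it spikes every 5 steps;
# a stronger input would spike proportionally faster (rate coding).
times = encode_spikes([2.0] * 20)
```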
The encoded signal 1026 may be communicated from the encoder 1010 via multiple connections (also referred to as transmission channels, communication channels, or synaptic connections) 1044 to one or more neuronal nodes (also referred to as the detectors) 1042.
In one or more implementations such as those represented by
In various implementations, individual detectors 1042_1, 1042_n may contain logic (which may be implemented as a software code, hardware logic, and/or a combination thereof) configured to recognize a predetermined pattern of pulses in the encoded signal 1026 to produce post-synaptic detection signals transmitted over communication channels 1048. Such recognition may include one or more mechanisms described in one or more of U.S. patent application Ser. No. 12/869,573 filed on Aug. 26, 2010 and entitled “SYSTEMS AND METHODS FOR INVARIANT PULSE LATENCY CODING”, now issued as U.S. Pat. No. 8,315,305; U.S. patent application Ser. No. 12/869,583 filed on Aug. 26, 2010 and entitled “INVARIANT PULSE LATENCY CODING SYSTEMS AND METHODS”, now issued as U.S. Pat. No. 8,467,623; U.S. patent application Ser. No. 13/117,048 filed on May 26, 2011 and entitled “APPARATUS AND METHODS FOR POLYCHRONOUS ENCODING AND MULTIPLEXING IN NEURONAL PROSTHETIC DEVICES”; and/or U.S. patent application Ser. No. 13/152,084 filed Jun. 2, 2011 and entitled “APPARATUS AND METHODS FOR PULSE-CODE INVARIANT OBJECT RECOGNITION”; each of the foregoing incorporated herein by reference in its entirety. In
In some implementations, the detection signals may be delivered to a next layer of detectors 1052 (comprising detectors 1052_1, 1052_m, 1052_k) for recognition of complex object features and objects, similar to the exemplary implementation described in commonly owned and co-pending U.S. patent application Ser. No. 13/152,084 filed on Jun. 2, 2011 and entitled “APPARATUS AND METHODS FOR PULSE-CODE INVARIANT OBJECT RECOGNITION”, incorporated supra. In some implementations, individual subsequent layers of detectors may be configured to receive signals (e.g., via connections 1058) from the previous detector layer, and to detect more complex features and objects (as compared to the features detected by the preceding detector layer). For example, a bank of edge detectors may be followed by a bank of bar detectors, followed by a bank of corner detectors and so on, thereby enabling recognition of one or more letters of an alphabet by the apparatus.
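The layered cascade described above (e.g., edges feeding bars feeding corners) can be sketched schematically; the detector functions here are toy stand-ins, not the pattern-recognition logic of the referenced applications:

```python
# Schematic detector cascade: each layer consumes the previous layer's
# detection signals and reports progressively more complex features.

def run_cascade(stimulus, layers):
    """Feed the stimulus through each detector layer in order."""
    signal = stimulus
    for detect in layers:
        signal = detect(signal)
    return signal

def edges(img):
    # First layer: tag every raw point as an edge detection.
    return [("edge", p) for p in img]

def bars(feats):
    # Second layer: pair adjacent edge detections into bars.
    return [("bar", a, b) for a, b in zip(feats, feats[1:])]

def corners(feats):
    # Third layer: promote each detected bar to a corner candidate.
    return [("corner", f[1], f[2]) for f in feats if f[0] == "bar"]

out = run_cascade([(0, 0), (0, 1), (1, 1)], [edges, bars, corners])
```

Three input points yield three edges, two bars, and two corner candidates, mirroring the progression from simple to complex features in the text.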
Individual detectors 1042 may output detection (post-synaptic) signals on communication channels 1048_1, 1048_n (with an appropriate latency) that may propagate with appropriate conduction delays to the detectors 1052. In some implementations, the detector cascade shown in
The exemplary sensory processing apparatus 1000 illustrated in
In some implementations, the apparatus 1000 may comprise feedback connections 1006, 1056, which may be configured to communicate context information from detectors within one hierarchy layer to previous layers, as illustrated by the feedback connections 1056_1, 1056_2 in
Output 1050 of the processing apparatus 1000 may be provided via one or more connections 1058.
Various exemplary computerized apparatus configured to operate a neuron network implementing the online learning methodology set forth herein are now described in connection with
A computerized neuromorphic processing system, consistent with one or more implementations, for use with an adaptive robotic controller described supra, is illustrated in
The system 1100 further may comprise a random access memory (RAM) 1108, configured to store neuronal states and connection parameters and to facilitate synaptic updates. In some implementations, synaptic updates may be performed according to the description provided in, for example, U.S. patent application Ser. No. 13/239,255 filed Sep. 21, 2011, entitled “APPARATUS AND METHODS FOR SYNAPTIC UPDATE IN A PULSE-CODED NETWORK”, incorporated by reference, supra.
In some implementations, the memory 1108 may be coupled to the processor 1102 via a direct connection 1116 (e.g., memory bus). The memory 1108 may also be coupled to the processor 1102 via a high-speed processor bus 1112.
The system 1100 may comprise a nonvolatile storage device 1106. The nonvolatile storage device 1106 may comprise, inter alia, computer readable instructions configured to implement various aspects of neuronal network operation. Examples of such aspects may include one or more of sensory input encoding, connection plasticity, operation model of neurons, learning rule evaluation, and/or other aspects. In one or more implementations, the nonvolatile storage 1106 may be used to store state information of the neurons and connections for later use, e.g., when saving and/or loading a network state snapshot, implementing context switching, saving the current network configuration, loading a previously stored network configuration, and/or performing other operations. The current network configuration may include one or more of connection weights, update rules, neuronal states, learning rules, and/or other parameters.
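A snapshot of the kind described above can be illustrated as a simple save/restore round trip; the dictionary layout, field names, and the use of JSON on disk are assumptions made for this sketch:

```python
# Hedged sketch of saving and restoring a network configuration snapshot
# (connection weights, neuronal states, a learning-rule identifier).

import json
import os
import tempfile

def save_snapshot(path, weights, neuron_states, learning_rule):
    """Serialize the network configuration to nonvolatile storage."""
    state = {"weights": weights,
             "neuron_states": neuron_states,
             "learning_rule": learning_rule}
    with open(path, "w") as f:
        json.dump(state, f)

def load_snapshot(path):
    """Restore a previously stored network configuration."""
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "net_snapshot.json")
save_snapshot(path,
              weights=[[0.1, 0.4], [0.3, 0.2]],
              neuron_states=[0.0, 0.7],
              learning_rule="supervised")
restored = load_snapshot(path)
```

The same save/load pair would also support the context switching mentioned in the text: save the current task's configuration, flush it, and load another.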
In some implementations, the computerized apparatus 1100 may be coupled to one or more of an external processing device, a storage device, an input device, and/or other devices via an I/O interface 1120. The I/O interface 1120 may include one or more of a computer I/O bus (PCI-E), wired (e.g., Ethernet) or wireless (e.g., Wi-Fi) network connection, and/or other I/O interfaces.
In some implementations, the input/output (I/O) interface may comprise a speech input (e.g., a microphone) and a speech recognition module configured to receive and recognize user commands.
It will be appreciated by those skilled in the arts that various processing devices may be used with computerized system 1100, including but not limited to, a single core/multicore CPU, DSP, FPGA, GPU, ASIC, combinations thereof, and/or other processing entities (e.g., computing clusters and/or cloud computing services). Various user input/output interfaces may be similarly applicable to implementations of the disclosure including, for example, an LCD/LED monitor, touch-screen input and display device, speech input device, stylus, light pen, trackball, and/or other devices.
Referring now to
The micro-blocks 1140 may be interconnected with one another using connections 1138 and routers 1136. As is appreciated by those skilled in the arts, the connection layout in
The neuromorphic apparatus 1130 may be configured to receive input (e.g., visual input) via the interface 1142. In one or more implementations, applicable for example to interfacing with computerized spiking retina, or image array, the apparatus 1130 may provide feedback information via the interface 1142 to facilitate encoding of the input signal.
The neuromorphic apparatus 1130 may be configured to provide output via the interface 1144. Examples of such output may include one or more of an indication of recognized object or a feature, a motor command (e.g., to zoom/pan the image array), and/or other outputs.
The apparatus 1130, in one or more implementations, may interface to external fast response memory (e.g., RAM) via high bandwidth memory interface 1148, thereby enabling storage of intermediate network operational parameters. Examples of intermediate network operational parameters may include one or more of spike timing, neuron state, and/or other parameters. The apparatus 1130 may interface to external memory via lower bandwidth memory interface 1146 to facilitate one or more of program loading, operational mode changes, retargeting, and/or other operations. Network node and connection information for a current task may be saved for future use and flushed. Previously stored network configuration may be loaded in place of the network node and connection information for the current task, as described for example in co-owned U.S. patent application Ser. No. 13/487,576 filed on Jun. 4, 2012 and entitled “DYNAMICALLY RECONFIGURABLE STOCHASTIC LEARNING APPARATUS AND METHODS”, now issued as U.S. Pat. No. 9,015,092, which is incorporated herein by reference in its entirety. External memory may include one or more of a Flash drive, a magnetic drive, and/or other external memory.
Different cell levels (e.g., L1, L2, L3) of the apparatus 1150 may be configured to perform functionality of various levels of complexity. In some implementations, individual L1 cells may process in parallel different portions of the visual input (e.g., encode individual pixel blocks, and/or encode motion signal), with the L2, L3 cells performing progressively higher level functionality (e.g., object detection). Individual ones of the L2, L3 cells may perform different aspects of operating a robot, with one or more L2/L3 cells processing visual data from a camera and other L2/L3 cells operating a motor control block for implementing lens motion when tracking an object or performing lens stabilization functions.
The neuromorphic apparatus 1150 may receive input (e.g., visual input) via the interface 1160. In one or more implementations, applicable for example to interfacing with computerized spiking retina, or image array, the apparatus 1150 may provide feedback information via the interface 1160 to facilitate encoding of the input signal.
The neuromorphic apparatus 1150 may provide output via the interface 1170. The output may include one or more of an indication of recognized object or a feature, a motor command, a command to zoom/pan the image array, and/or other outputs. In some implementations, the apparatus 1150 may perform all of the I/O functionality using single I/O block (not shown).
The apparatus 1150, in one or more implementations, may interface to external fast response memory (e.g., RAM) via a high bandwidth memory interface (not shown), thereby enabling storage of intermediate network operational parameters (e.g., spike timing, neuron state, and/or other parameters). In one or more implementations, the apparatus 1150 may interface to external memory via a lower bandwidth memory interface (not shown) to facilitate program loading, operational mode changes, retargeting, and/or other operations. Network node and connection information for a current task may be saved for future use and flushed. Previously stored network configuration may be loaded in place of the network node and connection information for the current task, as described for example in the application '576, referenced supra.
In one or more implementations, one or more portions of the apparatus 1150 may be configured to operate one or more learning rules, as described for example in the application '576 referenced supra. In one such implementation, one block (e.g., the L3 block 1156) may be used to process input received via the interface 1160 and to provide a teaching signal to another block (e.g., the L2 block 1156) via internal interconnects 1166, 1168.
The methodology for online learning by adaptive controllers set forth herein may advantageously be utilized in various applications, including, e.g., autonomous navigation, classification, detection, object manipulation, tracking, object pursuit, locomotion, and/or other robotic applications.
It will be recognized that while certain aspects of the disclosure are described in terms of a specific sequence of steps of a method, these descriptions are only illustrative of the broader methods of the disclosure, and may be modified as required by the particular application. Certain steps may be rendered unnecessary or optional under certain circumstances. Additionally, certain steps or functionality may be added to the disclosed implementations, or the order of performance of two or more steps permuted. All such variations are considered to be encompassed within the disclosure disclosed and claimed herein.
While the above detailed description has shown, described, and pointed out novel features of the disclosure as applied to various implementations, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the disclosure. The foregoing description is of the best mode presently contemplated of carrying out the principles of the disclosure. This description is in no way meant to be limiting, but rather should be taken as illustrative of the general principles of the disclosure. The scope of the disclosure should be determined with reference to the claims.
Number | Name | Date | Kind |
---|---|---|---|
3920972 | Corwin, Jr. et al. | Nov 1975 | A |
4468617 | Ringwall | Aug 1984 | A |
4617502 | Sakaue et al. | Oct 1986 | A |
4638445 | Mattaboni | Jan 1987 | A |
4706204 | Hattori | Nov 1987 | A |
4763276 | Perreirra et al. | Aug 1988 | A |
4852018 | Grossberg | Jul 1989 | A |
5063603 | Burt | Nov 1991 | A |
5092343 | Spitzer | Mar 1992 | A |
5121497 | Kerr et al. | Jun 1992 | A |
5245672 | Wilson | Sep 1993 | A |
5303384 | Rodriguez et al. | Apr 1994 | A |
5355435 | DeYong | Oct 1994 | A |
5388186 | Bose | Feb 1995 | A |
5408588 | Ulug | Apr 1995 | A |
5467428 | Ulug | Nov 1995 | A |
5579440 | Brown | Nov 1996 | A |
5602761 | Spoerre et al. | Feb 1997 | A |
5638359 | Peltola | Jun 1997 | A |
5673367 | Buckley | Sep 1997 | A |
5687294 | Jeong | Nov 1997 | A |
5719480 | Bock | Feb 1998 | A |
5739811 | Rosenberg et al. | Apr 1998 | A |
5841959 | Guiremand | Nov 1998 | A |
5875108 | Hoffberg | Feb 1999 | A |
5994864 | Inoue et al. | Nov 1999 | A |
6009418 | Cooper | Dec 1999 | A |
6014653 | Thaler | Jan 2000 | A |
6169981 | Werbos | Jan 2001 | B1 |
6218802 | Onoue et al. | Apr 2001 | B1 |
6259988 | Galkowski et al. | Jul 2001 | B1 |
6272479 | Farry et al. | Aug 2001 | B1 |
6363369 | Liaw | Mar 2002 | B1 |
6366293 | Hamilton | Apr 2002 | B1 |
6442451 | Lapham | Aug 2002 | B1 |
6458157 | Suaning | Oct 2002 | B1 |
6489741 | Genov | Dec 2002 | B1 |
6493686 | Francone et al. | Dec 2002 | B1 |
6545705 | Sigel | Apr 2003 | B1 |
6545708 | Tamayama et al. | Apr 2003 | B1 |
6546291 | Merfeld | Apr 2003 | B2 |
6581046 | Ahissar | Jun 2003 | B1 |
6601049 | Cooper | Jul 2003 | B1 |
6636781 | Shen | Oct 2003 | B1 |
6643627 | Liaw | Nov 2003 | B2 |
6697711 | Yokono | Feb 2004 | B2 |
6703550 | Chu | Mar 2004 | B2 |
6760645 | Kaplan et al. | Jul 2004 | B2 |
6961060 | Mochizuki et al. | Nov 2005 | B1 |
7002585 | Watanabe | Feb 2006 | B1 |
7024276 | Ito | Apr 2006 | B2 |
7324870 | Lee | Jan 2008 | B2 |
7342589 | Miserocchi | Mar 2008 | B2 |
7395251 | Linsker | Jul 2008 | B2 |
7398259 | Nugent | Jul 2008 | B2 |
7426501 | Nugent | Sep 2008 | B2 |
7668605 | Braun | Feb 2010 | B2 |
7672920 | Ito | Mar 2010 | B2 |
7752544 | Cheng | Jul 2010 | B2 |
7849030 | Ellingsworth | Dec 2010 | B2 |
8015130 | Matsugu | Sep 2011 | B2 |
8145355 | Danko | Mar 2012 | B2 |
8214062 | Eguchi et al. | Jul 2012 | B2 |
8271134 | Kato et al. | Sep 2012 | B2 |
8315305 | Petre | Nov 2012 | B2 |
8380652 | Francis, Jr. | Feb 2013 | B1 |
8419804 | Herr et al. | Apr 2013 | B2 |
8452448 | Pack et al. | May 2013 | B2 |
8467623 | Izhikevich | Jun 2013 | B2 |
8509951 | Gienger | Aug 2013 | B2 |
8571706 | Zhang et al. | Oct 2013 | B2 |
8639644 | Hickman et al. | Jan 2014 | B1 |
8655815 | Palmer et al. | Feb 2014 | B2 |
8751042 | Lee | Jun 2014 | B2 |
8793205 | Fisher | Jul 2014 | B1 |
8924021 | Dariush et al. | Dec 2014 | B2 |
8958912 | Blumberg et al. | Feb 2015 | B2 |
8972315 | Szatmary et al. | Mar 2015 | B2 |
8990133 | Ponulak et al. | Mar 2015 | B1 |
9008840 | Ponulak et al. | Apr 2015 | B1 |
9015092 | Sinyavskiy et al. | Apr 2015 | B2 |
9015093 | Commons | Apr 2015 | B1 |
9047568 | Fisher et al. | Jun 2015 | B1 |
9056396 | Linnell | Jun 2015 | B1 |
9070039 | Richert | Jun 2015 | B2 |
9082079 | Coenen | Jul 2015 | B1 |
9104186 | Sinyavskiy et al. | Aug 2015 | B2 |
9144907 | Summer et al. | Sep 2015 | B2 |
9186793 | Meier | Nov 2015 | B1 |
9189730 | Coenen et al. | Nov 2015 | B1 |
9193075 | Cipollini et al. | Nov 2015 | B1 |
9195934 | Hunt et al. | Nov 2015 | B1 |
9213937 | Ponulak | Dec 2015 | B2 |
9242372 | Laurent et al. | Jan 2016 | B2 |
20010045809 | Mukai | Nov 2001 | A1 |
20020038294 | Matsugu | Mar 2002 | A1 |
20020103576 | Takamura et al. | Aug 2002 | A1 |
20020158599 | Fujita et al. | Oct 2002 | A1 |
20020169733 | Peters | Nov 2002 | A1 |
20020175894 | Grillo | Nov 2002 | A1 |
20020198854 | Berenji et al. | Dec 2002 | A1 |
20030023347 | Konno | Jan 2003 | A1 |
20030050903 | Liaw | Mar 2003 | A1 |
20030108415 | Hosek et al. | Jun 2003 | A1 |
20030144764 | Yokono et al. | Jul 2003 | A1 |
20030220714 | Nakamura et al. | Nov 2003 | A1 |
20040030449 | Solomon | Feb 2004 | A1 |
20040036437 | Ito | Feb 2004 | A1 |
20040051493 | Furuta | Mar 2004 | A1 |
20040128028 | Miyamoto et al. | Jul 2004 | A1 |
20040131998 | Marom et al. | Jul 2004 | A1 |
20040136439 | Dewberry | Jul 2004 | A1 |
20040158358 | Anezaki et al. | Aug 2004 | A1 |
20040162638 | Solomon | Aug 2004 | A1 |
20040167641 | Kawai et al. | Aug 2004 | A1 |
20040172168 | Watanabe et al. | Sep 2004 | A1 |
20040193670 | Langan | Sep 2004 | A1 |
20040267404 | Danko | Dec 2004 | A1 |
20050004710 | Shimomura | Jan 2005 | A1 |
20050008227 | Duan et al. | Jan 2005 | A1 |
20050015351 | Nugent | Jan 2005 | A1 |
20050036649 | Yokono | Feb 2005 | A1 |
20050049749 | Watanabe et al. | Mar 2005 | A1 |
20050065651 | Ayers | Mar 2005 | A1 |
20050069207 | Zakrzewski et al. | Mar 2005 | A1 |
20050113973 | Endo et al. | May 2005 | A1 |
20050119791 | Nagashima | Jun 2005 | A1 |
20050125099 | Mikami et al. | Jun 2005 | A1 |
20050283450 | Matsugu | Dec 2005 | A1 |
20060069448 | Yasui | Mar 2006 | A1 |
20060082340 | Watanabe et al. | Apr 2006 | A1 |
20060094001 | Torre | May 2006 | A1 |
20060129277 | Wu et al. | Jun 2006 | A1 |
20060129506 | Edelman et al. | Jun 2006 | A1 |
20060149489 | Joublin et al. | Jul 2006 | A1 |
20060161218 | Danilov | Jul 2006 | A1 |
20060161300 | Gonzalez-Banos et al. | Jul 2006 | A1 |
20060167530 | Flaherty et al. | Jul 2006 | A1 |
20060181236 | Brogardh et al. | Aug 2006 | A1 |
20060189900 | Flaherty et al. | Aug 2006 | A1 |
20060207419 | Okazaki et al. | Sep 2006 | A1 |
20060250101 | Khatib et al. | Nov 2006 | A1 |
20070022068 | Linsker | Jan 2007 | A1 |
20070074177 | Kurita et al. | Mar 2007 | A1 |
20070100780 | Fleischer et al. | May 2007 | A1 |
20070112700 | Den Haan et al. | May 2007 | A1 |
20070151389 | Prisco et al. | Jul 2007 | A1 |
20070176643 | Nugent | Aug 2007 | A1 |
20070200525 | Kanaoka | Aug 2007 | A1 |
20070208678 | Matsugu | Sep 2007 | A1 |
20070250464 | Hamilton | Oct 2007 | A1 |
20070255454 | Dariush et al. | Nov 2007 | A1 |
20070260356 | Kock | Nov 2007 | A1 |
20080024345 | Watson | Jan 2008 | A1 |
20080040040 | Goto et al. | Feb 2008 | A1 |
20080097644 | Kaznov | Apr 2008 | A1 |
20080100482 | Lazar | May 2008 | A1 |
20080112596 | Rhoads et al. | May 2008 | A1 |
20080133052 | Jones | Jun 2008 | A1 |
20080140257 | Sato et al. | Jun 2008 | A1 |
20080154428 | Nagatsuka | Jun 2008 | A1 |
20080162391 | Izhikevich | Jul 2008 | A1 |
20080294074 | Tong et al. | Nov 2008 | A1 |
20080319929 | Kaplan et al. | Dec 2008 | A1 |
20090037033 | Phillips et al. | Feb 2009 | A1 |
20090043722 | Nugent | Feb 2009 | A1 |
20090069943 | Akashi et al. | Mar 2009 | A1 |
20090105786 | Fetz et al. | Apr 2009 | A1 |
20090231359 | Bass, II et al. | Sep 2009 | A1 |
20090234501 | Ishizaki | Sep 2009 | A1 |
20090272585 | Nagasaka | Nov 2009 | A1 |
20090287624 | Rouat | Nov 2009 | A1 |
20090299751 | Jung | Dec 2009 | A1 |
20090312817 | Hogle et al. | Dec 2009 | A1 |
20100036457 | Sarpeshkar | Feb 2010 | A1 |
20100081958 | She | Apr 2010 | A1 |
20100086171 | Lapstun | Apr 2010 | A1 |
20100152896 | Komatsu et al. | Jun 2010 | A1 |
20100152899 | Chang et al. | Jun 2010 | A1 |
20100166320 | Paquier | Jul 2010 | A1 |
20100169098 | Patch | Jul 2010 | A1 |
20100198765 | Fiorillo | Aug 2010 | A1 |
20100222924 | Gienger | Sep 2010 | A1 |
20100225824 | Lazar | Sep 2010 | A1 |
20100286824 | Solomon | Nov 2010 | A1 |
20100292835 | Sugiura et al. | Nov 2010 | A1 |
20100299101 | Shimada | Nov 2010 | A1 |
20100305758 | Nishi et al. | Dec 2010 | A1 |
20100312730 | Weng et al. | Dec 2010 | A1 |
20110010006 | Tani et al. | Jan 2011 | A1 |
20110016071 | Guillen | Jan 2011 | A1 |
20110026770 | Brookshire | Feb 2011 | A1 |
20110035052 | McLurkin | Feb 2011 | A1 |
20110035188 | Martinez-Heras et al. | Feb 2011 | A1 |
20110040405 | Lim et al. | Feb 2011 | A1 |
20110060460 | Oga et al. | Mar 2011 | A1 |
20110060461 | Velliste et al. | Mar 2011 | A1 |
20110067479 | Davis et al. | Mar 2011 | A1 |
20110071676 | Sanders et al. | Mar 2011 | A1 |
20110107270 | Wang et al. | May 2011 | A1 |
20110110006 | Meyer et al. | May 2011 | A1 |
20110119214 | Breitwisch | May 2011 | A1 |
20110119215 | Elmegreen | May 2011 | A1 |
20110144802 | Jang | Jun 2011 | A1 |
20110158476 | Fahn et al. | Jun 2011 | A1 |
20110160741 | Asano et al. | Jun 2011 | A1 |
20110160906 | Orita et al. | Jun 2011 | A1 |
20110160907 | Orita | Jun 2011 | A1 |
20110196199 | Donhowe | Aug 2011 | A1 |
20110208350 | Eliuk et al. | Aug 2011 | A1 |
20110218676 | Okazaki | Sep 2011 | A1 |
20110231016 | Goulding | Sep 2011 | A1 |
20110244919 | Aller et al. | Oct 2011 | A1 |
20110282169 | Grudic et al. | Nov 2011 | A1 |
20110296944 | Carter | Dec 2011 | A1 |
20110319714 | Roelle et al. | Dec 2011 | A1 |
20120008838 | Guyon et al. | Jan 2012 | A1 |
20120011090 | Tang | Jan 2012 | A1 |
20120011093 | Aparin et al. | Jan 2012 | A1 |
20120017232 | Hoffberg et al. | Jan 2012 | A1 |
20120036099 | Venkatraman et al. | Feb 2012 | A1 |
20120045068 | Kim et al. | Feb 2012 | A1 |
20120053728 | Theodorus | Mar 2012 | A1 |
20120109866 | Modha | May 2012 | A1 |
20120143495 | Dantu | Jun 2012 | A1 |
20120144242 | Vichare et al. | Jun 2012 | A1 |
20120150777 | Setoguchi et al. | Jun 2012 | A1 |
20120150781 | Arthur | Jun 2012 | A1 |
20120173021 | Tsusaka | Jul 2012 | A1 |
20120185092 | Ku | Jul 2012 | A1 |
20120197439 | Wang | Aug 2012 | A1 |
20120209428 | Mizutani | Aug 2012 | A1 |
20120209432 | Fleischer | Aug 2012 | A1 |
20120296471 | Inaba et al. | Nov 2012 | A1 |
20120303091 | Izhikevich | Nov 2012 | A1 |
20120303160 | Ziegler et al. | Nov 2012 | A1 |
20120308076 | Piekniewski | Dec 2012 | A1 |
20120308136 | Izhikevich | Dec 2012 | A1 |
20130000480 | Komatsu et al. | Jan 2013 | A1 |
20130006468 | Koehrsen et al. | Jan 2013 | A1 |
20130019325 | Deisseroth | Jan 2013 | A1 |
20130310979 | Herr et al. | Nov 2013 | A1 |
20130066468 | Choi et al. | Mar 2013 | A1 |
20130073080 | Ponulak | Mar 2013 | A1 |
20130073484 | Izhikevich | Mar 2013 | A1 |
20130073491 | Izhikevich et al. | Mar 2013 | A1 |
20130073492 | Izhikevich | Mar 2013 | A1 |
20130073493 | Modha | Mar 2013 | A1 |
20130073495 | Izhikevich | Mar 2013 | A1 |
20130073496 | Szatmary | Mar 2013 | A1 |
20130073498 | Izhikevich | Mar 2013 | A1 |
20130073499 | Izhikevich | Mar 2013 | A1 |
20130073500 | Szatmary | Mar 2013 | A1 |
20130096719 | Sanders | Apr 2013 | A1 |
20130116827 | Inazumi | May 2013 | A1 |
20130151442 | Suh et al. | Jun 2013 | A1 |
20130151448 | Ponulak | Jun 2013 | A1 |
20130151449 | Ponulak | Jun 2013 | A1 |
20130151450 | Ponulak | Jun 2013 | A1 |
20130172906 | Olson et al. | Jul 2013 | A1 |
20130173060 | Yoo et al. | Jul 2013 | A1 |
20130218821 | Szatmary | Aug 2013 | A1 |
20130245829 | Ohta et al. | Sep 2013 | A1 |
20130251278 | Izhikevich | Sep 2013 | A1 |
20130297541 | Piekniewski et al. | Nov 2013 | A1 |
20130297542 | Piekniewski | Nov 2013 | A1 |
20130325244 | Wang | Dec 2013 | A1 |
20130325766 | Petre et al. | Dec 2013 | A1 |
20130325768 | Sinyavskiy | Dec 2013 | A1 |
20130325773 | Sinyavskiy | Dec 2013 | A1 |
20130325774 | Sinyavskiy et al. | Dec 2013 | A1 |
20130325775 | Sinyavskiy | Dec 2013 | A1 |
20130325776 | Ponulak | Dec 2013 | A1 |
20130325777 | Petre | Dec 2013 | A1 |
20130345718 | Crawford et al. | Dec 2013 | A1 |
20130346347 | Patterson et al. | Dec 2013 | A1 |
20140012788 | Piekniewski | Jan 2014 | A1 |
20140016858 | Richert | Jan 2014 | A1 |
20140025613 | Ponulak | Jan 2014 | A1 |
20140027718 | Zhao | Jan 2014 | A1 |
20140032458 | Sinyavskiy et al. | Jan 2014 | A1 |
20140032459 | Sinyavskiy et al. | Jan 2014 | A1 |
20140052679 | Sinyavskiy et al. | Feb 2014 | A1 |
20140081895 | Coenen | Mar 2014 | A1 |
20140089232 | Buibas | Mar 2014 | A1 |
20140114479 | Okazaki | Apr 2014 | A1 |
20140122397 | Richert et al. | May 2014 | A1 |
20140122398 | Richert | May 2014 | A1 |
20140156574 | Piekniewski et al. | Jun 2014 | A1 |
20140163729 | Shi et al. | Jun 2014 | A1 |
20140193066 | Richert | Jul 2014 | A1 |
20140222739 | Ponulak | Aug 2014 | A1 |
20140229411 | Richert et al. | Aug 2014 | A1 |
20140244557 | Piekniewski et al. | Aug 2014 | A1 |
20140277718 | Izhikevich | Sep 2014 | A1 |
20140277744 | Coenen | Sep 2014 | A1 |
20140309659 | Roh et al. | Oct 2014 | A1 |
20140358284 | Laurent et al. | Dec 2014 | A1 |
20140358828 | Phillipps et al. | Dec 2014 | A1 |
20140369558 | Holz | Dec 2014 | A1 |
20140371907 | Passot et al. | Dec 2014 | A1 |
20140371912 | Passot et al. | Dec 2014 | A1 |
20150032258 | Passot et al. | Jan 2015 | A1 |
20150094850 | Passot et al. | Apr 2015 | A1 |
20150094852 | Laurent et al. | Apr 2015 | A1 |
20150127149 | Sinyavskiy et al. | May 2015 | A1 |
20150127154 | Passot et al. | May 2015 | A1 |
20150127155 | Passot et al. | May 2015 | A1 |
20150148956 | Negishi | May 2015 | A1 |
20150204559 | Hoffberg et al. | Jul 2015 | A1 |
20150283701 | Izhikevich et al. | Oct 2015 | A1 |
20150283702 | Izhikevich et al. | Oct 2015 | A1 |
20150283703 | Izhikevich et al. | Oct 2015 | A1 |
20150306761 | O'Connor et al. | Oct 2015 | A1 |
20150317357 | Harmsen et al. | Nov 2015 | A1 |
20150338204 | Richert et al. | Nov 2015 | A1 |
20150339589 | Fisher | Nov 2015 | A1 |
20150339826 | Buibas et al. | Nov 2015 | A1 |
20150341633 | Richert | Nov 2015 | A1 |
20160004923 | Piekniewski et al. | Jan 2016 | A1 |
20160014426 | Richert | Jan 2016 | A1 |
Number | Date | Country |
---|---|---|
102226740 | Oct 2011 | CN |
2384863 | Nov 2011 | EP |
4087423 | Mar 1992 | JP |
2003175480 | Jun 2003 | JP |
2108612 | Oct 1998 | RU |
2008083335 | Jul 2008 | WO |
2010136961 | Dec 2010 | WO |
2011039542 | Apr 2011 | WO
2012151585 | Nov 2012 | WO
Entry |
---|
PCT International Search Report and Written Opinion for PCT/US14/48512 dated Jan. 23, 2015, pp. 1-14. |
Abbott et al. (2000), “Synaptic plasticity: taming the beast”, Nature Neuroscience, 3, 1178-1183. |
Bartlett et al., “Convexity, Classification, and Risk Bounds” Jun. 16, 2005, pp. 1-61. |
Bartlett et al., “Large margin classifiers: convex loss, low noise, and convergence rates” Dec. 8, 2003, 8 pgs. |
Bohte, “Spiking Neural Networks” Doctorate at the University of Leiden, Holland, Mar. 5, 2003, pp. 1-133 [retrieved on Nov. 14, 2012]. Retrieved from the internet: <URL: http://homepages.cwi.nl/~sbohte/publication/phdthesis.pdf>. |
Brette et al., Brian: a simple and flexible simulator for spiking neural networks, The Neuromorphic Engineer, Jul. 1, 2009, pp. 1-4, doi: 10.2417/1200906.1659. |
Cessac et al. “Overview of facts and issues about neural coding by spikes.” Journal of Physiology, Paris 104.1 (2010): 5. |
Cuntz et al., “One Rule to Grow Them All: A General Theory of Neuronal Branching and Its Practical Application” PLoS Computational Biology, 6 (8), Published Aug. 5, 2010. |
Davison et al., PyNN: a common interface for neuronal network simulators, Frontiers in Neuroinformatics, Jan. 2009, pp. 1-10, vol. 2, Article 11. |
Djurfeldt, Mikael, The Connection-set Algebra: a formalism for the representation of connectivity structure in neuronal network models, implementations in Python and C++, and their use in simulators, BMC Neuroscience, Jul. 18, 2011, 12(Suppl 1):P80, p. 1. |
Dorval et al. “Probability distributions of the logarithm of inter-spike intervals yield accurate entropy estimates from small datasets.” Journal of neuroscience methods 173.1 (2008): 129. |
Fidjeland et al. “Accelerated Simulation of Spiking Neural Networks Using GPUs” WCCI 2010 IEEE World Congress on Computational Intelligence, Jul. 18-23, 2010, CCIB, Barcelona, Spain, pp. 536-543, [retrieved on Nov. 14, 2012]. Retrieved from the Internet: <URL:http://www.doc.ic.ac.uk/~mpsha/IJCNN10b.pdf>. |
Floreano et al., “Neuroevolution: from architectures to learning” Evol. Intel. Jan. 2008 1:47-62, [retrieved Dec. 30, 2013] [retrieved online from URL:<http://infoscience.epfl.ch/record/112676/files/FloreanoDuerrMattiussi2008.pdf>. |
Gewaltig et al., NEST (Neural Simulation Tool), Scholarpedia, 2007, pp. 1-15, 2(4):1430, doi: 10.4249/scholarpedia.1430. |
Gleeson et al., NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail, PLoS Computational Biology, Jun. 2010, pp. 1-19, vol. 6, Issue 6. |
Gollisch et al. “Rapid neural coding in the retina with relative spike latencies.” Science 319.5866 (2008): 1108-1111. |
Goodman et al., Brian: a simulator for spiking neural networks in Python, Frontiers in Neuroinformatics, Nov. 2008, pp. 1-10, vol. 2, Article 5. |
Gorchetchnikov et al., NineML: declarative, mathematically-explicit descriptions of spiking neuronal networks, Frontiers in Neuroinformatics, Conference Abstract: 4th INCF Congress of Neuroinformatics, doi: 10.3389/conf.fninf.2011.08.00098. |
Graham, Lyle J., The Surf-Hippo Reference Manual, http://www.neurophys.biomedicale.univ-paris5.fr/~graham/surf-hippo-files/Surf-Hippo%20Reference%20Manual.pdf, Mar. 2002, pp. 1-128. |
Izhikevich, “Polychronization: Computation with Spikes”, Neural Computation, 2006, 18, 245-282. |
Izhikevich et al., “Relating STDP to BCM”, Neural Computation (2003) 15, 1511-1523. |
Izhikevich, “Simple Model of Spiking Neurons”, IEEE Transactions on Neural Networks, Vol. 14, No. 6, Nov. 2003, pp. 1569-1572. |
Jin et al. (2010) “Implementing Spike-Timing-Dependent Plasticity on SpiNNaker Neuromorphic Hardware”, WCCI 2010, IEEE World Congress on Computational Intelligence. |
Karbowski et al., “Multispikes and Synchronization in a Large Neural Network with Temporal Delays”, Neural Computation 12, 1573-1606 (2000)). |
Khotanzad, “Classification of invariant image representations using a neural network” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, No. 6, Jun. 1990, pp. 1028-1038 [online], [retrieved on Dec. 10, 2013]. Retrieved from the Internet <URL: http://www-ee.uta.edu/eeweb/IP/Courses/SPR/Reference/Khotanzad.pdf>. |
Laurent, “The Neural Network Query Language (NNQL) Reference” [retrieved on Nov. 12, 2013]. Retrieved from the Internet: <URL https://code.google.com/p/nnql/issues/detail?id=1>. |
Laurent, “Issue 1 - nnql - Refactor Nucleus into its own file - Neural Network Query Language” [retrieved on Nov. 12, 2013]. Retrieved from the Internet: <URL: https://code.google.com/p/nnql/issues/detail?id=1>. |
Lazar et al. “A video time encoding machine”, in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), 2008, pp. 717-720. |
Lazar et al. “Consistent recovery of sensory stimuli encoded with MIMO neural circuits.” Computational intelligence and neuroscience (2010): 2. |
Lazar et al. “Multichannel time encoding with integrate-and-fire neurons.” Neurocomputing 65 (2005): 401-407. |
Masquelier, Timothee. “Relative spike time coding and STDP-based orientation selectivity in the early visual system in natural continuous and saccadic vision: a computational model.” Journal of computational neuroscience 32.3 (2012): 425-441. |
Nguyen et al., “Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization” 2007, pp. 1-8. |
Nichols, A Reconfigurable Computing Architecture for Implementing Artificial Neural Networks on FPGA, Master's Thesis, The University of Guelph, 2003, pp. 1-235. |
Paugam-Moisy et al., “Computing with spiking neuron networks” G. Rozenberg, T. Back, J. Kok (Eds.), Handbook of Natural Computing, Springer-Verlag (2010) [retrieved Dec. 30, 2013], [retrieved online from link.springer.com]. |
Pavlidis et al. Spiking neural network training using evolutionary algorithms. In: Proceedings 2005 IEEE International Joint Conference on Neural Networks, 2005. IJCNN'05, vol. 4, pp. 2190-2194, Publication Date Jul. 31, 2005 [online] [Retrieved on Dec. 10, 2013] Retrieved from the Internet <URL: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.5.4346&rep=rep1&type=pdf>. |
Sato et al., “Pulse interval and width modulation for video transmission.” Cable Television, IEEE Transactions on 4 (1978): 165-173. |
Schemmel et al., Implementing synaptic plasticity in a VLSI spiking neural network model in Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN'06), IEEE Press (2006) Jul. 16-21, 2006, pp. 1-6 [online], [retrieved on Dec. 10, 2013]. Retrieved from the Internet <URL: http://www.kip.uni-heidelberg.de/veroeffentlichungen/download.cgi/4620/ps/1774.pdf>. |
Simulink® model [online], [Retrieved on Dec. 10, 2013] Retrieved from <URL: http://www.mathworks.com/products/simulink/index.html>. |
Sinyavskiy et al. “Reinforcement learning of a spiking neural network in the task of control of an agent in a virtual discrete environment” Rus. J. Nonlin. Dyn., 2011, vol. 7, No. 4 (Mobile Robots), pp. 859-875, chapters 1-8. |
Szatmary et al., “Spike-timing Theory of Working Memory” PLoS Computational Biology, vol. 6, Issue 8, Aug. 19, 2010 [retrieved on Dec. 30, 2013]. Retrieved from the Internet: <URL: http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000879#>. |
Wang “The time dimension for scene analysis.” Neural Networks, IEEE Transactions on 16.6 (2005): 1401-1426. |
Sjostrom et al., “Spike-Timing Dependent Plasticity” Scholarpedia, 5(2):1362 (2010), pp. 1-18. |
Alvarez, ‘Review of approximation techniques’, PhD thesis, chapter 2. pp. 7-14, University of Bradford, 2000. |
Makridakis et al., ‘Evaluating Accuracy (or Error) Measures’, INSEAD Technical Report, 1995/18/TM. |
Walters, “Implementation of Self-Organizing Neural Networks for Visuo-Motor Control of an Industrial Robot”, IEEE Transactions on Neural Networks, vol. 4, No. 1, Jan. 1993, pp. 86-95. |
Froemke et al., “Temporal Modulation of Spike-Timing-Dependent Plasticity”, Frontiers in Synaptic Neuroscience, vol. 2, article 19, Jun. 2010, pp. 1-16. |
Grollman et al., 2007 “Dogged Learning for Robots” IEEE International Conference on Robotics and Automation (ICRA). |
PCT International Search Report for PCT/US2014/040407 dated Oct. 17, 2014. |
PCT International Search Report for International Application PCT/US2013/026738 dated Jul. 21, 2014. |
Asensio et al., “Robot Learning Control Based on Neural Network Prediction” ASME 8th Annual Dynamic Systems and Control Conference joint with the JSME 11th Motion and Vibration Conference 2012 [Retrieved on: Jun. 24, 2014]. Retrieved from internet: <http://msc.berkeley.edu/wjchen/publications/DSC12_8726_FI.pdf>. |
Bouganis et al., Training a Spiking Neural Network to Control a 4-DoF Robotic Arm based on Spike Timing-Dependent Plasticity in WCCI 2010 IEEE World Congress on Computational Intelligence Jul. 2010 [Retrieved on Jun. 24, 2014] Retrieved from internet: <http://www.doc.ic.ac.uk/~mpsha/IJCNN10a.pdf>. |
Kasabov, “Evolving Spiking Neural Networks for Spatio-and Spectro-Temporal Pattern Recognition”, IEEE 6th International Conference Intelligent Systems 2012 [Retrieved on Jun. 24, 2014], Retrieved from internet: <http://ncs.ethz.ch/projects/evospike/publications/evolving-spiking-neural-networks-for-spatio-and-spectro-temporal-pattern-recognition-plenary-talk-ieee-is>. |
http://www.braincorporation.com/specs/BStem_SpecSheet_Rev_Nov11_2013.pdf. |
A Neural Network for Ego-motion Estimation from Optical Flow, by Branka, Published 1995. |
Chung Hyuk Park, et al., Transfer of Skills between Human Operators through Haptic Training with Robot Coordination. International Conference on Robotics and Automation Anchorage Convention District, Anchorage, Alaska, USA, pp. 229-235 [online], 2010 [retrieved Dec. 3, 2015]. Retrieved from the Internet: <URL: https://smartech.gatech.edu/bitstream/handle/1853/38279/IEEE_2010_ICRA_002.pdf>. |
Computation of Optical Flow Using a Neural Network, by Zhou, Published 1988. |
Fall Detection Using Modular Neural Networks with Back-projected Optical Flow, by Huang, Published 2007. |
Graham, The Surf-Hippo User Manual, Version 3.0B, Unite de Neurosciences Integratives et Computationnelles, Institut Federatif de Neurobiologie Alfred Fessard, CNRS, France, Mar. 2002 [retrieved Jan. 16, 2014], [retrieved from biomedical.univ-paris5.fr]. |
Miller III, “Real-Time Application of Neural Networks for Sensor-Based Control of Robots with Vision”, IEEE Transactions on Systems, Man, and Cybernetics Jul./Aug. 1989, pp. 825-831, vol. 19, No. 4. |
Specification, figures and EFS receipt of U.S. Appl. No. 14/244,888, filed Apr. 3, 2014 and entitled “Learning apparatus and methods for control of robotic devices via spoofing” (100 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/244,890, filed Apr. 3, 2014 and entitled “Apparatus and methods for remotely controlling robotic devices” (91 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/244,892, filed Apr. 3, 2014 and entitled “Spoofing remote control apparatus and methods” (95 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/265,113, filed Apr. 29, 2014 and entitled “Trainable convolutional network apparatus and methods for operating a robotic vehicle” (71 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/285,385, filed May 22, 2014 and entitled “Apparatus and methods for real time estimation of differential motion in live video” (42 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/285,414, filed May 22, 2014 and entitled “Apparatus and methods for distance estimation using multiple image sensors” (63 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/285,466, filed May 22, 2014 and entitled “Apparatus and methods for robotic operation using video imagery” (64 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/321,736, filed Jul. 1, 2014 and entitled “Optical detection apparatus and methods” (49 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/326,374, filed Jul. 8, 2014 and entitled “Apparatus and methods for distance estimation using stereo imagery” (75 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/489,242, filed Sep. 17, 2014 and entitled “Apparatus and methods for remotely controlling robotic devices” (100 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/542,391, filed Nov. 14, 2014 and entitled “Feature detection apparatus and methods for training of robotic navigation” (83 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/588,168, filed Dec. 31, 2014 and entitled “Apparatus and methods for training robots” (101 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/637,138, filed Mar. 3, 2015 and entitled “Salient features tracking apparatus and methods using visual initialization” (66 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/637,164, filed Mar. 3, 2015 and entitled “Apparatus and methods for tracking salient features” (66 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/637,191, filed Mar. 3, 2015 and entitled “Apparatus and methods for saliency detection based on color occurrence analysis” (66 pages). |
Specification, figures and EFS receipt of U.S. Appl. No. 14/705,487, filed May 6, 2015 and entitled “Persistent predictor apparatus and methods for task switching” (119 pages). |
Visual Navigation with a Neural Network, by Hatsopoulos, Published 1991. |
Kalman Filter, Wikipedia [online]. |
Number | Date | Country
---|---|---|
20150127149 A1 | May 2015 | US |