A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
Field of the Disclosure
The present disclosure relates to, inter alia, computerized apparatus and methods for detecting objects or targets using processing of optical data.
Description of Related Art
Object detection may be required for target approach/obstacle avoidance by autonomous robotic devices. Small sized and/or low-cost robotic vehicles may comprise limited processing and/or energy resources (and/or weight budget) for object detection.
A variety of object detection apparatus (such as IR-based, ultrasonic, and lidar) currently exist, but suffer from various drawbacks. Specifically, some existing infrared (IR) optical proximity sensing methods may prove unreliable, particularly when used outdoors and/or in the presence of other sources of infrared radiation. Ultrasonic proximity sensors may become unreliable outdoors, particularly on, e.g., unmanned aerial vehicles (UAVs) comprising multiple motors that may produce acoustic noise. The ultrasonic sensor distance output may also be affected by wind and/or humidity. Lidar-based systems are typically costly, heavy, and may require substantial computing resources for processing lidar data.
Therefore there exists a need for an improved object detection sensor apparatus, and associated methods. Specifically, in one application, the improved detection sensor apparatus would be one or more of small sized, inexpensive, reliable, lightweight, power efficient, and/or capable of effective use outdoors.
The present disclosure satisfies the foregoing need for an improved object detection apparatus and associated methods.
Specifically, one aspect of the disclosure relates to a non-transitory computer-readable storage medium having instructions embodied thereon, the instructions being executable to perform a method of detecting a distance to an object.
Another aspect of the disclosure relates to a non-transitory computer-readable storage medium having instructions embodied thereon, the instructions being executable on a processing apparatus to detect an object in an image. In one implementation, the detection includes: producing a high-pass filtered version of the image, the high-pass filtered version comprising a plurality of pixels; for at least some of the plurality of pixels, determining a deviation parameter between the value of a given pixel and a reference value; and based on the deviation parameter meeting a criterion, providing an indication of the object being present in the image.
In one variant, the high-pass filtered version of the image is produced based at least on a first convolution operation between a kernel and the image, the kernel characterized by a kernel dimension; and the convolution operation is configured to reduce energy associated with spatial scales in the image that are lower than the kernel dimension. The kernel is configured, for example, based at least on a second convolution operation of a first matrix, the first matrix configured based at least on a Laplacian operator, and a second matrix configured based at least on a Gaussian operator; and the image is characterized by an image dimension, the image dimension exceeding the kernel dimension by at least ten times (10×).
In another variant, the criterion comprises meeting or exceeding a prescribed threshold.
In another aspect of the disclosure, an optical object detection apparatus is disclosed. In one implementation, the apparatus includes: a lens characterized by a depth of field range; an image sensor configured to provide an image comprising one or more pixels; and logic in communication with the image sensor. In one variant, the logic is configured to: evaluate at least a portion of the image to determine an image contrast parameter; and produce an object detection indication based on the contrast parameter meeting one or more criteria (e.g., breaching a threshold), the object detection indication configured to convey presence of the object within the depth of field range.
In another variant, the image sensor comprises an array of photo-sensitive elements arranged in a plane disposed substantially parallel to the lens; and the image comprises an array of pixels, individual pixels being produced by individual ones of the photo-sensitive elements.
In another aspect, a method of navigating a trajectory by a robotic apparatus is disclosed. In one implementation, the apparatus includes a controller, an actuator and a sensor, and the method includes: obtaining at least one image associated with surroundings of the apparatus using the sensor; analyzing the at least one image to determine a contrast parameter of the image; detecting a presence of an object in the at least one image based at least in part on the contrast parameter meeting one or more criteria; and causing the controller to activate the actuator based on the detection of the presence of the object. In one variant, the actuator activation is configured consistent with a characteristic of the object.
In another variant, the robotic device comprises a vehicle; the object comprises one of a target or an obstacle; and the actuator activation is configured to cause the vehicle to perform at least one of a target approach or obstacle avoidance action.
In another implementation, the method includes: obtaining at least one image associated with surroundings of the apparatus using the sensor; analyzing the at least one image to detect a presence of an object in the at least one image; determining that the detected object comprises one of either a target or an obstacle; and causing the controller to selectively activate the actuator based on the determination.
In another aspect of the present disclosure, a method of navigating a robotic apparatus is disclosed. In one embodiment, the robotic apparatus includes a controller, an actuator and a sensor, and the method includes: obtaining, through a lens, at least one image associated with surroundings of the apparatus using the sensor, the sensor including a plurality of image detectors disposed at an angle with respect to a plane of the lens, each of the plurality of image detectors being associated with an extent located in the surroundings; analyzing the at least one image to detect a presence of one or more objects in the at least one image; during the navigating of the robotic apparatus, detecting a distance to the one or more objects based on the presence of at least one of the one or more objects within the extent located in the surroundings; determining that the detected object includes one of either a target or an obstacle; and causing the controller to selectively activate the actuator based on the determination.
In another aspect of the present disclosure, an apparatus is disclosed. In one embodiment, the apparatus is configured to cause a robotic device to navigate a trajectory, and the apparatus includes a non-transitory computer-readable medium including a plurality of instructions configured to cause the apparatus to, when executed by a processor: obtain an image associated with surroundings of the apparatus using a sensor of the device, the sensor including multiple discrete image sensors behind a single lens associated therewith, each of the multiple discrete image sensors being associated with respective extents located in the surroundings of the apparatus; analyze the image to determine a contrast parameter of the image; detect a presence of an object in the image based at least in part on the contrast parameter meeting one or more criteria; and cause a controller of the device to activate an actuator of the device based on the detection of the presence of the object; wherein the actuator is configured to be activated based on a characteristic of the object; and wherein during the navigation of the trajectory by the robotic device, the apparatus is configured to determine (i) a first distance to the object when the presence of the object produces a first in-focus representation within a first extent of the respective extents and (ii) a second distance to the object when the presence of the object produces a second in-focus representation within a second extent of the respective extents.
These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
All Figures disclosed herein are © Copyright 2014 Brain Corporation. All rights reserved.
Implementations of the present disclosure will now be described in detail with reference to the drawings, which are provided as illustrative examples so as to enable those skilled in the art to practice the present technology. Notably, the figures and examples below are not meant to limit the scope of the present disclosure to a single implementation, but other implementations are possible by way of interchange of or combination with some or all of the described or illustrated elements. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to same or like parts.
Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation may be combined with one or more features of any other implementation.
In the present disclosure, an implementation showing a singular component should not be considered limiting; rather, the disclosure is intended to encompass other implementations including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein.
Further, the present disclosure encompasses present and future known equivalents to the components referred to herein by way of illustration.
As used herein, the term “bus” is meant generally to denote all types of interconnection or communication architecture used to access the synaptic and neuron memory. The “bus” could be optical, wireless, infrared, or another type of communication medium. The exact topology of the bus could be, for example, a standard “bus”, a hierarchical bus, a network-on-chip, an address-event-representation (AER) connection, or another type of communication topology used for accessing, e.g., different memories in a pulse-based system.
As used herein, the terms “computer”, “computing device”, and “computerized device”, include, but are not limited to, personal computers (PCs) and minicomputers, whether desktop, laptop, or otherwise, mainframe computers, workstations, servers, personal digital assistants (PDAs), handheld computers, embedded computers, programmable logic device, personal communicators, tablet or “phablet” computers, portable navigation aids, J2ME equipped devices, smart TVs, cellular telephones, smart phones, personal integrated communication or entertainment devices, or literally any other device capable of executing a set of instructions and processing an incoming data signal.
As used herein, the term “computer program” or “software” is meant to include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, C#, Fortran, COBOL, MATLAB™, PASCAL, Python, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans), Binary Runtime Environment (e.g., BREW), and other languages.
As used herein, the terms “connection”, “link”, “synaptic channel”, “transmission channel”, “delay line”, are meant generally to denote a causal link between any two or more entities (whether physical or logical/virtual), which enables information exchange between the entities.
As used herein, the term “feature” may refer to a representation of an object edge, determined by a change in color, luminance, brightness, transparency, texture, and/or curvature. The object features may comprise, inter alia, individual edges, intersections of edges (such as corners), orifices, and/or curvature.
As used herein, the term “memory” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, Mobile DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), memristor memory, and PSRAM.
As used herein, the terms “processor”, “microprocessor” and “digital processor” are meant generally to include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., field programmable gate arrays (FPGAs)), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs). Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.
As used herein, the term “network interface” refers to any signal, data, or software interface with a component, network or process including, without limitation, those of the FireWire (e.g., FW400, FW800, and/or other FireWire implementations), USB (e.g., USB2), Ethernet (e.g., 10/100, 10/100/1000, Gigabit Ethernet, 10-Gig-E), MoCA, Coaxsys (e.g., TVnet™), radio frequency tuner (e.g., in-band or OOB, cable modem), Wi-Fi (802.11), WiMAX (802.16), PAN (e.g., 802.15), cellular (e.g., 3G, LTE/LTE-A/TD-LTE, GSM, and/or other cellular interface implementations) or IrDA families.
As used herein, the term “Wi-Fi” refers to, without limitation, any of the variants of IEEE-Std. 802.11 or related standards including 802.11 a/b/g/n/s/v and 802.11-2012.
As used herein, the term “wireless” means any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, and/or other wireless interface implementations), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/TD-LTE, analog cellular, CDPD, RFID or NFC (e.g., EPC Global Gen. 2, ISO 14443, ISO 18000-3), satellite systems, millimeter wave or microwave systems, acoustic, and infrared (e.g., IrDA).
The present disclosure provides, among other things, apparatus and methods for detecting objects at a given distance from a moving device (such as e.g., a robotic device) in real time.
An optical distance/detection apparatus may comprise sensory apparatus, such as a camera comprising an imaging sensor with a lens configured to project a representation of a visual scene onto the imaging sensor. For a given lens, objects at different distances in front of the lens may be focused at different ranges behind the lens. The lens may be characterized by a range of sharp focus (also referred to as depth of field). For a given range between the image sensor and the lens, one or more objects present within the range of focus may produce in-focus images. Objects disposed outside the range of focus may produce smeared (out-of-focus) images. In-focus representations of objects may be characterized by a greater contrast parameter compared to out-of-focus representations. One or more images provided by the detection apparatus may be analyzed in order to determine an image contrast parameter for a given image. Based on one or more criteria (e.g., the image contrast breaching a threshold), an object detection indication may be produced by the apparatus.
When operated from a moving vehicle or device (e.g., a car, an aerial vehicle), the image on the detector may gradually become sharper as the vehicle approaches an object. Upon the contrast breaching a given (sharpness) threshold, the detection apparatus may produce an indication conveying presence of an object within the range of focus in front of the lens.
The apparatus 300 may comprise an image sensor disposed behind the lens 302. In some implementations, the sensor may be located at a fixed distance from the lens, e.g., in the focal plane 334 of the lens 302.
The lens 302 may be characterized by an area of acceptable contrast, which may also be referred to as a circle of confusion (CoC). In optics, a circle of confusion may correspond to a region in a plane of the detector formed by a cone of light rays from a lens that may not come to a perfect focus when imaging a point source. The area of acceptable contrast may also be referred to as a disk of confusion, circle of indistinctness, blur circle, or blur spot. The area of acceptable contrast of the lens 302 (denoted by bold segment in
In some implementations of still or video image processing, the dimension of the circle of confusion may be determined based on the size of the largest blur spot that may still be perceived by the processing apparatus as a point. In some photography implementations wherein the detector may comprise a human eye, a person with good vision may be able to distinguish an image resolution of 5 line pairs per millimeter (lp/mm) at 0.25 m, which corresponds to a CoC of 0.2 mm.
In one or more implementations of computerized detectors, the DOF dimension may be determined as
R˜1/dc, dc˜d/dr/Rv (Eqn. 1)
where:
For a given detector size and/or location, and sharpness threshold, the dimension 304 of the area of acceptable contrast may increase with an increasing f-number of the lens 302. By way of an illustration, the circle of confusion of a lens with f-number of 1.4 (f/1.4) will be twice that of the f/2.8 lens. Dimension of the area of acceptably sharp focus 324 in
A plurality of objects (denoted by circles 312, 314, 316, 318 in
In some implementations, a lens with a larger relative aperture (smaller f-number) may be capable of producing a shallower depth of field (e.g., 324 in
In some implementations, e.g., such as illustrated in
In some implementations, the contrast determination process may comprise one or more computer-implemented mathematical operations, including down-sampling the image produced by the detector; performing a high-pass filter operation on the acquired image; and/or determining a deviation parameter of pixels within the high-pass filtered image relative to a reference value.
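By way of a minimal sketch only (the down-sampling factor, the 3×3 Laplacian kernel, the use of the image median as the reference value, and the threshold are illustrative assumptions rather than parameters mandated by the disclosure), such a pipeline may be expressed as:

    import numpy as np
    from scipy.signal import convolve2d

    # 3x3 Laplacian kernel used here as the high-pass (sharpening) kernel.
    LAPLACIAN = np.array([[0, 1, 0],
                          [1, -4, 1],
                          [0, 1, 0]], dtype=float)

    def contrast_parameter(image, downsample=2):
        # Down-sample by striding, then high-pass filter to suppress large spatial scales.
        img = image[::downsample, ::downsample].astype(float)
        high_pass = convolve2d(img, LAPLACIAN, mode="same", boundary="symm")
        # Deviation parameter: maximum absolute deviation of pixels from a reference
        # value (here, the median of the filtered image).
        return np.max(np.abs(high_pass - np.median(high_pass)))

    def object_detected(image, threshold=20.0):
        # Detection indication when the contrast parameter breaches the threshold.
        return contrast_parameter(image) > threshold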
The image sensor 410 may comprise one or more sensing layers arranged along the dimension shown by arrow 404. Individual layers (not shown) may comprise a plurality of photo-sensitive elements (photosites) configured to produce arrays of pixels. Individual sensing layers may be configured to be partially translucent so as to permit light delivered by the lens 402 to propagate through a prior layer (with respect to the lens) to subsequent layer(s). Axial displacement of individual layers (e.g., along dimension 404) may produce an axial distribution of focal planes. Light rays propagating through the lens 402 at different angles may be focused at different locations, as shown by broken lines 406, 408 in
Due to light dispersion within the lens, light traveling along different paths may focus in different areas behind the lens. Consequently, objects disposed at different distances from the lens may produce in-focus representations at image sensors responding to different light wavelengths. By way of an illustration, light from objects within the distance range 453 may travel along ray paths 454 and produce an in-focus image on the type 444 photosite array. Light from objects within the distance range 455 may travel along ray paths 456 and produce an in-focus image on the type 446 photosite array. Light from objects within the distance range 457 may travel along ray paths 452 and produce an in-focus image on the type 442 photosite array. Accordingly, an optical apparatus comprising one or more image sensors operating at multiple wavelengths may be capable of providing distance information for objects located at different distances from the lens. Although chromatic aberration is known in the arts, most modern lenses are constructed so as to remove or minimize effects of chromatic aberration. Such achromatic and/or apochromatic lenses may be quite costly, heavy, and/or large compared to simpler lenses with chromatic aberration. The methodology of the present disclosure may enable use of simpler, less costly, and/or more compact lens designs for detecting objects compared to existing approaches.
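As a hedged illustration of this chromatic approach (the channel-to-distance mapping below is hypothetical, and the Laplacian-variance focus measure is one common choice rather than the specific measure of the disclosure), the sharpest color channel may be used to select a distance range:

    import numpy as np
    from scipy.ndimage import laplace

    # Hypothetical mapping from color channel to the distance range (in meters) whose
    # objects come into focus on that channel's photosite array.
    CHANNEL_RANGES = {"R": (2.0, 4.0), "G": (1.0, 2.0), "B": (0.5, 1.0)}

    def estimate_distance_range(rgb_image):
        # rgb_image: H x W x 3 array; the sharpest channel indicates which distance
        # range produced an in-focus representation.
        sharpness = {}
        for i, name in enumerate(("R", "G", "B")):
            channel = rgb_image[..., i].astype(float)
            sharpness[name] = np.var(laplace(channel))  # focus measure per channel
        best = max(sharpness, key=sharpness.get)
        return best, CHANNEL_RANGES[best]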
The apparatus 480 of
By way of an illustration, a portion of the light from the lens 402 may be diverted by semi-permeable mirror 488 along direction shown by arrow 496 towards the sensor 486 while a portion 492 of light may be passed through towards the sensor 482. All or a portion of the light from the lens 402 may be diverted by the semi-permeable mirror 490 along direction shown by arrow 494 towards the sensor 484. A portion 492 of the incident light may be passed through towards the sensor 482. Cumulative path lengths for light sensed by elements 482, 484, 486 may also be configured different from one another. Accordingly, the light traveling along paths denoted by arrows 481, 483, 485 may correspond to depth of field regions 493, 495, 497 in front of the lens 402 in
The sensor 504 may provide output 506 comprising one or more pixels. In some implementations, the output 506 may comprise an array of pixels characterized by one or more channels (e.g., R,G,B) and a bit depth (e.g., 8/12/16 bits).
The output 506 may be high-pass (HP) filtered by the filter component 520. In some implementations, the filter component 520 may be configured to downsample the input 506. The filter component 520 operation may comprise a convolution operation with a Laplacian and/or another sharpening kernel, a difference of Gaussian operation configured to reduce low frequency energy, a combination thereof, and/or other operations configured to reduce low frequency energy content. In some implementations, the filter component 520 may be configured as described below with respect to
In some implementations, the output 506 might be cropped prior to further processing (e.g., filtering, and/or transformations). Multiple crops of an image in the output 506 (performed with different cropping setting) may be used. In some implementations crops of the high-passed image may be utilized. In some implementations, the output 506 may be processed using a band-pass filter operation. The band pass filter may comprise a high-pass filtering operation, subsampling, and/or a blurring filter operation. The blurring filter operation on an image may comprise a convolution of a smoothing kernel (e.g., Gaussian, box, circular, and/or other kernel whose Fourier transform has most of energy in a low part of spectrum) with the image.
Filtered output 508 may be provided to a processing component 522 configured to execute a detection process. The detection process may comprise determination of a parameter, quantity, or other value, such as, e.g., a contrast parameter. In one or more implementations, the contrast parameter may be configured based on a maximum absolute deviation (MAD) of pixels within the filtered image from a reference value. The reference value may comprise a fixed pre-computed value, an image mean, an image median, and/or a value determined using statistics of multiple images. In some implementations, the contrast parameter values for a plurality of images may be low-pass filtered using a running mean, an exponential filter, and/or another filter approach configured to reduce inter-image variability of the detection process, or accomplish some other desirable operation to enhance detection. The averaging window may be configured in accord with requirements of a given application. By way of an illustration, a detection apparatus configured to detect objects in a video acquired at 25 fps from a robotic rover device may utilize an averaging window of between 2 and 50 frames. It will be appreciated by those skilled in the arts that averaging parameters may be configured based on requirements of an application, e.g., vehicle speed, motion of objects, object size, and/or other parameters. For autonomous ground vehicle navigation, the averaging window may be selected to be smaller than 2 seconds.
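For instance, inter-image variability might be reduced with a simple exponential filter applied to the per-frame contrast values before the threshold comparison; in the sketch below, the smoothing constant and threshold are assumptions chosen only for illustration:

    class SmoothedContrastDetector:
        # Exponentially smooths per-frame contrast values and reports a detection
        # when the smoothed value breaches the threshold.
        def __init__(self, alpha=0.2, threshold=20.0):
            self.alpha = alpha          # smoothing constant (assumed)
            self.threshold = threshold  # detection threshold (assumed)
            self.smoothed = None

        def update(self, contrast):
            if self.smoothed is None:
                self.smoothed = contrast
            else:
                self.smoothed = self.alpha * contrast + (1.0 - self.alpha) * self.smoothed
            return self.smoothed > self.threshold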
The contrast parameter may be evaluated in light of one or more detection or triggering criteria; e.g., compared to a threshold, and/or analysis of a value produced by the contrast determination process. Responsive to the contrast parameter for a given image breaching the threshold (e.g., 604, 624 in
The output 536 may be processed by a processing component 538. In some implementations, the component 538 may be configured to assess spatial and/or temporal characteristics of the camera component output 536. In some implementations, the assessment process may comprise determination of a two-dimensional image spectrum using, e.g., a discrete Fourier transform described below with respect to
Responsive to a determination that the high frequency (small spatial scale) energy content in the image meets one or more criteria (e.g., breaches a threshold), the processing component 538 may produce output 540 indicative of a presence of an object in the sensory input 502. In one or more implementations, the threshold value may be selected and/or configured dynamically based on, e.g., ambient light conditions, time of day, and/or other parameters.
In one or more implementations, the object detection may be effectuated based on a comparison of the Ehi parameter to another energy parameter (Elo) associated with energy content in the image spectrum at spatial scales that exceed a prescribed number of pixels.
In one or more implementations, processing components 520, 522, 524, 538 may be embodied within one or more integrated circuits, one or more computerized devices, and/or implemented using machine executable instructions (e.g., a software library) executed by one or more processors and/or ASICs.
One or more images may be acquired using, e.g., a camera component 504 shown and described with respect to
Image 700 in
The images may be high-pass processed by a filtering operation. The filtering operation may comprise determination of a Gaussian blurred image using a Gaussian kernel. In the implementation illustrated in
In one or more implementations, the high-passed image Ih may be obtained by subtracting Gaussian blurred image Ig from the original image I, as follows:
Ih = I + m − Ig (Eqn. 2)
where m denotes an offset value. In some implementations (e.g., such as illustrated in
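Assuming an 8-bit grayscale image and a mid-range offset m = 128 (both assumptions made only so the result remains displayable), Eqn. 2 may be rendered directly as:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def high_pass_eqn2(image, sigma=3.0, m=128.0):
        # Ih = I + m - Ig, where Ig is a Gaussian-blurred copy of the input image I.
        I = image.astype(float)
        Ig = gaussian_filter(I, sigma)
        Ih = I + m - Ig
        return np.clip(Ih, 0.0, 255.0)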
Panels 710, 740
In order to obtain a quantitative measure of an object presence in a given image (e.g., 720 in
The contrast parameter for a given image may be evaluated in light of one or more criteria (e.g., compared to a threshold) in order to determine presence of an object in the associated image.
Detection threshold used for determining the presence of an object is shown by broken line 604 in
The input image (e.g., image 700, 720) may be transformed using a Fourier transformation, e.g., discrete Fourier transform (DFT), cosine Fourier transform, a Fast Fourier transform (FFT) or other spatial scale transformation.
The image spectra may be partitioned into a low-frequency portion and a high frequency portion. In some implementations, e.g., such as illustrated in
Detection of an object in a given image may comprise determination of a contrast parameter for that image. In some implementations, the contrast parameter may be configured based on a comparison of the high frequency energy portion and the low frequency energy portion. In some implementations, the contrast parameter (C) may be determined based on a ratio or other relationship of the high frequency energy portion and the low frequency energy portion, such as the exemplary relationship of Eqn. 3 below.
C = Ehi/Elo (Eqn. 3)
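A sketch of one way this ratio might be computed with a two-dimensional FFT; the radial cutoff separating the low- and high-frequency portions of the spectrum is an assumed fraction of the Nyquist frequency, not a value specified by the disclosure:

    import numpy as np

    def spectral_contrast(image, cutoff=0.2):
        # C = Ehi / Elo: ratio of high-frequency to low-frequency spectral energy.
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image.astype(float)))) ** 2
        rows, cols = spectrum.shape
        y, x = np.ogrid[:rows, :cols]
        # Normalized radial spatial frequency of each bin (0 at DC, ~1 at the corners).
        r = np.hypot((y - rows / 2) / (rows / 2), (x - cols / 2) / (cols / 2)) / np.sqrt(2)
        e_lo = spectrum[r <= cutoff].sum()
        e_hi = spectrum[r > cutoff].sum()
        return e_hi / e_lo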
The contrast parameter for a given image may be compared to a threshold in order to determine presence of an object in the associated image.
The detection threshold used for determining the presence of an object is shown by the broken line 624 in
The apparatus 900 may comprise memory 914 configured to store executable instructions (e.g., operating system and/or application code, raw and/or processed data such as raw image frames, image spectrum, and/or contrast parameter, information related to one or more detected objects, and/or other information).
In some implementations, the processing module 916 may interface with one or more of the mechanical 918, sensory 920, electrical 922, power components 924, communications interface 926, and/or other components via driver interfaces, software abstraction layers, and/or other interfacing techniques. Thus, additional processing and memory capacity may be used to support these processes. However, it will be appreciated that these components may be fully controlled by the processing module. The memory and processing capacity may aid in processing code management for the apparatus 900 (e.g. loading, replacement, initial startup and/or other operations). Consistent with the present disclosure, the various components of the device may be remotely disposed from one another, and/or aggregated. For example, the instructions operating the detection process may be executed on a server apparatus that may control the mechanical components via a network or a radio connection. In some implementations, multiple mechanical, sensory, electrical units, and/or other components may be controlled by a single robotic controller via network/radio connectivity.
The mechanical components 918 may include virtually any type of device capable of motion and/or performance of a desired function or task. Examples of such devices may include one or more of motors, servos, pumps, hydraulics, pneumatics, stepper motors, rotational plates, micro-electro-mechanical devices (MEMS), electroactive polymers, shape memory alloy (SMA) activation, and/or other devices. The sensor devices may interface with the processing module, and/or enable physical interaction and/or manipulation of the device.
The sensory component may be configured to provide sensory input to the processing component. In some implementations, the sensory input may comprise camera output images.
The electrical components 922 may include virtually any electrical device for interaction and manipulation of the outside world. Examples of such electrical devices may include one or more of light/radiation generating devices (e.g. LEDs, IR sources, light bulbs, and/or other devices), audio devices, monitors/displays, switches, heaters, coolers, ultrasound transducers, lasers, and/or other electrical devices. These devices may enable a wide array of applications for the apparatus 900 in industrial, hobbyist, building management, medical device, military/intelligence, and/or other fields.
The communications interface may include one or more connections to external computerized devices to allow for, inter alia, management of the apparatus 900. The connections may include one or more of the wireless or wireline interfaces discussed above, and may include customized or proprietary connections for specific applications. The communications interface may be configured to receive sensory input from an external camera, a user interface (e.g., a headset microphone, a button, a touchpad, and/or other user interface), and/or provide sensory output (e.g., voice commands to a headset, visual feedback, and/or other sensory output).
The power system 924 may be tailored to the needs of the application of the device. For example, for a small hobbyist robot or aid device, a wireless power solution (e.g. battery, solar cell, inductive (contactless) power source, rectification, and/or other wireless power solution) may be appropriate. However, for building management applications, battery backup/direct wall power may be superior, in some implementations. In addition, in some implementations, the power system may be adaptable with respect to the training of the apparatus 900. Thus, the apparatus 900 may improve its efficiency (to include power consumption efficiency) through learned management techniques specifically tailored to the tasks performed by the apparatus 900.
In some implementations, methods 1000, 1100, 1120, 1200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of methods 1000, 1100, 1120, 1200 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of methods 1000, 1100, 1120, 1200.
At operation 1002 of method 1000, one or more input images may be acquired. In one or more implementations, individual images may be provided by an optical apparatus (e.g., 200 of
At operation 1004, image contrast parameter may be determined. In one or more implementations, the contrast parameter determination may be effectuated by a processing apparatus (e.g., the apparatus described with respect to
At operation 1006, a determination may be made as to whether the parameter(s) of interest meet the relevant criterion or criteria (e.g., the contrast parameter is within detection range). In some implementations, the determination of operation 1006 may be configured based on a comparison of the contrast parameter to a threshold, e.g., as shown and described with respect to
Responsive to a determination at operation 1006 that the contrast parameter is within the detection range (e.g., breached the threshold), the method 1000 may proceed to operation 1008 wherein an object detection indication may be produced. In some implementations, the object detection indication may comprise a message, a logic level transition, a pulse, a voltage, a register value and/or other means configured to communicate the detection indication to, e.g., a robotic controller.
At operation 1102, an image may be obtained. In some implementations, the image may comprise a two-dimensional (2-D) array of pixels characterized by one or more channels (grayscale, RGB, and/or other representations) and/or pixel bit depth.
At operation 1104, the obtained image may be evaluated. In some implementations, the image evaluation may be configured to determine a processing load of an image analysis apparatus (e.g., components of the apparatus 500, 530 of
At operation 1106, a high-pass filtered version of the image produced by operation 1104 may be obtained. In one or more implementations, the high-pass filtered image version may be produced using a filter operation configured based on a convolution operation with a Laplacian, a difference-of-Gaussians operation, a combination thereof, and/or other operations. In some implementations, the filter operation may be configured based on a hybrid kernel determined using a convolution of the Gaussian smoothing kernel with the Laplacian kernel. The hybrid kernel may be convolved with the image produced by operation 1104 in order to obtain the high-pass filtered image version. In some implementations of down-sampling and high-pass filtering, the image may be convolved with the processing kernel at a subset of locations in the image corresponding to the subsampling parameters, by applying the kernel on a grid with a spacing of n pixels, where n is the down-sampling parameter.
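A sketch of this hybrid-kernel variant under stated assumptions (5×5 Gaussian kernel, 3×3 Laplacian kernel, down-sampling parameter n = 4); for clarity the full convolution is computed and every n-th output sample is retained, whereas a strided implementation would evaluate the kernel only at those grid locations:

    import numpy as np
    from scipy.signal import convolve2d

    def gaussian_kernel(size=5, sigma=1.0):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return k / k.sum()

    LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

    def hybrid_high_pass(image, n=4, size=5, sigma=1.0):
        # Hybrid kernel: the Gaussian smoothing kernel convolved with the Laplacian kernel.
        hybrid = convolve2d(gaussian_kernel(size, sigma), LAPLACIAN, mode="full")
        # Convolve with the image, then keep every n-th sample (down-sampling parameter n).
        filtered = convolve2d(image.astype(float), hybrid, mode="same", boundary="symm")
        return filtered[::n, ::n]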
At operation 1108, a contrast parameter of the high-pass filtered image may be determined. In one or more implementations, the contrast parameter determination may be effectuated by the processing component 522 of the apparatus 500 described with respect to
At operation 1110, a determination may be made as to whether the contrast parameter may fall within detection range. In some implementations, the determination of operation 1110 may be configured based on a comparison of the contrast parameter to a threshold, e.g., as shown and described with respect to
Responsive to a determination at operation 1110 that the contrast parameter may be within the detection range (e.g., breached the threshold), the method 1100 may proceed to operation 1112 wherein an object detection indication may be produced. In some implementations, the object detection indication may comprise a message, a logic level transition, a pulse, a voltage, a register value and/or other means configured to communicate the detection indication to, e.g., a robotic controller, an alert indication to a user, and/or to another entity.
At operation 1122, an image may be obtained. In some implementations, the image may comprise a two-dimensional (2-D) array of pixels characterized by one or more channels (grayscale, RGB, and/or other representations) and/or pixel bit depth.
At operation 1124, the image may be evaluated. In some implementations, the image evaluation may be configured to determine a processing load of an image analysis apparatus (e.g., components of the apparatus 530 of
At operation 1126, a spectrum of the image produced by operation 1124 may be obtained. In one or more implementations, the spectrum may be determined using a discrete Fourier transform, and/or other transformation.
At operation 1128, a contrast parameter of the image may be determined. In one or more implementations, the contrast parameter determination may be effectuated by the processing component 538 of the apparatus 530 described with respect to
At operation 1130, a determination may be made as to whether the contrast parameter may fall within detection range. In some implementations, the determination of operation 1130 may be configured based on a comparison of the contrast parameter to a threshold, e.g., as shown and described with respect to
Responsive to a determination at operation 1130 that the contrast parameter is within the detection range (e.g., breached the threshold) the method 1120 may proceed to operation 1132, wherein an object detection indication may be produced. In some implementations, the object detection indication may comprise a message, a logic level transition, a pulse, a wireless transmission, a voltage, a register value and/or other means configured to communicate the detection indication to, e.g., a robotic controller, an alert indication to a user, and/or to another entity.
At operation 1202, a vehicle (e.g., the vehicle 100 of
At operation 1204, an image of the surroundings may be obtained. In one or more implementations, the image may comprise representations of one or more objects in the surroundings (e.g., the objects 112, 122 in
At operation 1206, the image may be analyzed in order to obtain a contrast parameter. In one or more implementations, the contrast parameter determination may be configured using an image-domain approach (e.g., described above with respect to
At operation 1208, a determination may be made as to whether an object may be present in the image obtained at operation 1204. In some implementations, the object presence may be determined based on a comparison of the contrast parameter to a threshold; e.g., as shown and described with respect to
Responsive to a determination at operation 1208 that the object is present, the method 1200 may proceed to operation 1210, wherein the trajectory may be adapted. In some implementations, the object presence may correspond to a target being present in the surroundings and/or an obstacle present in the path of the vehicle. The trajectory adaptation may comprise alteration of vehicle course, speed, and/or other parameters. The trajectory adaptation may be configured based on one or more characteristics of the object (e.g., persistence over multiple frames, distance, location, and/or other parameters). In some implementations of target approach and/or obstacle avoidance by a robotic vehicle, when a target is detected, the trajectory adaptation may be configured to reduce the distance between the target and the vehicle (e.g., during landing, approaching a home base or a trash can, and/or another action). When an obstacle is detected, the trajectory adaptation may be configured to maintain distance (e.g., stop), increase distance (e.g., move away), and/or alter course (e.g., turn) in order to avoid a collision. In some implementations, target/obstacle discrimination may be configured based on object color (e.g., approach a red ball, avoid all other objects), object reflectivity (e.g., approach bright objects), shape, orientation, and/or other characteristics. In some implementations, object discrimination may be configured based on input from other sensors (e.g., an RFID signal, a radio beacon signal, an acoustic pinger signal), the location of the robotic device (e.g., when the vehicle is in the middle of a room, all objects are treated as obstacles), and/or other approaches.
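Purely as an illustration of such decision logic (the controller interface, the 1-meter stop distance, and the target/obstacle flag are hypothetical assumptions, not the disclosed controller):

    def adapt_trajectory(controller, detection, is_target, distance_m):
        # Toy rule: approach detected targets; stop near, or turn away from, obstacles.
        if not detection:
            return "continue"          # no object: keep the current trajectory
        if is_target:
            controller.approach()      # reduce distance to the target
            return "approach"
        if distance_m is not None and distance_m < 1.0:
            controller.stop()          # maintain distance to a nearby obstacle
            return "stop"
        controller.turn()              # alter course to avoid a collision
        return "turn"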
Although the object detection methodology above is described in the context of vehicle navigation, it will be appreciated by those skilled in the arts that various other implementations of the methodology of the present disclosure may be utilized. By way of an illustration, a security camera device may be used to observe and detect potential intruders (e.g., based on the sudden appearance of objects) and/or detect theft (e.g., based on the sudden disappearance of previously present objects).
Implementations of the principles of the disclosure may be further applicable to a wide assortment of applications including computer-human interaction (e.g., recognition of gestures, voice, posture, face, and/or other interactions), controlling processes (e.g., processes associated with an industrial robot, autonomous and other vehicles, and/or other processes), augmented reality applications, access control (e.g., opening a door based on a gesture, opening an access way based on detection of an authorized person), and detecting events (e.g., for visual surveillance, or for counting and tracking people or animals).
A video processing system of the disclosure may be implemented in a variety of ways such as, for example, a software library, an IP core configured for implementation in a programmable logic device (e.g., FPGA), an ASIC, a remote server, comprising a computer readable apparatus storing computer executable instructions configured to perform feature detection. Myriad other applications exist that will be recognized by those of ordinary skill given the present disclosure.
Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.
Number | Name | Date | Kind |
---|---|---|---|
5063603 | Burt | Nov 1991 | A |
5138447 | Shen et al. | Aug 1992 | A |
5216752 | Tam | Jun 1993 | A |
5272535 | Elabd | Dec 1993 | A |
5355435 | DeYong et al. | Oct 1994 | A |
5638359 | Peltola et al. | Jun 1997 | A |
5673367 | Buckley | Sep 1997 | A |
5875108 | Hoffberg et al. | Feb 1999 | A |
6009418 | Cooper | Dec 1999 | A |
6014653 | Thaler | Jan 2000 | A |
6035389 | Grochowski et al. | Mar 2000 | A |
6418424 | Hoffberg et al. | Jul 2002 | B1 |
6458157 | Suaning | Oct 2002 | B1 |
6501794 | Wang et al. | Dec 2002 | B1 |
6509854 | Morita et al. | Jan 2003 | B1 |
6545705 | Sigel et al. | Apr 2003 | B1 |
6545708 | Tamayama et al. | Apr 2003 | B1 |
6546291 | Merfeld et al. | Apr 2003 | B2 |
6556610 | Jiang et al. | Apr 2003 | B1 |
6581046 | Ahissar | Jun 2003 | B1 |
6625317 | Gaffin et al. | Sep 2003 | B1 |
6678590 | Burchfiel | Jan 2004 | B1 |
7016783 | Hac | Mar 2006 | B2 |
7113867 | Stein | Sep 2006 | B1 |
7142602 | Porikli et al. | Nov 2006 | B2 |
7430682 | Carlson et al. | Sep 2008 | B2 |
7447337 | Zhang et al. | Nov 2008 | B2 |
7580907 | Rhodes | Aug 2009 | B1 |
7653255 | Rastogi | Jan 2010 | B2 |
7737933 | Yamano et al. | Jun 2010 | B2 |
7849030 | Ellingsworth | Dec 2010 | B2 |
8000967 | Taleb | Aug 2011 | B2 |
8015130 | Matsugu et al. | Sep 2011 | B2 |
8103602 | Izhikevich | Jan 2012 | B2 |
8108147 | Blackburn | Jan 2012 | B1 |
8160354 | Paquier | Apr 2012 | B2 |
8200593 | Guillen et al. | Jun 2012 | B2 |
8311965 | Breitwisch et al. | Nov 2012 | B2 |
8315305 | Petre et al. | Nov 2012 | B2 |
8390707 | Yamashita | Mar 2013 | B2 |
8416847 | Roman | Apr 2013 | B2 |
8467623 | Izhikevich et al. | Jun 2013 | B2 |
8542875 | Eswara | Sep 2013 | B2 |
8712939 | Szatmary et al. | Apr 2014 | B2 |
9150220 | Clarke | Oct 2015 | B2 |
20020038294 | Matsugu | Mar 2002 | A1 |
20020176025 | Kim et al. | Nov 2002 | A1 |
20030050903 | Liaw et al. | Mar 2003 | A1 |
20030216919 | Roushar | Nov 2003 | A1 |
20040054964 | Bozdagi et al. | Mar 2004 | A1 |
20040136439 | Dewberry et al. | Jul 2004 | A1 |
20040170330 | Fogg et al. | Sep 2004 | A1 |
20040193670 | Langan et al. | Sep 2004 | A1 |
20040233987 | Porikli et al. | Nov 2004 | A1 |
20050015351 | Nugent | Jan 2005 | A1 |
20050036649 | Yokono et al. | Feb 2005 | A1 |
20050047647 | Rutishauser et al. | Mar 2005 | A1 |
20050096539 | Leibig et al. | May 2005 | A1 |
20050283450 | Matsugu et al. | Dec 2005 | A1 |
20060088191 | Zhang et al. | Apr 2006 | A1 |
20060094001 | Torre et al. | May 2006 | A1 |
20060127042 | Park et al. | Jun 2006 | A1 |
20060129728 | Hampel | Jun 2006 | A1 |
20060161218 | Danilov | Jul 2006 | A1 |
20060188168 | Sheraizin | Aug 2006 | A1 |
20070022068 | Linsker | Jan 2007 | A1 |
20070071100 | Shi et al. | Mar 2007 | A1 |
20070176643 | Nugent | Aug 2007 | A1 |
20070208678 | Matsugu | Sep 2007 | A1 |
20080043848 | Kuhn | Feb 2008 | A1 |
20080100482 | Lazar | May 2008 | A1 |
20080152236 | Vendrig et al. | Jun 2008 | A1 |
20080174700 | Takaba | Jul 2008 | A1 |
20080199072 | Kondo et al. | Aug 2008 | A1 |
20080205764 | Iwai et al. | Aug 2008 | A1 |
20080237446 | Oshikubo et al. | Oct 2008 | A1 |
20080252723 | Park | Oct 2008 | A1 |
20080267458 | Laganiere et al. | Oct 2008 | A1 |
20090028384 | Bovyrin et al. | Jan 2009 | A1 |
20090043722 | Nugent | Feb 2009 | A1 |
20090096927 | Camp, Jr. et al. | Apr 2009 | A1 |
20090106030 | Den Brinker et al. | Apr 2009 | A1 |
20090141938 | Lim et al. | Jun 2009 | A1 |
20090202114 | Morin et al. | Aug 2009 | A1 |
20090287624 | Rouat et al. | Nov 2009 | A1 |
20090304231 | Lu et al. | Dec 2009 | A1 |
20090312985 | Eliazar | Dec 2009 | A1 |
20090323809 | Raveendran | Dec 2009 | A1 |
20100036457 | Sarpeshkar et al. | Feb 2010 | A1 |
20100073371 | Ernst et al. | Mar 2010 | A1 |
20100080297 | Wang et al. | Apr 2010 | A1 |
20100081958 | She | Apr 2010 | A1 |
20100086171 | Lapstun | Apr 2010 | A1 |
20100100482 | Hardt | Apr 2010 | A1 |
20100166320 | Paquier | Jul 2010 | A1 |
20100225824 | Lazar et al. | Sep 2010 | A1 |
20100235310 | Gage et al. | Sep 2010 | A1 |
20100271511 | Ma et al. | Oct 2010 | A1 |
20100290530 | Huang et al. | Nov 2010 | A1 |
20100299296 | Modha et al. | Nov 2010 | A1 |
20110002191 | Demaio et al. | Jan 2011 | A1 |
20110016071 | Guillen et al. | Jan 2011 | A1 |
20110063409 | Hannuksela | Mar 2011 | A1 |
20110103480 | Dane | May 2011 | A1 |
20110119214 | Breitwisch et al. | May 2011 | A1 |
20110119215 | Elmegreen et al. | May 2011 | A1 |
20110134242 | Loubser et al. | Jun 2011 | A1 |
20110137843 | Poon et al. | Jun 2011 | A1 |
20110160741 | Asano et al. | Jun 2011 | A1 |
20110170792 | Tourapis et al. | Jul 2011 | A1 |
20110206122 | Lu et al. | Aug 2011 | A1 |
20110222603 | Le Barz et al. | Sep 2011 | A1 |
20110228092 | Park | Sep 2011 | A1 |
20110242341 | Agrawal et al. | Oct 2011 | A1 |
20120011090 | Tang et al. | Jan 2012 | A1 |
20120057634 | Shi et al. | Mar 2012 | A1 |
20120072189 | Bullen et al. | Mar 2012 | A1 |
20120083982 | Bonefas et al. | Apr 2012 | A1 |
20120084240 | Esser et al. | Apr 2012 | A1 |
20120109866 | Modha | May 2012 | A1 |
20120130566 | Anderson | May 2012 | A1 |
20120162450 | Park et al. | Jun 2012 | A1 |
20120212579 | Froejdh et al. | Aug 2012 | A1 |
20120236114 | Chang et al. | Sep 2012 | A1 |
20120243733 | Sawai | Sep 2012 | A1 |
20120256941 | Ballestad et al. | Oct 2012 | A1 |
20120294486 | Diggins et al. | Nov 2012 | A1 |
20120303091 | Izhikevich | Nov 2012 | A1 |
20120308076 | Piekniewski et al. | Dec 2012 | A1 |
20120308136 | Izhikevich | Dec 2012 | A1 |
20120330447 | Gerlach et al. | Dec 2012 | A1 |
20130022111 | Chen et al. | Jan 2013 | A1 |
20130050574 | Lu et al. | Feb 2013 | A1 |
20130051680 | Kono et al. | Feb 2013 | A1 |
20130073484 | Izhikevich et al. | Mar 2013 | A1 |
20130073491 | Izhikevich et al. | Mar 2013 | A1 |
20130073492 | Izhikevich et al. | Mar 2013 | A1 |
20130073495 | Izhikevich et al. | Mar 2013 | A1 |
20130073496 | Szatmary et al. | Mar 2013 | A1 |
20130073498 | Izhikevich et al. | Mar 2013 | A1 |
20130073499 | Izhikevich et al. | Mar 2013 | A1 |
20130073500 | Szatmary et al. | Mar 2013 | A1 |
20130148882 | Lee | Jun 2013 | A1 |
20130151450 | Ponulak | Jun 2013 | A1 |
20130176430 | Zhu et al. | Jul 2013 | A1 |
20130218821 | Szatmary et al. | Aug 2013 | A1 |
20130251278 | Izhikevich et al. | Sep 2013 | A1 |
20130297539 | Piekniewski et al. | Nov 2013 | A1 |
20130297541 | Piekniewski et al. | Nov 2013 | A1 |
20130297542 | Piekniewski et al. | Nov 2013 | A1 |
20130325766 | Petre et al. | Dec 2013 | A1 |
20130325768 | Sinyavskiy et al. | Dec 2013 | A1 |
20130325773 | Sinyavskiy et al. | Dec 2013 | A1 |
20130325774 | Sinyavskiy et al. | Dec 2013 | A1 |
20130325775 | Sinyavskiy et al. | Dec 2013 | A1 |
20130325777 | Petre et al. | Dec 2013 | A1 |
20140012788 | Piekniewski | Jan 2014 | A1 |
20140016858 | Richert | Jan 2014 | A1 |
20140032458 | Sinyavskiy et al. | Jan 2014 | A1 |
20140032459 | Sinyavskiy et al. | Jan 2014 | A1 |
20140052679 | Sinyavskiy et al. | Feb 2014 | A1 |
20140064609 | Petre et al. | Mar 2014 | A1 |
20140119654 | Taylor et al. | May 2014 | A1 |
20140122397 | Richert et al. | May 2014 | A1 |
20140122398 | Richert | May 2014 | A1 |
20140122399 | Szatmary et al. | May 2014 | A1 |
20140156574 | Piekniewski et al. | Jun 2014 | A1 |
20140201126 | Zadeh et al. | Jul 2014 | A1 |
20140241612 | Rhemann et al. | Aug 2014 | A1 |
20140379179 | Goossen | Dec 2014 | A1 |
20150077639 | Chamaret et al. | Mar 2015 | A1 |
20150127154 | Passot et al. | May 2015 | A1 |
20150127155 | Passot et al. | May 2015 | A1 |
20150181168 | Pahalawatta et al. | Jun 2015 | A1 |
20150217449 | Meier et al. | Aug 2015 | A1 |
20150281715 | Lawrence et al. | Oct 2015 | A1 |
20150304634 | Karvounis | Oct 2015 | A1 |
20160003946 | Gilliland et al. | Jan 2016 | A1 |
20160009413 | Lee et al. | Jan 2016 | A1 |
Number | Date | Country |
---|---|---|
102226740 | Oct 2011 | CN |
H0487423 | Mar 1992 | JP |
2108612 | Apr 1998 | RU |
2406105 | Dec 2010 | RU |
2424561 | Jul 2011 | RU |
WO-2008083335 | Jul 2008 | WO |
WO-2008132066 | Nov 2008 | WO |
Entry |
---|
Berke, et al., Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision (2005) vol. 5 (6). |
Bohte, ‘Spiking Nueral Networks’ Doctorate at the University of Leiden, Holland, Mar. 5, 2003, pp. 1-133 [retrieved on Nov. 14, 2012]. Retrieved from the interne <ahref=http://homepages.cwi.nl/˜sbohte/publication/phdthesis.pdf>http://homepages.cwi.nl/˜sbohte/publication/phdthesis.pdf⁢/a><url: />. |
Brette, et al., Brian: a simple and flexible simulator for spiking neural networks, The Neuromorphic Engineer, Jul. 1, 2009, pp. 1-4, doi: 10.2417/1200906.1659. |
Cessac, et al., ‘Overview of facts and issues about neural coding by spikes.’ Journal of Physiology, Paris 104.1 (2010): 5. |
Cuntz, et al., ‘One Rule to Grow Them All: A General Theory of Neuronal Branching and Its Paractical Application’ PLOS Computational Biology, 6 (8), Published Aug. 5, 2010. |
Davison, et al., PyNN: a common interface for neuronal network simulators, Frontiers in Neuroinformatics, Jan. 2009, pp. 1-10, vol. 2, Article 11. |
Djurfeldt, Mikael, The Connection-set Algebra: a formalism for the representation of connectivity structure in neuronal network models, implementations in Python and C++, and their use in simulators BMC Neuroscience Jul. 18, 2011 p. 1 12(Suppl 1): P80. |
Dorval, et al., ‘Probability distributions of the logarithm of inter-spike intervals yield accurate entropy estimates from small datasets.’ Journal of neuroscience methods 173.1 (2008): 129. |
Fidjeland, et al., “Accelerated Simulation of Spiking Neural Networks Using GPUs,” WCCI 2010 IEEE World Congress on Computational Intelligence, Jul. 18-23, 2010—CCIB, Barcelona, Spain, pp. 536-543, [retrieved on Nov. 14, 2012]. Retrieved from the Internet: URL:http://www.doc.ic.ac.uld-mpsha/IJCNN10b.pdf. |
Field, G., et al., Information Processing in the Primate Retina: Circuitry and Coding. Annual Review of Neuroscience, 2007, 30(1), 1-30. |
Fiete, et al, Spike-Time-Dependent Plasticity and Heterosynaptic Competition Organize Networks to Produce Long Scale-Free Sequences of Neural Activity. Neuron 65, Feb. 25, 2010, pp. 563-576. |
Floreano, et al., ‘Neuroevolution: from architectures to learning’ Evol. Intel. Jan. 2008 1:47-62, [retrieved Dec. 30, 2013] [retrieved online from: http://inforscienee.eptl.cb/record/112676/files/FloreanoDuerrMattiussi2008.pdf. |
Florian, Biologically Inspired Neural Networks for the Control of Embodied Agents, Technical Report Coneural-03-03 Version 1.0 [online], Nov. 30, 2003 [retrieved on Nov. 24, 2014]. Retrieved from the Internet: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.216.4931-&rep1&type=pdf. |
Foldiak, P. Learning invariance from transformation sequences. Neural Computation, 1991, 3(2), 194-200. |
Froemke et al., Temporal modulation of spike-timing-dependent plasticity, Frontiers in Synaptic Neuroscience, vol. 2, Article 19, pp. 1-16 [online] Jun. 2010 [retrieved on Dec. 16, 2013]. |
Gerstner et al. (1996) A neuronal learning rule for sub-millisecond temporal coding. Nature vol. 383 (6595) pp. 76-78. |
Gewaltig et al., ‘NEST (Neural Simulation Tool)’, Scholarpedia, 2007. pp. 1-15. 2(4): 1430, doi: 1 0.4249/scholarpedia.1430. |
Gleeson et al., NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail, PLoS Computational Biology, Jun. 2010, pp. 1-19 vol. 6 Issue 6. |
Gluck, Stimulus Generalization and Representation in Adaptive Network Models of Category Learning [online], 1991 [retrieved on Aug 24, 2013]. Retrieved from the Internet: http:// www.google.coinlurl ?sa—t&rct—j&q—Giuck+4)/022STIMULUS+GENERALIZATION+AND+REPRESENTATIO N+1N +ADAPTIVE+NETWORK+MODELS±OF+CATEGORY+LEARN I NG%22+ 1991. |
Gollisch, et al., ‘Rapid neural coding in the retina with relative spike latencies.’ Science 319.5866 (2008): 1108-1111. |
Goodman, et al., ‘The Brian Simulator’. Frontiers in Neuroinformatics, Nov. 2008, pp. 1-10, vol. 2, Article 5. |
Gorchetchnikov et al., NineML: declarative, mathematically-explicit descriptions of spiking neuronal networks, Frontiers in Neuroinformatics, Conference Abstract: 4th INCF Congress of Neuroinformatics, doi: 10.3389/conf.fninf.2011.08.00098, (2011). |
Graham, Lyle J., The Surf-Hippo Reference Manual, http://www.neurophys.biomedicale.univ-paris5.fr/graham/surf-hippo-files/Surf-Hippo%20Reference%20Manual.pdf, Mar. 2002, pp. 1-128. |
Hopfield, JJ (1995) Pattern recognition computation using action potential timing for stimulus representation. Nature 376: 33-36. |
Izhikevich, E. M, et al. (2009), Polychronous Wavefront Computations. International Journal of Bifurcation and Chaos, 19:1733-1739. |
Izhikevich, E.M. (2004), Which Model to Use for Cortical Spiking Neurons? IEEE Transactions on Neural Networks, 15:1063-1070. |
Izhikevich, E.M. (2008), Polychronization: Computation With Spikes. Neural Computation, 18:245-282. |
Izhikevich, E.M. (2007) Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, The MIT Press, 2007. |
Izhikevich, et al., ‘Relating STDP to BCM’, Neural Computation (2003) 15, 1511-1523. |
Izhikevich, ‘Simple Model of Spiking Neurons’, IEEE Transactions on Neural Networks, vol. 14, No. 6, Nov. 2003, pp. 1569-1572. |
Janowitz, M.K., et al., Excitability changes that complement Hebbian learning. Network, Computation in Neural Systems, 2006, 17 (1), 31-41. |
Karbowski, et al., ‘Multispikes and Synchronization in a Large Neural Network with Temporal Delays’, Neural Computation 12. 1573-1606 (2000). |
Khotanzad, ‘Classification of invariant image representations using a neural network’ IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, No. 6, Jun. 1990, pp. 1028-1038 [online], [retrieved on Dec. 10, 2013]. Retrieved from the Internet <URL: http://www-ee.uta.edu/eeweb/IP/Courses/SPR/Reference/Khotanzad.pdf>. |
Knoblauch, et al., Memory Capacities for Synaptic and Structural Plasticity, Neural Computation 2009, pp. 1-45. |
Laurent, ‘Issue 1—nnql Refactor Nucleus into its own file—Neural Network Query Language’ [retrieved on Nov. 12, 2013]. Retrieved from the Internet: URL:https://code.google.com/p/nnql/issues/detail?id=1. |
Laurent, ‘The Neural Network Query Language (NNQL) Reference’ [retrieved on Nov. 12, 2013]. Retrieved from the Internet: <URL: https://code.google.com/p/nnql/issues/detail?id=1>. |
Lazar et al., ‘Multichannel time encoding with integrate-and-fire neurons.’ Neurocomputing 65 (2005): 401-407. |
Lazar et al., ‘A video time encoding machine’, in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), 2008, pp. 717-720. |
Lazar et al., ‘Consistent recovery of sensory stimuli encoded with MIMO neural circuits.’ Computational Intelligence and Neuroscience (2010): 2. |
Masquelier et al., Learning to recognize objects using waves of spikes and Spike Timing-Dependent Plasticity. Neural Networks (IJCNN), The 2010 International Joint Conference on, doi: 10.1109/IJCNN.2010.5596934 (2010) pp. 1-8. |
Masquelier, Timothee, ‘Relative spike time coding and STDP-based orientation selectivity in the early visual system in natural continuous and saccadic vision: a computational model.’ Journal of Computational Neuroscience 32.3 (2012): 425-441. |
Meister, M., et al., The neural code of the retina, Neuron, 1999, 22, 435-450. |
Meister, M., Multineuronal codes in retinal signaling. Proceedings of the National Academy of sciences. 1996, 93, 609-614. |
Oster M., et al., A Spike-Based Saccadic Recognition System, ISCAS 2007, IEEE International Symposium on Circuits and Systems, 2007, pp. 3083-3086. |
Paugam-Moisy et al., “Computing with spiking neuron networks,” in G. Rozenberg, T. Back, J. Kok (Eds.), Handbook of Natural Computing, Springer-Verlag (2010) [retrieved Dec. 30, 2013], [retrieved online from link.springer.com]. |
Pavlidis et al., Spiking neural network training using evolutionary algorithms. In: Proceedings 2005 IEEE International Joint Conference on Neural Networks, 2005, IJCNN'05, vol. 4, pp. 2190-2194, Publication Date Jul. 31, 2005 [online] [retrieved on Dec. 10, 2013]. Retrieved from the Internet <URL: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.5.4346&rep=rep1&type=pdf>. |
Rekeczky, et al., “Cellular Multiadaptive Analogic Architecture: A Computational Framework for UAV Applications.” May 2004. |
Revow M., et al. 1996, Using Generative Models for Handwritten Digit Recognition, IEEE Trans. on Pattern Analysis and Machine Intelligence, 18, No. 6, Jun. 1996. |
Sanchez, Efficient Simulation Scheme for Spiking Neural Networks. Doctoral Thesis, Universita di Granada, Mar. 28, 2008, pp. 1-104. |
Sato et al., ‘Pulse interval and width modulation for video transmission.’ Cable Television, IEEE Transactions on 4 (1978): 165-173. |
Schemmel, J., et al., Implementing synaptic plasticity in a VLSI spiking neural network model. In: Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN'06), IEEE Press (2006), Jul. 16-21, 2006, pp. 1-6 [online], [retrieved on Aug. 24, 2012]. Retrieved from the Internet <URL: http://www.kip.uni-heidelberg.de/Veroeffentlichungen/download/cgi/4620/ps/1774.pdf> Introduction. |
Schnitzer, M.J., et al., Multineuronal Firing Patterns in the Signal from Eye to Brain. Neuron, 2003, 37, 499-511. |
Serrano-Gotarredona, et al., “On Real-Time: AER 2-D Convolutions Hardware for Neuromorphic Spike-based Cortical Processing”, Jul. 2008. |
Serre, et al., 2004, Realistic Modeling of Simple and Complex Cell Tuning in the HMAX Model, and Implications for Invariant Object Recognition in Cortex, AI Memo 2004-017 Jul. 2004. |
Simulink® model [online], [retrieved on Dec. 10, 2013]. Retrieved from http://www.mathworks.com/products/simulink/index.html. |
Sinyavskiy et al., ‘Reinforcement learning of a spiking neural network in the task of control of an agent in a virtual discrete environment’, Rus. J. Nonlin. Dyn., 2011, vol. 7, No. 4 (Mobile Robots), pp. 859-875, chapters 1-8 (Russian Article with English Abstract). |
Sjostrom et al., ‘Spike-Timing Dependent Plasticity’ Scholarpedia, 5(2): 1362 (2010), pp. 1-18. |
Szatmary et al., “Spike-timing Theory of Working Memory” PLoS Computational Biology, vol. 6, Issue 8, Aug. 19, 2010 [retrieved on Dec. 30, 2013]. Retrieved from the Internet: URL: http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000879. |
Thorpe S., Ultra-Rapid Scene Categorization with a Wave of Spikes. In H.H. Bulthoff et al. (eds.), Biologically Motivated Computer Vision, Lecture Notes in Computer Science, 2002, 2525, pp. 1-15, Springer-Verlag, Berlin. |
Thorpe, S.J., et al., (2001). Spike-based strategies for rapid processing. Neural Networks 14, pp. 715-725. |
Van Rullen, R., et al., (2003), Is perception discrete or continuous? Trends in Cognitive Sciences 7(5), pp. 207-213. |
Van Rullen, R., et al., (2005). Spike times make sense. Trends in Neurosciences 28(1). |
Van Rullen, R., et al., Rate Coding versus temporal order coding: What the Retinal ganglion cells tell the visual cortex. Neural computation, 2001, 13, 1255-1283. |
Wallis, G., et al., A model of invariant object recognition in the visual system, Progress in Neurobiology. 1997, 51, 167-194. |
Wang ‘The time dimension for scene analysis.’ Neural Networks, IEEE Transactions on 16.6 (2005): 1401-1426. |
Wiskott, L., et al., Slow feature analysis: Unsupervised learning of invariances, Neural Computation, 2002, 14, (4), 715-770. |
Wysoski et al., “Fast and Adaptive Network of Spiking Neurons for Multi-view Visual Pattern Recognition”, May 3, 2008, Elsevier, Neurocomputing vol. 71, pp. 2563-2575. |
Zarandy et al., “Bi-i: A Standalone Ultra High Speed Cellular Vision System” [online], dated Jun. 13, 2005, retrieved on Aug. 16, 2012. Retrieved from the Internet at URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1438738&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1438738. |
Number | Date | Country
---|---|---
20160004923 A1 | Jan. 2016 | US