The present invention relates to a signal processor and, more particularly, to a signal processor for complex signal denoising in the ultra-wide instantaneous bandwidth.
Noise reduction, or denoising, is the process of removing noise from a signal. Noise reduction techniques exist for audio and images. All signal processing devices, both analog and digital, have traits that make them susceptible to noise. Noise can be random or white noise with an even frequency distribution, or frequency-dependent noise introduced by a device's mechanism or signal processing algorithms.
Current systems, such as conventional channelizers, operate over a smaller frequency band and require a large latency to achieve processing results. A channelizer is a term used for algorithms that select a certain frequency band from an input signal. The input signal typically has a higher sample rate than the sample rate of the selected channel. A channelizer is also used for algorithms that select multiple channels from an input signal in an efficient manner. Additionally, current machine learning approaches to signal processing require large quantities of online/offline training data.
Thus, a continuing need exists for a system that does not require any pre-training and enables real-time signal denoising in the ultra-wide bandwidth for both real-valued and complex input signals.
The present invention relates to a signal processor and, more particularly, to a Neuromorphic Adaptive Core (NeurACore) signal processor for complex signal denoising in ultra-wide instantaneous bandwidth. The NeurACore signal processor comprises a digital signal pre-processing unit operable for performing cascaded decomposition of a wideband complex valued In-phase and Quadrature-phase (I/Q) input signal in real time. The wideband complex valued I/Q input signal is decomposed into I and Q sub-channels. The NeurACore signal processor further comprises a NeurACore and local learning layers operable for performing high-dimensional projection of the wideband complex valued I/Q input signal into a high-dimensional state space. The NeurACore signal processor also comprises a global learning layer operable for performing a gradient descent online learning algorithm, and a neural combiner operable for combining outputs of the global learning layer to compute signal predictions corresponding to the wideband complex valued I/Q input signal.
In another aspect, the cascaded decomposition is a multi-layered I/Q decomposition scheme, wherein for each layer, a sample rate of the layer is reduced by half compared to a preceding layer in the cascaded decomposition.
In another aspect, the cascaded decomposition is a three layer I/Q decomposition scheme, and wherein the gradient descent online learning algorithm is an eight-dimensional gradient descent online learning algorithm.
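Stated as a formula (a restatement of the aspects above, with fs denoting the input sample rate and L the number of decomposition layers):

$$f_{s,\mathrm{cas}} = \frac{f_s}{2^{L}}, \qquad L = 1, 2, 3, \ldots$$

so that a three-layer decomposition yields a per-sub-channel sample rate of fs/8 and 2^3 = 8 correlated sub-channel components, which is why the corresponding learning algorithm is eight-dimensional.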
In another aspect, the gradient descent online learning algorithm uses eight-dimensional state variables and weight matrices by cross coupling the eight-dimensional state variables in weights update equations and output layer update equations.
In another aspect, the digital signal pre-processing unit is further operable for implementing blind source separation (BSS) and feature extraction algorithms with updates to interpret denoised eight-dimensional state variables.
In another aspect, the NeurACore comprises high-dimensional signal processing nodes with adaptable parameters.
Finally, the present invention also includes a computer program product and a computer implemented method. The computer program product includes computer-readable instructions stored on a non-transitory computer-readable medium that are executable by a computer having one or more processors, such that upon execution of the instructions, the one or more processors perform the operations listed herein. Alternatively, the computer implemented method includes an act of causing a computer to execute such instructions and perform the resulting operations.
The objects, features and advantages of the present invention will be apparent from the following detailed descriptions of the various aspects of the invention in conjunction with reference to the following drawings, where:
The present invention relates to a signal processor and, more particularly, to a signal processor for complex signal denoising in the ultra-wide instantaneous bandwidth. The following description is presented to enable one of ordinary skill in the art to make and use the invention and to incorporate it in the context of particular applications. Various modifications, as well as a variety of uses in different applications will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of aspects. Thus, the present invention is not intended to be limited to the aspects presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without necessarily being limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
The reader's attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All the features disclosed in this specification, (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
Furthermore, any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112, Paragraph 6. In particular, the use of “step of” or “act of” in the claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph 6.
(1) Principal Aspects
Various embodiments of the invention include three “principal” aspects. The first is a system for complex signal denoising. The system is typically in the form of a computer system operating software or in the form of a “hard-coded” instruction set. This system may be incorporated into a wide variety of devices that provide different functionalities. The second principal aspect is a method, typically in the form of software, operated using a data processing system (computer). The third principal aspect is a computer program product. The computer program product generally represents computer-readable instructions stored on a non-transitory computer-readable medium such as an optical storage device, e.g., a compact disc (CD) or digital versatile disc (DVD), or a magnetic storage device such as a floppy disk or magnetic tape. Other, non-limiting examples of computer-readable media include hard disks, read-only memory (ROM), and flash-type memories. These aspects will be described in more detail below.
A block diagram depicting an example of a system (i.e., computer system 100) of the present invention is provided in
The computer system 100 may include an address/data bus 102 that is configured to communicate information. Additionally, one or more data processing units, such as a processor 104 (or processors), are coupled with the address/data bus 102. The processor 104 is configured to process information and instructions. In an aspect, the processor 104 is a microprocessor. Alternatively, the processor 104 may be a different type of processor such as a parallel processor, application-specific integrated circuit (ASIC), programmable logic array (PLA), complex programmable logic device (CPLD), or a field programmable gate array (FPGA).
The computer system 100 is configured to utilize one or more data storage units. The computer system 100 may include a volatile memory unit 106 (e.g., random access memory (“RAM”), static RAM, dynamic RAM, etc.) coupled with the address/data bus 102, wherein the volatile memory unit 106 is configured to store information and instructions for the processor 104. The computer system 100 further may include a non-volatile memory unit 108 (e.g., read-only memory (“ROM”), programmable ROM (“PROM”), erasable programmable ROM (“EPROM”), electrically erasable programmable ROM (“EEPROM”), flash memory, etc.) coupled with the address/data bus 102, wherein the non-volatile memory unit 108 is configured to store static information and instructions for the processor 104. Alternatively, the computer system 100 may execute instructions retrieved from an online data storage unit such as in “Cloud” computing. In an aspect, the computer system 100 also may include one or more interfaces, such as an interface 110, coupled with the address/data bus 102. The one or more interfaces are configured to enable the computer system 100 to interface with other electronic devices and computer systems. The communication interfaces implemented by the one or more interfaces may include wireline (e.g., serial cables, modems, network adaptors, etc.) and/or wireless (e.g., wireless modems, wireless network adaptors, etc.) communication technology.
In one aspect, the computer system 100 may include an input device 112 coupled with the address/data bus 102, wherein the input device 112 is configured to communicate information and command selections to the processor 104. In accordance with one aspect, the input device 112 is an alphanumeric input device, such as a keyboard, that may include alphanumeric and/or function keys.
Alternatively, the input device 112 may be an input device other than an alphanumeric input device. In an aspect, the computer system 100 may include a cursor control device 114 coupled with the address/data bus 102, wherein the cursor control device 114 is configured to communicate user input information and/or command selections to the processor 104. In an aspect, the cursor control device 114 is implemented using a device such as a mouse, a track-ball, a track-pad, an optical tracking device, or a touch screen. The foregoing notwithstanding, in an aspect, the cursor control device 114 is directed and/or activated via input from the input device 112, such as in response to the use of special keys and key sequence commands associated with the input device 112. In an alternative aspect, the cursor control device 114 is configured to be directed or guided by voice commands.
In an aspect, the computer system 100 further may include one or more optional computer usable data storage devices, such as a storage device 116, coupled with the address/data bus 102. The storage device 116 is configured to store information and/or computer executable instructions. In one aspect, the storage device 116 is a storage device such as a magnetic or optical disk drive (e.g., hard disk drive (“HDD”), floppy diskette, compact disk read only memory (“CD-ROM”), digital versatile disk (“DVD”)). Pursuant to one aspect, a display device 118 is coupled with the address/data bus 102, wherein the display device 118 is configured to display video and/or graphics. In an aspect, the display device 118 may include a cathode ray tube (“CRT”), liquid crystal display (“LCD”), field emission display (“FED”), plasma display, or any other display device suitable for displaying video and/or graphic images and alphanumeric characters recognizable to a user.
The computer system 100 presented herein is an example computing environment in accordance with an aspect. However, the non-limiting example of the computer system 100 is not strictly limited to being a computer system. For example, an aspect provides that the computer system 100 represents a type of data processing analysis that may be used in accordance with various aspects described herein. Moreover, other computing systems may also be implemented. Indeed, the spirit and scope of the present technology is not limited to any single data processing environment. Thus, in an aspect, one or more operations of various aspects of the present technology are controlled or implemented using computer-executable instructions, such as program modules, being executed by a computer. In one implementation, such program modules include routines, programs, objects, components and/or data structures that are configured to perform particular tasks or implement particular abstract data types. In addition, an aspect provides that one or more aspects of the present technology are implemented by utilizing one or more distributed computing environments, such as where tasks are performed by remote processing devices that are linked through a communications network, or such as where various program modules are located in both local and remote computer-storage media including memory-storage devices.
An illustrative diagram of a computer program product (i.e., storage device) embodying the present invention is depicted in
(2) Specific Details of Various Embodiments of the Invention
Described is an implementation of the Ultra-Wide Instantaneous Bandwidth (IBW) Neuromorphic Adaptive Core (NeurACore) processor, used for the denoising of real-valued and complex In-Phase and Quadrature-Phase (I/Q) signals (i.e., signals in ultra-wide IBW). IBW refers to the bandwidth in which all frequency components can be simultaneously analyzed. The term “real-time bandwidth” is often used interchangeably with IBW to describe the maximum continuous radio frequency (RF) bandwidth that an instrument generates or acquires. A real-valued signal is a complex signal in which the imaginary components of all the complex values are strictly zero. Real-valued signals have one degree of freedom. Complex signals are often used to represent signals, or data, with two degrees of freedom, such as magnitude and phase, or kinetic and potential energy.
The invention described herein is a system for real-time, real-valued and complex I/Q signal denoising in ultra-wide IBW with processor clock speed that is lower than the data sampling rate. The denoiser according to embodiments of the present disclosure provides detection and denoising capabilities for complex (I/Q) signals, including low probability of intercept (LPI), low probability of detection (LPD), and frequency hopping signals. An LPI radar is a radar employing measures to avoid detection by passive radar detection equipment (such as a radar warning receiver (RWR), or electronic support receiver) while it is searching for a target or engaged in target tracking. LPI and LPD allow an active acoustic source to be concealed or camouflaged so that the signal is essentially undetectable. Frequency-hopping spread spectrum (FHSS) is a method of transmitting radio signals by rapidly changing the carrier frequency among many distinct frequencies occupying a large spectral band, which is divided into sub-bands. Signals rapidly change, or hop, their carrier frequencies among the center frequencies of these sub-bands in a predetermined order.
Additionally, the denoiser improves the signal-to-noise ratio (SNR) performance by >20 decibels (dB) for a variety of different waveforms, as will be described in detail below. Key advantages of the present invention compared to current state-of-the-art systems are the ultra-low latency detection and denoising of wideband input signals. Comparable systems, like a conventional channelizer, would operate over a smaller frequency band and likely require larger latency to achieve the same processing results. Additionally, the system enables detection and denoising of fast frequency hopping signals that cannot be achieved with current frequency channelization-based systems. While current machine learning approaches would require large quantities of online/offline training data, the system described herein does not require any pre-training.
The ultra-wide IBW for the real-time digital signal processing system (i.e., NeurACore) according to embodiments of the present disclosure is defined as the case where the incoming signal's sample rate is larger than the digital signal processor's (i.e., NeurACore) clock speed. Here, it is assumed that the input signal is uniformly sampled with a sampling clock whose clock speed is fs (sampling frequency, or sampling rate). The samples are quantized and fed to the digital signal processor whose clock rate is fc. In computing, the clock rate refers to the frequency at which the clock generator of a processor can generate pulses, which are used to synchronize the operation of its components. The clock rate is used as an indicator of the processor's speed and is measured in clock cycles per second. The heart of the invention described herein is a cascaded I/Q decomposition that enables NeurACore to achieve an instantaneous bandwidth (IBW) that is significantly higher than the clock speed of the digital processor. The system described herein, which is an ultra-wide IBW real-time denoiser (where fs>fc), greatly improves SWAP (size, weight, and power) of hardware over comparable systems with the same performance, such as a conventional channelizer. In the present invention, the sampling rate is twice the IBW. The input signal with a channelizer typically has a higher sample rate than the sample rate of the selected channel.
The NeurACore processor architecture, depicted in
As described above, a unique concept of the invention is the cascaded I/Q signal decomposition and related signal processing algorithms where the wideband complex valued (I/Q) input signal 302 is further decomposed into I and Q sub-channels (i.e., I of I, Q of I, I of Q and Q of Q) for a two layer I/Q decomposition scheme. The advantage of the two layer I/Q signal decomposition is that the sample rate of the four correlated sub-channels is reduced by a factor of two compared to the sample rate of the 1st layer I/Q decomposed input signal 310. This cascading operation can be continued until the condition fscas<=fc is satisfied. Here, fscas is the required sample rate for the time series data at the last cascading layer. For every new cascading layer, the sample rate is reduced by half (i.e., fscas=fs/2 for a single layer, fscas=fs/4 for two layers, fscas=fs/8 for three layers), where fs is the sample rate of the real-valued input signal.
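As an illustration of the sample-rate bookkeeping described above, the following Python sketch models one I/Q decomposition layer as a conventional digital quadrature downconversion (mix with a quadrature local oscillator, anti-alias filter, decimate by two) and cascades it twice. The local-oscillator choice, filtering, and software form are assumptions made purely for illustration; they are not a disclosure of the NeurACore pre-processing hardware.

```python
import numpy as np
from scipy.signal import decimate

def iq_decompose(x, fs, f_lo=None):
    """One I/Q decomposition layer (illustrative sketch): mix a real-valued stream
    with a quadrature local oscillator and decimate by 2, yielding I and Q
    sub-channels at half the input sample rate."""
    if f_lo is None:
        f_lo = fs / 4.0                     # assumed band-center LO (not specified herein)
    n = np.arange(len(x))
    i = x * np.cos(2 * np.pi * f_lo * n / fs)
    q = -x * np.sin(2 * np.pi * f_lo * n / fs)
    # decimate() applies an anti-alias filter before keeping every other sample
    return decimate(i, 2), decimate(q, 2), fs / 2.0

# Two-layer cascade: one real input becomes four correlated sub-channels at fs/4.
fs = 600e6                                  # hypothetical input sample rate (samples/s)
x = np.random.randn(4096)                   # stand-in for the sampled wideband input
i1, q1, fs1 = iq_decompose(x, fs)           # layer 1: I and Q at fs/2
ii, qi, fs2 = iq_decompose(i1, fs1)         # layer 2: I of I, Q of I at fs/4
iq, qq, _ = iq_decompose(q1, fs1)           # layer 2: I of Q, Q of Q at fs/4
print(fs1 / 1e6, fs2 / 1e6)                 # 300.0, 150.0 -> sample rate halves per layer
```

The only point of the sketch is that each additional layer halves the per-sub-channel sample rate while doubling the number of correlated sub-channels; the denoising itself is performed downstream by the NeurACore and its learning layers.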
(2.1) Ultra-Wideband NeurACore Architecture
The basic innovation that enables the ultra-wideband NeurACore architecture is shown in
These cascaded I/Q signal decompositions can be continued to many levels, ensuring that the actual digital signal processor clock speed can always be larger than the sample rate of the last decomposition level (i.e., the sample rate of the final I/Q decomposition is less than fclock). For example, an existing NeurACore hardware implementation (disclosed in U.S. application Ser. No. 17/375,724, which is hereby incorporated by reference as though fully set forth herein) operated at 300 MSps (mega samples per second), so a three layer I/Q signal decomposition will ensure processing signals with >1 gigahertz (GHz) IBW. This means that most of the internal signals used in the Core and Blind Source Separation (BSS) will be eight-dimensional, so one must use an eight-dimensional gradient descent online learning algorithm, as described in detail below.
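As a worked check of the layer count cited above, the short routine below computes the smallest number of cascade layers satisfying the condition fscas <= fc. The 2 GSps input rate is a hypothetical figure inferred from the earlier statement that the sampling rate is twice the IBW; it is used only to illustrate the arithmetic.

```python
def cascade_layers_needed(fs, fc):
    """Smallest number of I/Q decomposition layers L such that fs / 2**L <= fc."""
    layers, fs_cas = 0, float(fs)
    while fs_cas > fc:
        layers += 1
        fs_cas /= 2.0
    return layers, fs_cas

# Hypothetical numbers consistent with the text: a 300 MSps processor clock and a
# >1 GHz IBW target, i.e., an input sample rate of roughly 2 GSps (fs = 2 * IBW).
layers, fs_cas = cascade_layers_needed(fs=2.0e9, fc=300.0e6)
print(layers, fs_cas / 1e6)   # -> 3 layers, 250.0 MSps per sub-channel
```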
The following are references that describe the use of the gradient descent online learning algorithm for updating output layer weights for an online learning system: M. Lukosevicius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training”, Computer Science Review, 2009, and Jing Dai, et al., “An Introduction to the Echo State Network and its Applications in Power System”, 15th International Conference on Intelligent System Applications to Power Systems, 2009, both of which are hereby incorporated by reference as though fully set forth herein. The system according to embodiments of the present disclosure improves upon these approaches by extending the signal processing bandwidth beyond the clock speed of the processor (fs>fc), where the real-time denoising processing algorithm is implemented, and extending the basic gradient descent online learning algorithm into the cascaded I/Q decomposed signal domain. By utilizing additional learning layers, such as the global learning layer, along with the extended capability of the neuromorphic adaptive core 304, the present invention enables real-time signal denoising in ultra-wide bandwidth for both real-valued and complex (I/Q) input signals.
The most challenging aspect of the unique cascaded I/Q decomposition-based signal processing concept is to design the state space models for the nodes in the neuromorphic adaptive core that behave the same way as the nodes in the current, not cascaded, design (i.e., passive resonators with adaptable Q-values and resonant frequencies). In the current design, the standard two-dimensional state space models are used for these passive resonators that must be abstracted to high-dimensional models, assuming that their state space models will be driven by the cascaded I/Q decomposed high dimensional signals. In other words, one needs to design an abstract high-dimensional oscillator array with adaptable parameters. HRL Laboratories, LLC has developed and verified, by analysis and MatLab simulations, such abstract high dimensional signal processing nodes that form the key building blocks for the ultra-wide bandwidth NeurACore, as described in U.S. application Ser. No. 17/375,724.
The other significant algorithm change in the ultra-wide bandwidth NeurACore design compared to the previously disclosed, not cascaded, version is the online learning algorithm that must utilize the high-dimensional state variables from the core in the online learning/adaptation process. HRL Laboratories, LLC has developed such a gradient descent online learning algorithm, which is currently utilized in the existing NeurACore field-programmable gate array (FPGA) hardware prototype, as described in U.S. application Ser. No. 17/375,724. In the current hardware implementation, the learning algorithm utilizes two-dimensional (I and Q) state variables along with two-dimensional weight matrices. For the ultra-wide bandwidth NeurACore architecture described herein, the learning algorithm is extended to eight-dimensional state variables and weight matrices by properly cross-coupling the eight-dimensional state variables in the weight update and output update equations.
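As background for what such an extension can look like in code, the sketch below shows a generic gradient-descent (LMS-style) online update of output-layer weights in which the state vector stacks all eight sub-channel components of every core node, so that a single weight matrix cross-couples the eight-dimensional state variables. All dimensions are hypothetical and the sketch is a minimal stand-in for illustration; the actual NeurACore weight-update and output-update equations are those described in U.S. application Ser. No. 17/375,724 and are not reproduced here.

```python
import numpy as np

def online_readout_update(W, state, target, mu=1e-3):
    """Generic gradient-descent (LMS-style) update of output-layer weights for an
    online-learning readout; illustrative only, not the patented equations."""
    y = W @ state                       # output-layer prediction
    err = target - y                    # instantaneous prediction error
    W += mu * np.outer(err, state)      # gradient step on the squared error
    return W, y, err

# Toy dimensions: 100 core nodes, 8 sub-channel components per node (three-layer
# cascade), and an 8-dimensional predicted output.
n_nodes, D = 100, 8
W = np.zeros((D, n_nodes * D))
state = np.random.randn(n_nodes * D)    # stand-in for the high-dimensional core state
target = np.random.randn(D)             # stand-in for the desired (denoised) signal sample
W, y, err = online_readout_update(W, state, target)
```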
Optionally, the disclosed ultra-wide IBW NeurACore can be extended with Blind Source Separation (BSS) and feature extraction algorithms that also need to be updated to properly interpret the denoised eight-dimensional state variables in order to accurately separate the unknown signal(s) from the signal mixture where all signals are represented by eight-dimensional state variables. Energy and phase maps for the real-time spectrogram are generated from the eight-dimensional state variables. At this stage of the processing, one can convert the abstract high-dimensional energy and phase maps back into conventional spectrogram image(s) for the classification algorithm that is trained on conventional spectrogram images. However, the processing scheme described herein enables new classification approaches, where the classifier (e.g., deep learning neural network) can be trained directly on the high-dimensional energy and phase maps. It is believed that the high-dimensional energy and phase maps contain significantly more unique features about the signals than conventional spectrograms and will enable significantly improved classification performance.

To increase the bandwidth of the NeurACore beyond the clock speed of the digital processor (i.e., fsampling>>fclock), cascaded In-phase and Quadrature-phase (I/Q) signal decomposition and related signal processing algorithms are utilized. The wideband, complex-valued (I/Q) input signal is further decomposed into I and Q sub-channels (i.e., I of I, Q of I, I of Q and Q of Q) for a two layer I/Q decomposition, as shown in
A MatLab simulation example showing how a single tone at 149.9 MHz will be translated to −0.1 MHz at the second decomposition level (312 in
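Although the referenced MatLab figure is not reproduced here, the stated translation can be checked numerically. The Python sketch below assumes an input sample rate of 600 MSps and a net downconversion of 150 MHz across the first two decomposition layers; these values are assumptions chosen so that the arithmetic matches the quoted result (149.9 MHz − 150 MHz = −0.1 MHz) and do not describe the actual simulation parameters.

```python
import numpy as np
from scipy.signal import decimate

fs, f_tone, f_lo = 600e6, 149.9e6, 150e6        # assumed sample rate, input tone, net LO
n = np.arange(65536)
x = np.cos(2 * np.pi * f_tone * n / fs)         # real-valued input tone at 149.9 MHz

# Net effect of two I/Q layers modeled as one quadrature downconversion by 150 MHz
# followed by decimation by 4 (each decimate() stage low-pass filters, then halves).
i = decimate(decimate(x * np.cos(2 * np.pi * f_lo * n / fs), 2), 2)
q = decimate(decimate(-x * np.sin(2 * np.pi * f_lo * n / fs), 2), 2)
z = i + 1j * q                                  # complex baseband at fs/4 = 150 MSps

f_axis = np.fft.fftfreq(len(z), d=4 / fs)
peak_mhz = f_axis[np.argmax(np.abs(np.fft.fft(z)))] / 1e6
print(round(peak_mhz, 3))                       # approximately -0.1 (MHz)
```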
The online learning algorithm must utilize the high-dimensional state variables from the core in the online learning/adaptation process. For the ultra-wide bandwidth NeurACore architecture that can achieve >1 GHz IBW, the learning algorithm is extended to eight-dimensional state variables and weight matrices by properly cross-coupling the eight-dimensional state variables in the weight update and output update equations. The BSS and feature extraction algorithms are also updated to properly interpret the detected and denoised eight-dimensional state variables in order to accurately separate the unknown signal(s) from the signal mixture where all signals are represented by eight-dimensional state variables. Energy and phase maps for the real-time spectrogram are generated from the eight-dimensional state variables.
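One simple way to visualize this last step is sketched below: per-node in-phase and quadrature state components are combined into a complex value per node and per time step, from which energy (squared magnitude) and phase (argument) maps are formed as the raw material for a spectrogram-like image. How the eight sub-channel components are actually recombined per node is an assumption of this sketch and is not specified here.

```python
import numpy as np

def energy_phase_maps(i_states, q_states):
    """Form energy and phase maps (node index x time step) from per-node I/Q state
    components; illustrative only."""
    z = i_states + 1j * q_states        # complex state per node, per time step
    return np.abs(z) ** 2, np.angle(z)  # energy map, phase map

# Toy example: 100 resonator nodes observed over 512 time steps.
i_states = np.random.randn(100, 512)
q_states = np.random.randn(100, 512)
energy_map, phase_map = energy_phase_maps(i_states, q_states)
```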
During the development of this concept, a systematic, gradual approach was utilized in validating the cascaded I/Q decomposition for NeurACore by first focusing on the two-layer version, for which most internal signals are four-dimensional and which can achieve 2*fs IBW. The two-layer case was validated with four-dimensional state variables, gradient descent equations, and a global learning layer for the output weights. The extensions of the BSS and feature extraction algorithms to the two-layer cascaded I/Q formulation were also validated. The lessons learned from the validation of the two-layer version can be incorporated into the generalized three layer and beyond cascaded I/Q algorithm formulation and implementation that will achieve >4*fs IBW.
Many commercial and military signal processing platforms require small size, ultra-wide bandwidth operation, ultra-low C-SWaP (cost, size, weight, and power) signal processing units, and artificial intelligence enhanced with real-time signal processing capability. This includes, but is not limited to, radar, communication, acoustic, audio, video, and optical waveforms.
(2.2) Control of a Device
As shown in
In some embodiments, a drone or other autonomous vehicle may be controlled to move to an area where an object is determined to be based on the imagery. In yet other embodiments, a camera may be controlled to orient towards the identified object. In other words, actuators or motors are activated to cause the camera (or sensor) to move or zoom in on the location where the object is localized. In yet another aspect, if a system is seeking a particular object and if the object is not determined to be within the field-of-view of the camera, the camera can be caused to rotate or turn to view other areas within a scene until the sought-after object is detected.
In addition, consider a non-limiting example of an autonomous vehicle having multiple sensors, such as cameras, whose signals might be noisy and need denoising. The system can denoise the signal and then, based on the signal, cause the autonomous vehicle to perform a vehicle operation. For instance, if two vehicle sensors detect the same object, object detection and classification accuracy is increased and the system described herein can cause a precise vehicle maneuver for collision avoidance by controlling a vehicle component. For example, if the object is a stop sign, the system may denoise a noisy input signal to identify the stop sign and then may cause the autonomous vehicle to apply a functional response, such as a braking operation, to stop the vehicle. Other appropriate responses may include one or more of a steering operation, a throttle operation to increase speed or to decrease speed, or a decision to maintain course and speed without change. The responses may be appropriate for avoiding a collision, improving travel speed, or improving efficiency. Non-limiting examples of devices that can be controlled via the NeurACore include a vehicle or a vehicle component, such as a brake, a steering mechanism, suspension, or a safety device (e.g., airbags, seatbelt tensioners, etc.). Further, the vehicle could be an unmanned aerial vehicle (UAV), an autonomous ground vehicle, or a human operated vehicle controlled either by a driver or by a remote operator. As can be appreciated by one skilled in the art, control of other device types is also possible.
Finally, while this invention has been described in terms of several embodiments, one of ordinary skill in the art will readily recognize that the invention may have other applications in other environments. It should be noted that many embodiments and implementations are possible. Further, the following claims are in no way intended to limit the scope of the present invention to the specific embodiments described above. In addition, any recitation of “means for” is intended to evoke a means-plus-function reading of an element in a claim, whereas any elements that do not specifically use the recitation “means for”, are not intended to be read as means-plus-function elements, even if the claim otherwise includes the word “means”. Further, while particular method steps have been recited in a particular order, the method steps may occur in any desired order and fall within the scope of the present invention.
This is a Continuation-in-Part application of U.S. application Ser. No. 17/375,724, filed in the United States on Jul. 14, 2021, entitled, “Low Size, Weight and Power (SWAP) Efficient Hardware Implementation of a Wide Instantaneous Bandwidth Neuromorphic Adaptive Core (NeurACore)”, which is a Non-Provisional Application of U.S. Provisional Application No. 63/051,877, filed on Jul. 14, 2020 and U.S. Provisional Application No. 63/051,851, filed on Jul. 14, 2020, the entireties of which are hereby incorporated by reference. This is also a Non-Provisional Application of U.S. Provisional Application No. 63/150,024, filed in the United States on Feb. 16, 2021, entitled, “Ultra-Wide Instantaneous Bandwidth Complex Neuromorphic Adaptive Core Processor,” the entirety of which is incorporated herein by reference.
Other Publications
Childers, R. Varga and N. Perry, “Composite signal decomposition,” in IEEE Transactions on Audio and Electroacoustics, vol. 18, No. 4, pp. 471-477, Dec. 1970, doi: 10.1109/TAU. 1970.1162135. (Year: 1970). |
H. Syed, R. Bryla, U. Majumder and D. Kudithipudi, “Toward Near-Real-Time Training With Semi-Random Deep Neural Networks and Tensor-Train Decomposition,” in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 8171-8179, 2021, doi: 10.1109/JSTARS.2021.3096195. (Year: 2021). |
A. Irmanova, O. Krestinskaya and A. P. James, “Neuromorphic Adaptive Edge-Preserving Denoising Filter,” 2017 IEEE International Conference on Rebooting Computing (ICRC), Washington, DC, USA, 2017, pp. 1-6, doi: 10.1109/ICRC.2017.8123644. (Year: 2017). |
N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis and C. Faloutsos, “Tensor Decomposition for Signal Processing and Machine Learning,” in IEEE Transactions on Signal Processing, vol. 65, No. 13, pp. 3551-3582, 1 Jul. 1, 2017, doi: 10.1109/TSP.2017.2690524. (Year: 2017). |
S. Choi, A. Cichocki, H.-M. Park, and S.-Y. Lee, “Blind Source Separation and Independent Component Analysis: A Review,” Neural Information Processing—Letters, vol. 6, No. 1, Jan. 2005, pp. 1-57. |
A. Cichocki and A. Belouchrani, “Sources separation of temporally correlated sources from noisy data using a bank of band-pass filters,” in Proc. of Independent Component Analysis and Signal Separation (ICA-2001), pp. 173-178, San Diego, USA, Dec. 9-13, 2001. |
A. Hyvarinen, “Complexity Pursuit: Separating Interesting Components from Time Series,” Neural Computation, vol. 13, No. 4, pp. 883-898, Apr. 2001. |
Igel, C. and Husken, M., "Improving the Rprop learning algorithm", in Proc. of the 2nd Int. Symposium on Neural Computation (NC'2000), pp. 115-121, ICSC Academic Press, 2000. |
R. Legenstein, et al. “Edge of Chaos and Prediction of Computational Performance for Neural Microcircuit Models,” Neural Networks, 20(3), pp. 323-334, 2007. |
W. Maass, “Liquid Computing”, Proc. of the Conference CIE'07 : Computability in Europe 2007, Siena (Italy), pp. 507-516. |
F. Takens, “Detecting Strange Attractors in Turbulence,” Dynamical Systems and Turbulence, Lecture Notes in Mathematics vol. 898, 1981, pp. 366-381. |
D. Verstraeten, et al. “An experimental unification of reservoir computing methods”, Neural Networks, vol. 20, No. 3, Apr. 2007, pp. 391-403. |
R. H. Walden, “Analog-to-digital converter survey and analysis,” IEEE J. Sel. Areas Commun., vol. 51, pp. 539-548, 1999. |
H. Yap, et al., “A First Analysis of the Stability of Takens' Embedding,” in Proc. of the IEEE Global Conference on Signal and Information Processing (GlobalSIP) symposium on Information Processing for Big Data, Dec. 2014, pp. 404-408. |
Office Action 1 for U.S. Appl. No. 15/817,906, Date mailed: Feb. 23, 2018. |
Response to Office Action 1 for U.S. Appl. No. 15/817,906, Date mailed: May 23, 2018. |
Notice of Allowance for U.S. Appl. No. 15/817,906, Date mailed: Jul. 6, 2018. |
Notification of Transmittal of International Search Report and the Written Opinion of the International Searching Authority for PCT/US2017/062561 ; date of mailing Feb. 6, 2018. |
International Search Report of the International Searching Authority for PCT/US2017/062561; date of mailing Feb. 6, 2018. |
Written Opinion of the International Searching Authority for PCT/US2017/062561; date of mailing Feb. 6, 2018. |
Notification of International Preliminary Report on Patentability (Chapter I) for PCT/US2017/062561; date of mailing Aug. 1, 2019. |
International Preliminary Report on Patentability (Chapter I) for PCT/US2017/062561; date of mailing Aug. 1, 2019. |
M. Lukosevicius, H. Jaeger: "Reservoir computing approaches to recurrent neural network training", Computer Science Review, vol. 3, 2009, pp. 127-149. |
Jing Dai, et al.: “An Introduction to the Echo State Network and its Applications in Power System”, 2009 15th International Conference on Intelligent System Applications to Power Systems, IEEE, pp. 1-7. |
Office Action 1 for U.S. Appl. No. 17/375,724, Date mailed: Dec. 23, 2022. |
Response to Office Action 1 for U.S. Appl. No. 17/375,724, Date mailed: Mar. 22, 2023. |
Office Action 2 for U.S. Appl. No. 17/375,724, Date mailed: May 2, 2023. |
A. Irmanova, O. Krestinskaya and A. P. James, "Neuromorphic Adaptive Edge-Preserving Denoising Filter," 2017 IEEE International Conference on Rebooting Computing (ICRC), 2017, pp. 1-6, doi: 10.1109/ICRC.2017.8123644. (Year: 2017). |
Benjamin et al. Neurogrid—A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulation, IEEE 2014 (Year: 2014). |
Neuromorphic Computing with Intel's Loihi 2 chip—Technology Brief, 2021 (Year: 2021). |
Office Action 1 for Chinese Patent Application No. 201780078246.2, Dated: Dec. 3, 2020. |
English translation of Office Action 1 for Chinese Patent Application No. 201780078246.2, Date mailed: Dec. 3, 2020. |
Andrius Petrenas, “Reservoir Computing for Extraction of Low Amplitude Atrial Activity in Atrial Fibrillation”, Computing in Cardiology(CINC), pp. 13-16. |
Response to Office Action 1 for Chinese Patent Application No. 201780078246.2, Date filed Apr. 14, 2021. |
English translation of amended claims in Response to Office Action 1 for Chinese Patent Application No. 201780078246.2, Date filed Apr. 14, 2021. |
Office Action 2 for Chinese Patent Application No. 201780078246.2, Dated: Jul. 21, 2021. |
English translation of Office Action 2 for Chinese Patent Application No. 201780078246.2, Date mailed: Jul. 21, 2021. |
Response to Office Action 2 for Chinese Patent Application No. 201780078246.2, Date filed Sep. 13, 2021. |
English translation of amended claims in Response to Office Action 2 for Chinese Patent Application No. 201780078246.2, Date filed Sep. 13, 2021. |
Decision of Rejection for Chinese Patent Application No. 201780078246.2, Dated: Jan. 4, 2022. |
Request for Reexamination for Chinese Patent Application No. 201780078246.2, Filed Mar. 30, 2022. |
English translation of amended claims in Request for Reexamination for Chinese Patent Application No. 201780078246.2, Date filed Mar. 30, 2022. |
Reexamination Decision for Chinese Patent Application No. 201780078246.2, Dated May 6, 2022. |
Amendment for Chinese Patent Application No. 201780078246.2, Dated Jun. 20, 2022. |
English translation of amended claims in Amendment for Chinese Patent Application No. 201780078246.2, Date filed Jun. 20, 2022. |
Notice of Allowance for Chinese Patent Application No. 201780078246.2, Date filed Jul. 5, 2022. |
English translation of Notice of Allowance for Chinese Patent Application No. 201780078246.2, Date filed Jul. 5, 2022. |
Patent Certificate for Chinese Patent No. CN 110088635 B, Dated Sep. 20, 2022. |
English translation of the Patent Certificate for Chinese Patent No. CN 110088635 B, Dated Sep. 20, 2022. |
Communication pursuant to Rules 161 (2) and 162 EPC for European Regional Phase Patent Application No. 17892664.8, dated Aug. 27, 2019. |
Response to the communication pursuant to Rules 161(2) and 162 EPC for European Regional Phase Patent Application No. 17892664.8, dated Mar. 6, 2020. |
Communication pursuant to Rules 70(2) and 70a(2) EPC (the supplementary European search report) for the European Regional Phase Patent Application No. 17892664.8, dated Oct. 22, 2020. |
Andrius Petrenas, et al, “Reservoir computing for extraction of low amplitude atrial activity in atrial fibrillation,” Computing in Cardiology (CINC). 2012. IEEE. Sep. 9, 2012 (Sep. 9, 2012). pp. 13-16. XP032317043. ISBN: 978-1-4673-2076-4. |
Ali Deihimi, et al, “Application of echo state network for harmonic detection in distribution networks,” IET Generation. Transmission&Distribution. vol. 11. No. 5. Dec. 21, 2016 (Dec. 21, 2016). pp. 1094-1101 . XP055733455. |
Herbert Jaeger, "Controlling Recurrent Neural Networks by Conceptors," Technical Report No. 31, Jul. 22, 2016 (Jul. 22, 2016). XP055732541, Retrieved from the Internet: URL: https://arxiv.org/pdf/1403.3369v2.pdf [retrieved on Sep. 21, 2020]. |
Ozturk, et al, "An associative memory readout for ESNs with applications to dynamical pattern recognition," Neural Networks. Elsevier Science Publishers. Barking. GB. vol. 20. No. 3. Jun. 5, 2007 (Jun. 5, 2007). pp. 377-390. XP022104570. |
Response to the communication pursuant to Rules 70(2) and 70a(2) EPC (the supplementary European search report) for the European Regional Phase Patent Application No. 17892664.8, dated Apr. 22, 2021. |
Pathak et al., entitled, “Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model,” arXiv:1803.04779, 2018, pp. 1-9. |
M. Lukosevicius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training”, Computer Science Review, 2009, pp. 127-149. |
Jing Dai, et al., “An Introduction to the Echo State Network and its Applications in Power System”, 15th International Conference on Intelligent System Applications to Power Systems, 2009, pp. 1-7. |
Response to Office Action 2 for U.S. Appl. No. 17/375,724, Date mailed: Aug. 1, 2023. |
Notice of Allowance for U.S. Appl. No. 17/375,724, Date mailed: Aug. 21, 2023. |
Related U.S. Application Data (Provisional Applications):

Number | Date | Country
---|---|---
63150024 | Feb 2021 | US
63051877 | Jul 2020 | US
63051851 | Jul 2020 | US
Related U.S. Application Data (Continuation-in-Part):

Relation | Number | Date | Country
---|---|---|---
Parent | 17375724 | Jul 2021 | US
Child | 17579871 | | US