This application is a 371 National Phase of PCT Application No. PCT/GB2019/051170, filed on Apr. 26, 2019, which claims priority to Great Britain Patent Application No. 1806950.0, filed Apr. 27, 2018; each of these applications is herein incorporated by reference in its entirety.
The invention generally relates to machine learning convolutional neural networks. In particular, the invention relates to accelerating convolutional neural networks using optical correlation-based processing systems.
Convolutional neural networks (CNNs or ConvNets) are well known, having become the pre-eminent machine learning technique in image analysis. They are deep, feed-forward artificial neural networks which achieve state-of-the-art performance for image recognition and classification. The life of a ConvNet is typically split into training and inference. Training a large convolutional network, however, is very time consuming—it can take several weeks, even when using state-of-the-art Graphics Processing Units (GPUs). Some of the more complicated ConvNets take longer still to run, in both the training and the inference stage.
In ConvNets, the convolutional layers represent a very significant—often the majority—part of the computational load. Furthermore, increasing the resolution of the convolutions (increasing either the input size or the kernel size) imposes a significant further computational burden. This drives network configurations away from consisting of large numbers of high-resolution convolutional layers or causes them to modify the convolutional layers to reduce the computational burden.
Accelerating training and inference of ConvNets has been attempted digitally, using various algorithms implemented on, for example, GPU or FPGA architectures. However, further acceleration is highly desirable.
It is to this problem, amongst others, that embodiments of the present invention attempt to offer solutions.
In a first independent aspect, there is provided an optical processing system comprising at least one spatial light modulator, SLM, configured to simultaneously display a first input data pattern (a) and at least one data focusing pattern which is a Fourier domain representation (B) of a second input data pattern (b), the optical processing system further comprising a detector for detecting light that has been successively optically processed by said input data patterns and focusing data patterns, thereby producing an optical convolution of the first and second input data patterns, the optical convolution for use in a neural network.
The optical processing system comprises a 4f optical correlator, wherein the input data pattern (a) is in the input plane of the correlator and the data focusing pattern is in the Fourier plane. The data focusing pattern may comprise a convolutional kernel or filter. The SLMs have a dynamic modulating effect on impinging light. The SLMs may be parallel layers, for example, or they may be in the same plane, as described in PCT/GB2013/051778. In such 4f optical correlators, light from the optical input is incident on said displayed patterns and successively optically processed thereby, the input data patterns and focusing data patterns acting in succession along the optical path, before the light is captured at the detector.
The data focusing pattern is chosen to be a Fourier domain representation (B) of a second input data pattern (b). That is, the filter is computed from the second input data pattern to produce the required convolution for the ConvNet. For example, the data focusing pattern may be computed digitally.
The neural network may be a convolutional neural network (ConvNet), wherein the optical convolution is suitable for a convolutional layer of the neural network. Typically, the second input data pattern is referred to as a ‘kernel’ with which the first input data pattern is convolved.
Significantly, the optical correlator is a platform better suited to evaluating (2D) convolutions than known digital implementations. The optical approach provides improved performance over such methods.
In some embodiments, the first input data pattern (a) comprises a plurality (N) of tiled input data patterns—or ‘feature maps’—each of the plurality of tiled input data patterns corresponding to a member of a ‘batch’ of images being processed, and wherein a plurality of convolutions are produced in parallel for each of the plurality of tiled input data patterns. In some embodiments, the second input data pattern comprises a plurality (M) of tiled kernel data patterns, each of the plurality of tiled kernel data patterns corresponding to a distinct member of a set of filter kernels (b), and wherein a plurality of convolutions are produced in parallel for each pair (N×M) formed of said tiled input data patterns and tiled kernel data patterns.
Accordingly, input 2D images and/or 2D kernels may be tiled. By tiling, we mean that the tiles do not overlap each other and are within the span of a detector area. Accordingly, each ‘tile’ is of a smaller resolution than the detector resolution. This results in different ways of using a given hardware resolution to perform multiple, lower-resolution convolutions. Advantageously, therefore, by appropriately selecting tiled inputs and kernels, the full resolution of the detector (e.g. camera) can be exploited. Tiling both the inputs and the kernels, in order to fully utilise the convolution resolution, produces results across the whole detector (camera) plane. Advantageously, a plurality of convolutions are achieved in parallel, one for each input and kernel data pair (each of which may be a ‘tile’). The parallel convolutions may be ‘batched’; a batch is equivalent to a plurality of smaller Fourier transforms.
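This tiling scheme can be modelled digitally. The following NumPy sketch is illustrative only: it assumes an idealised 4f correlator (no SLM quantisation or noise), the resolutions are invented for the example, and the guard borders discussed later are omitted for brevity.

```python
import numpy as np

def optical_convolution(input_frame, filter_frame):
    # Idealised 4f correlator: Fourier transform the input, multiply by the
    # Fourier-plane filter, transform back; the camera records |c|^2.
    c = np.fft.ifft2(np.fft.fft2(input_frame) * filter_frame)
    return np.abs(c) ** 2

# Hypothetical hardware: a 1024x1024 input SLM carrying a 4x4 grid of
# 256x256 input tiles, all convolved with one shared small kernel.
SLM_RES, TILE, GRID = 1024, 256, 4
inputs = np.random.rand(GRID, GRID, TILE, TILE)   # 16 tiled 'feature maps'

# Tile the feature maps onto a single input-SLM frame.
frame = inputs.transpose(0, 2, 1, 3).reshape(SLM_RES, SLM_RES)

# Filter = Fourier-domain representation (B) of the zero-padded kernel (b).
b = np.zeros((SLM_RES, SLM_RES))
b[:3, :3] = np.random.rand(3, 3)                  # 3x3 kernel, padded
B = np.fft.fft2(b)

# One optical pass yields all 16 convolutions in parallel; un-tile them.
out = optical_convolution(frame, B)
tiles = out.reshape(GRID, TILE, GRID, TILE).transpose(0, 2, 1, 3)
```

In practice each tile would be separated by a guard border of roughly a kernel width, so that neighbouring results do not run into one another.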
Note that, in the case of tiling the kernels, they are tiled in the direct representation (b) with appropriate padding. They are then converted into a Fourier-domain filter representation (B). Due to the specifics of filter generation, this conversion is not perfect and can lead to crosstalk between the different operations.
Preferably, the optical processing system further comprises a processor for digitally obtaining the data focusing pattern (B) from the second input data pattern (b). Accordingly, the filter of the 4f optical correlator is computed digitally from the kernel. This is a relatively fast process due to low filter resolution (compared to optical detector resolution). Furthermore, once the net has been trained these patterns can be re-used; they do not need re-computing.
The processor may be configured to obtain the data focusing pattern using a minimum Euclidean distance (MED) method. This method assumes that the appropriate available SLM modulation value to use to represent a given complex value is the value which is nearest, by distance measured on the complex plane.
Advantageously, the 4f optical correlator may be employed to speed up the inference process of a ConvNet, the net having been trained on a conventional digital platform with knowledge of the idiosyncrasies of the optical system.
Alternatively, the 4f optical correlator may be incorporated into the net and used directly in the training process. The optical correlator is used during the forward propagation through the net, while conventional digital convolutions are used during backward propagation.
It should be appreciated that the restrictions on the filter image, due to properties of the spatial light modulator, limit the data focusing patterns (B) which can be displayed, and hence the corresponding convolutional kernels (b) that can be implemented. In certain embodiments, it is important that the maximum spatial frequency components of the data focusing pattern (B) are limited, as these determine the effective size of the corresponding kernel (b). Recall that the data focusing pattern (B) and the kernel (b) are linked by a discrete Fourier transform (DFT); this must be considered during the data focusing pattern design. Accordingly, the system controls the maximum spatial frequency content of the filter to limit the effective kernel width; that is, the size of the kernel and the distance beyond which points in the input cannot affect corresponding points in the output. This prevents undesired connectivity between different convolution operations; controlling this connectivity is key to successful operation.
In some embodiments, the second input data pattern (b) is a signed kernel having positive and negative components.
In some embodiments, the kernel is decomposed into 1) a positive-valued kernel p and 2) a uniform bias kernel b·1, such that f*k=f*(p−b·1), the signed convolution being reconstructed from the two positive-valued convolutions. This overcomes the fact that the detector (camera) cannot measure negative amplitudes.
In some embodiments, the SLMs are binary SLMs rather than multi-level SLMs. Binary SLMs have high bandwidth, providing high performance, but require the application to be adapted to the limitations of the devices.
In some embodiments, the system further comprises a lens or lens selection for adjusting magnification and thereby optically implementing pooling of the neural network.
In a further aspect, there is provided a method of producing an optical convolution, using an optical processing system as described above. Such a method may be used to convolve, or to convolve and pool a layer of a neural network. For example, the method may be used for training or inference of a convolutional neural network.
In a further aspect, there is provided the use of a system as described above in deep machine learning for image analysis.
In a second independent aspect, there is provided a method of configuring a neural network using a 4f optical correlator, the method comprising the steps of:
In some embodiments, the method further comprises the steps of tiling at least one of first and second input data patterns and producing a plurality of optical convolutions in parallel.
In some embodiments, different system components operate at different speeds. A camera operating slower than an input SLM is exposed across multiple frames in order to produce a result which is the sum of multiple optical convolutions.
It will be appreciated that all of the optional embodiments described with reference to the system aspects, apply to the methods.
The present inventors have realised that optical correlators may be utilised in specific ways to evaluate convolutions which are at the core of Convolutional Neural Nets (CNNs), or ConvNets. In other words, optical processing is being applied to deep learning processes, in particular to evaluate convolutions required by a ConvNet. This application is not trivial, however, as optical convolution has specific idiosyncrasies which require consideration. Specifically, the problems are related to limitations of the electro-optic and opto-electrical hardware available, as will be described in detail below.
The Optical Fourier Transform and Optical Correlators
Coherent optical processing systems are known. In coherent processing systems such as optical correlators, a laser or other coherent source is typically modulated in phase, in amplitude, or in a combination of the two, by one or more spatial light modulator (SLM) devices. SLMs are devices that have a dynamic modulating effect on impinging light. These typically incorporate liquid crystal devices but may also be micromirror microelectromechanical systems (MEMS) devices. Optical correlator devices are typically used as optical pattern recognition systems. In a 4f matched filter or Joint Transform Correlator (JTC) system, the SLM devices are addressed with functions that represent input or reference patterns (which can be images) and/or filter patterns, usually based upon Fourier transform representations of reference functions/patterns that are to be “matched” to the input function.
A camera such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) sensor is typically positioned in the output plane of the optical system to capture the resulting optical intensity distribution, which in the case of an optical correlator system may contain localised correlation intensities denoting the similarity and relative alignment of the input and reference functions.
Coherent optical information processing exploits the fact that a simple lens renders a Fourier transform. The most common function used in the type of coherent optical systems concerning both the prior art and the invention is the optical Fourier Transform (OFT)—the decomposition of a temporal or, in this case, spatial distribution into its frequency components. This is analogous to the pure form of the two-dimensional Fourier transform, denoted by the following equation:

F(u,v) = ∫∫ f(x,y) e^(−i2π(ux+vy)) dx dy

where (x,y) represent space/time variables and (u,v) are frequency variables.
The OFT may be achieved by the optical system shown in
The front focal plane (which may be referred to for clarity as the downstream focal plane) contains precisely the Fourier transform—both amplitude and phase—of a complex field found at the back focal plane (which may be referred to for clarity as the upstream focal plane). This is consistent with the fact that for a perfectly flat beam of infinite spatial extent, a single spot is obtained (the ‘DC’ term in Fourier theory).
In optical processing systems, the OFT may be employed as a direct replacement for the electronic/software-based Fast Fourier Transform (FFT) family of algorithms, offering significant advantages in terms of process time and resolution. This process may be used as the basis of a variety of functions.
Correlation between two or more functions may be achieved in an optical system in two main ways. The first way is a matched filter process, denoted by the following equation:
r∘g(x,y) = FT⁻¹[R*(u,v)·G(u,v)]

where upper-case functions represent the Fourier transforms of their lower-case equivalents, “*” indicates the complex conjugate of the adjacent function, and “∘” denotes the correlation operation.
The second way to achieve correlation is to use a Joint Transform Correlation process, such as the 1/f JTC described in EP1546838 (WO2004/029746).
In each case the correlation is formed as the inverse Fourier transform of the product of two functions which have themselves been Fourier transformed. In the case of a matched filter, one function is transformed optically by a lens and the other electronically, during the filter design process; in the case of the JTC, both are transformed optically.
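This matched-filter process is straightforward to model digitally. The following NumPy sketch is illustrative only: it assumes an idealised correlator with none of the SLM constraints discussed later, and the scene, reference pattern, and offset are invented for the example.

```python
import numpy as np

def matched_filter_correlation(g, r):
    # Correlation evaluated in the Fourier domain: FT^-1[conj(R) . G].
    # A correlation spot appears where the reference r aligns with the scene g.
    G = np.fft.fft2(g)
    R = np.fft.fft2(r, s=g.shape)      # zero-pad the reference to scene size
    return np.fft.ifft2(np.conj(R) * G)

# Toy scene containing the reference pattern at offset (10, 20).
rng = np.random.default_rng(0)
r = rng.random((8, 8))
g = np.zeros((64, 64))
g[10:18, 20:28] = r
c = matched_filter_correlation(g, r)
print(np.unravel_index(np.argmax(np.abs(c)), g.shape))   # -> (10, 20)
```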
For a matched filter process, the pattern displayed by the pixels of the first SLM 6 will be the “input scene” a, i.e. g(x,y), and the pattern displayed on the second SLM 8 will represent a version of the Fourier transform of the reference function r(x,y).
As shown in the accompanying figure, an SLM with complex pixel transmittance t(x,y), illuminated by an incident field A(x,y), produces the modulated field:

A(x,y)·t(x,y).
Input data a are placed at the front of the system. The filter 8 (B) effectively multiplies the optical field by a 2D function B. The camera sensor 10 images the output beam c2. The system performs the mathematical operation
c = F⁻¹{F{a}·B}

where c is in turn a complex amplitude function. (The second lens in reality performs a forward, rather than inverse, Fourier transform, but the net effect is a coordinate inversion compensated for by the camera sensor orientation.) The camera sensor measures the intensity of this field, I = |c|².
The convolution theorem uses the Fourier transform to effect the convolution (*) of two functions, f and g, by simple multiplication:

F{f*g} = F{f}·F{g}.
The present inventors have realised that the effect of the optical system is to evaluate a convolution

a * F⁻¹{B} = a*b.
One of the inputs to the correlation, a, is directly input into the optical system. The second input to the correlation, b, is processed digitally. This is performed off-line, producing B using a digital discrete Fourier transform followed by post-processing to map onto the available filter levels (appropriate use of the optical correlator incurs a small overhead in generating filter B from target b).
Accordingly, the present inventors acknowledge that correlation is tightly related to the convolution process, corresponding to reversing the coordinates of the function being convolved with. It is noted that for 2D images, a reversal in each coordinate is a 180° rotation of the image. These functions are defined below in 1D for clarity, though they naturally extend to 2D.
Convolution (*) of two functions ƒ(x) and g(x) is defined, in both discrete and continuous representations, as:
(f*g)(x) = Σᵢ f(i) g(x−i),

(f*g)(x) = ∫ f(χ) g(x−χ) dχ.
Correlation (∘) of the same two functions f(x) and g(x) is defined as:

(f∘g)(x) = Σᵢ f(i) g(x+i),

(f∘g)(x) = ∫ f(χ) g(x+χ) dχ.
From these definitions, it can be seen that the operations are interchangeable under coordinate reversal of one of the functions. This reversal is performed in the optical correlator by rotating the filter. Accordingly, for symmetric functions, correlations and convolutions are equivalent. Importantly, the present inventors have realised that an optical correlator may equally be thought of as an optical convolver.
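The equivalence under coordinate reversal can be checked directly, for example with NumPy's 1D routines:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.0, 1.0, 0.5])

# Correlation equals convolution with the coordinate-reversed function.
corr = np.correlate(f, g, mode="full")
conv_flipped = np.convolve(f, g[::-1], mode="full")
assert np.allclose(corr, conv_flipped)
```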
Correlation is, amongst other things, very useful for pattern matching applications. The process can be perceived as dragging one function over another and taking the dot-product (projection) between them at the set of all displacements. Two functions will have a large dot-product, and produce a ‘correlation spot’ optically, corresponding to locations where their displaced versions match.
Information is encoded into the optical beam by using SLMs. SLMs are essentially very small displays (indeed, some of the devices used in optical processing systems originate in display projectors). These devices use liquid crystal technology—combined with optical polarisers—to modulate the light beam. In general, the amplitude and relative phase of the light beam can be modulated. Different devices offer different capabilities. In general, devices can be split into two categories: multilevel SLMs and binary SLMs.
In the case of multilevel SLMs, each pixel on the device can be set to one of a number of different levels (normally 8-bit, or 256 levels). Depending on the optical polarisation, the device may modulate the amplitude or phase of the optical field in a different manner. The SLM is not capable of independently modulating the magnitude and phase of the optical field; in general, there is some coupled modulation. This modulation can be expressed by an ‘operating curve’: a path on the complex plane describing the accessible modulation.
Binary SLMs typically operate much faster than multilevel SLMs, at around 10 kHz. However, binary SLMs only offer two different modulation levels, which may be amplitude-modulating, phase-modulating or both. Despite only offering two levels, the much higher speed means that binary SLMs are the highest-bandwidth devices.
It is noted that, if used as a filter SLM, a binary SLM is not restricted to representing a binary convolution kernel as the Fourier transform of a binary function is not necessarily binary. However, a binary filter does restrict the ability to control the spatial frequency content of the filter.
Convolutional Neural Networks (ConvNets)
An example schematic of a 2D ConvNet is summarised in
ConvNets typically comprise a number of different layers chained together, including: convolutional layers, which apply sets of filter kernels to the incoming feature maps; non-linear activation layers (e.g. ReLU); pooling layers, which down-sample the feature maps; and fully-connected layers, which produce the final classification.
Whilst it will be appreciated that there are many nuances to ConvNets, only a high-level overview of a canonical architecture is summarised here. These layers can be combined in a myriad of different ways. Configuring neural networks correctly, with good performance and without excessive computational demand, is one of the key challenges of the field.
At each layer (aside from the fully-connected layers) the state of the network being passed forward is a 3D object (or 4D when including the batch dimension), consisting of a set of x,y ‘feature maps’ due to the application of different convolutional kernels.
Within the layers there are further configuration options. For example, there are many ways the convolutional layer can combine the different feature maps from the preceding layer, and there are a variety of non-linear activation functions.
The life of a ConvNet can be split into training and inference.
After the configuration of the net has been defined, it must be trained. There are a large number of parameters within the net which must be empirically determined, including the convolution kernels and the weights of the fully-connected layers.
Initially, these parameters are randomly set. Training them requires a large pre-classified dataset. This training dataset is fed through the net, and the errors between the correct and reported classifications are then fed back through the net (back-propagation). The errors with respect to each point in the net are used with a gradient-descent method to optimise the variables (i.e. the weights in the kernels) at that point. Convolutions are also performed during back-propagation; these are unlikely to be implemented optically, due to the higher precision requirements.
Once training is complete, the net can be deployed for inference, where data is simply presented to the net and propagated through to the final classification layer.
Applying Optical Processing to ConvNets/Implementing Optical Convolutions
An optical 4f correlator may be used to evaluate convolutions, and in particular to accelerate ConvNet inference applications. Advantageously, inference can be made to work at relatively lower precision (compared to training which requires high precision in order for the numerical gradient descent methods to work robustly—particularly during backpropagation).
The optical implementation of the convolution is not a symmetric operation. Referring back to the section ‘The optical Fourier transform and optical correlators’ above, one of the arguments a was input directly into the optical system, whereas for the second argument b its Fourier-domain representation B was computed and displayed as a ‘filter’, in the vernacular of the optical correlator. This is in contrast to a pure convolution, where a and b are interchangeable.
Accordingly, the process of creating the filter and input leads to asymmetric performance. Schematically, this Fourier domain convolution is summarised in
This architecture requires digital computation of the filter based on the kernel. This is not a significant overhead: firstly, the pre-computed filters can be stored once training is finished, removing this overhead during inference; secondly, the relatively low resolution of the kernel simplifies the filter computation. The technical aspects of filter calculation are discussed below.
Taking the hardware into consideration, the optical convolution is an O(1) process. A convolution at the resolution of the SLM and camera is performed in the “cycle time” of the system; that is, the period it takes for the system to update itself. Different aspects of the system (input SLM, filter SLM, and camera) can have different cycle times. The effective throughput of the system is determined by the speed of the fastest component; slower components constrain the actual operations that the system can perform.
These hardware considerations lead to at least four different technical problems with using the optical processor to evaluate convolutions for use in a neural net: (1) the convolutions are performed at a fixed, hardware-determined resolution; (2) the limited operating ranges of the SLMs restrict the kernels that can be represented; (3) the camera measures intensity only, so the sign of the output is lost; and (4) the different components of the system operate at different speeds.
These problems and their solutions are now addressed in turn.
1. Fixed Resolution Convolutions
As discussed, the convolution performed is at the resolution of the system hardware. While a ConvNet may have a relatively high-resolution input, the pooling stages mean that, as the net progresses, the resolution of the convolutions decreases. Even the first convolutional layers are unlikely to utilise the full resolution of the correlator. This issue may be addressed by optically parallelising the convolutions: by arranging a number of inputs on the input SLM, their convolutions with the same kernel may be found in parallel.
Referring back to
The inputs must be separated appropriately. When evaluating discrete convolutions, the ‘full’ convolution will extend a kernel width beyond the input footprint. Thus, the different inputs must be tiled with sufficient separation to allow this data to be extracted without crosstalk between the different results. To obtain only the ‘valid’ convolution, one can tile more tightly by neglecting the borders of the result. Preferably, when tiling more tightly, the system may be configured so that the full-convolution regions do not overlap with the same-padded regions of neighbouring images.
In CNN terminology, ‘valid’ convolution refers to the case where no zero-padding is applied (i.e. the kernel must stay entirely within the ‘valid’ region defined by the image dimensions), which results in an output image that is a kernel width smaller than the input image; ‘same-padding’ convolution zero-pads the input so that the output has the same size as the input. For small kernels this is a minor difference, but in most CNN cases ‘same-padding’ convolution is preferred over full or valid convolution.
However, input tiling is not the only form of tiling available. The kernels can also be tiled before converting them into optical filters; the corresponding convolutions will then be tiled. Enough space should be allowed between the tiled kernels such that the resulting convolutions do not run into each other. This form of tiling is more challenging to implement, as it is more demanding of the optical filter function and can lead to degraded performance as more kernels are tiled together.
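The required tile separations follow directly from the kernel-width argument above. The sketch below is a 1D illustration under stated assumptions (a centred kernel; the function name and interface are invented for the example):

```python
def tile_pitch(input_size: int, kernel_size: int, keep: str = "full") -> int:
    # Minimum centre-to-centre tile spacing (in pixels, 1D) such that the
    # retained region of each result is free of crosstalk from neighbours.
    spill = kernel_size - 1             # total spill of a 'full' convolution
    if keep == "full":                  # keep everything, including the spill
        return input_size + spill
    if keep == "same":                  # keep the input-sized central region;
        return input_size + spill // 2  # neighbour spill must clear it
    if keep == "valid":                 # borders are discarded anyway
        return input_size
    raise ValueError(keep)

print(tile_pitch(256, 9, "full"))       # -> 264 for 256-px tiles, 9-px kernel
```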
Evaluating a batch of lower-resolution Fourier transforms is not as effective a use of the system as evaluating a single high-resolution Fourier transform, and parallelisation in this fashion affects the competitiveness of the optical approach relative to a digital one. Advantageously, the optical approach offers a Fourier transform as an O(1) process, compared to the O(N log N) process offered by a digital Fourier transform. The size of the Fourier transform is determined by the resolution of the system; it is when exploiting this full resolution that the biggest performance gain is achieved.
When the full-resolution Fourier transform is not used directly, the full system performance is not realised, although tiling goes some way towards recouping this. While the full resolution of the system is still used, the high resolution of the corresponding Fourier transform is not capitalised upon; instead it is used to implement a ‘batch’ convolution process, equivalent to a number of smaller Fourier transforms.
A fraction of the latent performance is ‘lost’ when the full resolution of the inherent Fourier transform is not used directly, but is instead used to perform a batch of lower-resolution transforms. The following comparison between these two modes of use is made using simple computational scaling arguments.
The comparative computational scaling is considered to be dominated by the scaling of the Fourier transforms (the element-wise multiplication is ‘cheap’ by comparison). Thus, the respective performance can be compared as:
FFT: O(N log N)

OFT: O(1)

where there are N pixels in total. Instead of evaluating one size-N transform, a batch of P size-M transforms is now evaluated, where N = M·P. The same O(1) scaling applies for the OFT, but the FFT now has scaling:

Batch FFT: O(P·M log M).
A “slowdown factor” S is defined, which describes the fraction of the roofline performance that can be achieved, subject to these simple scaling arguments:

S = (P·M log M)/(N log N) = log M/log N

The bases of the logarithms in this formula do not matter. This formula gives the fraction of the roofline performance advantage of the optical system, relative to a digital computer, that one expects to realise when performing batched operations.
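As a worked illustration of this formula (sizes invented for the example), consider a system with N = 2²⁰ pixels (1024×1024) used to evaluate 64×64 tiles, so M = 2¹²:

```python
import math

def slowdown_factor(N: int, M: int) -> float:
    # S = log M / log N: the fraction of the full-resolution advantage
    # retained when a size-N optical transform is used as N/M size-M ones.
    return math.log(M) / math.log(N)

print(slowdown_factor(2**20, 2**12))   # -> 0.6, i.e. 60% of the roofline
```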
2. Accommodating SLM Operating Ranges
As discussed, both the input and filter SLMs have limited operating ranges. The implication of this is that an arbitrary convolution cannot be performed; implementing a given convolution kernel optically is not straightforward.
This can be illustrated by considering the filter SLM. Consider that we have a target kernel and wish to determine the corresponding filter. Given the known operating range of the SLM, one can find the optimal representation for this filter. However, it will not represent exactly the target kernel, but a different kernel. In CNN applications, we may call this actual kernel the ‘hidden kernel.’ This process is illustrated in
As shown in
This new kernel has the property of still selecting the same features as the original kernel, but it is not identical. However, as shown
The broad workflow addressed is one where the net is trained on a conventional digital computer, and then deployed on the optical platform for accelerated inference applications. The training must be cognisant of the performance of the optics. The fundamental principle being capitalised upon is that the minimisation landscape explored when training a ConvNet is not particularly dramatic: there should be an adequate solution in a region accessible to the optical convolutions. There are two fundamental ways in which the limited SLM range can be addressed, in the form of different ways in which the filter can be derived from the kernel:
In one implementation, training occurs entirely digitally, using either a conventional convolution operation or a high-fidelity simulation of the correlator. The trained neural net is then deployed on the optics for inference, potentially after an intermediate quantisation or calibration step. In a second implementation, the optical convolution is used directly during training. Bearing in mind optical aberrations, device imperfections etc, the latter is expected to work most effectively.
It is noted that the restrictions on kernel representation have implications during the training step. Back-propagation involves setting the convolution kernels to the errors, and needs to be performed with appropriate fidelity. This is not an issue if the model is trained in a computer and then deployed optically (an important first step in the development). A number of techniques can be used to enable the error-kernels to be implemented optically by improving filter performance, for example: improved SLM characterisation and optimisation; filter spatial- and temporal-dithering; and use of multiple SLMs in conjunction to extend the SLM modulation range.
When attempting to implement a given kernel, an appropriate method to use is the minimum Euclidean distance (MED) method common in generating optical filters for correlators. The principle is to find the closest accessible point on the complex plane to a given complex value required by the Fourier transform of the filter.
This method is most successful if the complex values available to the SLM (the ‘operating curve’) are mixed-mode, in that they modulate both amplitude and phase.
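A minimal sketch of the MED mapping follows. The operating curve here is a hypothetical mixed-mode (coupled amplitude/phase) curve; a real device's curve would be measured, and the normalisation step shown is one choice among several:

```python
import numpy as np

def med_filter(target, operating_curve):
    # Replace each ideal complex filter value by the nearest value (by
    # Euclidean distance on the complex plane) the SLM can actually display.
    d = np.abs(target[..., None] - operating_curve)
    return operating_curve[np.argmin(d, axis=-1)]

# Hypothetical operating curve: 256 coupled amplitude/phase states.
levels = np.linspace(0.2, 1.0, 256) * np.exp(1j * np.linspace(0, 2 * np.pi, 256))

b = np.zeros((64, 64))
b[:3, :3] = np.random.rand(3, 3)            # 3x3 kernel, zero-padded
B_ideal = np.fft.fft2(b)
B_slm = med_filter(B_ideal / np.abs(B_ideal).max(), levels)
```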
It is important that the filter does not contain spatial frequency components that make the kernel effectively larger than it is supposed to be. In most embodiments, subsequent layers for the same input cannot be computed simultaneously, so filter-bleed crosstalk occurs between different training examples in the image batch, rather than between different layers of the network.
A technique to avoid this is to ensure that the filter design process does not introduce any higher spatial frequency components. For example, if a higher resolution than the kernel's is required for display on the SLM, padding the kernel with zeros and then Fourier transforming, or Fourier transforming and then performing a sinc-interpolation, are both valid approaches (other interpolation schemes, while sub-optimal, might offer adequate performance).
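A minimal sketch of the zero-pad-then-transform approach follows; the function name, filter resolution, and kernel values are invented for the example:

```python
import numpy as np

def filter_from_kernel(kernel, filter_res):
    # Zero-pad the small kernel to the filter resolution, then transform.
    # Because the padded kernel is still confined to its original footprint,
    # the resulting filter adds no spatial frequency content that would
    # enlarge the effective kernel width.
    padded = np.zeros((filter_res, filter_res), dtype=complex)
    k = kernel.shape[0]
    padded[:k, :k] = kernel        # corner placement adds only a phase ramp
    return np.fft.fft2(padded)

B = filter_from_kernel(np.random.rand(5, 5), 512)
```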
The use of the camera to measure optical intensity means that one cannot directly determine the sign of the output function. While some net configurations may tolerate this, it is nonetheless potentially an important issue as, for example, commonly applied non-linearities (e.g. ReLU) depend on the sign of the measured function. However, the present inventors have developed a method to straightforwardly determine the sign of the resultant convolution by making use of a bias function.
Consider that the aim is to optically evaluate the convolution f*k, where f is positive and k is a bipolar kernel (bold font is used here to denote a 2D matrix). This convolution can be manipulated into the difference between two separate convolutions:

f*k = f*(p − b·1) = f*p − b·(f*1)

where 1 is a matrix of ones of the same size as the kernel. A bias b was applied to the kernel in order to produce the positive biased kernel p = k + b·1. The second kernel is then simply a matrix of 1s; application of this kernel is the same as boxcar averaging. Both of these two convolutions now have exclusively positive inputs, and so have positive outputs. The camera can measure them without amplitude information having been lost, as the sign is known a priori. The difference between these two functions then yields the desired convolution.
This method has minimal performance overhead. The filter corresponding to the kernel 1 is trivial, and the overhead of differencing two functions is trivial. These two operations could be conducted in parallel.
Moreover, if the input f is also bipolar, it can be split into positive and negative components, these processed separately, and the result reconstituted by combining the resulting convolutions.
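A digital stand-in for the two optical passes is sketched below, assuming SciPy is available (the helper name is invented; in the optical system the two convolutions would be measured as intensities rather than computed):

```python
import numpy as np
from scipy.signal import fftconvolve

def bipolar_convolution(f, k):
    # Evaluate f*k with a signed kernel using only positive-valued
    # convolutions, as an intensity-only detector requires.
    b = max(0.0, -k.min())          # bias lifting the kernel to non-negative
    p = k + b                       # positive biased kernel, p = k + b.1
    ones = np.ones_like(k)          # the 'boxcar' kernel of 1s
    return (fftconvolve(f, p, mode="same")
            - b * fftconvolve(f, ones, mode="same"))

f = np.random.rand(32, 32)
k = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)   # signed kernel
assert np.allclose(bipolar_convolution(f, k), fftconvolve(f, k, mode="same"))
```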
Due to hardware limitations, it will often be the case that the camera operates slower than the SLMs. Thus, while the system will be able to process data at a rate determined by the throughput of the SLMs, the resulting frames cannot be independently captured. A particular implementation may involve a fast input SLM, a slower filter SLM, and an intermediate-speed camera.
However, the system can still be used usefully. The camera exposure can span multiple input SLM frames. The camera then captures multiple convolutions, which are optically integrated; the frames are added together.
This is conducive to a typical convolution operation found within neural networks, which actively implement this addition. Consider, for example, that a convolutional layer in a Theano neural network operates on 4D-tensors. The operation inputs (A,B) and output (C) are defined as:
A = A(batch, input_channel, x, y) % feature maps
B = B(output_channel, input_channel, x, y) % kernels
C = C(batch, output_channel, x, y) % outputs
C(batch, output_channel, :, :) = Σᵢ conv2[A(batch, i, :, :), B(output_channel, i, :, :)]
It can be seen that while the primitive operation is a 2D convolution, this is immediately incorporated in a summation with other 2D convolutions. This can be directly emulated optically, by exposing the camera throughout multiple input frames (albeit with the caveat that we will be summing the intensity magnitude, rather than the amplitude which represents the true convolution result).
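This optical summation can be emulated digitally as below; the shapes are invented for the example, and (per the caveat above) the model sums intensities, as the camera would:

```python
import numpy as np

def optical_pass(a, B):
    # One input-SLM frame through the idealised 4f system; the camera
    # contributes the intensity of this frame to its running exposure.
    return np.abs(np.fft.ifft2(np.fft.fft2(a) * B)) ** 2

channels, res = 8, 128
A = np.random.rand(channels, res, res)     # feature maps for one batch item
B = np.fft.fft2(np.random.rand(channels, res, res), axes=(-2, -1))  # filters

# The camera integrates while the fast input SLM cycles through channels:
# the captured frame is the sum of the per-channel convolution intensities.
C = sum(optical_pass(A[i], B[i]) for i in range(channels))
```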
Other Aspects
In some embodiments, pooling schemes are implemented optically. The objective of pooling is to reduce the resolution of a given feature map using some down-sampling scheme. By engineering the optical scaling of the system, a given input resolution can be rendered onto a given (smaller) output resolution. The effect of this is naturally to implement an l2 pooling (sum of intensities).
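A digital model of this demagnification-based pooling follows, with an invented helper name and illustrative sizes:

```python
import numpy as np

def optical_l2_pool(intensity, factor):
    # Each block of factor x factor camera intensities lands on one coarser
    # pixel, giving an l2 pooling (summed intensities = summed |amplitude|^2).
    h, w = intensity.shape
    return (intensity.reshape(h // factor, factor, w // factor, factor)
                     .sum(axis=(1, 3)))

pooled = optical_l2_pool(np.random.rand(64, 64), 4)   # 64x64 -> 16x16
```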
The highest-bandwidth SLMs are binary and may be used to represent the filter. A binary filter B can be made that corresponds to a far more general kernel b. There are some constraints on the kernel due to a binary filter—such as symmetry—but there are a number of ways around these (for example, use of a fixed random phase mask in conjunction with the filter SLM). The restriction that the filter be binary does not correspond to a restriction that the kernel be binary. However, it does have significant implications for limiting the spectral content of the filter, and it is challenging to control the kernel width with a binary filter. For this reason, it is preferable not to use the binary SLM as the filter SLM.
Another—more useful—way the high-bandwidth binary SLM could be used is in place of the input SLM, representing multi-level inputs through temporal or spatial dithering; alternatively, a net configuration designed to use binary-only inputs could be implemented.
An important requirement of drive-side integration is to transfer data into and out of the optical domain rapidly, with minimal latency. ConvNets represent a high computational load of different primitive operations—coping with this computational bandwidth requires a high-performance drive system. Furthermore, in order to achieve high system utilisation, a number of jobs must be batched together; this will, however, have effects on latency.
The use of an FPGA-based drive system affords significant power and flexibility. Beyond simply driving the I/O, it can be used to leverage the power of the optical system. Functions such as organising the batch process, filter generation, and determining bipolar convolutions can be implemented in hardware.
Furthermore, the FPGA allows, in certain embodiments, for development of an integrated solution beyond the convolution layer. The non-linear activation function may be implemented in hardware. Furthermore, pooling schemes may also be implemented, either optically, in hardware in the FPGA, or in software.
In a further implementation, an integrated solution may be offered where the drive electronics also implement the other layers of the neural net. This means data can be rapidly digitised and then transferred back into the optical domain between the convolutional layers with minimal latency.
Number | Date | Country | Kind |
---|---|---|---|
1806950 | Apr 2018 | GB | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/GB2019/051170 | 4/26/2019 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2019/207317 | 10/31/2019 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5909500 | Moore | Jun 1999 | A |
6529614 | Chao et al. | Mar 2003 | B1 |
7603038 | Berman | Oct 2009 | B1 |
8886283 | Chen | Nov 2014 | B1 |
20130222582 | Mohan | Aug 2013 | A1 |
20170234985 | Kadambi | Aug 2017 | A1 |
20180074304 | Hernandez-Cubero | Mar 2018 | A1 |
20180341248 | Mehr | Nov 2018 | A1 |
20190019100 | Roques-Carmes | Jan 2019 | A1 |
Number | Date | Country |
---|---|---|
0499469 | Aug 1992 | EP |
0621524 | Oct 1994 | EP |
1546838 | May 2019 | EP |
WO-9931563 | Jun 1999 | WO |
WO-2004029746 | Apr 2004 | WO |
WO-2008110779 | Sep 2008 | WO |
WO-2014087126 | Jun 2014 | WO |
WO-2016110667 | Jul 2016 | WO |
WO-2017141997 | Aug 2017 | WO |
Entry |
---|
Lizana A, Márquez A, Lobato L, Rodange Y, Moreno I, Iemmi C, Campos J. The minimum Euclidean distance principle applied to improve the modulation diffraction efficiency in digitally controlled spatial light modulators. Optics Express. May 10, 2010;18(10):10581-93. (Year: 2010).
Liu JS, Collings N, Crossland WA, Chu DP, Waddie A, Taghizadeh MR. Simulation and experiment on generation of an arbitrary array of intense spots by a tiled hologram. Journal of Optics. Jul. 30, 2010;12(8):085402. (Year: 2010). |
Harasthy T, Ovseník L, Turán J. Current summary of the practical using of optical correlators. Acta Electrotechnica et Informatica. Oct. 1, 2012;12(4):30. (Year: 2012). |
Kodate, K., Watanabe, E., Delac, K., & Grgic, M. (2007). Compact Parallel Optical Correlator for Face Recognition, and Its Application (pp. 235-260). IntechOpen. (Year: 2007). |
International Search Report and Written Opinion dated Jul. 16, 2019 in International Application No. PCT/GB2019/051170. |
Chen Huaijin G et al.: “ASP Vision: Optically Computing the First Layer of Convolutional Neural Networks Using Angle Sensitive Pixels”, 2016 IEEE CVPR, Jun. 27, 2016.
Nguyen Thanh et al.: “Computational optical tomography using 3-D deep convolutional neural networks”, Optical Engineering, Soc. of Photo-Optical Instrumentation Engineers, Bellingham, vol. 57, No. 4, Apr. 1, 2018.
Number | Date | Country | |
---|---|---|---|
20210056358 A1 | Feb 2021 | US |