The present disclosure relates to a neuron and, in particular, to a second order neuron for machine learning.
In the field of machine learning, artificial neural networks (ANNs), particularly deep neural networks such as convolutional neural networks (CNNs), have achieved success in a variety of applications including, but not limited to, classification, unsupervised learning, prediction, image processing and analysis. Generally, ANNs are constructed with artificial neurons of the same type. The artificial neurons generally include two features: (1) an inner (i.e., dot) product between an input vector and a matching vector of trainable parameters and (2) a nonlinear excitation function. These artificial neurons can be interconnected to approximate a general function, but the topology of the resulting network is not unique.
In some embodiments, an apparatus includes a second order neuron. The second order neuron includes a first dot product circuitry and a second dot product circuitry. The first dot product circuitry is configured to determine a first dot product of an intermediate vector and an input vector. The intermediate vector corresponds to a product of the input vector and a first weight vector or the input vector and a weight matrix. The second dot product circuitry is configured to determine a second dot product of the input vector and a second weight vector. The input vector, the intermediate vector, the first weight vector and the second weight vector each contain a number, n, of elements.
In some embodiments of the apparatus, the second order neuron further includes a nonlinear circuitry configured to determine the output of the second order artificial neuron based, at least in part, on the first dot product and based, at least in part, on the second dot product.
In some embodiments of the apparatus, each element of the intermediate vector corresponds to a product of a respective weight of the first weight vector and a respective element of the input vector.
In some embodiments of the apparatus, the intermediate vector corresponds to the product of the weight matrix and the input vector, the weight matrix having dimension n×n.
In some embodiments of the apparatus, the second order neuron further includes a third dot product circuitry, a multiplier circuitry and a summer circuitry. The third dot product circuitry is configured to determine a third dot product of the input vector and a third weight vector, the third weight vector containing the number, n, of elements. The multiplier circuitry is configured to multiply the second dot product and the third dot product to yield an intermediate product. The summer circuitry is configured to add the intermediate product and the first dot product to yield an intermediate output. The output of the second order neuron is related to the intermediate output.
In some embodiments of the apparatus, the second order neuron further includes a summer circuitry configured to add the first dot product and the second dot product to yield an intermediate output. The output of the second order neuron is related to the intermediate output.
In some embodiments of the apparatus, n is equal to two and the second order neuron is configured to implement an exclusive OR (XOR) function or a NOR gate. In some embodiments of the apparatus, the second order neuron is configured to classify a plurality of concentric circles. In some embodiments of the apparatus, each weight is determined by training.
In some embodiments of the apparatus, the nonlinear circuitry is configured to implement a sigmoid function.
In some embodiments, a system includes a device and an artificial neural network (ANN). The device includes a processor circuitry, a memory circuitry and an ANN management circuitry. The ANN includes a second order neuron. The device is configured to provide an input vector to the ANN. The second order neuron includes a first dot product circuitry and a second dot product circuitry. The first dot product circuitry is configured to determine a first dot product of an intermediate vector and the input vector. The intermediate vector corresponds to a product of the input vector and a first weight vector or the input vector and a weight matrix. The second dot product circuitry is configured to determine a second dot product of the input vector and a second weight vector. The input vector, the intermediate vector, the first weight vector and the second weight vector each contain a number, n, of elements.
In some embodiments of the system, the second order neuron further includes a nonlinear circuitry configured to determine the output of the second order artificial neuron based, at least in part, on the first dot product and based, at least in part, on the second dot product.
In some embodiments of the system, each element of the intermediate vector corresponds to a product of a respective weight of the first weight vector and a respective element of the input vector.
In some embodiments of the system, the intermediate vector corresponds to the product of the weight matrix and the input vector, the weight matrix having dimension n×n.
In some embodiments of the system, the second order neuron further includes a third dot product circuitry, a multiplier circuitry and a summer circuitry. The third dot product circuitry is configured to determine a third dot product of the input vector and a third weight vector, the third weight vector containing the number, n, of elements. The multiplier circuitry is configured to multiply the second dot product and the third dot product to yield an intermediate product. The summer circuitry is configured to add the intermediate product and the first dot product to yield an intermediate output. The output of the second order neuron is related to the intermediate output.
In some embodiments of the system, the second order neuron further includes a summer circuitry configured to add the first dot product and the second dot product to yield an intermediate output. The output of the second order neuron is related to the intermediate output.
In some embodiments of the system, n is equal to two and the second order neuron is configured to implement an exclusive OR (XOR) function or a NOR gate. In some embodiments of the system, the second order neuron is configured to classify a plurality of concentric circles.
In some embodiments, the system further includes training circuitry configured to determine each weight.
In some embodiments of the system, the nonlinear circuitry is configured to implement a sigmoid function.
The drawings show embodiments of the disclosed subject matter for the purpose of illustrating features and advantages of the disclosed subject matter. However, it should be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, wherein:
A model of single neurons (also known as perceptrons) has been applied to solve linearly separable problems. For linearly inseparable tasks, a plurality of layers of a plurality of single neurons may be used to perform multi-scale nonlinear analysis. In other words, such single neurons may be configured to perform linear classification individually and their linear functionality may be enhanced by connecting a plurality of such single neurons into an artificial organism.
A single neuron may be configured to receive a plurality of inputs: x_0, x_1, x_2, . . . , x_n, where x_1, x_2, . . . , x_n are the n elements of a size n input vector and x_0 may correspond to a bias term. As used herein, “vector” corresponds to a one-dimensional array, e.g., 1×n; an n element vector corresponds to an n element array. The single neuron may be configured to generate an intermediate function f(x) as:

f(x) = Σ_{i=1}^{n} w_i x_i + b = Σ_{i=0}^{n} w_i x_i   (Eq. 1)
where w_i, i=1, 2, . . . , n are trainable parameters (i.e., weights), b=w_0 and x_0=1. In this example, b may correspond to a bias that is determined during training and is fixed during operation. It may be appreciated that the sum over i corresponds to the inner (i.e., dot) product of the input vector and a vector of trainable weights. The intermediate function may then be input to a nonlinear function g(f) to produce an output y=g(f(x)). In one nonlimiting example, the nonlinear function may be a sigmoid. In another nonlimiting example, the nonlinear function may correspond to a rectified linear unit (ReLU). A single neuron may separate (i.e., classify) two sets of inputs that are linearly separable. Classifying linearly inseparable groups of inputs using single neuron(s) may result in classification errors.
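A minimal Python sketch of this single neuron model (illustrative only; the AND-gate weights below are hand-picked to show a linearly separable case, not trained values):

```python
import math

def single_neuron(x, w, b):
    """Classic single neuron: inner (dot) product of the input vector x
    and the trainable weight vector w, plus a bias b, followed by a
    nonlinear excitation function (here, a sigmoid)."""
    f = sum(wi * xi for wi, xi in zip(w, x)) + b   # intermediate function f(x)
    return 1.0 / (1.0 + math.exp(-f))              # output y = g(f(x))

# Linearly separable example: logical AND of two binary inputs.
# The weights and bias are hand-picked for illustration, not trained.
w, b = [10.0, 10.0], -15.0
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, round(single_neuron(x, w, b)))  # 0, 0, 0, 1 respectively
```

A single neuron of this form cannot reproduce linearly inseparable functions such as XOR, which motivates the second order neuron described below.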
Generally, the present disclosure relates to a second order neuron for machine learning. The second order neuron is configured to implement a second order function of an input vector, i.e., is configured to include a multiplicative product of elements of the input vector. As used herein, “product” corresponds to a multiplicative product. A second order neuron, consistent with the present disclosure, is configured to implement a quadratic function of an input vector that includes n elements. Generally, the second order neuron may be configured to determine a first dot product of an intermediate vector and an input vector. The intermediate vector may correspond to a product of the input vector and a first weight vector or a product of the input vector and a matrix of weights (“weight matrix”). As used herein, a matrix corresponds to a two-dimensional array, e.g., n×n. As used herein, weights may correspond to structural parameters. Structural parameters may further include bias values, e.g., offsets.
The input vector, the intermediate vector and the first weight vector each have size, n, i.e., contain n elements. The second order neuron may be further configured to determine a second dot product of the input vector and a second weight vector containing n elements. The second order neuron may be further configured to determine an output of the second order neuron based, at least in part, on the first dot product and based, at least in part, on the second dot product. For example, an intermediate output may be input to a nonlinear function circuitry and an output of the nonlinear function circuitry may then correspond to the output of the second order neuron.
As used herein, “second order neuron” corresponds to “second order artificial neuron”. For ease of description, in the following, an example second order artificial neuron is referred to as “example second order neuron” and a general second order artificial neuron is referred to as “general second order neuron”.
The intermediate output of the general second order neuron may be described mathematically as:

f(x) = Σ_{i=1}^{n} Σ_{j=1}^{i} a_{ij} x_i x_j + Σ_{k=1}^{n} b_k x_k + c   (Eq. 2)
where a_{ij} and b_k are weights; x_i, x_j, x_k are elements of an input vector and c is a bias term. The first summing term may correspond to a dot product of an intermediate vector and the input vector, x_i, i=1, 2, . . . , n, with the intermediate vector corresponding to a product of a weight matrix (a_{ij}, i=1, 2, . . . , n; j=1, 2, . . . , n and i≥j) and the input vector. In one nonlimiting example, the weight matrix may be a lower triangular matrix. The second summing term corresponds to the second dot product of the input vector and a second weight vector (b_k, k=1, 2, . . . , n). The intermediate output may then correspond to a sum of the first dot product and the second dot product (including the bias term).
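A minimal Python sketch of the general second order neuron's intermediate output (illustrative only; the lower triangular weight matrix and the other values are arbitrary placeholders for trained parameters):

```python
def general_second_order(x, A, b, c):
    """Intermediate output of the general second order neuron:
    f(x) = x . (A x) + b . x + c, where A is a lower triangular weight
    matrix so each second order term x_i * x_j (i >= j) appears once."""
    n = len(x)
    # Intermediate vector: product of the weight matrix and the input vector.
    intermediate = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    first_dot = sum(x[i] * intermediate[i] for i in range(n))  # first dot product
    second_dot = sum(b[i] * x[i] for i in range(n))            # second dot product
    return first_dot + second_dot + c

# Illustrative (untrained) weights for n = 2.
A = [[1.0, 0.0],
     [2.0, 3.0]]        # lower triangular weight matrix
b = [0.5, -0.5]
print(general_second_order([1.0, 2.0], A, b, 0.1))  # 17 - 0.5 + 0.1 = 16.6
```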
The intermediate output of the example second order neuron may be described mathematically as:

f(x) = (Σ_{i=1}^{n} w_i^r x_i + b_1)(Σ_{i=1}^{n} w_i^g x_i + b_2) + Σ_{i=1}^{n} w_i^b x_i^2 + c   (Eq. 3)
where w_i^r, w_i^g, w_i^b (i=1, 2, . . . , n) are trainable weights, x_i (i=1, 2, . . . , n) are elements of the input vector and b_1, b_2 and c are bias terms (e.g., b_1=w_0^r x_0, b_2=w_0^g x_0, c=w_0^b x_0^2, x_0=1). The third summing term (that sums w_i^b x_i^2) corresponds to a dot product of an intermediate vector and the input vector, with the intermediate vector a product of the input vector (x_i, i=1, 2, . . . , n) and the first weight vector (w_i^b, i=1, 2, . . . , n). The product of the input vector and the first weight vector may be performed element by element so that element i of the intermediate vector corresponds to the product of element i of the input vector and element i of the first weight vector (i.e., w_i^b x_i). The first and second parenthetical terms correspond to the second dot product of the input vector and a second weight vector (w_i^r, i=1, 2, . . . , n) and a third dot product of the input vector and a third weight vector (w_i^g, i=1, 2, . . . , n), respectively. The second dot product and the third dot product may then be multiplied to yield an intermediate product. The intermediate output may then correspond to a sum of the intermediate product and the first dot product.
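A corresponding Python sketch of the example second order neuron's intermediate output (illustrative only; the weights are arbitrary placeholders and would, in practice, be determined by training):

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def example_second_order(x, wr, wg, wb, b1, b2, c):
    """Intermediate output of the example second order neuron:
    f(x) = (wr.x + b1)(wg.x + b2) + sum_i wb_i * x_i**2 + c."""
    intermediate = [wbi * xi for wbi, xi in zip(wb, x)]  # element-by-element product
    first_dot = dot(intermediate, x)                 # sum of wb_i * x_i^2 terms
    product = (dot(wr, x) + b1) * (dot(wg, x) + b2)  # second dot * third dot
    return product + first_dot + c

def neuron_output(x, wr, wg, wb, b1, b2, c):
    f = example_second_order(x, wr, wg, wb, b1, b2, c)
    return 1.0 / (1.0 + math.exp(-f))                # sigmoid excitation

# Illustrative placeholder weights for n = 2.
x = [1.0, 2.0]
print(example_second_order(x, [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], 0.0, 0.0, 0.0))
# (1)(2) + (1 + 4) = 7.0
```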
The intermediate output of the second order neuron may then be provided to a nonlinear function. In one nonlimiting example, the nonlinear function may correspond to a sigmoid function. The sigmoid function may be described as:

g(f) = 1/(1 + e^{-f})   (Eq. 4)
Thus, a second order neuron may be configured to receive an input vector and to determine an intermediate output that corresponds to a quadratic function of the input vector and a plurality of trainable weights. The intermediate output may then be provided to a nonlinear function circuitry configured to determine the second order neuron output.
In one nonlimiting example, the example neuron may be configured, with a two element input vector, to model linearly inseparable functions and/or classify linearly inseparable patterns. Linearly inseparable functions and/or patterns may include, but are not limited to, exclusive-OR (“XOR”) functions, XOR-like patterns, NOR functions, NOR-like patterns, concentric rings, fuzzy logic, etc.
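For instance, the quadratic form f = x_1 + x_2 − 2x_1x_2, which corresponds to Eq. (2) with a_21=−2, b=(1, 1) and c=0, reproduces XOR exactly on binary inputs, a sketch of which follows (the weights here are hand-derived for illustration, whereas in practice each weight would be determined by training):

```python
def quadratic_xor(x1, x2):
    """Hand-derived second order function f = x1 + x2 - 2*x1*x2: on binary
    inputs it equals XOR exactly, a linearly inseparable function that no
    single first order neuron can model."""
    return x1 + x2 - 2 * x1 * x2

for a in (0, 1):
    for b in (0, 1):
        assert quadratic_xor(a, b) == (a ^ b)
print("XOR reproduced for all four input pairs")
```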
Generally, the present disclosure relates to a second order artificial neuron. The second order artificial neuron includes a first dot product circuitry and a second dot product circuitry. The first dot product circuitry is configured to determine a first dot product of an intermediate vector and an input vector. In one nonlimiting example, the intermediate vector corresponds to a product of the input vector and a first weight vector. In another nonlimiting example, the intermediate vector corresponds to a product of the input vector and a weight matrix. The second dot product circuitry is configured to determine a second dot product of the input vector and a second weight vector. The input vector, the intermediate vector, the first weight vector and the second weight vector each contain a number, n, of elements. The second order artificial neuron may further include a nonlinear circuitry configured to determine the output of the second order artificial neuron based, at least in part, on the first dot product and based, at least in part, on the second dot product.
Second order neuron 100 is configured to receive an input vector that includes a number, n, of elements. Second order neuron 100 may be further configured to receive a first weight vector, a second weight vector, and/or a third weight vector. Each weight vector may include the number, n, of weights. In some embodiments, second order neuron 100 may be configured to receive a weight matrix having dimension n×n. In one nonlimiting example, the weight matrix may be a lower triangular matrix. The weights of the weight vectors and/or the weight matrix may be trainable, i.e., may be determined during training, as described herein.
Second order neuron 100 is configured to determine an intermediate output f(x). The intermediate output may then be provided to nonlinear circuitry 108 that is configured to implement a nonlinear function g(f). An output g(f(x)) of the nonlinear circuitry 108 may then correspond to an output, y, of the second order neuron.
First dot product circuitry 102-1 is configured to receive the input vector and an intermediate vector and to determine a first dot product based, at least in part, on the input vector and based, at least in part, on the intermediate vector. Second dot product circuitry 102-2 is configured to receive the input vector and a second weight vector and to determine a second dot product based, at least in part, on the input vector and based, at least in part, on the second weight vector. Summer circuitry 106 is configured to sum the first dot product with either the second dot product or the intermediate product to yield an intermediate output. Nonlinear circuitry 108 is configured to receive the intermediate output and to determine the second order neuron output based, at least in part, on the intermediate output. In one nonlimiting example, nonlinear circuitry 108 may be configured to implement a sigmoid function. In another nonlimiting example, nonlinear circuitry 108 may be configured to implement a rectified linear unit (ReLU).
In an embodiment, second order neuron 100 may correspond to a general second order artificial neuron, as described herein. The general second order neuron may include intermediate multiplier circuitry 110-1, first dot product circuitry 102-1, second dot product circuitry 102-2, summer circuitry 106 and nonlinear circuitry 108. In another embodiment, second order neuron 100 may correspond to an example second order artificial neuron, as described herein. The example second order neuron may include first multiplier circuitry 110-2, first dot product circuitry 102-1, second dot product circuitry 102-2, third dot product circuitry 102-3, multiplier circuitry 104, summer circuitry 106 and nonlinear circuitry 108.
For the general second order neuron, the intermediate vector corresponds to an output of intermediate multiplier circuitry 110-1. Intermediate multiplier circuitry 110-1 is configured to receive the input vector and a weight matrix. According to Equation (Eq.) (2), the weight matrix includes elements aij, where i=1, 2, . . . , n; j=1, 2, . . . , n; and i≥j. Intermediate multiplier circuitry 110-1 may then be configured to determine the corresponding intermediate vector. For example, intermediate multiplier circuitry 110-1 may be configured to multiply the weight matrix by the input vector to yield the intermediate vector. First dot product circuitry 102-1 may then be configured to determine the first dot product of the input vector and the intermediate vector. The first dot product may then correspond to the first term of Eq. (2). Continuing with the general second order neuron, the summer circuitry 106 is configured to receive the first dot product from the first dot product circuitry 102-1 and the second dot product from the second dot product circuitry 102-2. The second dot product corresponds to the dot product of the input vector and the second weight vector. The summer circuitry 106 is configured to add the first dot product and the second dot product to yield the intermediate output.
For the example second order neuron, the first multiplier circuitry 110-2 is configured to receive the input vector and a first weight vector. The first multiplier circuitry 110-2 may then be configured to perform an element by element multiplication to yield the intermediate vector. In one nonlimiting example, each element of the first weight vector may be multiplied by a corresponding element of the input vector. In other words, for vector index, j, in the range of 1 to n, a jth element of the first weight vector may be multiplied by a jth element of the input vector. Thus, each element of the intermediate vector may correspond to an element multiplication of the first weight vector and the input vector.
Continuing with the example second order neuron, first dot product circuitry 102-1 is configured to receive the input vector and the intermediate vector from the first multiplier circuitry 110-2 and to determine the first dot product. The first dot product corresponds to the dot product of the input vector and the intermediate vector. Second dot product circuitry 102-2 is configured to receive the input vector and the second weight vector and to determine a corresponding second dot product. The second dot product corresponds to the dot product of the input vector and the second weight vector. Third dot product circuitry 102-3 is configured to receive the input vector and a third weight vector and to determine a third dot product. The third dot product corresponds to the dot product of the input vector and the third weight vector. Multiplier circuitry 104 is configured to receive the second dot product and the third dot product and to multiply the second dot product and the third dot product to yield an intermediate product. Summer circuitry 106 is configured to receive the first dot product and the intermediate product and to add to the first dot product and the intermediate product to yield the intermediate output.
Thus, a second order neuron may be implemented using multiplier circuitry, summer circuitry and dot product circuitry. It may be appreciated that a dot product function may be implemented by multiplier circuitry and summer circuitry.
Each inner product circuitry 202-r, 202-g, 202-b includes a respective summing circuitry 206-r, 206-g, 206-b and a plurality of multiplier circuitries indicated by lines with arrows. Each inner product circuitry 202-r, 202-g, 202-b is configured to receive the input vector and to determine a dot product of the input vector and a weight vector or intermediate vector. Each weight vector includes n weight elements and the intermediate vector includes n intermediate elements. Each multiplier circuitry is represented by a line labeled with its corresponding weight element value or intermediate element value.
First inner product circuitry 202-b includes n multiplier circuitries with respective intermediate element values w_0^b x_0, w_1^b x_1, . . . , w_n^b x_n. Second inner product circuitry 202-r includes n multiplier circuitries with respective weight element values w_0^r, w_1^r, . . . , w_n^r. Third inner product circuitry 202-g includes n multiplier circuitries with respective weight element values w_0^g, w_1^g, . . . , w_n^g.
Thus, the first summing circuitry 206-b is configured to receive intermediate input values w_0^b x_0^2, w_1^b x_1^2, . . . , w_n^b x_n^2; the second summing circuitry 206-r is configured to receive weighted input values w_0^r x_0, w_1^r x_1, . . . , w_n^r x_n; and the third summing circuitry 206-g is configured to receive weighted input values w_0^g x_0, w_1^g x_1, . . . , w_n^g x_n. Each summing circuitry is then configured to determine a respective sum of the weighted or intermediate input values, i.e., a respective dot product of the input vector and the respective weight or intermediate vector.
Multiplier circuitry 204 is configured to receive a second dot product 203-r from the second dot product circuitry 202-r and a third dot product 203-g from the third dot product circuitry 202-g. Multiplier circuitry 204 is configured to multiply the second dot product and the third dot product to yield an intermediate product 205. Summer circuitry 206 is configured to receive the intermediate product from multiplier circuitry 204 and a first dot product 203-b from first dot product circuitry 202-b. Summer circuitry 206 is configured to add the intermediate product and the first dot product to yield an intermediate output, f(x). Nonlinear excitation circuitry 208 is configured to receive the intermediate output and to determine an output, y, of the example second order artificial neuron 200.
Thus, example second order neuron 200 is one example second order neuron configured to implement Eq. (3).
Device 302 includes processor circuitry 312, memory circuitry 314 and input/output (I/O) circuitry 316. Device 302 may further include training circuitry 320, ANN management circuitry 322, training data pairs 324, an objective function 326 and/or training parameters 328. Processor circuitry 312 may be configured to perform operations of device 302 and/or ANN 304. Memory circuitry 314 may be configured to store one or more of training data pairs 324, objective function 326 and objective function associated parameters (if any) and/or training parameters 328.
Training circuitry 320 may be configured to manage training operations of ANN 304, as will be described in more detail below. ANN management circuitry 322 may be configured to manage operation of device 302 and/or ANN 304.
Device 302 may be configured to provide an input vector to ANN 304 and to receive a corresponding output from ANN 304. Device 302 may be further configured to provide structural parameters including weights (e.g., weight vectors and/or a weight matrix) and/or bias values to ANN 304. During training, training circuitry 320 may be configured to provide a training input vector to ANN 304 and to capture a corresponding actual output. Training data pairs 324 may thus include a plurality of pairs of training input vectors and corresponding target outputs. Training circuitry 320 may be configured to compare the actual output with a corresponding target output by evaluating objective function 326. Training circuitry 320 may be further configured to adjust one or more weights to reduce and/or minimize an error associated with objective function 326. Training parameters 328 may include, but are not limited to, an error threshold and/or an epoch threshold. In one nonlimiting example, a gradient descent method may be utilized during training.
In one nonlimiting example, an example second order neuron configured to implement Eq. (3), e.g., example second order neuron 200, may be trained as follows.
A training data set, i.e., training data pairs 324, may include a number, m, of samples, i.e., training data pairs (X_k, y_k), k=1, 2, . . . , m, where X_k=(x_1^k, x_2^k, . . . , x_n^k) corresponds to the kth input vector and y_k is the corresponding kth target output of the training data set. The output of the example second order neuron may then be written as:
An error function may then be defined as:
It may be appreciated that the error function (Eq. (6)) depends, at least in part, on the structural parameters (i.e., weights): the weight vectors w^r, w^g and w^b and the bias terms b_1, b_2 and c, where w^r=(w_1^r, w_2^r, . . . , w_n^r), w^g=(w_1^g, w_2^g, . . . , w_n^g) and w^b=(w_1^b, w_2^b, . . . , w_n^b). Training, i.e., optimization, is configured to determine optimal parameters (e.g., weights) that minimize an objective function. In one nonlimiting example, gradient descent may be used, with an appropriate initial guess, to determine and/or identify the optimal parameters. During training, w^r, w^g, w^b, b_1, b_2 and c may be iteratively updated in the form of:

α ← α − η ∂E/∂α   (Eq. 7)
where α corresponds to a generic variable of the objective function and η, the step size, is set between zero and one for the optimization. The gradient of the objective function for any sample may then be written as:
Training may be iterative and may end when an error is less than or equal to an error threshold or a number of training epochs is at or above an epoch threshold.
In another nonlimiting example, for the general second order neuron (Eq. (2)), a training data set may include input vectors {x_p} and target outputs {y_p}. The parameters {a_{ij}}, {b_k} and c may be updated using a gradient descent technique. The gradient of the objective function for any sample may then be written as:
Thus, a second order neuron consistent with the present disclosure may be trained using a gradient descent technique.
Operations of flowchart 400 may begin with setting protocol parameters and initializing a training epoch to 1 at operation 402. Structural parameters may be initialized randomly at operation 404. Structural parameters may include, but are not limited to, weights (e.g., weight elements in a weight matrix and/or a weight vector). Structural parameters may further include one or more bias values. Inputs may be presented and outputs may be determined at operation 406. For example, an input vector may be provided to a second order neuron and an output may be determined based, at least in part, on the input vector.
An error may be evaluated at operation 408. For example, an objective function may be evaluated to quantify an error between an actual output and a target output of the ANN. Whether the error is less than or equal to an error threshold may be determined at operation 410. If the error is less than or equal to the error threshold, then training may be stopped at operation 412. If the error is greater than the error threshold, then whether the epoch is greater than or equal to an epoch threshold may be determined at operation 414. If the epoch is greater than or equal to the epoch threshold, then training may stop at operation 412. If the epoch is less than the epoch threshold, then structural parameters may be updated at operation 416. The epoch may then be incremented at operation 418 and program flow may return to presenting inputs at operation 406.
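The flowchart may be sketched as a Python training loop (illustrative only: a two-input general second order neuron trained on the XOR task, with a squared-error objective and finite-difference gradients standing in for the analytic gradients described above):

```python
import math

def sigmoid(f):
    return 1.0 / (1.0 + math.exp(-f))

def neuron_output(x, p):
    # Two-input general second order neuron:
    # f = a*x1*x2 + b1*x1 + b2*x2 + c, followed by a sigmoid excitation.
    f = p["a"] * x[0] * x[1] + p["b1"] * x[0] + p["b2"] * x[1] + p["c"]
    return sigmoid(f)

def total_error(data, p):
    # Squared-error objective summed over the training pairs (one common choice).
    return sum((neuron_output(x, p) - t) ** 2 for x, t in data)

def train(data, p, eta=0.5, epochs=2000, err_threshold=1e-3, h=1e-6):
    # Flowchart steps: evaluate the error, stop on the error threshold or
    # the epoch threshold, otherwise update each parameter by the rule
    # alpha <- alpha - eta * dE/dalpha and increment the epoch.
    for epoch in range(epochs):
        if total_error(data, p) <= err_threshold:
            break
        for key in p:  # finite-difference gradient for each parameter
            p_plus = dict(p)
            p_plus[key] += h
            grad = (total_error(data, p_plus) - total_error(data, p)) / h
            p[key] -= eta * grad
    return p

# XOR training pairs: linearly inseparable, solvable by a second order neuron.
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
params = {"a": 0.1, "b1": 0.2, "b2": 0.3, "c": 0.0}
before = total_error(xor_data, params)
train(xor_data, params)
after = total_error(xor_data, params)
print(f"error: {before:.3f} -> {after:.3f}")  # error decreases with training
```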
Thus, a neural network that includes a second order artificial neuron may be trained.
Generally, the present disclosure relates to a second order neuron for machine learning. The second order neuron is configured to implement a second order function of an input vector. Generally, the second order neuron may be configured to determine a first dot product of an intermediate vector and an input vector. The intermediate vector may correspond to a product of the input vector and a first weight vector or a product of the input vector and a weight matrix. The second order neuron may be further configured to determine a second dot product of the input vector and a second weight vector containing n elements. The second order neuron may be further configured to determine an output of the second order neuron based, at least in part, on the first dot product and based, at least in part, on the second dot product. For example, an intermediate output may be input to a nonlinear function circuitry and an output of the nonlinear function circuitry may then correspond to the output of the second order neuron.
As used in any embodiment herein, the term “logic” may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage medium. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices.
“Circuitry”, as used in any embodiment herein, may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors including one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The logic may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex programmable logic device (CPLD), a system on-chip (SoC), etc.
Processor circuitry 312 may include, but is not limited to, a single core processing unit, a multicore processor, a graphics processing unit, a microcontroller, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic device (PLD), etc.
Memory circuitry 314 may include one or more of the following types of memory: semiconductor firmware memory, programmable memory, non-volatile memory, read only memory, electrically programmable memory, random access memory, flash memory, magnetic disk memory, and/or optical disk memory. Either additionally or alternatively, memory circuitry 314 may include other and/or later-developed types of computer-readable memory.
Embodiments of the operations described herein may be implemented in a computer-readable storage device having stored thereon instructions that when executed by one or more processors perform the methods. The processor may include, for example, a processing unit and/or programmable circuitry. The storage device may include a machine readable storage device including any type of tangible, non-transitory storage device, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of storage devices suitable for storing electronic instructions.
This application claims the benefit of U.S. Provisional Application No. 62/662,235, filed Apr. 25, 2018, and U.S. Provisional Application No. 62/837,946, filed Apr. 24, 2019, which are both incorporated by reference as if disclosed herein in their entirety.