One paradigm in the design of neuromorphic systems is to achieve robust and efficient recognition performance by mimicking the architecture and the dynamics of biological neuronal networks. A typical neuromorphic design-flow follows a bottom-up approach, as shown in
Investigating the self-optimizing behaviors of populations of neurons in the context of well-established machine learning algorithms such as SVMs and deep neural networks (DNNs) represents one possible way to advance the understanding of neuromorphic systems, since machine learning algorithms already achieve state-of-the-art recognition performance under real-world conditions. However, these machine learning algorithms typically follow a top-down synthesis approach as illustrated in
In one aspect, a growth transform neural network system is provided that includes a computing device. The computing device includes at least one processor and a memory storing a plurality of modules. Each module includes instructions executable on the at least one processor. The plurality of modules includes a growth transform neural network module, a growth transform module, and a network convergence module. The growth transform neural network module defines a plurality of mirrored neuron pairs that include a plurality of first components and a plurality of second components. Each mirrored neuron pair includes one first component and one second component connected by a normalization link. The plurality of first components is interconnected according to an interconnection matrix, and the plurality of second components is also interconnected according to the interconnection matrix. The growth transform module updates each first component of each mirrored neuron pair of a plurality of mirrored neuron pairs according to a growth transform neuron model. The network convergence module converges the plurality of mirrored neuron pairs to a steady state condition by solving a system objective function subject to at least one normalization constraint.
In another aspect, a ΣΔ SVM is provided that includes a growth transform neural network system. The growth transform neural network system includes a computing device. The computing device includes at least one processor and a memory storing a plurality of modules. Each module includes instructions executable on the at least one processor. The plurality of modules includes a growth transform neural network module, a growth transform module, and a network convergence module. The growth transform neural network module defines a plurality of mirrored neuron pairs that include a plurality of first components and a plurality of second components that are interconnected according to an interconnection matrix. The growth transform module updates each first component of each mirrored neuron pair of a plurality of mirrored neuron pairs according to a growth transform neuron model. The network convergence module converges the plurality of mirrored neuron pairs to a steady state condition by solving a system objective function subject to at least one normalization constraint. The first component and the second component of each mirrored neuron pair in the steady state condition may each produce a neuron response that includes a steady state value or a limit cycle with ΣΔ modulation according to a user-defined potential function Φ(pik) given by Φ(pik)=|pik−½|, wherein pik is the response of the ith neuron of the plurality of mirrored neuron pairs, and k is 1 or 2.
In an additional aspect, a spiking SVM is provided that includes a growth transform neural network system. The growth transform neural network system includes a computing device. The computing device includes at least one processor and a memory storing a plurality of modules. Each module includes instructions executable on the at least one processor. The plurality of modules includes a growth transform neural network module, a growth transform module, and a network convergence module. The growth transform neural network module defines a plurality of mirrored neuron pairs that include a plurality of first components and a plurality of second components that are interconnected according to an interconnection matrix. The growth transform module updates each first component of each mirrored neuron pair of a plurality of mirrored neuron pairs according to a growth transform neuron model. The network convergence module converges the plurality of mirrored neuron pairs to a steady state condition by solving a system objective function subject to at least one normalization constraint. The first component and the second component of each mirrored neuron pair in the steady state condition may each produce a neuron response that includes a steady state value or a limit cycle with spiking according to a user-defined potential function Φ(pik) given by:
Φ(pik)=W∈+|pik−(½−∈)| for 0≤pik<½−∈,
W|pik−½| for ½−∈≤pik≤½+∈, and
W∈+|pik−(½+∈)| for ½+∈<pik≤1,
in which pik is the response of the ith neuron of the plurality of mirrored neuron pairs, k is 1 or 2, W>1, and ∈>0.
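By way of non-limiting illustration, a minimal Python sketch of the spiking potential function defined above is provided below; the specific values of W and ∈ are illustrative assumptions only.

```python
import numpy as np

def spiking_potential(p, W=10.0, eps=0.1):
    """Piecewise potential Phi(p_ik) defined above; W > 1 sets the slope of the transition
    region around p = 1/2 and eps > 0 sets its half-width (illustrative default values)."""
    p = np.asarray(p, dtype=float)
    lo, hi = 0.5 - eps, 0.5 + eps
    return np.where(p < lo, W * eps + np.abs(p - lo),        # 0 <= p < 1/2 - eps
           np.where(p > hi, W * eps + np.abs(p - hi),        # 1/2 + eps < p <= 1
                    W * np.abs(p - 0.5)))                    # transition region around 1/2

phi = spiking_potential(np.linspace(0.0, 1.0, 11))
```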
In another aspect, a spiking and bursting SVM is provided that includes a growth transform neural network system. The growth transform neural network system includes a computing device. The computing device includes at least one processor and a memory storing a plurality of modules. Each module includes instructions executable on the at least one processor. The plurality of modules includes a growth transform neural network module, a growth transform module, and a network convergence module. The growth transform neural network module defines a plurality of mirrored neuron pairs that include a plurality of first components and a plurality of second components that are interconnected according to an interconnection matrix. The growth transform module updates each first component of each mirrored neuron pair of a plurality of mirrored neuron pairs according to a growth transform neuron model. The network convergence module converges the plurality of mirrored neuron pairs to a steady state condition by solving a system objective function subject to at least one normalization constraint. The first component and the second component of each mirrored neuron pair in the steady state condition may each produce a neuron response that includes a steady state value or a limit cycle with spiking and bursting according to a user-defined potential function Φ(pik) given by:
Φ(pik)=W1∈1+|pik−(½−∈1)| for 0≤pik<½−∈1,
W1|pik−½| for ½−∈1≤pik≤½,
W2|pik−½| for ½<pik<½+∈2, and
W2∈2+|pik−(½+∈2)| for ½+∈2<pik≤1,
in which pik is the response of the ith neuron of the plurality of mirrored neuron pairs, k is 1 or 2, W1>1, W2>1, ∈1>0, and ∈2>0.
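Similarly, a minimal Python sketch of the asymmetric bursting potential function defined above is provided below; the specific values of W1, W2, ∈1 and ∈2 are illustrative assumptions only.

```python
import numpy as np

def bursting_potential(p, W1=10.0, W2=4.0, eps1=0.05, eps2=0.15):
    """Asymmetric piecewise potential Phi(p_ik) defined above; the unequal slopes (W1, W2)
    and widths (eps1, eps2) on the two sides of p = 1/2 produce the asymmetry associated
    with bursting (illustrative default values)."""
    p = np.asarray(p, dtype=float)
    lo, hi = 0.5 - eps1, 0.5 + eps2
    return np.where(p < lo, W1 * eps1 + np.abs(p - lo),      # 0 <= p < 1/2 - eps1
           np.where(p <= 0.5, W1 * np.abs(p - 0.5),          # 1/2 - eps1 <= p <= 1/2
           np.where(p < hi, W2 * np.abs(p - 0.5),            # 1/2 < p < 1/2 + eps2
                    W2 * eps2 + np.abs(p - hi))))            # 1/2 + eps2 <= p <= 1
```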
In various aspects, a growth transform neuron model is mutually coupled with a network objective function of a machine learning model, as illustrated in
In various aspects, growth transform neural network systems are disclosed that incorporate the dynamical properties of a network of neurons in which each neuron implements an asynchronous mapping based on polynomial growth transforms. The disclosed growth transform neural network systems make use of a geometric approach for visualizing the dynamics of the network in which each of the neurons traverses a trajectory in a dual optimization space, and in which the network itself traverses a trajectory in an equivalent primal optimization space. In various other aspects, as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique, but interpretable neural dynamics, such as noise-shaping, spiking and bursting. The disclosed growth transform neural network systems in some aspects are compatible with the design of support vector machines (SVMs) including, but not limited to, ΣΔ SVMs that exhibit noise-shaping properties similar to that of ΣΔ modulators and spiking/bursting SVMs that encode information using spikes and bursts. As described in detail below, individual neurons within the disclosed growth transform neural network systems learn to generate switching, spiking and burst dynamics to encode each neuron's respective margins of separation from a classification hyperplane, the parameters of which are encoded by the network population dynamics.
The disclosed growth transform neuron model and the underlying geometric visualization connect well-established machine learning algorithms, such as SVMs, to neuromorphic principles, such as spiking, bursting, population encoding and noise-shaping. However, unlike conventional neuromorphic approaches, the growth transform neuron model is tightly coupled to the system objective function, which results in network dynamics that are reliably stable and interpretable, and in a process of spike generation and encoding that arises directly from the optimization process.
In one aspect, the disclosed neural network system is incorporated into a spiking support vector machine (SVM) that includes a network of growth transform neurons, as described herein below. Each neuron in the SVM network learns to encode output parameters such as spike rate and time-to-spike responses according to an equivalent margin of classification, and those neurons corresponding to regions near the classification boundary learn to exhibit noise-shaping dynamics similar to behaviors observed in biological networks. As a result, the disclosed spiking support vector machine (SVM) enables large-scale population encoding, for example in a case where the spiking SVM learns to solve two benchmark classification tasks, achieving classification performance similar to that of state-of-the-art SVM implementations.
Neural networks, in their generic form, include a set of basic computing units called neurons that are interconnected with each other through a set of synaptic junctions. Mathematically, the response of each of the neurons is modeled according to Eqn. (1):
αi=Θ(ΣjQijαj+bi) (1)
where αi∈ℝ corresponds to the response of the neuron i∈{1, . . . , N}, N being the total number of neurons in the network, Qij∈ℝ corresponds to an element of the synaptic weight matrix that connects neuron j with neuron i, bi∈ℝ corresponds to an activation threshold for the neuron i, and Θ(.) corresponds to a generic activation function that produces the response αi.
By choosing different forms of the mapping function Θ(.) and by choosing different forms of the weight matrix Q={Qij}, the simple model in Eqn. (1) is adapted to implement different variants of neural networks. Some of the examples include multi-layer perceptron networks, recurrent neural networks, cellular neural networks and support vector machines (SVMs). In the most popular implementations, the choice of the activation function Θ(.) in Eqn. (1) is a simple compressive mapping like the sigmoidal or the logistic functions. However, a network comprising feedforward and feedback synaptic connections and neurons with a more complex mapping Θ(.) can exhibit complex dynamics, including limit-cycles and chaotic oscillations. These rich sets of dynamics, except for specific classes of cellular neural networks, have been found to be difficult to interpret and control in a manner that allows the overall network to achieve a desired system objective, for instance solving a complex classification task.
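By way of non-limiting illustration, the generic mapping of Eqn. (1) can be sketched in Python as a simple fixed-point iteration; the tanh activation, the toy weight matrix and the fixed number of iterations are illustrative assumptions and are not part of the disclosed growth transform model.

```python
import numpy as np

def iterate_network(Q, b, theta=np.tanh, steps=200):
    """Iterate the generic mapping of Eqn. (1), alpha_i = Theta(sum_j Q_ij alpha_j + b_i),
    to a fixed point.  Q is the N x N synaptic weight matrix, b the vector of activation
    thresholds, and theta a generic compressive activation (tanh here, for illustration)."""
    alpha = np.zeros(len(b))
    for _ in range(steps):
        alpha = theta(Q @ alpha + b)
    return alpha

# Toy usage: two mutually coupled neurons with constant activation thresholds.
Q = np.array([[0.0, 0.3],
              [0.3, 0.0]])
b = np.array([0.1, -0.2])
print(iterate_network(Q, b))
```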
In one aspect, the growth transform neural network design and analysis includes estimating an equivalent dual optimization function based on the mapping given by Eqn. (1). Each neuron implements a continuous mapping based on a polynomial growth transform update that also dynamically optimizes the cost function. Because the growth transform mapping is designed to evolve over a constrained manifold, the neuronal responses and the network are stable. The switching, spiking and bursting dynamics of the neurons emerge by choosing different types of potential functions and hyper-parameters in the dual cost function. This approach is suitable for use in the design of SVMs that exhibit ΣΔ modulation type limit-cycles, spiking behavior and bursting responses. To understand the relationship between the population dynamics of the neural network and the system objective function, the primal formulation corresponding to Eqn. (1) is also estimated. Different statistical properties of the growth transform neurons encode the classification properties of the network which include the classification margin and noise-shaping.
In various aspects, a geometric framework is derived and presented to visualize the process of primal-dual optimization satisfying Eqn. (1) as their respective first-order conditions. This approach expands methods of visualizing the solution to different types of SVMs and further extends these methods to the visualization of the trajectories of the variables during optimization. In section I, the model of the neuron derived by applying growth transforms to the dual optimization function is presented. In various other aspects, the graphical approach is used to design an SVM classifier that exhibits ΣΔ limit-cycles, spiking and bursting dynamics. The statistical properties of these dynamics are shown to encode the margin of separation for an underlying classification problem and the dynamics are shown to be adjustable to obtain different encoding properties.
A continuous-time variant of the growth transform neuron model is described herein and a network of growth transform neurons is used to implement a spiking SVM. The growth transform neuron in the SVM network may learn to encode (rate and time-to-spike response) its output according to an equivalent margin of classification and the neurons corresponding to regions near the classification boundary may learn to exhibit noise-shaping dynamics similar to what has been reported in biological networks. In one aspect, the model of the growth transform neuron is summarized along with its dynamical properties. In another aspect, an SVM formulation is mapped onto a growth transform neural network and different spiking dynamics are demonstrated based on synthetic and benchmark datasets.
A. Geometric View of Primal-Dual Optimization
Without any loss of generality, it is assumed that the response of the neuron i, αi is bounded according to:
|αi|≤1. (2)
These constraints are typical of neural network optimizations where a sigmoidal type of function for Θ(.) is used to bound αi. In various aspects, the form of Θ(.) is kept general to enable implementation of both continuous and switching or spiking responses.
The response αi of the ith neuron and the corresponding bias bi are each decomposed into two differential components:
αi=pi1−pi2, and (3)
bi=bi1−bi2 (4)
where pi1 and pi2 satisfy the following constraints such that Eqn. (2) holds:
pi1+pi2=1 (5)
pi1,pi2≥0. (6)
Eqn. (1) is re-written using Eqns. (3) and (4) as:
pi1−pi2=Θ(ΣjQij(pj1−pj2)+bi1−bi2)
or,
Θ−1(pi1−pi2)=(ΣjQijpj1+bi1)−(ΣjQijpj2+bi2) (7)
Θ(.) is chosen such that it is decomposed as follows:
Θ−1(pi1−pi2)=Ψ−1(pi1)−Ψ−1(pi2), (8)
where Ψ−1(.) is a mirror function designed to include discontinuities. Note that if Ψ−1(.) is anti-symmetric about 0.5 and Θ−1 (.) is anti-symmetric about 0.0:
Ψ−1(u)=½Θ−1(2u−1), for 0≤u≤1. (9)
From Eqns. (7) and (8), it follows that
Ψ−1(pik)=ΣjQijpjk+bik, k=1,2 (10)
Thus, the neural network is re-modeled using variables pi1 and pi2 according to:
pik=Ψ(ΣjQijpjk+bik), k=1,2 (11)
under the constraints of Eqns. (5) and (6).
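By way of non-limiting illustration, the decomposition of Eqns. (3)-(6) and the mirror relation of Eqns. (8)-(9) can be checked numerically as in the short Python sketch below; the choice Θ(.)=tanh(.), so that Θ−1(.)=arctanh(.) is anti-symmetric about 0, and the particular decomposition pi1=(1+αi)/2, pi2=(1−αi)/2 are illustrative assumptions.

```python
import numpy as np

theta_inv = np.arctanh                                  # Theta = tanh, anti-symmetric about 0
psi_inv = lambda u: 0.5 * theta_inv(2.0 * u - 1.0)      # mirror function of Eqn. (9)

alpha = 0.6                                             # any response satisfying |alpha| <= 1, Eqn. (2)
p1, p2 = (1.0 + alpha) / 2.0, (1.0 - alpha) / 2.0       # one decomposition satisfying Eqns. (3), (5), (6)

assert np.isclose(p1 - p2, alpha) and np.isclose(p1 + p2, 1.0) and p1 >= 0.0 and p2 >= 0.0
# Eqn. (8): Theta^-1(p_i1 - p_i2) = Psi^-1(p_i1) - Psi^-1(p_i2)
assert np.isclose(theta_inv(p1 - p2), psi_inv(p1) - psi_inv(p2))
```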
This formulation is now consistent with the multi-class probability regression framework which was used for deriving different variants of SVMs. Introducing variables yjk, Eqn. (10) is expressed as:
Ψ−1(pik)=ΣjQij(pjk+yjk) (12)
where bik satisfies the relation:
bik=−ΣjQijyjk, k=1,2 (13)
under the assumption that Q−1 exists. Eqn. (12) along with the constraints given by Eqns. (5) and (6) are viewed as a first-order condition for the following minimization problem:
where
Φ(pik)=−∫Ψ−1(pik)dpik. (16)
Φ(.) is referred to as the potential function. If it is assumed that the matrix Q is positive-definite, the first part of the optimization function in Eqn. (15) is equivalent to minimizing a quadratic distance between the responses pik and the variables yik. The second part of the optimization function is equivalent to minimizing a cumulative potential function Φ(.) corresponding to each neuron.
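By way of non-limiting illustration, the relation of Eqn. (16) can be checked numerically for the ΣΔ potential Φ(pik)=|pik−½| that is used later in this disclosure; for that potential Φ′(pik)=sgn(pik−0.5), so Eqn. (16) implies Ψ−1(pik)=−sgn(pik−½). The crude numerical integration below is only an illustrative check.

```python
import numpy as np

phi = lambda p: np.abs(p - 0.5)            # Sigma-Delta potential, Phi(p) = |p - 1/2|
psi_inv = lambda p: -np.sign(p - 0.5)      # implied by Eqn. (16): Psi^-1 = -Phi'

p = np.linspace(0.0, 1.0, 1001)
dp = p[1] - p[0]
phi_numeric = -np.cumsum(psi_inv(p)) * dp  # Phi(p) = -integral of Psi^-1(p) dp, Eqn. (16)
phi_numeric += phi(p[0]) - phi_numeric[0]  # fix the constant of integration
assert np.allclose(phi_numeric, phi(p), atol=1e-2)
```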
To complete the geometric framework and show its connection with SVMs, a primal cost function is derived that is used to visualize the response of the network when the trajectory of the neuron evolves according to Ψ(.). A network variable zik is introduced, given by
zik=ΣjQijpjk+bik (17)
such that Eqn. (11) is rewritten as
pik=Ψ(zik). (18)
Since the interconnection matrix Q is assumed to be positive-definite, each of its elements Qij is written as an inner-product between two vectors as Qij=xi·xj, xi∈RD, similar to that of a kernel matrix used in SVMs. Thus, each neuron i in the network is associated with a vector xi which enables the neuron to be mapped onto a metric space RD, providing an alternate geometric representation of the neural network. Thus,
represents the distance of the vector xi from a hyper-plane in the co-ordinate space parameterized by a weight vector wk and offset bik where
wk=Σjxjpjk=ΣjxjΨ(zjk). (21)
Eqns. (20) and (21) are considered to be the first order condition for minimizing a primal cost function P with respect to the vector wk, where P is given by:
where
g(zik)=∫Ψ(zik)dzik. (23)
The mapping from the response of the neuron (pik) to the response of the network g(zik) according to
is useful for visualizing the nature of the solution and the network's dynamical response, as demonstrated next using a specific example of a previously defined probabilistic GiniSVM.
For a GiniSVM, the function Ψ(.) is a piece-wise linear continuous function, as shown in
This visualization is used to understand the sparsity of an SVM solution or to determine the location of the support vectors. In an aspect, the visualization tool is also used to understand the dynamics of the optimization process as the dual cost function is optimized. This visualization is illustrated in
B. Growth Transform Neuron Model
In various aspects, since Eqn. (1) also represents a first-order condition for the dual optimization problem, one approach to implement a dynamic model of a neuron is to update the variables pik, k=1, 2 such that the dual cost function D is optimized over a manifold H defined by:
H={pik: pik≥0 and pi1+pi2=1} (25)
In an aspect, this evolution process is implemented using a polynomial growth transform, which is a fixed-point algorithm for optimizing polynomial functions over a probability manifold, such as H. Growth transforms are applied to optimize dual-cost functions that occur in different variants of SVMs and, in various aspects, a similar approach is used to implement the model of a neuron that can exhibit different dynamical properties based on different choices of the mapping function in Eqn. (1).
For the cost function D{pik} in Eqn. (15), a growth-transform neuron updates its response pik according to:
Sik=Φ′(pik) (26)
where
is a normalization factor that ensures pi1+pi2=1.
K={Ki}, i=1 . . . N is a constant vector of nonnegative elements which is admissible if
If Φ′(.) (or S(.)) is a continuous polynomial function, the growth transform ensures that:
D{σ(pik)}≤D{pik}. (28)
with equality only if pik is a critical point of D.
However, if Φ′(.) is discontinuous over a sub-domain X⊂H and if the critical point is not reachable by the growth transform updates, some of the variables pik exhibit limit-cycles about the sub-domain X. The dynamic properties of the limit-cycle are determined by the shape of the potential function Φ(.), which may produce ΣΔ modulation, spiking, and/or bursting as the network converges to a steady-state solution.
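By way of non-limiting illustration, one multiplicative, self-normalizing update step consistent with standard polynomial growth transforms is sketched below; it is not asserted to be the exact update expression of the disclosed model, and the gradient routine and the choice of the constants Ki are illustrative assumptions only.

```python
import numpy as np

def growth_transform_step(p, grad_D, K):
    """One multiplicative growth-transform-style update over the manifold H of Eqn. (25).

    p      : N x 2 array; row i holds the mirrored pair (p_i1, p_i2), summing to 1.
    grad_D : N x 2 array of partial derivatives dD/dp_ik evaluated at the current p.
    K      : length-N vector of nonnegative constants, assumed large enough that every
             numerator below stays positive (the admissibility condition referenced above).
    """
    numer = p * (-grad_D + K[:, None])               # p_ik * (-dD/dp_ik + K_i)
    return numer / numer.sum(axis=1, keepdims=True)  # normalization keeps p_i1 + p_i2 = 1

# Toy usage with a random admissible state and a placeholder gradient.
rng = np.random.default_rng(0)
p = rng.uniform(0.1, 0.9, size=(5, 2))
p /= p.sum(axis=1, keepdims=True)
grad_D = rng.normal(size=(5, 2))
p_next = growth_transform_step(p, grad_D, K=np.full(5, 10.0))
```

Iterating such a step, with ∂D/∂pik recomputed after every update, keeps each mirrored pair on the manifold H of Eqn. (25); when Φ′(.) is discontinuous, the iterates may settle into the limit cycles described above.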
The architecture of a growth transform neural network includes a set of basic computing units (neurons) interconnected by a set of synaptic junctions. One architectural difference between existing neural networks and growth transform neural networks is that each of the neurons in a growth transform neural network is mirrored, as shown in
The mathematical model that governs the evolution of the variables pi+, pi− is summarized in
In various aspects, the growth transform neural network is incorporated into the design of a ΣΔ support vector machine (SVM). In this aspect, the potential function is given by Φ(pik)=|pik−½|, as shown in
Sik=Φ′(pik)=sgn(pik−0.5) (29)
and represents a binary output that switches between two values +1 and −1.
However, not all neurons in the ΣΔ support vector machine (SVM) exhibit a switching behavior, as is inferred from the geometric visualization of the primal-dual formulation shown in
which in turn is related to the variable pik. Using Eqns. (18) and (26):
This model was verified as illustrated in
Noise-shaping refers to the mechanism of shifting the energy contained in quantization noise and interference out of the spectral regions where the desired information is present. Using the mechanism of noise-shaping, ΣΔ modulators and biological neuronal networks can achieve encoding that can track the input stimuli with very high-fidelity. Previous attempts towards connecting principles of noise-shaping with learning resulted in networks with relatively simple feed-forward topologies, to ensure network stability. However, by construction, the dynamics of the proposed growth-transform neural network are always stable irrespective of the choice of the positive-definite interconnection matrix Q.
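By way of non-limiting illustration, the noise-shaping property can be inspected by estimating the power spectral density of a recorded ±1 switching sequence; in the sketch below a first-order ΣΔ modulator loop is used only as a stand-in signal source, whereas an actual experiment would record Sik=sgn(pik−0.5) from a neuron of the network.

```python
import numpy as np
from scipy.signal import welch

def switching_spectrum(s, fs=1.0):
    """Power spectral density of a recorded +/-1 switching sequence."""
    return welch(np.asarray(s, dtype=float), fs=fs, nperseg=1024)

# Stand-in switching sequence from a first-order Sigma-Delta loop tracking a slow sinusoid;
# an actual experiment would instead record S_ik = sgn(p_ik - 0.5) from a network neuron.
n = 1 << 14
x = 0.4 * np.sin(2.0 * np.pi * 0.001 * np.arange(n))
s = np.empty(n)
integ = 0.0
for t in range(n):
    integ += x[t] - (s[t - 1] if t else 0.0)
    s[t] = 1.0 if integ >= 0.0 else -1.0

f, pxx = switching_spectrum(s)  # the noise floor rises with frequency (shaped out of band)
```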
This phenomenon is illustrated by the results shown in
In various other aspects, the growth transform neural network is incorporated into the design of support vector machine (SVM) characterized by spiking or bursting responses. The switching dynamics of the spiking/bursting SVMs are modified using a variation of the potential function. In particular, if Φ(.) is chosen to be non-convex, spiking dynamics are generated by the network of growth-transform neurons.
By way of non-limiting example, the potential function Φ(.) as shown in
Similar to the ΣΔ SVM, the expected value of the output Sik for individual neurons also encodes the margin of separation, as shown in
The stimuli used are binary labels assigned to the neurons that determine the network configuration for a given classification problem, and the strength of the stimulus is higher for neurons closer to the classification hyperplane (i.e., for the ‘support vectors’). As the margin of separation z from the hyperplane decreases, the spiking rate for a support vector increases and it starts spiking earlier in the convergence process.
In an aspect, the spiking potential function is modulated to change the spiking dynamics across the population. For a particular support vector neuron, the spiking rate decreases with an increase in the slope of the transition region at the classification boundary, as expressed by W (see Table I below).
By way of non-limiting example, three SVMs were developed using potential functions with differing values of W, as illustrated in
The classification boundary for the non-linear classification example using the spiking SVM is shown in
In various other aspects, the growth transform neural network is incorporated into the design of a bursting support vector machine (SVM) characterized by spiking and bursting responses. The switching dynamics of the bursting SVM are modified using a variation of the potential function shown in Table I, in which the previous potential function is made asymmetric by making the upward and downward transition slopes and their widths unequal, as shown in
The mean value of the output Sik of the neurons plotted against the margin variable for the support vectors is given in
By way of non-limiting example, three SVMs were developed using potential functions with differing values of W2 as illustrated in
In various aspects, methods were described for solving a general class of problems given in Eqn. (1) for different types of the network mapping function Ψ(.), and hence Θ(.). A geometric approach was described for solving primal-dual optimization problems using a novel growth transform neuron model that asymptotically satisfies Eqn. (1) as an equivalent first-order condition. This geometric framework was then shown to be applicable for generating different types of support vector networks with different dynamical properties ranging from ΣΔ modulation to spiking and bursting.
One insight that emerged from the disclosed geometric framework is that, while each individual neuron is optimizing a relatively simple dual cost function, the network as a whole exhibits complex dynamics corresponding to primal loss-functions with hysteresis and discontinuities. In each of the support vector networks that incorporate the disclosed growth transform neural networks as described above, irrespective of the nature of the output (ΣΔ modulation, spiking or bursting), the output of the neuron faithfully encodes an equivalent classification margin. In all described networks, those neurons located in close proximity (with respect to the classification margin) to the classification boundary exhibit switching dynamics. Also, the switching rates (for example spiking rates) increased for neurons located close to the classification boundary, implying that the network self-optimizes for energy (switching energy) based on the significance of the neuron.
While the systems described herein were demonstrated using simple two-dimensional synthetic problems (for the ease of visualization), it is to be understood that the results are suitable for larger and more complex tasks. By way of non-limiting example, Table I summarizes the classification results of different variants of switching SVMs, trained and evaluated on a benchmark ‘Adult (a3a)’ dataset. The training dataset (3185 instances) and testing dataset (29376 instances) provided on the LIBSVM website were used for training and cross-validation respectively. The classification accuracies produced by the different SVM variants are comparable to each other and comparable to previously reported classification results for this specific dataset. The geometric framework described herein provides a useful tool to connect existing machine learning models to neuromorphic principles like noise-shaping, spiking and bursting and hence paves the way towards designing scalable neuromorphic processors based on growth transform neuron models.
In another aspect, in the context of a probability regression framework, a two-class SVM solves a system objective according to:
minp
under the constraints:
pi++pi−=1 (33)
and
pi+,pi−≥0 (34)
where i corresponds to the index of an input data vector or a support vector neuron and yi+, yi− correspond to the a-priori probabilities (or labels) associated with each of the two classes (denoted as + and −).
The first term in the cost function minimizes a kernel distance between the class labels and the probability variables pi+, pi−, and the second term minimizes a cumulative potential function Ω(.) corresponding to each neuron. The kernel or the interconnection matrix Q is a positive definite matrix such that each of its elements is written as an inner-product in a high-dimensional space as Qij=Ψ(xi)·Ψ(xj) where xi∈RD corresponds to the input data vector and Ψ(.) represents a high-dimensional mapping function.
A growth transformation is used for connecting the system objective to the model of the neuron described in
Due to the positive-definite property of the interconnection matrix Q, each of the neurons i, i=1 . . . N, in the network is mapped to a vector xi in a metric space RD, providing an alternate geometric representation of the neural network. Then, the margin variable given by:
represents the distance (or the margin) of the vector xi (hence the neuron) from a classification hyper-plane located in a high-dimensional coordinate space.
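By way of non-limiting illustration, the margin of each neuron can be computed from the converged probability variables as sketched below, assuming a kernel-expansion form zi=ΣjQij(pj+−pj−)+b consistent with Eqns. (3) and (17); the radial basis function kernel and the scalar bias used here are illustrative assumptions and may differ from the exact margin expression of the disclosure.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Q_ij = Psi(x_i) . Psi(x_j), realized here with a radial basis function kernel."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def margins(Q, p_plus, p_minus, b=0.0):
    """Distance of each neuron from the classification hyperplane, assuming the
    kernel-expansion form z_i = sum_j Q_ij (p_j+ - p_j-) + b."""
    return Q @ (p_plus - p_minus) + b

# Toy usage with probability variables satisfying the constraints of Eqns. (33)-(34).
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))                 # input data vectors x_i
p_plus = rng.uniform(size=20)
p_minus = 1.0 - p_plus
z = margins(rbf_kernel(X, gamma=0.5), p_plus, p_minus)
```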
A simple two-class, two-dimensional, linearly separable classification problem is used to illustrate different spiking dynamics produced by a growth-transform neural network. Starting from a well-defined initial state, the network optimizes the system objective function given in Eqn. (32) and as a result each neuron in the network produces its own unique spiking dynamics.
The expected value of the output Si+ for each individual neuron faithfully encoded its margin of separation from the classification boundary, as shown in
The spiking SVM model encodes its information using both mean-firing-rate and time-to-first-spike encoding. Mean firing rate over time is a widespread rate coding scheme that claims that the spiking frequency or rate increases with the intensity of stimulus. A temporal code like time-to-first-spike, on the other hand, claims that the first spike after the onset of stimulus contains all the information, and the time-to-first-spike is smaller for a stronger stimulus. For an SVM formulation, the input stimuli are determined by the class labels (a-priori probabilities) which determine the network configuration (location of each individual neuron with respect to the boundary for a given set of labels). The modulation in the SVM network as shown in
In another aspect, the spiking SVM learned to exhibit noise-shaping, another spectral property observed in biological neuronal networks. Noise-shaping is characterized by neurons shifting the energy contained in noise and interference out of frequency bands where target information is present. For a spiking SVM, more discriminatory information is encoded by the neurons closer to the classification boundary and a spectral plot of one of these neurons clearly reveals the noise-shaping, as shown in
In another aspect, the dynamical properties are conserved when the SVM is scaled to a larger and a more complicated classification problem. By way of non-limiting example, the spiking SVM was applied to solve two benchmark UCI datasets ‘Heart disease (Cleveland)’ and ‘Diabetes’. The classification results using 5-fold cross-validation are summarized in Table 2, along with results obtained from literature which use variants of support vector machines. The datasets are labeled with (N, d), where N denotes the number of instances (i.e. neurons) and d denotes the dimension of the feature vector. The ‘Heart disease’ dataset was used as a binary dataset to distinguish only presence from absence of the disorder. The classification accuracies produced by the spiking SVM were comparable to previously reported results.
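By way of non-limiting illustration, the two spike-based measures discussed above, the mean firing rate and the time-to-first-spike, can be computed for any neuron from its recorded spike times as sketched below; the spike-time representation and the numerical values are illustrative assumptions.

```python
import numpy as np

def rate_and_ttfs(spike_times, window):
    """Mean firing rate and time-to-first-spike of one neuron over an observation window."""
    spike_times = np.asarray(spike_times, dtype=float)
    rate = spike_times.size / window
    ttfs = spike_times.min() if spike_times.size else np.inf
    return rate, ttfs

# Neurons closer to the classification boundary are expected to show a higher rate and a
# smaller time-to-first-spike (stronger effective stimulus), per the encoding discussion above.
rate, ttfs = rate_and_ttfs([12.0, 19.5, 27.1, 33.8], window=100.0)
```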
A raster plot of spiking neurons (i.e., the ‘support vectors’) arranged in the increasing order of margin of separation is shown in
A spiking neuron model based on growth transform updates demonstrates how the disclosed model is used for designing a large-scale spiking SVM capable of solving classification tasks with accuracies comparable to those of standard SVMs. The support vector network produced spiking dynamics that faithfully encoded an equivalent classification margin for each individual neuron using a combination of different spike-based encoding techniques. Neurons located close to the classification boundary were seen to exhibit these dynamics, and spiking rates increased for neurons with lower margins, implying that the network self-optimized for switching energy based on the relative importance of the neuron. The growth transform neural network serves as an important tool in connecting existing machine learning models to spiking neural networks and hence paves the way towards designing scalable neuromorphic processors.
In various aspects, the methods described herein are implemented using a remote and/or local computing device as described herein below.
Computing device 302 also includes at least one media output component 308 for presenting information to a user 310. Media output component 308 is any component capable of conveying information to user 310. In some embodiments, media output component 308 includes an output adapter such as a video adapter and/or an audio adapter. An output adapter is operatively coupled to processor 304 and is operatively coupleable to an output device such as a display device (e.g., a liquid crystal display (LCD), organic light emitting diode (OLED) display, cathode ray tube (CRT), or “electronic ink” display) or an audio output device (e.g., a speaker or headphones). In some embodiments, media output component 308 is configured to present an interactive user interface (e.g., a web browser or client application) to user 310.
In some embodiments, client computing device 302 includes an input device 312 for receiving input from user 310. Input device 312 may include, for example, a keyboard, a pointing device, a mouse, a stylus, a touch sensitive panel (e.g., a touch pad or a touch screen), a camera, a gyroscope, an accelerometer, a position detector, and/or an audio input device. A single component such as a touch screen may function as both an output device of media output component 308 and input device 312.
Computing device 302 may also include a communication interface 314, which is communicatively coupleable to a remote device such as SE computing device. Communication interface 314 may include, for example, a wired or wireless network adapter or a wireless data transceiver for use with a mobile phone network (e.g., Global System for Mobile communications (GSM), 3G, 4G, or Bluetooth) or other mobile data network (e.g., Worldwide Interoperability for Microwave Access (WIMAX)).
Stored in memory area 306 are, for example, computer-readable instructions for providing a user interface to user 310 via media output component 308 and, optionally, receiving and processing input from input device 312. A user interface may include, among other possibilities, a web browser and client application. Web browsers enable users 310 to display and interact with media and other information typically embedded on a web page or a website from a web server associated with a merchant. A client application allows users 310 to interact with a server application.
Processor 404 is operatively coupled to a communication interface 408 such that server computing device 402 is capable of communicating with a remote device such as computing device 302 shown in
Processor 404 may also be operatively coupled to a storage device 410. Storage device 410 is any computer-operated hardware suitable for storing and/or retrieving data. In some embodiments, storage device 410 is integrated in server computing device 402. For example, server computing device 402 may include one or more hard disk drives as storage device 410. In other embodiments, storage device 410 is external to server computing device 402 and is accessed by a plurality of server computing devices 402. For example, storage device 410 may include multiple storage units such as hard disks or solid state disks in a redundant array of inexpensive disks (RAID) configuration. Storage device 410 may include a storage area network (SAN) and/or a network attached storage (NAS) system.
In some embodiments, processor 404 is operatively coupled to storage device 410 via a storage interface 412. Storage interface 412 is any component capable of providing processor 404 with access to storage device 410. Storage interface 412 may include, for example, an Advanced Technology Attachment (ATA) adapter, a Serial ATA (SATA) adapter, a Small Computer System Interface (SCSI) adapter, a RAID controller, a SAN adapter, a network adapter, and/or any component providing processor 404 with access to storage device 410.
Memory areas 306 (shown in
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.
This application claims the benefit of U.S. Provisional Application No. 62/425,372 filed Nov. 22, 2016, which is incorporated herein in its entirety. This application further claims the benefit of U.S. Provisional Application No. 62/484,669 filed Apr. 12, 2017, the contents of which are incorporated herein in their entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2017/062986 | 11/22/2017 | WO | 00 |
Number | Date | Country | |
---|---|---|---|
62425372 | Nov 2016 | US | |
62484669 | Apr 2017 | US |