The subject disclosure relates generally to wireless communications systems, and more particularly to signal detection in multiple-input multiple-output (MIMO) systems.
Wireless communication networks are increasingly popular and widely deployed. Multiple-input multiple-output (MIMO) technology is a promising candidate for next-generation wireless communications. However, signal detection and decoding are more complex in MIMO networks than in conventional wireless networks that have only a single receive/transmit antenna per attached device.
The linearity of a communication channel and the lattice structure of a modulation scheme can be exploited to state many signal detection problems as a problem of finding a nearest lattice point. Further, the relative degree of freedom provided by such lattice-based approaches in choosing a lattice basis can be a significant factor affecting the quality and efficiency of such approaches. For example, conventional low-complexity and highly sub-optimal MIMO detectors can be modified to provide detection that achieves full diversity without a significant sacrifice in complexity by employing lattice reduction of associated MIMO channel matrices.
However, finding a good reduced lattice basis can be significantly more complicated than the other components of many conventional lattice-based signal detection approaches, such that the lattice reduction complexity often dominates the overall detection complexity. Moreover, this disparity in complexity generally becomes more significant as the dimension of the associated communication system increases. As a result, difficulties arise in applying conventional signal detection techniques in many communication systems, such as those where an associated channel matrix or related lattice basis undergoes frequent changes.
The above-described deficiencies of wireless network communications are merely intended to provide an overview of some of the problems of today's wireless networks, and are not intended to be exhaustive. Other problems with the state of the art may become further apparent upon review of the description of various non-limiting embodiments that follows.
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended to neither identify key or critical elements of the invention nor delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.
According to one aspect, a method and system of signal detection in MIMO networks are provided. A channel matrix corresponding to a channel is determined. After determining the channel matrix, the basis vectors of the channel matrix can be sorted before performing the Gram-Schmidt Orthogonalization (GSO) step of the Lenstra-Lenstra-Lovasz (LLL) lattice reduction technique. The sorting can be done, for example, using Vertical Bell Labs Layered Space-Time (V-BLAST) or sorted-QR ordering. Subsequently, an extended LLL technique that works with complex vectors can be used to reduce the sorted lattice. The resulting solution can then be used to decode the symbols sent over the channel.
In another embodiment, instead of pre-sorting the basis vectors prior to the GSO step, the sorting is performed jointly with the reductions. After each reduction step of the LLL lattice reduction technique, a candidate vector can be selected that will reduce the overall complexity. This technique, called joint sorting and LLL reduction (JSAR), can be applied to the LLL reduction of real or complex lattice bases. The selection can be based, for example, on the vector with the shortest projection.
Since LLL lattice reduction is an iterative algorithm, the LLL can be stopped before continuing to another iteration when predetermined conditions occur. For example, the LLL can be stopped if a predetermined number of vectors have been swapped or the processing time has exceeded some predetermined threshold. Although stopping early prevents a fully reduced lattice from being obtained, it can reduce the complexity and the time needed to accurately detect the transmitted symbols.
To the accomplishment of the foregoing and related ends, certain illustrative aspects of the invention are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed and the present invention is intended to include all such aspects and their equivalents. Other advantages and novel features of the invention may become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
The present invention is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the present invention.
Turning to
The LLL reduction is a popular technique for lattice reduction (e.g., in the lattice reduction component 204) since it runs in polynomial time. However, since the traditional LLL reduction algorithm is a number-theoretic tool, it works on real lattices, not complex lattices. Consequently, the real-valued equivalent matrix of the complex channel matrix H is often used instead of the complex matrix:

HR=[ℜ(H) −ℑ(H); ℑ(H) ℜ(H)], (Equation 1)

where ℜ(H) denotes the matrix comprising the real part of matrix H and ℑ(H) denotes the matrix comprising the imaginary part of matrix H. The complex MIMO system model is replaced by its real equivalent model

yR=HRxR+wR, (Equation 2)

where yR=[ℜ(y)T ℑ(y)T]T, and similarly for xR and wR.
The direct application of LLL reduction using the real-valued equivalent matrix doubles the channel matrix dimension and adds unnecessary complexity to the lattice reduction. Moreover, since the reduced basis matrix does not generally have the matrix structure of Equation 1, the detection part also has to be done in the real number field, rather than in its natural complex number field.
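By way of example and not limitation, the following sketch (in Python with NumPy, an illustrative choice rather than part of the disclosed embodiments) shows how the real-valued equivalent model of Equations 1 and 2 can be formed from a complex channel matrix, making the doubling of the problem dimension explicit.

```python
import numpy as np

def real_equivalent(H, y):
    """Map the complex model y = Hx + w to its real equivalent
    y_R = H_R x_R + w_R (Equations 1 and 2), doubling every dimension."""
    H_R = np.block([[H.real, -H.imag],
                    [H.imag,  H.real]])
    y_R = np.concatenate([y.real, y.imag])
    return H_R, y_R

# Illustration: a 2x2 complex channel becomes a 4x4 real matrix.
rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
H_R, y_R = real_equivalent(H, y)
assert H_R.shape == (4, 4) and y_R.shape == (4,)
```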
Accordingly, a complex LLL (CLLL) reduction technique is provided that uses the same steps as the traditional LLL technique, namely:
GSO procedure: a GSO procedure to compute the squared norms Hi of the orthogonalized basis vectors and the GSO coefficients μi,j.
Size reduction: A process that aims to make basis vectors shorter and closer to orthogonal.
Basis vector swapping: Two consecutive basis vectors hk−1 and hk will be swapped if Hk≧(δ−|μk,k−1|2)Hk−1 is violated. Thus, after swapping, size reduction can be repeated to make the basis vectors shorter.
The two steps, size reduction and basis vector swapping, iterate until Hk≧(δ−|μk,k−1|2)Hk−1 is satisfied by all pairs of hk−1 and hk. The resultant basis is thus LLL-reduced.
Although a more generalized version of the LLL algorithm already exists, several modifications are made such that simpler condition checking can be employed and the technique made even faster. From analytic and simulation results, the average overall complexity of the CLLL reduction algorithm can be about half of that of the real LLL (RLLL) reduction algorithm. Linear detectors employing a CLLL-reduced basis can achieve full diversity just like those employing an RLLL-reduced basis. Finally, simulation results reveal that the bit-error-rate performance of CLLL-aided schemes is virtually the same as that of RLLL-aided schemes.
The GSO procedure can be extended to complex vectors. Moreover, since μi,j is now complex, the modified size reduction condition is:
|ℜ(μi,j)|≦0.5 and |ℑ(μi,j)|≦0.5 (Equation 3)
for 1≦j<i≦n. The swapping condition remains unchanged, but factor δ is now restricted to ½<δ<1 for convergence and polynomial running time.
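By way of example and not limitation, the following Python/NumPy sketch outlines a CLLL reduction built from the three steps above, enforcing the complex size reduction condition of Equation 3 by rounding the real and imaginary parts of μk,j and using the swap test Hk≧(δ−|μk,k−1|2)Hk−1. For clarity it recomputes the GSO from scratch after each update, which is simpler but less efficient than the incremental updates a practical implementation would use; it is a sketch rather than the claimed embodiment.

```python
import numpy as np

def c_round(z):
    # Round to the nearest Gaussian integer (real and imaginary parts separately).
    return np.round(z.real) + 1j * np.round(z.imag)

def gso(B):
    # Gram-Schmidt orthogonalization: returns mu[i, j] and H_i = ||h_i*||^2.
    n = B.shape[1]
    Q = np.zeros_like(B)
    mu = np.eye(n, dtype=complex)
    norms = np.zeros(n)
    for i in range(n):
        Q[:, i] = B[:, i]
        for j in range(i):
            mu[i, j] = np.vdot(Q[:, j], B[:, i]) / norms[j]
            Q[:, i] = Q[:, i] - mu[i, j] * Q[:, j]
        norms[i] = np.real(np.vdot(Q[:, i], Q[:, i]))
    return mu, norms

def clll(H, delta=0.99):
    """Sketch of a complex LLL reduction of the columns of H.
    Returns the reduced basis H' and the unimodular matrix T with H' = H T."""
    H = np.array(H, dtype=complex)
    n = H.shape[1]
    T = np.eye(n, dtype=complex)
    mu, norms = gso(H)
    k = 1
    while k < n:
        # Size reduction against h_{k-1}: enforce |Re(mu)| <= 0.5 and |Im(mu)| <= 0.5.
        q = c_round(mu[k, k - 1])
        if q != 0:
            H[:, k] -= q * H[:, k - 1]
            T[:, k] -= q * T[:, k - 1]
            mu, norms = gso(H)
        # Swap test: swap if H_k >= (delta - |mu_{k,k-1}|^2) H_{k-1} is violated.
        if norms[k] < (delta - abs(mu[k, k - 1]) ** 2) * norms[k - 1]:
            H[:, [k - 1, k]] = H[:, [k, k - 1]]
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            mu, norms = gso(H)
            k = max(k - 1, 1)
        else:
            # Size-reduce against the remaining earlier vectors before moving on.
            for j in range(k - 2, -1, -1):
                q = c_round(mu[k, j])
                if q != 0:
                    H[:, k] -= q * H[:, j]
                    T[:, k] -= q * T[:, j]
            mu, norms = gso(H)
            k += 1
    return H, T
```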
The complexity of the LLL reduction algorithm depends on the distribution of the random basis matrix H. For real lattices, the complexity of RLLL is
where BR is the norm of the longest column vector of HR, and λ(ΛR) denotes the norm of the shortest vector in the lattice ΛR generated by HR. This result is extended here to complex lattices.
Theorem 1 Consider an m×n complex matrix H=[h1 . . . hn] that generates an n-dimensional complex lattice Λ. The complexity of the CLLL algorithm on basis H is
where B is the norm of the longest column vector of H, and λ(Λ) is the norm of the shortest vector in the complex lattice Λ generated by H.
To preliminarily estimate how much complexity can be reduced by applying CLLL, compared with RLLL, the CLLL-to-RLLL Complexity Ratio is considered
where K is an architecture-dependent factor indicating, on average, how many real arithmetic operations have to be executed per complex operation. For example, if a complex addition uses 2 real arithmetic operations and a complex multiplication uses 6, then K=(6+2)/2=4, since the numbers of additions and multiplications are roughly the same for CLLL and RLLL. Pc (Pr) denotes the probability that the conditional test is passed in CLLL (RLLL), i.e.,
Pr(2n)=P{|μi,j(2n)|>0.5}, μi,j real, (Equation 5)
and
Pc(n)=P{|ℜ(μi,j(n))|>0.5 or |ℑ(μi,j(n))|>0.5}, (Equation 6)
where, for clarity, the dependence on the dimension is shown explicitly.
By the definition of μi,j, the random variables ℜ(μi,j(n)), ℑ(μi,j(n)), and the real-valued μi,j(2n) should have similar statistics. Moreover, for circularly symmetric complex Gaussian H, it is reasonable to assume that the events |ℜ(μi,j(n))|>0.5 and |ℑ(μi,j(n))|>0.5 are statistically independent, as supported by the empirical results shown in Table 1 for n≦22.
With this information, and the fact that the logarithmic factors become comparable for large enough n, the ratio in Equation 4 becomes
CR=K/16×2=K/8=½, (Equation 8)
where the common value K=4 was used. This means that the CLLL algorithm will have about half of the complexity of the RLLL algorithm. Empirical results confirm the above prediction of 50% complexity reduction.
The upper bounds for the traditional RLLL are valid for a CLLL-reduced basis as well, except for a minor change in the bound in this case. Hence, lattice-reduction-aided detection schemes utilizing CLLL reduction can also achieve full diversity, just like the traditional LLL.
The average complexity and the error-rate performance are compared when the reduced bases are used in MIMO detection.
The average complexity was measured in terms of the average number of floating-point operations (flops) used. Simulations were conducted in which the number of flops for a complex addition equals 2 and the number of flops for a complex multiplication equals 6. For real numbers, both addition and multiplication take 1 flop each. Moreover, it is assumed that the costs of rounding and hard-limiting are negligible when compared to floating-point addition and multiplication. The LLL factor δ was set to 0.99 in all cases for the best performance.
The average complexity of the LLL-reduction-aided successive interference cancellation (LLL-SIC) detection scheme was determined. The whole detection process can be divided into two parts, preprocessing and processing, as sketched after the list below. The preprocessing part includes these operations:
Lattice reduction of channel matrix H.
QR decomposition of the reduced channel matrix H′.
And the processing part includes:
Matrix multiplication QHy.
Successive interference cancellation and detection.
Unimodular transformation Ux̂′, where x̂′ is the vector obtained by successive nulling and cancellation; and hard-limiting of the resultant vector to a valid modulation symbol vector.
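By way of example and not limitation, the listed preprocessing and processing operations can be combined as in the following Python/NumPy sketch. It assumes a reduction routine such as the clll() sketch given earlier (whose unimodular output T plays the role of U above), symbols drawn directly from the Gaussian-integer lattice (constellation scaling and shifting are omitted), and a hypothetical hard_limit() callback supplied by the caller to map estimates to valid modulation symbols; none of these names are prescribed by the disclosure.

```python
import numpy as np

def lll_sic_detect(H, y, hard_limit, delta=0.99):
    # --- Preprocessing -------------------------------------------------
    H_red, T = clll(H, delta)        # lattice reduction of channel matrix H
    Q, R = np.linalg.qr(H_red)       # QR decomposition of the reduced matrix H'
    # --- Processing ----------------------------------------------------
    z = Q.conj().T @ y               # matrix multiplication Q^H y
    n = R.shape[1]
    x_red = np.zeros(n, dtype=complex)
    for i in range(n - 1, -1, -1):   # successive interference cancellation
        r = (z[i] - R[i, i + 1:] @ x_red[i + 1:]) / R[i, i]
        x_red[i] = np.round(r.real) + 1j * np.round(r.imag)  # nearest Gaussian integer
    x_hat = T @ x_red                # unimodular transformation back to the original basis
    return hard_limit(x_hat)         # hard-limit to a valid modulation symbol vector
```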
Table 2 illustrates the average complexity of the preprocessing and processing parts of LLL-SIC. Since the complex lattice was manipulated rather than the real-equivalent lattice, the complexity of the entire preprocessing part was reduced by 45.1% for m=n=2 (i.e., a 2-transmitter-2-receiver system), and by between 42.4% and 49.2% for larger n.
In particular, the complexity reduction of the CLLL reduction algorithm over the traditional RLLL is about 44.2% to 50.5% for the selected range of n.
The computation for this part was reduced by about 40.4% to 47.1% by performing the QR decomposition in the complex number field. If it is assumed that the number of complex additions is roughly the same as the number of complex multiplications for this part, 4 flops are used for each complex number operation on average. Thus, the complexity reduction of this part approaches 4/8=50% for sufficiently large n.
The bit-error-rate (BER) performance when traditional RLLL-reduced and CLLL-reduced bases are used in MIMO detection is shown in
To prove that the LLL reduction algorithm will be terminated in polynomial time, a positive-valued quantity
is defined. The value is reduced by a factor of at least 1/δ after each basis vector swapping in the LLL algorithm. Since there is a lower bound on this value for a given lattice, the algorithm terminates after a finite number of steps.
The first step of the LLL algorithm is the execution of the GSO process. Different permutations of the original vectors produce different sets of orthogonal vectors, and hence different values of Equation 9. Therefore, some permutations give smaller initial values of Equation 9. The complexity of the LLL algorithm can be further reduced by sorting the vectors such that D is minimized.
One technique that can reduce Equation 9 is to sort the basis vectors in ascending order according to their norms. More precisely, let π be the permutation of basis vectors; then
πi=arg minj≠π1, . . . ,πi−1 ∥hj∥ (Equation 10)
This ordering is called norm-induced ordering. Although it takes time to calculate Equation 10, simulation results show that this simple scheme is enough to reduce the complexity of LLL.
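By way of example and not limitation, under the assumption of a NumPy column-basis matrix, the norm-induced ordering of Equation 10 amounts to a single sort of the columns by ascending norm before the (C)LLL reduction is started, as in the following sketch.

```python
import numpy as np

def norm_induced_order(H):
    """Return the permutation pi of Equation 10 and the reordered basis H[:, pi]."""
    pi = np.argsort(np.linalg.norm(H, axis=0))   # ascending column norms
    return pi, H[:, pi]
```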
The original V-BLAST detection algorithm sorts the basis vectors implicitly. This suggests the idea of employing such an ordering to reduce the complexity of LLL. Geometrically speaking, to choose the next vector to be reduced, each of the candidate vectors is projected onto the orthogonal complement of the subspace spanned by the other candidate vectors, and the one with the shortest projection is chosen.
However, the cost of computing the V-BLAST ordering is quite high and is not offset by the complexity reduction obtained. Accordingly, the scheme is not very attractive. Nonetheless, if this ordering is known a priori, for example from V-BLAST detection of a previous symbol, then this information can be used to largely reduce the complexity of LLL.
This modified QR decomposition algorithm aims to maximize the diagonal elements of the R matrix (i.e., the upper triangular matrix) using a greedy approach, i.e., Rn,n is maximized first, then Rn−1,n−1, Rn−2,n−2, and so on. A small modification converts it into a GSO algorithm that aims to minimize the norms of the orthogonal vectors hi*.
Although sorted-QR ordering does not produce a better ordering than the V-BLAST ordering, sorted-QR ordering can achieve a greater overall complexity reduction than V-BLAST ordering because the sorting is performed implicitly in the GSO procedure.
A novel technique jointly performs the vector sorting stage and the lattice reduction stage.
The traditional LLL reduction works in a successive manner. In other words, to LLL-reduce a basis consisting of vectors {h1, . . . , hn}, the algorithm reduces {h1, h2} first, which is called the first reduction step. After that, the algorithm goes on to reduce {h′1, h′2, h′3} where h′1, h′2 denote the reduced basis vectors obtained in the first reduction step, and so on. After n−1 reduction steps, the whole basis is reduced.
Now, after the i-th reduction step, instead of picking hi+1 as the next vector to be reduced, a vector is picked among the candidate vectors hi+1, . . . hn that would minimize or reduce the overall complexity. In other words, a candidate vector is labeled as hi+1 after the i-th reduction step if, by doing so, the overall complexity can be reduced according to some heuristics. This approach is called joint sorting and reduction (JSAR) as the processes of LLL reduction and vector sorting are now jointly considered.
Note how this approach is different from the naive sorting discussed supra. In the JSAR approach, the (i+1)-th vector to be reduced is not determined until the i-th reduction step is completed. As some vectors may have been manipulated in the last reduction step, the best candidate to be labeled hi+1 may change as well. In contrast, in the naive sorting approach, the ordering is fixed once the reduction starts. In general, a better result can be expected with this joint approach.
A heuristic for JSAR is provided that can reduce the overall complexity of the LLL reduction technique.
Denote B as the set of candidate vectors. All vectors in B are projected onto the orthogonal complement of the space spanned by the labeled vectors, and the one with the shortest projection is picked. For the first vector, the basis vector with the smallest norm is selected. More precisely,
hi=arg minν∈B ∥proj(ν, Si−1⊥)∥ (Equation 11)
where Si−1⊥ denotes the orthogonal complement of Si−1=span(h1, . . . , hi−1), and proj(ν, U⊥) denotes the projection of vector ν onto subspace U⊥. This heuristic is called minimum projection ordering (MPO).
The complexity of computing Equation 11 may seem to be large. However, the GSO algorithm can be modified such that Equation 11 is computed implicitly, similar to sorted-QR.
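By way of example and not limitation, the following sketch illustrates how the minimum projection ordering of Equation 11 can be computed implicitly in a GSO-like fashion: each unlabeled candidate is maintained as its residual orthogonal to the span of the vectors labeled so far, and the candidate with the shortest residual is labeled next. In full JSAR this selection would be interleaved with the reduction steps and re-evaluated after each step; the reduction itself is omitted here, so this is only an illustration of the selection heuristic, not the complete JSAR procedure.

```python
import numpy as np

def mpo_order(H):
    """Return the column indices of H in minimum projection order (Equation 11)."""
    n = H.shape[1]
    residual = np.array(H, dtype=complex)        # projections onto S_{i-1}^perp
    remaining = list(range(n))
    order = []
    for _ in range(n):
        # Shortest projection among the candidates; for the first pick this is
        # simply the smallest-norm basis vector.
        norms = [np.linalg.norm(residual[:, j]) for j in remaining]
        j_star = remaining[int(np.argmin(norms))]
        order.append(j_star)
        remaining.remove(j_star)
        # Implicit GSO step: remove the component along the newly labeled direction.
        q = residual[:, j_star] / np.linalg.norm(residual[:, j_star])
        for j in remaining:
            residual[:, j] -= np.vdot(q, residual[:, j]) * q
    return order
```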
Unfortunately, this piece of information comes with a small price. Assume that the first p basis vectors are to be LLL-reduced, and that during the reduction hk−1 and hk were swapped. After the swapping, the following quantities need updating:
With the traditional GSO, μp+1,k−1, μp+1,k, μp+2,k−1 and μp+2,k do not need to be updated, since they were not even computed at this stage (only the μi,j's for j≦i≦p had been computed). However, the μi,j's are now determined in the ordering μ1,1, μ2,1, . . . , μn,1, μ2,2, . . . , μn,2, . . . , μn,n; in particular, the values in Equation 12 are now determined from top to bottom, left to right. Nonetheless, simulation results show that with minimum projection ordering, a saving in complexity can still be obtained.
The ordering heuristic is very similar to sorted-QR. In fact, if the original basis was "good" enough, such that basis vector swapping never occurs, then the two ordering schemes coincide. However, when there is basis vector swapping, the sorted-QR ordering in naive sorting and the minimum projection ordering in JSAR produce different results.
The average complexity and the error-rate performance are compared when the reduced bases are used in MIMO detection. Advantageously, JSAR with minimum projection ordering achieves the largest complexity reduction among the sorting schemes. Hereinafter, the LLL reduction algorithm with JSAR employing the minimum projection ordering heuristic will be referred to as LLL-MPO.
The JSAR technique can be applied to the LLL reduction of real or complex lattice bases. To demonstrate that JSAR can be employed simultaneously with complex LLL reduction, a comparison was performed between CLLL and CLLL-MPO (i.e., complex LLL lattice reduction with JSAR).
Table 3 illustrates the average complexity of the CLLL algorithm and the CLLL-MPO algorithm. The complexity, in terms of number of flops, was reduced by 11.85% when m=n=2 (i.e., a 2-transmitter-2-receiver system), and by between 16.53% and 27.90% for larger n. Similarly, the joint sorting and reduction technique reduced the average number of basis vector swaps by 47.56% when n=2, and by up to 60.35% when n=10.
The bit-error-rate (BER) performances are shown in
In some scenarios, a fully reduced lattice basis for the channel is not desirable, for example due to a maximum delay constraint. In such a case, the lattice reduction process can be stopped once the delay exceeds a certain threshold. This method is called truncated LLL reduction.
Turning briefly to
Although not shown, the method can be truncated after multiple iterations when a predetermined condition occurs, such as when a predetermined amount of time has elapsed or a predetermined number of iterations has occurred.
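By way of example and not limitation, the predetermined truncation conditions can be tracked by a small helper such as the following Python sketch, which monitors the number of basis vector swaps and the elapsed processing time; a check of should_stop() would be placed at the top of the reduction loop (e.g., of the clll() sketch given earlier) and record_swap() called after each column swap. The class name and the particular default thresholds are illustrative assumptions, not part of the disclosure.

```python
import time

class TruncationCriterion:
    """Tracks predetermined stopping conditions for a truncated LLL reduction."""
    def __init__(self, max_swaps=50, time_budget_s=0.001):
        self.max_swaps = max_swaps            # maximum number of basis vector swaps
        self.time_budget_s = time_budget_s    # maximum allowed processing time (seconds)
        self.swaps = 0
        self.start = time.monotonic()

    def record_swap(self):
        self.swaps += 1

    def should_stop(self):
        return (self.swaps >= self.max_swaps
                or time.monotonic() - self.start > self.time_budget_s)
```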
Turning to
Although not required, the invention can partly be implemented via software (e.g., firmware). Software may be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers.
With reference to
Computer 1110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 1110. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile as well as removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 1110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
The system memory 1130 may include computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) and/or random access memory (RAM). A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer 1110, such as during start-up, may be stored in memory 1130. Memory 1130 typically also contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 1120. By way of example, and not limitation, memory 1130 may also include an operating system, application programs, other program modules, and program data.
The computer 1110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. For example, computer 1110 could include a flash memory that reads from or writes to non-removable, nonvolatile media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and/or an optical disk drive that reads from or writes to a removable, nonvolatile optical disk, such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, digital versatile disks, digital video tape, solid state RAM, solid state ROM and the like.
A user may enter commands and information into the computer 1110 through input devices. Input devices are often connected to the processing unit 1120 through user input 1140 and associated interface(s) that are coupled to the system bus 1121, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A graphics subsystem may also be connected to the system bus 1121. A monitor or other type of display device may also be connected to the system bus 1121 via an interface, such as output interface 1150, which may in turn communicate with video memory. In addition to a monitor, computer 1110 may also include other peripheral output devices, which may be connected through output interface 1150.
The computer 1110 operates in a networked or distributed environment using logical connections to one or more other remote computers, such as remote computer 1170, which may in turn have capabilities different from device 1110. The logical connections depicted in
When used in a LAN networking environment, the computer 1110 is connected to the LAN through a network interface or adapter. When used in a WAN networking environment, the computer 1110 typically includes a communications component, such as a modem, or other means for establishing communications over the WAN, such as the Internet. A communications component, such as a network interface card, which may be internal or external, may be connected to the system bus 1121 via the user input interface of input 1140, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 1110, or portions thereof, may be stored in a remote memory storage device.
Turning now to
In one example, packet traffic originating from mobile subscriber 1250 is transported over the air interface to a BTS 1204, and from the BTS 1204 to the BSC 1202. Base station subsystems, such as BSS 1200, can be a part of internal frame relay network 1210 that can include Service GPRS Support Nodes ("SGSN") such as SGSN 1212 and 1214. Each SGSN is in turn connected to an internal packet network 1220 through which a SGSN 1212, 1214, etc., can route data packets to and from a plurality of gateway GPRS support nodes (GGSN) 1222, 1224, 1226, etc. As illustrated, SGSN 1214 and GGSNs 1222, 1224, and 1226 are part of internal packet network 1220. Gateway GPRS support nodes 1222, 1224 and 1226 can provide an interface to external Internet Protocol ("IP") networks such as Public Land Mobile Network ("PLMN") 1245, corporate intranets 1240, or Fixed-End System ("FES") or the public Internet 1230. As illustrated, subscriber corporate network 1240 can be connected to GGSN 1222 via firewall 1232; and PLMN 1245 can be connected to GGSN 1224 via border gateway router 1234. The Remote Authentication Dial-In User Service ("RADIUS") server 1242 may also be used for caller authentication when a user of a mobile subscriber device 1250 calls corporate network 1240.
Generally, there can be four different cell sizes: macro, micro, pico, and umbrella cells. The coverage area of each cell is different in different environments. Macro cells can be regarded as cells where the base station antenna is installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is below average rooftop level; they are typically used in urban areas. Pico cells are small cells having a diameter of a few dozen meters; they are mainly used indoors. Umbrella cells, on the other hand, are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.
The present invention has been described herein by way of examples. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, for the avoidance of doubt, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
Various implementations of the invention described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, as well as in software. As used herein, the terms "component," "system" and the like are likewise intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computer and the computer can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Furthermore, the invention may be described in the general context of computer-executable instructions, such as program modules, executed by one or more components. Generally, program modules include routines, programs, objects, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments. Furthermore, as will be appreciated various portions of the disclosed systems above and methods below may include or consist of artificial intelligence or knowledge or rule based components, sub-components, processes, means, methodologies, or mechanisms (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, classifiers . . . ). Such components, inter alia, can automate certain mechanisms or processes performed thereby to make portions of the systems and methods more adaptive as well as efficient and intelligent.
Additionally, the disclosed subject matter may be implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or processor based device to implement aspects detailed herein. The terms “article of manufacture,” “computer program product” or similar terms, where used herein, are intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick). Additionally, it is known that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN).
The aforementioned systems have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components, e.g., according to a hierarchical arrangement. Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.