The subject disclosure relates generally to data hiding in visual raster media, and more particularly to lossless encoding and decoding of hidden data, such as a digital watermark, using multiple predictor functions.
Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. For example, digital watermarking is one application of steganography. Digital watermarking is one of the ways to prove the ownership and the authenticity of the media. In order to enhance the security of the hidden message, the hidden message should be perceptually transparent and robust. However, for hidden messages, there is a tradeoff between visual quality and payload: the higher the payload, the lower the visual quality.
In traditional watermarking algorithms, a digital watermark signal is embedded into a digital host signal, resulting in a watermarked signal. However, distortion is introduced into the host image during the embedding process and results in Peak Signal-to-Noise Ratio (PSNR) loss. Although the distortion is normally small, some applications, such as medical and military applications, are sensitive to embedding distortion and may not tolerate permanent loss of signal fidelity. As a result, lossless data hiding, which can recover the original host signal and/or the hidden data signal perfectly after extraction, is desirable for at least these applications.
There are a number of existing lossless/reversible watermarking algorithms. In one algorithm, modulo operations are used to ensure reversibility; however, this often results in “salt-and-pepper” artifacts. In another algorithm, a circular interpretation of a bijective transform is used for lossless watermarking. Although the algorithm can withstand some degree of image encoding (e.g., JPEG) attack, the small payload capacity and “salt-and-pepper” artifacts are major disadvantages of the algorithm. In yet another algorithm, the prediction error between the predicted pixel value and the original pixel value is used to embed data; however, some overhead (e.g., a location map and threshold values) is needed to ensure reversibility.
The above-described deficiencies of current data hiding methods are merely intended to provide an overview of some of the problems of today's data hiding techniques, and are not intended to be exhaustive. Other problems with the state of the art may become further apparent upon review of the description of various non-limiting embodiments of the invention that follows.
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended to neither identify key or critical elements of the invention nor delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.
According to one aspect, a method of encoding/decoding hidden data is provided which uses a set of multiple predictors. Each predictor generates a predicted value for a pixel according to one or more surrounding pixels. The multiple predictors can include, but are not limited to, a horizontal predictor, a vertical predictor, a causal weighted average, and a causal spatial varying weight. Data is embedded by making the watermarked pixel value close to one of the predicted values generated by the predictors. The embedding process involves bijective mirror mapping (BMM) for embedding hidden data and bijective pixel value shifting (BPVS) for maintaining reversibility of various candidate positions. By using different predictors, a candidate location can be either a low-variance region (smooth region) or a high-variance region (texture/edge region). The payload capacity is increased over the other methods, and no location map is needed to ensure reversibility. In order to recover the watermark, some side information is conveyed to the decoder, such as the set of predictors used.
To the accomplishment of the foregoing and related ends, certain illustrative aspects of the invention are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed and the present invention is intended to include all such aspects and their equivalents. Other advantages and novel features of the invention may become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
The present invention is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the present invention.
As used in this application, the terms “component,” “module,” “system”, or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ). Additionally it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word exemplary is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
Referring now to
The system 100 also includes a decoder 104. The decoder 104 can also be hardware and/or software (e.g., threads, processes, computing devices). The decoder 104 can house threads to perform decoding. Multiple decoders can be provided, e.g., one for each set of predictors and/or for smooth/edge encoding regions. One possible communication between an encoder 102 and a decoder 104 can be in the form of data packets adapted to be transmitted between two or more computer processes. The data packets can include data representing visual media, such as a video frame or a static image.
The system 100 also includes a content consumer 108. The content consumer can also be hardware and/or software (e.g., threads, processes, computing devices). The content consumer 108 can present the visual media to a user and/or store the visual media for future distribution and/or playback. In some embodiments, the content consumer and/or its user is aware of the hidden data. For example, media companies may indicate that particular visual media is watermarked to deter unauthorized copying. In a second example, the content consumer 108 and decoder 104 are executing on the same machine and that machine verifies authenticity before presenting the visual media to the user. In other embodiments, the content consumer 108 is unaware of the hidden data, such as when the hidden data is not a digital watermark but instead a hidden message.
The system 100 includes a communication framework 106 (e.g., a global communication network such as the Internet; or a more traditional computer-readable storage medium such as a tape, DVD, flash memory, hard drive, etc.) that can be employed to facilitate visual media communications between the encoder 102, decoder 104 and content consumer 108. Communications can be facilitated via a wired (including optical fiber) and/or wireless technology and via a packet-switched or circuit-switched network.
For the sake of simplicity and clarity, an embodiment involving a 256-shade greyscale image, a set of only two predictors, and a digital watermark as the hidden data is described as an exemplary embodiment. However, one will appreciate that the techniques may be applied to multiple colors, multiple color depths, more than two predictors, and frames within a video. In addition, one will appreciate that the methodology may be performed at a block level instead of the pixel level.
According to one embodiment, the system and methodology performs data hiding in raster scan order. A predictor generates a predicted value for a pixel based on the surrounding pixels. Different predictors with different characteristics, such as edge preserving, noise removal, edge sensitive and so on, usually result in different predicted pixel values. By examining the predicted pixel values, candidate locations for embedding regions can be determined. After defining the embedding location (e.g., smooth (small difference in predicted values) or edge (large difference in predicted values)), bijective mirror mapping is used to embed the digital watermark by choosing the watermarked pixel value to be closest to one of those predicted pixel values. In the case of a set of two predictors, the watermarked pixel will be closer to either the minimum or the maximum of the two predicted values. In the case of a set of three or more predictors, the watermarked pixel can still be made closer to the minimum or the maximum of the predicted values.
Referring to
The method according to one embodiment starts by setting Q equal to P. In one exemplary embodiment, the first row and the first column are not used for embedding, as they are needed for prediction. However, depending on the particular application, other rows or columns can also be excluded from potential embedding. For example, if one of the predictors used needs additional pixel values to predict a value, other rows and columns may not be usable for embedding. Similarly, if the unit of the visual media worked on is a block instead of a pixel, only even rows may be potentially used for embedding. As a final example, if protection from a crop attack is desired, a row in the center of the visual media can be used to synchronize the predicted values and not used for potential embedding.
Before the embedding process, encoder users choose which set of predictors is used (the decoder uses the same set of predictors as the encoder), as well as other configuration settings discussed below (e.g., the range R, the value of B, the embedding domain, and the mapping function). In choosing the set of predictors, the encoder user also selects how many predictors are to be taken into account. Default values can be used for at least some of the configuration settings not specified by the encoder user.
A non-exclusive list of potential predictors is shown below, and the pixels used by each predictor are illustrated in FIGS. 3A-3E:
Horizontal Predictor: {circumflex over (P)}(x,y)=Q(x−1,y) (300 of
Vertical Predictor: {circumflex over (P)}(x, y)=Q(x, y−1) (310 of
Causal Weighted Average: {circumflex over (P)}(x, y)=(2×Q(x−1, y)+2×Q(x, y−1)+Q(x−1, y−1)+Q(x+1, y−1))/6 (320 of
Causal Average: {circumflex over (P)}x,y=(Q(x−1, y)+Q(x, y−1)+Q(x−1, y−1)+Q(x+1, y−1))/4 (330 of
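By way of illustration, the four simpler predictors above may be sketched in Python (a non-limiting sketch; the array `Q` is assumed to be indexed as `Q[y, x]`, and the integer division used for the averages is an assumption, as the text does not specify a rounding rule):

```python
import numpy as np

def horizontal(Q, x, y):
    # Horizontal predictor: P^(x, y) = Q(x-1, y)
    return int(Q[y, x - 1])

def vertical(Q, x, y):
    # Vertical predictor: P^(x, y) = Q(x, y-1)
    return int(Q[y - 1, x])

def causal_weighted_average(Q, x, y):
    # P^(x, y) = (2*Q(x-1,y) + 2*Q(x,y-1) + Q(x-1,y-1) + Q(x+1,y-1)) / 6
    return (2 * int(Q[y, x - 1]) + 2 * int(Q[y - 1, x])
            + int(Q[y - 1, x - 1]) + int(Q[y - 1, x + 1])) // 6

def causal_average(Q, x, y):
    # P^(x, y) = (Q(x-1,y) + Q(x,y-1) + Q(x-1,y-1) + Q(x+1,y-1)) / 4
    return (int(Q[y, x - 1]) + int(Q[y - 1, x])
            + int(Q[y - 1, x - 1]) + int(Q[y - 1, x + 1])) // 4
```

All four predictors draw only on causal (already-scanned) neighbors, which is what makes the raster-order scheme reversible at the decoder.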
Causal Spatial Varying Weight (SVF): Before applying causal SVF, a “target” value, Tgtx,y, is computed by calculating the causal average of the neighboring pixels, Q(x−1, y), Q(x, y−1), Q(x+1, y−1) and Q(x−1, y−1). The locations of the four candidate pixels are shown in 330 of
However, using the mean to determine the “target” value is often not representative enough, especially in the causal case, as the mean is affected by outliers. As a result, an activity measurement can be used instead. The locations of the pixels involved in the activity measurement are shown in 340 of
The equations involved in activity measurement are:
dh=|Q(x−2,y)−Q(x−1,y)|+|Q(x−1,y−1)−Q(x,y−1)|+|Q(x,y−1)−Q(x+1,y−1)| Eqn. A
dv=|Q(x−1,y−1)−Q(x−1,y)|+|Q(x,y−2)−Q(x,y−1)|+|Q(x+1,y−2)−Q(x+1,y−1)| Eqn. B
The values of dh and dv are the activity in the horizontal direction and vertical direction, respectively. The higher the activity value is, the lower the correlation between pixels in that direction. If dh<dv, the value of Tgtx,y is set to Q(x−1,y). If dh>dv, the value of Tgtx,y is set to Q(x,y−1).
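The activity-based target selection of Eqns. A and B can be sketched as follows (an illustrative helper; tie-breaking when dh equals dv is not specified in the text, so the vertical neighbor is returned in that case by assumption):

```python
import numpy as np

def activity_target(Q, x, y):
    # Horizontal activity (Eqn. A)
    d_h = (abs(int(Q[y, x - 2]) - int(Q[y, x - 1]))
           + abs(int(Q[y - 1, x - 1]) - int(Q[y - 1, x]))
           + abs(int(Q[y - 1, x]) - int(Q[y - 1, x + 1])))
    # Vertical activity (Eqn. B)
    d_v = (abs(int(Q[y - 1, x - 1]) - int(Q[y, x - 1]))
           + abs(int(Q[y - 2, x]) - int(Q[y - 1, x]))
           + abs(int(Q[y - 2, x + 1]) - int(Q[y - 1, x + 1])))
    # Lower activity means higher correlation; follow that direction.
    if d_h < d_v:
        return int(Q[y, x - 1])   # Tgt = Q(x-1, y)
    return int(Q[y - 1, x])       # Tgt = Q(x, y-1)
```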
By using Tgtx,y, the predicted pixel value at (x, y), {circumflex over (P)}x,y, can be computed by refining Tgtx,y using the neighboring candidate pixels, Q(x−1, y), Q(x, y−1), Q(x+1, y−1) and Q(x−1, y−1), through SVF.
The spatial varying weight, Wi,j, can be any monotonically decreasing function. Wi,j is negatively correlated with Di,j, which is the difference between the neighboring pixel value and Tgtx,y. The values of Di,j and Wi,j are calculated as follows:
Di,j=|Q(x+i,y+j)−Tgtx,y| Eqn. D
Wi,j=exp(−Di,j·k) Eqn. E
k is the controlling factor which controls the degree of suppression of outliers. For determining Wi,j, Equation E is used. Causal SVF predicts the pixel value by suppressing outliers with a lower Wi,j.
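Combining the target value with Eqns. D and E, a causal SVF predictor may be sketched as below. The weighted-average refinement over the four causal neighbors is an assumption for illustration, since the exact refinement formula is given with reference to the figures:

```python
import math

def causal_svf(Q, x, y, tgt, k=0.1):
    # Four causal neighbor offsets (i, j) relative to (x, y)
    neighbours = [(-1, 0), (0, -1), (1, -1), (-1, -1)]
    num, den = 0.0, 0.0
    for i, j in neighbours:
        q = int(Q[y + j][x + i])
        d = abs(q - tgt)          # Eqn. D: D_ij = |Q(x+i, y+j) - Tgt|
        w = math.exp(-d * k)      # Eqn. E: W_ij = exp(-D_ij * k)
        num += w * q              # outliers (large D) receive small weight
        den += w
    return num / den              # weighted average suppressing outliers
```

Here k plays its role as the controlling factor: a larger k suppresses outlying neighbors more aggressively.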
For every pixel, P(x,y), the minimum predicted value of these 2 predictors is denoted as min_P and the maximum predicted value is denoted as max_P.
min_P=min({circumflex over (P)}1(x, y), {circumflex over (P)}2(x, y)) Eqn. 1
max_P=max({circumflex over (P)}1(x, y), {circumflex over (P)}2(x, y)) Eqn. 2
The difference between the two predictors is denoted Diff_P.
Diff_P=max_P−min_P−1 Eqn. 3
Encoder users can choose in advance to embed the watermark in regions with large Diff_P or small Diff_P. However, in other embodiments, the encoding region can be determined automatically based on the image (e.g., choosing the region that maximizes payload). If the predictor pair is causal SVF and causal weighted average, or the horizontal predictor and vertical predictor, a larger Diff_P means a region with higher variance. In a smooth region, both predictors can predict well and produce similar predicted values; however, in an edge or texture region, one of the predictors will produce a closer value whereas the other predictor will produce a less accurate predicted value. On the other hand, the larger the Diff_P is, the larger the watermark strength is.
In order to become one of the possible candidates to embed 1 bit of watermark, the predicted pixel values should satisfy the following conditions:
Region Condition: Diff_P is in a predefined range, R (a)
Minimum Condition: min_P>floor(1.5R)+1+B (b)
Maximum Condition: max_P<255−floor(1.5R)−1−B (c)
where B is the predefined value to make sure the watermarked pixel value is in between 0 and 255.
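Conditions (a)-(c) amount to a simple predicate over the two predicted values, as in the following illustrative sketch (taking R here as an upper bound on Diff_P, one plausible reading of "in a predefined range, R"):

```python
import math

def is_candidate(p1_hat, p2_hat, R, B):
    min_p = min(p1_hat, p2_hat)            # Eqn. 1
    max_p = max(p1_hat, p2_hat)            # Eqn. 2
    diff_p = max_p - min_p - 1             # Eqn. 3
    guard = math.floor(1.5 * R) + 1 + B    # margin keeping Q in [0, 255]
    return (0 <= diff_p <= R               # (a) Diff_P within range R
            and min_p > guard              # (b) headroom at the bottom
            and max_p < 255 - guard)       # (c) headroom at the top
```

The guard term shows why increasing B beyond Bmin shrinks the candidate set: a larger B tightens both the minimum and maximum conditions.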
In at least one embodiment, B can be tuned iteratively, and the minimum value of B, Bmin, can be found. When the value of B is increased beyond Bmin, the number of candidate positions decreases and thus the payload decreases. The value of Bmin depends on the characteristics of the image: if the image is a low-variance image, the value of Bmin is smaller; conversely, if the image is a high-variance image, the value of Bmin is larger. The value of B can be treated as a unique key that enables perfect reconstruction of the watermark and recovery of the host image. An encoder user can choose any value larger than Bmin (and less than 255) to enhance the security of the watermark.
To prevent large distortion, some candidate positions are used for watermarking by performing bijective mirror mapping (BMM) and some candidate positions are used to ensure reversibility by performing bijective pixel value shifting (BPVS). A candidate position is used for watermarking if the position satisfies the following requirement:
Embedding Condition: min_P−floor(0.5Diff_P)<P(x,y)<max_P+floor(0.5Diff_P) (d)
BMM is performed for the candidate positions which satisfy condition (d). min_P or max_P will be selected as the “mirror” according to the watermark bit. For example, if L(x, y) is “1”, min_P is chosen as the mirror; if L(x, y) is “0”, max_P is chosen. The BMM is illustrated in
For the candidate positions that do not satisfy the condition (d), BPVS is performed in order to ensure the reversibility. The BPVS process is illustrated in
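The precise BMM mapping is defined with reference to the figures; the following sketch illustrates only the general idea of a bijective reflection about the chosen mirror value, and the specific formulas here are assumptions rather than the exact mapping of the embodiment. BPVS, a bijective shift applied to the remaining candidates, is likewise defined in the figures and omitted here:

```python
def bmm_embed(p, min_p, max_p, bit):
    # Choose the mirror from the watermark bit:
    # "1" mirrors about min_P, "0" mirrors about max_P.
    mirror = min_p if bit == 1 else max_p
    # Reflection about the mirror is bijective and self-inverse,
    # so the decoder can undo it exactly.
    return 2 * mirror - p

def bmm_extract_and_recover(q, min_p, max_p):
    # The bit is decided by which mirror the received value is closer to
    # (ties resolved toward min_P here by assumption).
    bit = 1 if abs(q - min_p) <= abs(q - max_p) else 0
    mirror = min_p if bit == 1 else max_p
    return bit, 2 * mirror - q   # the same reflection restores the pixel
```

Because the reflection is self-inverse, a round trip through `bmm_embed` and `bmm_extract_and_recover` returns the original pixel value, which is the reversibility property the scheme relies on.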
The watermarked image, Q, is formed after performing BMM or BPVS.
In the watermark extraction and image recovery, the same set of predictors and variable values (R and B) are used, with inverse raster scan order. In one embodiment, this additional information can be transmitted with the watermarked image or separately supplied (e.g., by out-of-band transmission to a decoder). By computing conditions (a), (b) and (c), candidate positions are identified.
Extraction Condition: min_P−floor(1.5Diff_P)−1<S(x,y)<max_P+floor(1.5Diff_P)+1 (e)
For those candidate positions which satisfy condition (e), the watermark is extracted by comparing the difference between the received watermarked pixel value and each predicted value. If the received watermarked pixel value is closer to min_P, the extracted watermark value is set to “1”; if it is closer to max_P, it is set to “0”. The extracted watermark, S, and the process 500 of determining it are illustrated in
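The decoder-side decision rule under condition (e) can be sketched as follows (an illustrative helper; `s` is the received watermarked pixel value, and tie handling toward “1” is an assumption, as the text does not specify it):

```python
import math

def extract_bit(s, min_p, max_p):
    diff_p = max_p - min_p - 1
    # Extraction Condition (e)
    lo = min_p - math.floor(1.5 * diff_p) - 1
    hi = max_p + math.floor(1.5 * diff_p) + 1
    if not (lo < s < hi):
        return None                # not a watermark-carrying position
    # Closer to min_P -> "1"; closer to max_P -> "0".
    return 1 if abs(s - min_p) <= abs(s - max_p) else 0
```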
Referring to
The data hiding algorithm according to one aspect of the present invention was tested with several standard testing images available from the USC-SIPI image database. The tested images are Lena, Barbara, F16, Pentagon, and Peppers. Each was tested with an image size of 512×512 pixels.
Referring to
The Peak Signal-to-Noise Ratio (PSNR) and the Weighted PSNR (WPSNR) between the watermarked image and the original host image are used for measurement of visual quality. WPSNR is based on the Contrast Sensitivity Function (CSF) of the Human Visual System. The PSNR, the WPSNR and the payload are shown in Tables I and II. For Tables I and II, small Diff_P is used, as well as a constant value of B that is greater than the Bmin of all the images. For Table I, the predictor pair of causal weighted average and causal SVF is used. For Table II, the predictor pair of horizontal predictor and vertical predictor is used.
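For reference, PSNR for an 8-bit image follows the standard definition, as in the sketch below; WPSNR additionally weights the error by a CSF-derived mask and is omitted here:

```python
import numpy as np

def psnr(original, watermarked):
    # Mean squared error between host and watermarked images
    err = original.astype(np.float64) - watermarked.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0:
        return float("inf")            # identical images
    # 255 is the peak value for 8-bit greyscale
    return 10.0 * np.log10(255.0 ** 2 / mse)
```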
According to the exemplary embodiment, a causal neighborhood is used, which is shown in
In order to increase the payload and embed information bits in a constant-intensity region, predictor expansion can be used. For a constant-intensity region, both predictors will normally produce similar or identical values. In order to become a candidate position, Diff_P should be larger than 2 for a binary watermark. The payload is increased by increasing max_P by a constant, c1, and/or decreasing min_P by c2, so more data can be hidden by increasing the number of candidate positions.
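Predictor expansion thus amounts to widening the gap between the two predicted values before the candidate test, as in this illustrative sketch:

```python
def expand_predictors(min_p, max_p, c1, c2):
    # Increase max_P by c1 and decrease min_P by c2 so that
    # Diff_P = max_P - min_P - 1 grows, turning constant-intensity
    # positions into candidate positions.
    return min_p - c2, max_p + c1
```

For example, in a constant-intensity region where both predictors return 100, Diff_P is −1; after expansion with c1=2 and c2=1 the pair becomes (99, 102) and Diff_P becomes 2.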
Predictor expansion was tested as well. In table III, both predictors are causal SVF and the technique of predictor expansion is used.
The system can also be extended to resist a cropping attack by inserting a synchronization code into the hidden data signal. The region for inserting the synchronization code is predefined (e.g., near the center of the host image). When the predictors are causal with small neighborhoods, the predicted value depends only on the neighboring pixels. The watermark can thus still be extracted and remains detectable. By using the extracted synchronization code, at least some of the hidden data can be reconstructed even after cropping.
As previously mentioned, the system and methodology can be extended using more than two predictors. For example, if a third predictor is used, candidate locations using a small Diff_P can be determined using either the middle predicted value and the minimum predicted value or the middle predicted value and the maximum predicted value. Similarly, candidate positions using a large Diff_P can be determined using the maximum and the minimum predicted values from the set of predicted values. As a result, a higher PSNR can be achieved. As another example, if four predictors are used, payload can be increased since a single embedding location can contain two bits of information instead of one.
One will appreciate that various other modifications can be made in other embodiments. For example, one will appreciate that other mapping functions well known in the art can be used instead of bijective mirror mapping. In addition, although the system and methodology have been described as occurring in the spatial domain, the techniques may be applied to coefficients after an image transformation has occurred. For example, in another embodiment, the techniques are used in the wavelet domain. After transforming the original image using a wavelet transform, the host image is decomposed into different sub-bands (LL, LH, HL, HH), where L stands for lowpass and H stands for highpass. The HH sub-band can be used for embedding hidden data since the Human Visual System (HVS) is less sensitive to changes in the HH sub-band. A set of predictors is then used to predict a wavelet coefficient based on neighboring wavelet coefficients. BMM and BPVS can be subsequently used if the appropriate conditions are satisfied, just as in the spatial domain.
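As an illustration of the sub-band split described above, a one-level Haar decomposition can be sketched in plain NumPy (a minimal sketch rather than a full wavelet library; sub-band naming conventions vary, and the averaging normalization here is an assumption). The HH band would then be scanned for embedding:

```python
import numpy as np

def haar_subbands(img):
    # One-level 2D Haar transform: returns (LL, LH, HL, HH)
    a = img.astype(np.float64)
    # Rows: lowpass = pairwise average, highpass = pairwise difference
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Columns: apply the same filters to each row-filtered half
    LL = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    LH = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    HL = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    HH = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return LL, LH, HL, HH
```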
Referring to
In an alternative embodiment, the original image is transformed using the transformation component 701. The transformation component, for example, can transform the image using wavelet transform. After transforming the image, the scan component 708 can be used to scan at least some of the coefficients. Other than using coefficients instead of pixels, the other components (704, 706, 710, 712, 714) perform the same basic functionality as described above. An inverse transformation component 715 is utilized at the end of the scan to produce the image containing the hidden data.
Although not shown, a decoding system would be similar. The BMM component and the BPVS component would be replaced with an inverse BMM component and an inverse BPVS component, respectively. Each of these components would perform the inverse operation of their respective component. In addition, the scan component would instead perform an inverse scan by scanning in an order opposite the original scan.
Referring to
Referring to
Turning now to
Although not required, the invention can partly be implemented via an operating system, for use by a developer of services for a device or object, and/or included within application software that operates in connection with the component(s) of the invention. Software may be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers, such as client workstations, servers or other devices. Those skilled in the art will appreciate that the invention may be practiced with other computer system configurations and protocols.
With reference to
Computer 1110a typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 1110a. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile as well as removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 1110a. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
The system memory 1130a may include computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) and/or random access memory (RAM). A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer 1110a, such as during start-up, may be stored in memory 1130a. Memory 1130a typically also contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 1120a. By way of example, and not limitation, memory 1130a may also include an operating system, application programs, other program modules, and program data.
The computer 1110a may also include other removable/non-removable, volatile/nonvolatile computer storage media. For example, computer 1110a could include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and/or an optical disk drive that reads from or writes to a removable, nonvolatile optical disk, such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM and the like. A hard disk drive is typically connected to the system bus 1121a through a non-removable memory interface such as an interface, and a magnetic disk drive or optical disk drive is typically connected to the system bus 1121a by a removable memory interface, such as an interface.
A user may enter commands and information into the computer 1110a through input devices such as a keyboard and pointing device, commonly referred to as a mouse, trackball or touch pad. Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 1120a through user input 1140a and associated interface(s) that are coupled to the system bus 1121a, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A graphics subsystem may also be connected to the system bus 1121a. A monitor or other type of display device is also connected to the system bus 1121a via an interface, such as output interface 1150a, which may in turn communicate with video memory. In addition to a monitor, computers may also include other peripheral output devices such as speakers and a printer, which may be connected through output interface 1150a.
The computer 1110a may operate in a networked or distributed environment using logical connections to one or more other remote computers, such as remote computer 1170a, which may in turn have media capabilities different from device 1110a. The remote computer 1170a may be a personal computer, a server, a router, a network PC, a peer device or other common network node, or any other remote media consumption or transmission device, and may include any or all of the elements described above relative to the computer 1110a. The logical connections depicted in
When used in a LAN networking environment, the computer 1110a is connected to the LAN 1111a through a network interface or adapter. When used in a WAN networking environment, the computer 1110a typically includes a communications component, such as a modem, or other means for establishing communications over the WAN, such as the Internet. A communications component, such as a modem, which may be internal or external, may be connected to the system bus 1121a via the user input interface of input 1140a, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 1110a, or portions thereof, may be stored in a remote memory storage device. It will be appreciated that the network connections shown and described are exemplary and other means of establishing a communications link between the computers may be used.
The present invention has been described herein by way of examples. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, for the avoidance of doubt, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
Various implementations of the invention described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, as well as in software. As used herein, the terms “component,” “system” and the like are likewise intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on computer and the computer can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Furthermore, the invention may be described in the general context of computer-executable instructions, such as program modules, executed by one or more components. Generally, program modules include routines, programs, objects, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments. Furthermore, as will be appreciated various portions of the disclosed systems above and methods below may include or consist of sub-components, processes, means, methodologies, or mechanisms.
Additionally, the disclosed subject matter may be implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or processor based device to implement aspects detailed herein. The terms “article of manufacture,” “computer program product” or similar terms, where used herein, are intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick). Additionally, it is known that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN).
The aforementioned systems have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components, e.g., according to a hierarchical arrangement. Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.