This disclosure relates generally to image processing, and more specifically, to single parameter segmentation of images.
In the last decade, Object Based Image Analysis (OBIA) has been employed broadly in various mapping applications. Dealing with image objects in OBIA can circumvent many of the shortcomings associated with pixel-based analysis. Image segmentation is a key step in OBIA, in which the image is divided into segments or logical objects based on homogeneity or similarity measures. These extracted objects serve as semantic and descriptive targets for further analysis steps. As such, the adoption of an appropriate image segmentation methodology can improve the analysis results significantly. Various theories and assumptions have been utilized to develop image segmentation techniques that serve a wide variety of applications.
The improper selection of the segmentation parameters can lead to either over-segmented or under-segmented images. Over-segmentation occurs when the same object in the image is divided into many segments despite the high homogeneity between its pixels; under-segmentation occurs when distinct objects are grouped into a single segment despite the dissimilarity between them.
Embodiments of methods and systems for single parameter segmentation of images are presented. In an embodiment, a method includes assigning a first label to a first pixel in an image. The method may also include measuring a characteristic of the first pixel. Additionally, the method may include measuring the characteristic of a second pixel, the second pixel being adjacent to the first pixel in the image. Also, the method may include assigning the first label to the second pixel in response to a determination that the characteristic of the second pixel has a measured value above a similarity threshold value. The method may further include assigning a second label to the second pixel in response to a determination that the measured value of the characteristic of the second pixel is below the similarity threshold value.
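For illustration only, the following minimal Python sketch mirrors these steps for a single pair of adjacent pixels; the characteristic used (negative absolute intensity difference), the array names, and the threshold value are hypothetical placeholders rather than part of the disclosure:

```python
import numpy as np

def label_adjacent(image, labels, first, second, threshold, next_label):
    """Apply the labeling rule above to one pair of adjacent pixels.

    image:  2-D array of pixel values; labels: 2-D array of segment labels.
    first, second: (row, col) coordinates; `first` is already labeled.
    """
    # Hypothetical characteristic: similarity as negative absolute
    # difference, so larger values mean more similar pixels.
    similarity = -abs(float(image[first]) - float(image[second]))
    if similarity > threshold:
        labels[second] = labels[first]    # similar: share the first label
        return next_label
    labels[second] = next_label           # dissimilar: assign a second label
    return next_label + 1

# Example: a 1 x 2 image whose two pixels differ by 3 intensity levels.
img = np.array([[10.0, 13.0]])
lbl = np.zeros((1, 2), dtype=int)
lbl[0, 0] = 1                             # first label assigned to first pixel
label_adjacent(img, lbl, (0, 0), (0, 1), threshold=-5.0, next_label=2)
```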
An embodiment of a system may include a data storage device configured to store an image file, the image comprising a plurality of pixels. Additionally, the system may include a data processor coupled to the data storage device. In an embodiment, the data processor may be configured to assign a first label to a first pixel in an image, measure a characteristic of the first pixel, measure the characteristic of a second pixel, the second pixel being adjacent to the first pixel in the image, assign the first label to the second pixel in response to a determination that the characteristic of the second pixel has a measured value above a similarity threshold value, and assign a second label to the second pixel in response to a determination that the measured value of the characteristic of the second pixel is below the similarity threshold value. In a further embodiment, the data processor may be configured to repeat the steps described above for each pixel in the image.
In an embodiment, the data processor may be configured to measure a characteristic of a plurality of pixels adjacent to each pixel in the image, and assign pixels having measured values of the characteristic over the similarity threshold to a common label. Additionally, the data processor may assign a lowest available label to each pixel in the image, wherein an available label is a label of one of the adjacent pixels in the image. Also, the data processor may determine whether any of the plurality of adjacent pixels have measured values of the characteristic above the similarity threshold, and mark the larger label of the two for replacement. In such an embodiment, the data processor may iteratively replace the marked labels. In an embodiment, the data processor may measure a characteristic of a plurality of pixels adjacent to each pixel in the image, and assign pixels having measured values of the characteristic below the similarity threshold a different label.
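One way to realize this label bookkeeping is a raster-scan pass with union-find label equivalence. The sketch below assumes the measured characteristic is a scalar per pixel and that similarity means a small absolute difference; the function and variable names are illustrative, not the claimed implementation:

```python
import numpy as np

def raster_scan_labeling(img, threshold):
    """Raster-scan labeling sketch: each pixel takes the lowest label among
    its similar (left/up) neighbors, equivalent labels are marked via
    union-find, and the marked labels are replaced in a final sweep."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                              # parent[x] == x marks a root

    def find(x):
        while parent[x] != x:                 # path-halving lookup
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                          # mark the larger root for replacement
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    for i in range(h):
        for j in range(w):
            similar = []
            if j > 0 and abs(img[i, j] - img[i, j - 1]) <= threshold:
                similar.append(labels[i, j - 1])
            if i > 0 and abs(img[i, j] - img[i - 1, j]) <= threshold:
                similar.append(labels[i - 1, j])
            if similar:
                labels[i, j] = min(similar)   # lowest available label
                for s in similar:             # similar neighbors share a segment
                    union(labels[i, j], s)
            else:
                parent.append(next_label)     # dissimilar: open a new segment
                labels[i, j] = next_label
                next_label += 1
    return np.vectorize(find)(labels)         # iterative label replacement
```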
In a further embodiment, the data processor may pre-process the image to smooth noise while keeping the transitions between features in the image. In an embodiment, the measured characteristic comprises a boosted gradient of the image. The system may also merge groups of pixels having a common label, where at least one of the groups has a number of pixels below a threshold pixel count value.
In an embodiment, the system may segment a color image. In an embodiment, segmenting the color image may include measuring the characteristic of a plurality of neighboring pixels adjacent the first pixel, the adjacent pixels being oriented into vertical columns and horizontal rows, calculating differences in the values of the characteristic between pixels arranged in the horizontal row, calculating differences in values of the characteristic between pixels arranged in the vertical column, and identifying a smallest difference in the value of the characteristic of any of the neighboring pixels. In such an embodiment, the system may compute an accumulative histogram of the differences at each pixel in the image. Additionally, the system may compute an edge pixel ratio in response to a smallest expected segment size. Also, the system may compute a similarity threshold in response to the difference value corresponding to the computed edge pixel ratio using the computed accumulative histogram.
In an embodiment, the system may segment multi-layer image data. In one embodiment, segmenting multi-layer image data may include, for pixels in a first layer: measuring the characteristic of a plurality of neighboring pixels adjacent a first pixel in the first layer, the adjacent pixels being oriented into columns and rows; calculating differences in the values of the characteristic between pixels arranged in the row; calculating differences in values of the characteristic between pixels arranged in the column; and identifying a smallest difference in the value of the characteristic of any of the neighboring pixels; and, for pixels in a second layer: measuring the characteristic of a plurality of neighboring pixels adjacent a first pixel in the second layer, the adjacent pixels being oriented into columns and rows; calculating differences in the values of the characteristic between pixels arranged in the row; calculating differences in values of the characteristic between pixels arranged in the column; and identifying a smallest difference in the value of the characteristic of any of the neighboring pixels.
The system may be further configured to compute an accumulative histogram of the differences at each pixel for each layer. Also, the system may compute an edge pixel ratio in response to a smallest expected segment size for each layer. In such an embodiment, the system may compute the pixel potential for each pixel of each layer, limit the pixel potential value such that the value does not exceed the number of available layers, compute the aggregated potential image as a weighted sum of the pixel potential values in each layer, and compute an accumulative histogram of differences of all image pixels of the aggregated potential image. The system may also be configured to compute a similarity threshold in response to the difference value corresponding to the computed edge pixel ratio using the computed accumulative histogram.
The present invention(s) is/are illustrated by way of example and is/are not limited by the accompanying figures, in which like references indicate similar elements. Elements in the figures are illustrated for simplicity and clarity, and have not necessarily been drawn to scale.
Embodiments of the present methods and systems attempt to avoid both over-segmentation and under-segmentation problems by preliminarily assuming a very small object size, to ensure that small segments are detected and to reduce under-segmentation problems. Finally, the very small segments are merged with their neighbors to reduce the over-segmentation problems.
A segmentation pre-processing step is described, and embodiments of methods for regular color image segmentation are presented with assessment results and sample segmented aerial/satellite images. Further embodiments for multi-layer image segmentation are presented with a sample segmented multi-layer data set of aerial and Light Detection And Ranging (LiDAR) images.
Segmentation Pre-Processing
One benefit of image segmentation processes is to divide the image into distinct regions or segments where each individual region or segment is homogeneous. The pixels inside each region or segment may exhibit some sort of similarity that facilitates grouping the pixels into a single segment. A high noise level exhibited within the same scene object may, however, complicate the segmentation process and induce a huge number of tiny segments, especially in highly detailed aerial images. Despite the ability of the merging step of the proposed method to remove these small segments, some embodiments may include a preliminary filtering step to reduce this effect and save the time consumed by the merging step. There are several smoothing techniques that can be used to reduce the image noise. Median filtering, convolution with low-pass filter kernels, and low-pass filtering of the entire image are among the embodiments of techniques for image smoothing.
Bilateral smoothing filters may be used as a preliminary step because they preserve edges while smoothing the image within regions of comparable spectral values. The bilateral smoothing filter averages the values of the neighborhood at each pixel, weighted not only by a Gaussian function of each neighbor's distance to the filtered pixel but also by a Gaussian function of the difference between the pixel values, as shown in equation 1. This added weighting helps to filter only the noise within the same spectral vicinity, and therefore leaves neighbor pixels whose values differ greatly from the filtered pixel's without much smoothing, hence preserving the image edges.
$BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert)\, I_q$  (1)

Where BF[I]p is the bilateral filtered value at point p, Wp is a weighting factor, S is the neighborhood of point p, Gσs, Gσr are Gaussian functions, Ip, Iq are the image values at points p, q, and |p−q| is the distance between points p and q.
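As one possible realization of this pre-processing step, OpenCV's built-in bilateral filter applies exactly this pair of spatial and range Gaussian weights. The file name and parameter values below are illustrative assumptions only:

```python
import cv2

# Illustrative parameters: a 9-pixel neighborhood diameter and Gaussian
# sigmas for the range (pixel-value) and spatial domains.
image = cv2.imread("aerial.png")            # hypothetical input file
smoothed = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)
cv2.imwrite("aerial_smoothed.png", smoothed)
```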
Image Segmentation
Embodiments of the segmentation method adopt both region growing and merging processes to find suitable segments.
In an embodiment, region growing starts from any corner of the image and labels each pixel into one segment. The next pixel is added to the previous label or segment if it satisfies a similarity condition. This similarity condition is based on a similarity measure and a similarity threshold derived from the image statistics. Finally, the detected segments which are smaller than a predefined size are merged into larger segments. A pseudocode for the region growing process is described in Algorithm 1 below.
In an embodiment, the similarity measure to be used may be dependent on the available data. For gray-scale images, the difference of the intensity values may be used as the similarity measure. Additionally, any derived measures based on the neighborhood of the pixels can be used. For color images, a measure that reflects the perception difference uniformly between colors could be used. For example, the Euclidean distance between the L*a*b* color space components of each pixel is a good candidate similarity measure, as it alleviates the non-uniformity of the distances in the RGB color space.
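A brief sketch of such a measure, assuming scikit-image is available and the input is an RGB array scaled to [0, 1] (the function name and the horizontal-pairs layout are illustrative):

```python
import numpy as np
from skimage import color

def lab_neighbor_differences(rgb):
    """Euclidean distance in L*a*b* between horizontally adjacent pixels.

    rgb: H x W x 3 array with values in [0, 1]. Returns an H x (W-1) array
    of color differences; small values indicate similar neighbor pairs.
    """
    lab = color.rgb2lab(rgb)                   # perceptually more uniform space
    diff = lab[:, 1:, :] - lab[:, :-1, :]      # horizontal neighbor pairs
    return np.sqrt((diff ** 2).sum(axis=-1))   # Delta-E (CIE76) distance
```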
Similarity checks between neighbor pixels compare the chosen similarity measure against a threshold to determine whether these pixels are similar or not. The choice of such a threshold is crucial for achieving proper data segmentation.
Choosing a large threshold will almost always fulfill the similarity condition and will treat regions of different nature as one segment, which leads to under-segmentation. On the other hand, choosing a small threshold will rarely fulfill the similarity condition and hence will lead to a huge number of segments, which leads to over-segmentation. Both situations may be avoided through proper selection of the similarity threshold.
Most segmentation algorithms are highly dependent on the selection of many parameters or thresholds that affect their performance drastically. In some cases, several segmentation trials are performed until a satisfactory result is attained (Zhang and Maxwell 2006). The problem is more noticeable for those algorithms that depend on a large number of parameters that exhibit a huge number of combinations, especially when the parameters cannot be related to the data characteristics and/or the desired result characteristics.
In contrast, the present embodiments use one segmentation parameter that can be chosen by the user: the smallest expected segment size. This parameter can be decided by the user without much effort based on the data specifications and the target level of detail. Assuming that the whole image is composed of segments of the smallest size, the ratio of the edge pixels between segments to the whole number of pixels can be roughly estimated. This rough estimate of the edge pixel ratio can then be used to find the similarity threshold at which this ratio of edge pixels occurs, based on the histogram of the similarity measure over the whole image.
To clarify the estimation of the ratio of the edge pixels to the whole image pixels, the image is assumed to be divided into rectangular segments of equal area (S) with height (α) and width (α·r), where (r) is the width-to-height ratio.
The ratio of the edge pixel count to the image pixel count can be approximated by the ratio between the sum of the rectangle dimensions and the area of a single rectangle, as given by the following equation

$E = \frac{\alpha + \alpha r}{S} = \frac{1 + r}{\sqrt{r S}}$  (2)
High values of (r) increase the value of the edge pixel ratio. A default value of (r=10) may be assumed to account for elongated objects. This assumed value permits extraction of segments of size equal to the minimum expected segment size whose maximum axis length is up to ten times their minimum axis length. The two assumptions used so far ensure an above-average value for the edge pixel ratio, which consequently results in an under-average similarity threshold. These conservative assumptions have been made to ensure that, even for images full of segments of minimum size, the region growing step is still able to differentiate between these segments.
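As an illustrative check of equation (2): for a smallest expected segment size of S = 1000 pixels and the default r = 10, E = (1 + 10)/√(10·1000) = 11/100 = 0.11; that is, roughly 11% of the image pixels would be edge pixels if the whole image were tiled with minimum-size segments.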
The similarity between each image pixel and its neighbors may be calculated, and the percentage of pixels whose neighbor similarity difference exceeds each of a range of candidate similarity values is obtained.
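A sketch of this threshold selection, assuming `diffs` holds each pixel's difference to its most similar neighbor and `edge_ratio` was estimated from equation (2); a quantile is used here in place of an explicit accumulative histogram, which is equivalent for this purpose:

```python
import numpy as np

def similarity_threshold(diffs, edge_ratio):
    """Pick the difference value at which the fraction of pixels whose
    neighbor difference exceeds it equals the estimated edge pixel ratio.

    diffs: array of per-pixel neighbor differences (hypothetical input).
    edge_ratio: estimated ratio of edge pixels, e.g. from equation (2).
    """
    # The top edge_ratio fraction of differences are treated as candidate
    # edges; everything below the returned value is considered similar.
    return np.quantile(diffs, 1.0 - edge_ratio)
```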
As the minimum segment size parameter served to determine the similarity threshold, it is also used as the minimum accepted segment size during the final segmentation phase, which merges the small segments into their neighbors. A pseudocode describing one embodiment of a merging process is described in Algorithm 2 below.
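The published pseudocode of Algorithm 2 is not reproduced here; the following is only a simplified single-pass sketch of such a merging step, assuming scalar pixel values and 4-connectivity (re-running until no tiny segment remains):

```python
import numpy as np

def merge_small_segments(labels, values, min_size):
    """Merge every segment smaller than min_size into its most similar
    neighboring segment, judged by mean characteristic value.

    labels: H x W integer label image from the region-growing step.
    values: H x W array of the measured characteristic (hypothetical).
    """
    ids, counts = np.unique(labels, return_counts=True)
    means = {i: values[labels == i].mean() for i in ids}
    for seg, n in zip(ids, counts):
        if n >= min_size:
            continue
        mask = labels == seg
        # Neighboring labels: 4-connected dilation of the segment mask.
        grown = np.zeros_like(mask)
        grown[1:, :] |= mask[:-1, :]; grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]; grown[:, :-1] |= mask[:, 1:]
        neighbor_ids = set(np.unique(labels[grown & ~mask])) - {seg}
        if not neighbor_ids:
            continue
        # Closest neighbor in terms of mean characteristic value.
        target = min(neighbor_ids, key=lambda i: abs(means[i] - means[seg]))
        labels[mask] = target
    return labels
```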
The final segmentation after the merging phase is composed of segments whose areas are above the minimum segment size parameter.
The tiny segments detected before the merging phase may be merged into their closest neighbors. The segmentation result illustrates how the presented approach targets segments of different scales at once. Large segments, such as the pool in the middle of the image, may be successfully extracted as single segments, while small segments representing dark green vegetation may also be successfully segmented. The segmentation process was able to separate the shore area accurately, and all the roads of the scene were extracted as very large and thin segments with accurate boundaries.
Alternatives for Similarity Measures
The purpose of the similarity measures is to define how similar two adjacent pixels are. This similarity measure can be evaluated using many alternatives.
The gradient of the image, G, can be used as a similarity measure, where high gradients indicate low spatial similarity and vice versa. The gradient may be computed as follows
$G = \sqrt{G_x^2 + G_y^2}$  (3)

$\theta = \tan^{-1}(G_y / G_x)$  (4)
where Gx, Gy are the horizontal and vertical gradients, respectively, which can be computed with the help of edge detection operators such as the Sobel operator. The image can be filtered before the gradient calculation to reduce the sensitivity to noise.
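For example, equations (3) and (4) can be realized with OpenCV's Sobel operator as sketched below; the file name is hypothetical, and `arctan2` is used to resolve the quadrant that equation (4) leaves implicit:

```python
import cv2
import numpy as np

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
gray = cv2.GaussianBlur(gray, (5, 5), 0)          # optional noise pre-filtering
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient Gx
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient Gy
magnitude = np.sqrt(gx**2 + gy**2)                # equation (3)
direction = np.arctan2(gy, gx)                    # equation (4)
```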
Difference of Gaussians (DoG) can also be used as a similarity measure in the same sense as the image gradient. It acts as a band-pass filter and can be computed as the difference between Gaussian filtered versions of an image with different Gaussian kernel widths as follows

$DoG = G_{\sigma_1} * I - G_{\sigma_2} * I$  (5)
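A possible realization, with illustrative kernel widths and a hypothetical input file:

```python
import cv2

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
# Two Gaussian-blurred versions with different kernel widths; their
# difference acts as a band-pass filter (the sigmas are illustrative).
blur_narrow = cv2.GaussianBlur(gray.astype(float), (0, 0), 1.0)
blur_wide = cv2.GaussianBlur(gray.astype(float), (0, 0), 2.0)
dog = blur_narrow - blur_wide     # band-pass response, equation (5)
```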
While the image gradient can offer good information about the image edges, it is also useful to trace the weak edges of low gradient value and boost them through the neighbor extensions of the same slope. Assuming the normalized image gradients at points p, q are (gxp, gyp) and (gxq, gyq) respectively, two measures can be calculated as follows:
slope similarity: $SS = 1 - \lVert g_{xp} g_{xq} + g_{yp} g_{yq} \rVert$  (6)

coincidence: $CS = \lVert g_{yp}(y_q - y_p) + g_{xp}(x_p - x_q) \rVert$  (7)
The image gradient of image (I) is spatially filtered as follows to obtain the boosted similarity measure
This boosted gradient image (BG) can serve as a similarity measure in the same sense as the gradient image.
Assessment of the Segmentation Methods
Image segmentation is one of the main and hardest problems in the computer vision field. No general automatic algorithm yet approaches the performance of human segmentation (Martin et al. 2001). While image segmentation has been an active research area for decades, with a great many algorithms developed, the assessment of the developed segmentation methodologies has not attracted the same amount of attention. Segmentation assessment methods of different types have been developed; supervised objective assessment and unsupervised objective assessment are the main assessment categories.
In the unsupervised objective assessment, no ground truth segmentation is needed through the assessment process. Instead of using an external reference segmentation made by human subjects, empirical measures are calculated using the segmentation result itself to evaluate the quality of the extracted segments. Several qualities such as the inter-region variance, the intra-region variance, and the boundary gradient of the extracted segments are used in this unsupervised assessment.
On the other hand, supervised objective assessment uses segmentations made by human subjects as a reference to be compared with the result of the segmentation algorithm. This type of assessment offers a finer level of evaluation, and is therefore commonly used for assessment (Zhang et al. 2008). The main issue with this type of comparison is the question of what constitutes the perfect segmentation (Martin et al. 2001). The same scene is segmented in different ways by different human subjects. Some researchers argue that the assessment of a segmentation algorithm can be done only in the context of the application task, such as object recognition (Martin et al. 2001).
The adopted assessment methodology uses images from the mentioned database with their human subject segmentations as a reference to be compared with the results of the proposed segmentation approach.
When the segmentation result is compared to the reference segmentation, the boundaries of the two segmentations are compared on a pixel-by-pixel basis. Because of the expected displacement between the segmentations, the neighborhood of each reference boundary pixel is checked in the other segmentation (the evaluated segmentation) within a radius of a few pixels. The nearest match found is removed to prevent double mapping, and the true positive (TP) count is increased by one. If no matched pixel is found in the neighborhood, the false negative (FN) count is increased by one. The count of the remaining unmatched pixels in the evaluated segmentation is the false positive (FP) count.
A value of 7 pixels has been chosen as the accepted displacement search radius based on the observed displacements between the human subjects' segmentations.
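A sketch of this matching procedure, assuming boolean boundary maps and greedy scan-order matching; the disclosure does not prescribe a tie-breaking rule, so that choice is an assumption here:

```python
import numpy as np

def match_boundaries(ref, ev, radius=7):
    """Greedy pixel-wise matching of two boolean boundary maps.

    ref, ev: H x W reference / evaluated boundary maps.
    Returns the (TP, FP, FN) counts described above.
    """
    ev = ev.copy()                            # matched pixels are removed
    tp = fn = 0
    for i, j in zip(*np.nonzero(ref)):        # each reference boundary pixel
        oi, oj = max(0, i - radius), max(0, j - radius)
        window = ev[oi:i + radius + 1, oj:j + radius + 1]
        cand = np.argwhere(window)            # candidates, window coordinates
        matched = False
        if cand.size:
            d = np.hypot(cand[:, 0] + oi - i, cand[:, 1] + oj - j)
            k = int(np.argmin(d))
            if d[k] <= radius:                # nearest match within the radius
                y, x = cand[k]
                ev[oi + y, oj + x] = False    # remove to prevent double mapping
                tp += 1
                matched = True
        if not matched:
            fn += 1
    fp = int(np.count_nonzero(ev))            # remaining evaluated pixels
    return tp, fp, fn
```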
Two main measures are derived from this matching process, namely precision and recall. The precision is a ratio that represents how many of the extracted segment boundary pixels are correct, and is computed as follows

$P = \frac{TP}{TP + FP}$  (9)
The recall is a ratio that represents how many of the reference segment boundary pixels are retrieved, and is computed as follows

$R = \frac{TP}{TP + FN}$  (10)
The F-measure is the weighted harmonic mean of the precision and recall of the segmentation and is computed as follows.

$F = \frac{(1 + a)\, P\, R}{a P + R}$  (11)
Where F is the F-measure, P is the precision, R is the recall, and a is the weighting factor.
In cases of over-segmentation, the precision will be degraded while the recall will have the opportunity to gain more value. In cases of under-segmentation, conversely, the recall will be degraded and the precision will have the opportunity to gain more value.
In the context of image classification, an over-segmented image still has the opportunity to be reasonably classified. Pixel-based classification can be seen as object-based classification of an extremely over-segmented image in which the pixels themselves are the segments. Classification of an under-segmented image, by contrast, will be significantly deteriorated, as different classes are grouped under one segment and have to be classified as either one of them. This comparison of the effects of both over-segmentation and under-segmentation in the context of image classification motivates the adoption of a weighting factor that favors over-segmentation, or in other words, favors the recall. For this reason, a value of 2 has been chosen as the weighting factor, to weight the recall twice as much as the precision.
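As an illustrative calculation with equation (11): an over-segmented result with P = 0.6 and R = 0.9 gives F = 3·0.6·0.9/(2·0.6 + 0.9) ≈ 0.77, while the reverse (P = 0.9, R = 0.6) gives F = 3·0.6·0.9/(2·0.9 + 0.6) ≈ 0.68, so the chosen weighting indeed penalizes lost recall more heavily.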
For each test image, the assessment between the extracted segmentation and the available human segmentations is performed and also the assessment between all the pairs of the human segmentations is performed. The F-measure is computed for each assessment.
The average F-measure of the extracted segmentation is very close to the average F-measure between human segmentations, which indicates excellent performance. In all the tests, some of the F-measures of the extracted segmentation exceed some of the F-measures between human segmentations. This highlights the significance of the proposed approach and also demonstrates the variation among the reference human segmentations and the non-uniqueness of the segmentation solution.
These segmented test images show that the proposed segmentation was able to identify the relevant image segments efficiently and that the extracted segments vary in scale according to the homogeneity exhibited by the image. This behavior is compatible with natural human segmentation, and it enables targeting of segments of different scales concurrently. Since the Euclidean distance in L*a*b* color space was used as the similarity measure throughout this test, regions of slight color difference were not detected as segment edges, as in the case of the pyramid base line.
Segmentation Results of Sample Satellite/Aerial Imagery
The distance in the L*a*b* color space was used as the similarity measure for the segmentation of this image and the minimum segment size was chosen to be very small to allow the segmentation of the individual buildings.
Despite the challenge of segmenting satellite imagery of urban areas, the segmentation approach successfully extracted the building classes due to the small value of the minimum expected segment size parameter. The homogeneous areas of water, barren land, and many of the road segments did not exhibit over-segmentation. The vegetation areas exhibit over-segmentation due to the high variation of their colors.
Collaborative Segmentation of Multiple Data Layers
The continuous advancement in earth observing satellites, aerial imagery, and LiDAR systems provides more and more data of different specifications and from different sensors. The same scene can be captured using these different sensors to cover different aspects of it. True color imagery offers a detailed and semantically rich data layer but lacks the useful information provided by infra-red imagery, which can help identify vegetation and hot spots. Hyperspectral imagery can also add more spectral details about the scene. LiDAR technology offers a reliable data layer for height description that cannot be easily or reliably obtained through imagery. These different data layers for the same scene increase the opportunity for accurate analysis, classification, and planning applications. On the other hand, they increase the challenge of handling these different data layers collaboratively and efficiently. The segmentation of such various data layers should make use of these layers, despite their different characteristics, in a collaborative fashion that does not sacrifice any of these layers and that controls the contribution of each layer based on the application.
For the proposed segmentation to handle multiple layers of different characteristics, the selected similarity measure should reflect the similarity contribution from all these layers.
The three IR, red, and green layers represent surface reflectance values, while the fourth layer, a digital surface model (DSM), represents surface height. A single similarity measure that can incorporate the four layers, regardless of their different representations and value ranges, may be used to serve in the segmentation process.
In the proposed approach, a similarity measure based on the statistics of the neighborhood differences of the individual layers is presented.
For each layer, the differences between the data layer values of each pixel and its neighbors are aggregated in a histogram, such as the histogram shown previously.
The minimum segment size parameter S is used to find the edge pixel ratio E (the ratio between the edge pixel count and the total number of image pixels) and the similarity threshold θthreshold of this data layer. These values are the same as those that would be calculated if only this data layer were available for segmentation.
If the calculated similarity difference θs at a pixel (i,j) is higher than the threshold value θthreshold, this indicates that this pixel is a candidate edge. Higher similarity difference values θs correspond to lower edge pixel ratio values C(i,j). The calculated similarity difference θs at each pixel indicates the corresponding edge pixel ratio C(i,j) (the edge pixel ratio achieved if this difference θs is used as the threshold) through the histogram of the similarity differences.
The edge pixel ratio E and the similarity threshold θthreshold of each data layer, along with the histogram of the similarity differences, are used to calculate the potential of every pixel on each data layer of being an edge.
This potential is calculated as the ratio between the edge pixel ratio E of this data layer and the edge pixel ratio corresponding to the difference-to-neighbors value θs at that pixel.

$P_n(i,j) = \frac{E}{C_n(i,j)}$  (12)
Where Pn is the potential of being an edge at pixel (i,j) in layer (n), E is the edge pixel ratio according to the minimum segment size parameter, and Cn is the edge pixel ratio corresponding to the difference-to-neighbors value at pixel (i,j), obtained using the histogram of the similarity differences at each data layer.
A maximum value of 4 was permitted; all potential values above 4 have been trimmed to 4. This trimming prevents extremely distinct pixels of very high potential values from dominating the aggregate potential of the four layers.
The individual potentials of being an edge at each layer have to be aggregated to form a unique representation of the potential of being a segment edge. As the four layers are represented in a uniform potential metric, the aggregation between these layers is simply performed by adding the potentials of the four layers.
In some segmentation tasks, more emphasis is placed on specific objects. For example, when segmentation of vegetation vs. non-vegetation is the main purpose, the IR data layer is more useful and relevant in performing the segmentation process. For such cases, when some data layers have been found more useful or relevant to the segmentation, the aggregate potential map can be calculated as a weighted sum of the individual potential maps.
$\text{Aggregated Potential Map} = 30 \cdot PM_{LIDAR} + PM_{IR} + PM_{R} + PM_{G}$  (13)

Where PMLIDAR is the potential map of the LiDAR data layer, PMIR is the potential map of the infra-red data layer, PMR is the potential map of the red data layer, and PMG is the potential map of the green data layer.
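A compact sketch of equations (12) and (13), assuming per-layer neighbor-difference arrays are already available; the array names, the quantile-style histogram lookup, and the epsilon guard are illustrative assumptions:

```python
import numpy as np

def potential_map(diffs, edge_ratio, cap=4.0):
    """Per-pixel potential of being an edge for one data layer (equation (12)).

    diffs: H x W array of each pixel's difference to its neighbors for this
    layer (hypothetical). edge_ratio: E for this layer, from the minimum
    segment size. cap: trim value so no single layer dominates.
    """
    flat = np.sort(diffs, axis=None)
    # C(i, j): fraction of pixels whose difference exceeds diffs[i, j],
    # read off the accumulative histogram (here via the sorted array).
    ranks = np.searchsorted(flat, diffs, side="left")
    c = 1.0 - ranks / flat.size
    c = np.maximum(c, 1e-9)                   # avoid division by zero
    return np.minimum(edge_ratio / c, cap)    # equation (12), trimmed

# Weighted aggregation of the four layers (equation (13); the LiDAR
# weight of 30 follows the example in the text):
# aggregated = (30 * potential_map(d_lidar, e_lidar)
#               + potential_map(d_ir, e_ir)
#               + potential_map(d_red, e_red)
#               + potential_map(d_green, e_green))
```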
This aggregated potential map is thresholded to find the potential threshold value corresponding to the edge ratio, using a histogram. Besides the mentioned potential maps, three other potential maps are created in the same fashion to represent the potential based on vertical differences, horizontal differences, and diagonal differences. These three potential maps serve as similarity measures between vertically, horizontally, and diagonally adjacent pixels during the region growing phase.
The region growing phase continues in the same fashion presented before but using the potential maps instead of the direct similarity measure as follows:
Similarly, the merging phase is updated to use the potential maps instead of the similarity measure as follows:
The following algorithms describe additional embodiments of the above processes that are particularly suited for gray scale and color image segmentation, as well as multi-layer segmentation.
Algorithm 5 describes an embodiment of a method for grayscale and color image segmentation. The process of Algorithm 5 is illustrated in the accompanying figures.
Algorithm 6 describes an embodiment of a method for multi-layer segmentation. Among other steps, Algorithm 6 computes the edge ratio (C) corresponding to the difference at each pixel, as described above.
It may be understood that various operations described herein may be implemented in software executed by logic or processing circuitry, hardware, or a combination thereof. The order in which each operation of a given method is performed may be changed, and various operations may be added, reordered, combined, omitted, modified, etc. It is intended that the invention(s) described herein embrace all such modifications and changes and, accordingly, the above description may be regarded in an illustrative rather than a restrictive sense.
For example, the methods described herein may be executed by a computer system such as computer system 3400, described below.
As illustrated, computer system 3400 includes one or more processors 3402A-N coupled to a system memory 3404 via bus 3406. Computer system 3400 further includes network interface 3408 coupled to bus 3406, and input/output (I/O) controller(s) 3410, coupled to devices such as cursor control device 3412, keyboard 3414, and display(s) 3416. In some embodiments, a given entity (e.g., an image processing device) may be implemented using a single instance of computer system 3400, while in other embodiments multiple such systems, or multiple nodes making up computer system 3400, may be configured to host different portions or instances of embodiments (e.g., image capture, editing, and processing devices).
In various embodiments, computer system 3400 may be a single-processor system including one processor 3402A, or a multi-processor system including two or more processors 3402A-N (e.g., two, four, eight, or another suitable number). Processor(s) 3402A-N may be any processor capable of executing program instructions. For example, in various embodiments, processor(s) 3402A-N may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, POWERPC®, ARM®, SPARC®, or MIPS® ISAs, or any other suitable ISA. In multi-processor systems, each of processor(s) 3402A-N may commonly, but not necessarily, implement the same ISA. Also, in some embodiments, at least one of processor(s) 3402A-N may be a graphics processing unit (GPU) or other dedicated graphics-rendering device.
System memory 3404 may be configured to store program instructions and/or data accessible by processor(s) 3402A-N. For example, memory 3404 may be used to store the software programs and/or databases described herein.
The terms “tangible” and “non-transitory,” as used herein, are intended to describe a computer-readable storage medium (or “memory”) excluding propagating electromagnetic signals, but are not intended to otherwise limit the type of physical computer-readable storage device that is encompassed by the phrase computer-readable medium or memory. For instance, the terms “non-transitory computer readable medium” or “tangible memory” are intended to encompass types of storage devices that do not necessarily store information permanently, including for example, random access memory (RAM). Program instructions and data stored on a tangible computer-accessible storage medium in non-transitory form may further be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link.
In an embodiment, bus 3406 may be configured to coordinate I/O traffic between processor 3402, system memory 3404, and any peripheral devices including network interface 3408 or other peripheral interfaces, connected via I/O controller(s) 3410. In some embodiments, bus 3406 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 3404) into a format suitable for use by another component (e.g., processor(s) 3402A-N). In some embodiments, bus 3406 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the operations of bus 3406 may be split into two or more separate components, such as a north bridge and a south bridge, for example. In addition, in some embodiments some or all of the operations of bus 3406, such as an interface to system memory 3404, may be incorporated directly into processor(s) 3402A-N.
Network interface 3408 may be configured to allow data to be exchanged between computer system 3400 and other devices, such as other computer systems attached to an image processing device or an image data storage device, for example. In various embodiments, network interface 3408 may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fiber Channel SANs, or via any other suitable type of network and/or protocol.
I/O controller(s) 3410 may, in some embodiments, enable connection to one or more display terminals, keyboards, keypads, touch screens, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems 3400. Multiple input/output devices may be present in computer system 3400 or may be distributed on various nodes of computer system 3400. In some embodiments, similar I/O devices may be separate from computer system 3400 and may interact with computer system 3400 through a wired or wireless connection, such as over network interface 3408.
A person of ordinary skill in the art will appreciate that computer system 3400 is merely illustrative and is not intended to limit the scope of the disclosure described herein. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated operations. In addition, the operations performed by the illustrated components may, in some embodiments, be performed by fewer components or distributed across additional components. Similarly, in other embodiments, the operations of some of the illustrated components may not be performed and/or other additional operations may be available. Accordingly, systems and methods described herein may be implemented or executed with other computer system configurations.
In various further embodiments, the method 3500 may include and incorporate various steps described in the pseudocode of Algorithms 1-6.
Although the invention(s) is/are described herein with reference to specific embodiments, various modifications and changes can be made without departing from the scope of the present invention(s), as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention(s). Any benefits, advantages, or solutions to problems that are described herein with regard to specific embodiments are not intended to be construed as a critical, required, or essential feature or element of any or all the claims.
Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The terms “coupled” or “operably coupled” are defined as connected, although not necessarily directly, and not necessarily mechanically. The terms “a” and “an” are defined as one or more unless stated otherwise. The terms “comprise” (and any form of comprise, such as “comprises” and “comprising”), “have” (and any form of have, such as “has” and “having”), “include” (and any form of include, such as “includes” and “including”) and “contain” (and any form of contain, such as “contains” and “containing”) are open-ended linking verbs. As a result, a system, device, or apparatus that “comprises,” “has,” “includes” or “contains” one or more elements possesses those one or more elements but is not limited to possessing only those one or more elements. Similarly, a method or process that “comprises,” “has,” “includes” or “contains” one or more operations possesses those one or more operations but is not limited to possessing only those one or more operations.
Number | Date | Country
--- | --- | ---
62270394 | Dec 2015 | US