The present disclosure relates generally to image processing devices.
An imaging device formed on or in combination with an integrated circuit device typically includes an array of pixels formed by filters disposed over photo detectors (e.g., photo diodes formed in a complementary metal oxide semiconductor device) in a Bayer pattern. Such a Bayer pattern typically implements three color channels for red, blue and green visible light. Image signal intensity values associated with pixel locations in an image frame obtained from an imaging device may be further processed by application of one or more kernels in convolution operations. Kernels may be applied in convolution operations to image signal intensity values defined for color channels in different color spaces (e.g., RGB, YUV or other color spaces), as well as to features defined in feature maps of machine-learning/convolution neural network filtering operations, for example.
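By way of a non-limiting illustrative sketch (not drawn from the claimed subject matter), the following Python fragment shows how a Bayer-pattern sensor records only one color channel per pixel location. The common RGGB arrangement is assumed purely for illustration; the function name and array sizes are hypothetical.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color H x W x 3 image through an assumed RGGB Bayer pattern.

    Returns a single-channel H x W mosaic in which each pixel holds only the
    intensity of the color filter covering it.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic

rgb = np.random.rand(8, 8, 3)
print(bayer_mosaic(rgb).shape)  # (8, 8)
```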
Claimed subject matter is particularly pointed out and distinctly claimed in the concluding portion of the specification. However, both as to organization and/or method of operation, together with objects, features, and/or advantages thereof, it may best be understood by reference to the following detailed description if read with the accompanying drawings in which:
Reference is made in the following detailed description to accompanying drawings, which form a part hereof, wherein like numerals may designate like parts throughout that are corresponding and/or analogous. It will be appreciated that the figures have not necessarily been drawn to scale, such as for simplicity and/or clarity of illustration. For example, dimensions of some aspects may be exaggerated relative to others. Further, it is to be understood that other embodiments may be utilized. Furthermore, structural and/or other changes may be made without departing from claimed subject matter. References throughout this specification to “claimed subject matter” refer to subject matter intended to be covered by one or more claims, or any portion thereof, and are not necessarily intended to refer to a complete claim set, to a particular combination of claim sets (e.g., method claims, apparatus claims, etc.), or to a particular claim. It should also be noted that directions and/or references, for example, such as up, down, top, bottom, and so on, may be used to facilitate discussion of drawings and are not intended to restrict application of claimed subject matter. Therefore, the following detailed description is not to be taken to limit claimed subject matter and/or equivalents.
References throughout this specification to one implementation, an implementation, one embodiment, an embodiment, and/or the like means that a particular feature, structure, characteristic, and/or the like described in relation to a particular implementation and/or embodiment is included in at least one implementation and/or embodiment of claimed subject matter. Thus, appearances of such phrases, for example, in various places throughout this specification are not necessarily intended to refer to the same implementation and/or embodiment or to any one particular implementation and/or embodiment. Furthermore, it is to be understood that particular features, structures, characteristics, and/or the like described are capable of being combined in various ways in one or more implementations and/or embodiments and, therefore, are within intended claim scope. In general, of course, as has always been the case for the specification of a patent application, these and other issues have a potential to vary in a particular context of usage. In other words, throughout the disclosure, particular context of description and/or usage provides helpful guidance regarding reasonable inferences to be drawn; however, likewise, “in this context” in general without further qualification refers at least to the context of the present patent application.
Imaging devices formed in integrated circuit devices may include a substrate formed as a complementary metal oxide semiconductor (CMOS) device having formed thereon an array of photodiodes that are responsive to impinging light energy. In one embodiment as shown in
Sensitivity of such a three-color channel imaging device may be limited to detection of visible light in red, blue and green bands. Accordingly, such a three-color channel imaging device may have limited effectiveness in night and/or low-light environments. According to an embodiment, a Bayer pattern imaging device may be modified to include pixels dedicated to detection of infrared light to implement a fourth color channel of invisible light energy as shown in
In this context, a “kernel” as referred to herein means a set of organized parameters of a convolution operation to be applied to one or more image signal intensity values expressing an image (or a portion of an image), such as image signal intensity values of a color channel associated with pixel locations in the image, to impart a particular intended effect to the image. Such an intended effect may comprise, for example, blurring, interpolating/demosaicing, sharpening, embossing, feature detection/extraction (e.g., edge detection), just to provide a few examples. In a particular implementation, a kernel may comprise an ordered array of values (e.g., coefficients in an integer or floating point format) tailored for application to image signal intensity values of a particular dimensionality such as dimensions corresponding to color intensity values and/or pixel location. According to an embodiment, a convolution (e.g., filtering) operation for application of a kernel to signal intensity values of an image may be implemented according to expression (1) as follows:
g(x,y) = Σ_{dx=−a}^{a} Σ_{dy=−b}^{b} ω(dx,dy) f(x+dx, y+dy)   (1)

where:

g(x,y) comprises an output image signal intensity value computed for output pixel location x,y;

f(x+dx, y+dy) comprises an input image signal intensity value at a pixel location offset from output pixel location x,y by offsets dx and dy; and

ω(dx,dy) comprises a kernel coefficient to be applied to input image signal intensity value f(x+dx, y+dy).
While ω as used in expression (1) is defined above for a symmetric range −a ≤ dx ≤ a and −b ≤ dy ≤ b, in other implementations ω may be defined for an asymmetric range such as, for example, 0 ≤ dx ≤ a and 0 ≤ dy ≤ b.
According to an embodiment, a convolution operation according to expression (1) may be applied separately to image signal intensity values of particular individual color channels Cin (e.g., red, green and blue color channels). While particular examples of convolution operations described herein include convolution operations applied to two-dimensional signals with multiple channels, claimed subject matter is not limited to such two-dimensional signals. For example, it should be understood that convolution operations described herein may be applied to one-dimensional signals (e.g., an audio signal) and/or signals having three or more dimensions (with greater or fewer channels) without loss of generality.
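A minimal, illustrative sketch of expression (1), applied separately to each color channel as just described, might take the following form. The box-blur kernel, array sizes and function names below are assumptions chosen only for illustration, not part of the disclosure.

```python
import numpy as np

def convolve_channel(f, w):
    """Direct implementation of expression (1) for one color channel.

    f: 2-D array of input image signal intensity values.
    w: (2a+1) x (2b+1) kernel of coefficients, indexed by offsets
       -a <= dx <= a and -b <= dy <= b.
    Returns the output intensity g(x, y) at each interior pixel location.
    """
    a, b = w.shape[0] // 2, w.shape[1] // 2
    g = np.zeros_like(f, dtype=float)
    for x in range(a, f.shape[0] - a):
        for y in range(b, f.shape[1] - b):
            acc = 0.0
            for dx in range(-a, a + 1):
                for dy in range(-b, b + 1):
                    acc += w[dx + a, dy + b] * f[x + dx, y + dy]
            g[x, y] = acc
    return g

# Applied separately to each color channel of an H x W x C image:
image = np.random.rand(16, 16, 3)
kernel = np.full((5, 5), 1.0 / 25.0)  # 5x5 box blur as an illustrative stand-in
out = np.stack([convolve_channel(image[..., c], kernel)
                for c in range(image.shape[-1])], axis=-1)
```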
As may be appreciated, processing images using convolution operations in real-time applications may consume significant computing resources to, for example, execute multiplication operations to apply kernel coefficients to image signal intensity values, store image signal intensity values to be processed in convolution operations and/or store a set of full-granularity kernel coefficients.
In one embodiment, applying a convolution operation may include obtaining a sum of image signal intensity values of the plurality that correspond to a common coefficient value. Such a convolution operation may then include multiplying the sum by the common coefficient value. This approach may reduce a number of multiplication operations to execute such a convolution operation. As such, a number of multipliers in circuitry for an image processing system to perform a convolution operation may be reduced. This approach may also reduce an amount of storage memory consumed by an image processing system to perform convolution operations because fewer coefficients of the kernel require storage. This approach can therefore improve the efficiency with which an image processing system performs convolution operations.
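The following sketch illustrates this sum-then-multiply approach for a hypothetical 3×3 kernel (the coefficient values and patch contents below are arbitrary placeholders) in which the four corner coefficients share one value and the four edge coefficients share another, reducing nine multiplications to three:

```python
import numpy as np

# Hypothetical 3x3 kernel in which four corner coefficients share value a,
# four edge coefficients share value b, and the center has distinct value c.
a, b, c = 0.05, 0.10, 0.40
patch = np.random.rand(3, 3)  # image signal intensity values under the kernel

# Naive application: nine multiplications.
kernel = np.array([[a, b, a],
                   [b, c, b],
                   [a, b, a]])
naive = float(np.sum(kernel * patch))

# Sum-then-multiply: group values by common coefficient first,
# reducing nine multiplications to three.
corner_sum = patch[0, 0] + patch[0, 2] + patch[2, 0] + patch[2, 2]
edge_sum = patch[0, 1] + patch[1, 0] + patch[1, 2] + patch[2, 1]
grouped = a * corner_sum + b * edge_sum + c * patch[1, 1]

assert np.isclose(naive, grouped)
```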
In the example convolution of expression (1), an output image signal intensity value for an output pixel location x,y is computed based on application of kernel coefficients to input image signal intensity values for pixels which neighbor output pixel location x,y. In a convolution operation, it may be observed that an impact of granularity of kernel coefficients applied to an image signal intensity value of a particular pixel location may diminish the greater the particular pixel location is offset from output pixel location x,y. In a particular implementation, the kernel coefficients to be applied to image signal intensity values for pixels in a region of the image frame may be selected from a discrete set of coefficient values. The same coefficient value selected from the set of coefficient values may be applied to image signal intensity values of multiple pixel locations in the region based, at least in part, on a location of the region relative to the output pixel location. For example, convolving image signal intensity values for multiple pixels in a region with the same coefficient value (e.g., lower granularity coefficient) may not significantly degrade convolution accuracy if the region is significantly offset from the output pixel location. Here, convolving the image signal intensity values of pixels in the region may comprise multiplying a sum of the image signal intensity values of the pixels in the region by the selected kernel coefficient.
As shown by expression (1), kernel coefficients are applied to image signal intensity values of multiple pixel locations to map to an output image signal intensity value of a single output pixel location x,y. Kernels may also be structured as separable kernels, sparse kernels, etc. It may be observed that an importance of an exact location of a pixel for an input image signal intensity value may be greater towards the center of the kernel (e.g., output pixel location x,y). In one technique, kernel coefficients may be applied to image signal intensity values sampled more densely towards the middle of the kernel and more sparsely further from its center. This may have an advantage of a large receptive field and smaller computational costs. In another technique, kernel coefficients offset from an output pixel location may be mapped to a single coefficient value (e.g., combined by averaging). Averaging coefficients of a kernel may improve robustness and performance of such a kernel for denoising applications, as well as reducing a number of delay lines in streaming processing.
In technique 100 (
In technique 100 (
In the presently illustrated embodiment, convolution operation 104 may involve 25 multiply-accumulate (MAC) operations to derive output signal value O1. On this basis, applying the convolution operation 104 may entail an image processing system performing 25 multiplication operations, that is, a multiplication of each coefficient of the plurality of coefficients a-y with a corresponding image signal intensity value of the plurality of image signal intensity values i11-i55, e.g., one with the same relative position in the portion of the image represented by the image signals 102 and in the kernel 108. In
Examples described herein relate to improving efficiency with which an image processing system performs convolution operations by reducing a number of multipliers and an amount of storage space required to generate an output signal data value. Although some examples herein are explained throughout with reference to a 5×5 kernel, it is to be understood that the examples described herein may be used with a kernel of various other sizes, shapes and/or dimensionalities. In other example implementations, kernel sizes of 7×7 or larger may be implemented without deviating from claimed subject matter.
In this example, technique 110 may include obtaining image signals 202 which include the plurality of image signal intensity values i11-i55. In technique 110 of
Applying convolution operation 204 may include obtaining a sum of image signal intensity values of the plurality of image signal intensity values i11-i55 that correspond respectively to the coefficients 112a-112d of the plurality of coefficients that each have the common coefficient value a. In the example of
In this way, the number of multiplication operations involved in technique 110 of
In some examples, a convolution operation may be applied to implement at least part of a neural network architecture. For example, convolution operation 204 in
In technique 114 (
In this manner, the number of multiplication operations involved in technique 114 is 15, a further reduction compared to the 25 multiplication operations involved in technique 100 (
In this example, kernel 408 is symmetric. Such a symmetric kernel may be described generally as a kernel having at least one line of reflective symmetry, such as a kernel comprising an array of coefficients. Each coefficient may be given by Ki,j, where i is a row number of the array of coefficients and j is a column number of the array of coefficients, for which Ki,j=Kj,i. This expression may represent one line of reflective symmetry of the kernel. However, kernel 408 in this particular example may include four such lines of reflective symmetry such that coefficients Ki,j of the kernel 408 also satisfy Ki,j=K(N-j+1),(N-i+1) and Ki,j=K(N-i+1),j where N represents the size of the kernel 408 (i.e., N=5 in this case). In one particular implementation, a predetermined constraint on kernel 408 to be symmetric may be implemented using the three expressions above for coefficients Ki,j (e.g., with the coefficient values pre-selected so that the kernel 408 is symmetric). In another particular implementation, there may be no such predetermined constraint on kernel 408. For example, in cases where convolution operation 404 is applied to implement at least part of a neural network architecture, coefficient values of kernel 408 may be learned during a machine learning training phase.
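As an illustrative check (the coefficient values a through f below are arbitrary placeholders, not values from the disclosure), a 5×5 kernel constructed from six free values under the three stated relations exhibits all four lines of reflective symmetry:

```python
import numpy as np

# Build a 5x5 kernel obeying the three stated symmetry relations
# (K[i,j] = K[j,i], K[i,j] = K[N-j+1,N-i+1], K[i,j] = K[N-i+1,j],
# in 1-based indexing) from six free values a-f.
a, b, c, d, e, f = 1, 2, 3, 4, 5, 6
K = np.array([[a, b, c, b, a],
              [b, d, e, d, b],
              [c, e, f, e, c],
              [b, d, e, d, b],
              [a, b, c, b, a]])

# Verify the four lines of reflective symmetry.
assert np.array_equal(K, K.T)              # main diagonal
assert np.array_equal(K, K[::-1, ::-1].T)  # anti-diagonal
assert np.array_equal(K, K[::-1, :])       # horizontal axis
assert np.array_equal(K, K[:, ::-1])       # vertical axis
print(len(np.unique(K)))  # 6 distinct coefficient values
```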
It can be seen that the kernel 408 may be represented by six common coefficient values a, b, c, d, e and f, each of which is different from one another. Applying techniques described above with reference to
As pointed out above, an impact of granularity of kernel coefficients applied to an image signal intensity value of a particular pixel location in a convolution operation according to expression (1) may diminish the greater the particular pixel location is offset from output pixel location x,y. In the particular example implementation of
As shown in
Similarly, at a lateral periphery of embodiment 600 from a center region at which full-granularity kernel coefficients are applied, kernel coefficient values at multiple pixel locations may be mapped to a single coefficient value H, I, J, K, L, S, T, V, W or X. This is shown by example at regions 613 through 617 and 631 through 635 at such a lateral periphery. As shown, single coefficient values H, I, J, K and L may each be uniformly applied in a horizontal direction in regions 613, 614, 615, 616 and 617, respectively, and single coefficient values X, W, V, T and S may each be uniformly applied in a horizontal direction in regions 631, 632, 633, 634 and 635, respectively.
At a region both at a lateral periphery and a vertical periphery of embodiment 600 from a center region, kernel coefficient values at all pixel locations may be mapped to a single coefficient value A, G, M or U. As shown, single coefficient values A, G, M and U may be applied at all pixel locations in regions 602, 612, 610 and 608, respectively, each of which regions is in both a lateral and vertical periphery.
As may be observed from embodiment 600, kernel coefficients to be applied to image signal intensity values of pixels in regions 620 through 629 at a vertical periphery have reduced granularity (e.g., by combining/averaging to a single kernel coefficient value) in a vertical direction. In a horizontal direction in regions 620 through 629, however, kernel coefficients are applied with greater granularity (e.g., full granularity). Likewise, kernel coefficients to be applied to image signal intensity values of pixels in regions 613 through 617 and 631 through 635 at a lateral periphery have reduced granularity (e.g., by combining/averaging to a single uniform value) in a horizontal direction. In a vertical direction in regions 613 through 617 and 631 through 635, however, kernel coefficients are applied with full granularity. With a single kernel coefficient applied in each of regions 602, 608, 610 and 612, granularity is reduced in both horizontal and vertical directions.
In this context, “full-granularity” kernel coefficients as referred to herein means kernel coefficients to be applied in a convolution operation that are not constrained from assuming any value within a set range or precision of values (e.g., particular floating point format having a fixed number of digits). For example, each full-granularity kernel coefficient of kernel coefficients to be applied to features (e.g., image signal intensity values) in a convolution operation may have a unique value among the applied kernel coefficients. Additionally in this context, “reduced-granularity” kernel coefficients as referred to herein means kernel coefficients that are constrained to be from a discrete set of coefficient values. For example, multiple reduced-granularity kernel coefficients to be applied to features in a convolution operation may assume the same coefficient value within such a discrete set of coefficient values.
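One minimal sketch of how full-granularity coefficients might be mapped onto a discrete set follows. Nearest-value snapping onto an evenly spaced grid is only one of many possible mappings and is an assumption of this sketch, as are the function and variable names:

```python
import numpy as np

def reduce_granularity(kernel, levels):
    """Map full-granularity coefficients onto a small discrete set.

    Each coefficient is snapped to the nearest of `levels` evenly spaced
    values; coefficients that land on the same level can then share one
    stored value and one multiplier.
    """
    lo, hi = kernel.min(), kernel.max()
    grid = np.linspace(lo, hi, levels)  # the discrete coefficient set
    idx = np.argmin(np.abs(kernel[..., None] - grid), axis=-1)
    return grid[idx], grid

full = np.random.rand(5, 5)  # up to 25 unique full-granularity values
reduced, coefficient_set = reduce_granularity(full, levels=4)
print(len(np.unique(full)), "->", len(np.unique(reduced)))  # e.g. 25 -> 4
```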
According to an embodiment, image signal intensity values of an image frame (e.g., as per embodiment 600) may be organized in a two-dimensional array to represent a single image signal intensity value (e.g., integer or floating point) at pixel locations associated with spatial coordinates. For a multi-color channel image frame, image signal intensity values may be organized in a three-dimensional array to represent multiple image signal intensity values at pixel locations associated with spatial coordinates.
In the particular example implementation of embodiment 600, kernel coefficients associated with adjacent/contiguous pixels may be combined and/or averaged to provide a single kernel coefficient to be applied to image signal intensity values of the adjacent/contiguous pixels. As pointed out above, for a multi-color image frame, kernel coefficients may be applied separately to image signal intensity values of separate color channels. In such an embodiment of a multi-color image frame, kernel coefficients may be combined/averaged to provide reduced-granularity kernel coefficients to be applied over individual color channels.
In a process to combine and/or average kernel coefficients for application to multi-color channel pixels arranged in a Bayer pattern (e.g., as shown in
Expressions (2) through (5) show generation of separate output values for gR(x), gG(x), gB(x) and g4(x). In some embodiments, aspects of gR(x), gG(x), gB(x) and g4(x) may be selectively combined and/or added together to produce a single full-kernel convolution output value.
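As a non-limiting sketch, the code below merely assumes that each of expressions (2) through (5) reduces to convolving the mosaic, masked to a single color channel, with that channel's kernel; the 2×2 R/G/B/IR phase assignment, the kernels and all names are illustrative assumptions rather than the disclosed expressions:

```python
import numpy as np

def channel_outputs(mosaic, kernels, masks):
    """Sketch of per-channel outputs (cf. gR, gG, gB, g4) and their combination.

    Each channel's output is formed by masking the mosaic to that channel's
    pixel locations and convolving with that channel's kernel.
    """
    outs = {}
    for ch, k in kernels.items():
        masked = mosaic * masks[ch]  # keep only this channel's pixels
        a = k.shape[0] // 2
        padded = np.pad(masked, a, mode="edge")
        g = np.zeros_like(mosaic, dtype=float)
        for dx in range(-a, a + 1):
            for dy in range(-a, a + 1):
                g += k[dx + a, dy + a] * padded[a + dx: a + dx + mosaic.shape[0],
                                                a + dy: a + dy + mosaic.shape[1]]
        outs[ch] = g
    # Selectively combine per-channel outputs into one full-kernel output.
    return outs, sum(outs.values())

H = W = 8
mosaic = np.random.rand(H, W)
masks = {ch: np.zeros((H, W)) for ch in ("R", "G", "B", "IR")}
masks["R"][0::2, 0::2] = 1; masks["G"][0::2, 1::2] = 1   # assumed 2x2 phases
masks["B"][1::2, 1::2] = 1; masks["IR"][1::2, 0::2] = 1
kernels = {ch: np.full((3, 3), 1 / 9) for ch in masks}
per_channel, combined = channel_outputs(mosaic, kernels, masks)
```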
In the particular implementation of
As illustrated above with embodiment 600, the same coefficient value selected from the set of reduced-granularity coefficient values may be applied to image signal intensity values of multiple pixel locations in the region based, at least in part, on a location of the region relative to an output pixel location. For example, convolving image signal intensity values for multiple pixels in a region with the same coefficient value (e.g., lower granularity coefficient) may not significantly degrade convolution accuracy if the region is sufficiently offset from the output pixel location. Here, convolving the image signal intensity values of pixels in the region may comprise multiplying a sum of the image signal intensity values of the pixels in the region by the selected kernel coefficient.
As shown in
Also shown in
Also shown in
Another embodiment of a convolution operation to apply kernel coefficients to image signal intensity values in a single-channel and/or monochrome format is shown in
As pointed out above, in each of peripheral regions 854, 856, 858, 860, 862, 864, 866 and 868, a single kernel coefficient is to be applied to each image signal intensity value in the peripheral region. To further reduce usage of processing resources, image signal intensity values for all pixels in a peripheral region 854, 856, 858, 860, 862, 864, 866 and/or 868 may be mapped to a single image signal intensity value. In one implementation, such a single image signal intensity value for a particular peripheral region may be determined as an average (e.g., weighted) of image signal intensity values over all pixel locations in the particular peripheral region. In another particular implementation, such a single image signal intensity value for a particular peripheral region may be selected from among image signal intensity values of pixels in the particular peripheral region to be representative of all pixel locations in the particular peripheral region.
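A minimal sketch of this mapping for a hypothetical 2×3 peripheral region follows; when the representative value is a plain mean, multiplying it by the region's pixel count and shared coefficient reproduces the sum-then-multiply result exactly, while only one intensity value per region need be stored:

```python
import numpy as np

# Values in a hypothetical 2x3 peripheral region are first reduced to one
# representative value (here a plain average), so the region contributes a
# single multiplication by its shared coefficient rather than six.
region = np.random.rand(2, 3)  # intensity values in the peripheral region
shared_coeff = 0.02            # single reduced-granularity coefficient

exact = shared_coeff * region.sum()           # sum-then-multiply
representative = region.mean()                # one stored value per region
approx = shared_coeff * representative * region.size

assert np.isclose(exact, approx)  # identical when the representative is the mean
```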
According to an embodiment, reduced-granularity kernel coefficients A through Y of
While
Block 954 may comprise application of one or more convolution operations to elements of a feature map determined at block 952 to provide an output image signal intensity value mapped to an output pixel location in an image frame. Kernel coefficients to be applied to image signal intensity values for pixels in a region of the image frame may be selected from a set of kernel coefficient values such that the same kernel coefficient value is to be applied to image signal intensity values of multiple pixel locations in the region. In the particular example of
According to an embodiment, kernel coefficients may be selected to be applied to image signal intensity values of multiple pixel locations in a region based, at least in part, on a location of the region relative to an output pixel location of a convolution operation. As discussed above with reference to
According to an embodiment, block 952 may comprise one or more pre-processing operations as illustrated in
In this context, “full-granularity” image signal intensity values as referred to herein means image signal intensity values to be processed in a convolution operation that are not constrained from assuming any value within a set range or precision of values (e.g., particular floating point format having a fixed number of digits). For example, each full-granularity image signal intensity value of image signal intensity values to be processed in a convolution operation may have a unique value among the processed image signal intensity values. Additionally in this context, “reduced-granularity” image signal intensity values as referred to herein means image signal intensity values that are constrained to be from a discrete set of image signal intensity values. For example, multiple reduced-granularity image signal intensity values to be processed in a convolution operation may assume the same image signal intensity value within such a discrete set of image signal intensity values.
In the particular example of
According to an embodiment, block 952 may map image signal intensity values of a plurality of contiguous pixel locations in a portion of an image frame to a single image signal intensity value to be representative of the contiguous pixel locations in an augmented portion of the image frame. In an implementation, such a single image signal intensity value to be representative of the contiguous pixel locations may be multiplied by a coefficient value selected from a set of kernel coefficient values to provide an output image signal intensity value based on the computed product. Such a selected coefficient value may comprise a reduced-granularity coefficient value as described above.
According to an embodiment, a state of a “configuration” signal shown in
As pointed out above in embodiment 800 in
One particular technique for reducing memory usage and/or multiplication operations in application of a kernel operation in a color channel over a region of an image may comprise a skipping of image signal intensity values in the color channel for at least some pixel locations in the region (e.g., with kernel coefficient products normalized accordingly). In one example, such a skipping of image signal intensity values may comprise a convolution operation in which only image signal intensity values of the same color channel for every other pixel are to be multiplied by a corresponding kernel coefficient. In another example, such a skipping of image signal intensity values may comprise a convolution operation in which image signal intensity values of neighboring pixels of the same color channel are averaged or combined. Here, a single kernel coefficient is to be applied for both pixels by multiplying the averaged/combined image signal intensity value by the single kernel coefficient in a single multiplication operation.
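Both skipping variants may be sketched for a single row of same-channel samples; the values and coefficients below are arbitrary illustrations rather than values from the disclosure:

```python
import numpy as np

# A 1-D row of same-channel intensity values and their kernel coefficients.
values = np.array([10.0, 12.0, 11.0, 13.0, 9.0, 14.0])
coeffs = np.array([0.1, 0.2, 0.4, 0.4, 0.2, 0.1])

# Variant 1: multiply only every other sample, renormalizing so the total
# kernel weight is preserved -- three multiplications instead of six.
kept = values[::2] * coeffs[::2]
skip_out = kept.sum() * (coeffs.sum() / coeffs[::2].sum())

# Variant 2: average neighboring same-channel samples, then apply one
# combined coefficient per pair -- again three multiplications.
pair_means = values.reshape(-1, 2).mean(axis=1)
pair_coeffs = coeffs.reshape(-1, 2).sum(axis=1)
avg_out = float(pair_means @ pair_coeffs)

print(skip_out, avg_out)
```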
According to an embodiment, an image processing operation may be capable of selectively applying a convolution operation (e.g., any convolution operation applied at blocks 126 (
In a second such mode of multiple modes, a convolution operation may be applied with reduced-granularity kernel coefficients, such as by mapping full-granularity kernel coefficients to a set of discrete coefficient values (e.g., according to expressions (2) through (5)), and/or with reduced-granularity image signal intensity values (e.g., in regions set off from an output pixel location of the convolution operation). In a third such mode of multiple modes, a convolution operation may be applied with a full-granularity kernel and full-granularity image signal intensity values. As indicated above, it may be observed that the first and second identified modes may enable a reduction in memory usage and multiplication operations over the third mode of operation. In one particular implementation, the third mode of operation may be applied over a reduced region. In particular implementations, a particular convolution mode of the first, second and third modes may be selected based on particular conditions and/or desired robustness to noise.
Technique 120 may also include applying a further convolution operation 130 to at least some image signal intensity values 132 of the plurality of image signal intensity values 124 using a further kernel, thereby generating output signals 134. In this example, the further kernel used in convolution operation 130 represents a bandpass filter for edge detection. Such a bandpass filter may be used to suppress low- and high-frequency components of an image represented by the at least some image signal intensity values 132 of the plurality of image signal intensity values 124. For example, lower-frequency components may not provide sufficient information regarding a contrast of an image. Conversely, high-frequency components may be sensitive to high-contrast image features, and hence may be subject to noise. Coefficients of the further kernel used to perform the further convolution operation 130 may be predetermined. In this way, the bandpass filter may be tuned such that coefficients of the further kernel determine frequencies of the low- and high-frequency components of the image that are suppressed by convolution operation 130. Alternatively, in cases where convolution operation 130 is applied to implement at least part of a neural network architecture, coefficients of the further kernel may be learned during a training phase of the neural network. Alternatively, coefficients of the further kernel may be derived as part of an automated tuning process of an image signal processing system, such as the image processing system described with reference to
In technique 120, a combiner 136 combines signals 128 of the first convolution operation 126 with signals 134 of convolution operation 130, thereby generating the output signals 138. In this way, the output signals 138 may include a combination of information regarding edges detected in the image by applying convolution operation 130 and information obtained from the first convolution operation 126 applied to the plurality of image signal intensity values 124. For example, the combiner 136, for a given pixel location, may combine an image signal intensity value of the grayscale image from the signals 134 of convolution operation 130 with each of the image signal intensity values for each color channel from signals 128 of the first convolution operation 126. This may include adding the image signal intensity value of the grayscale image to each of the image signal intensity values for each color channel. By combining both signals 128 of the first convolution operation 126 and signals 134 of convolution operation 130, the output signals 138 in this example include information regarding detected edges in the image represented by the image signals 122, and an amount of noise in the image is also reduced. In this example where the first convolution operation 126 is used as part of a demosaicing algorithm, the output signals 138 include an array of output signal values which, for each pixel location of the image, represent an image signal intensity value for each color channel, with additional information regarding detected edges in the image represented by the image signals 122. In this way, an output image represented by the output signals 138 may provide a sharper image around the detected edges.
As pointed out above, for an imaging device that captures an image, a pixel location may be associated with a red color channel, and demosaicing may allow for image signal intensity values for the green and blue channels to be obtained for that pixel location. Demosaicing, for example, may involve interpolating between and/or among neighboring pixels of the same color channel to obtain an image signal intensity value at a location between these neighboring pixels, such as at a location corresponding to a pixel of a different color channel. This may be performed for each of a plurality of color channels in order to obtain, at each pixel location, an image signal intensity value for each of the color channels. In some cases, grayscale demosaicing may be performed, in which a grayscale intensity is obtained at each pixel location indicating an image signal intensity value for a single color channel (e.g., from white (lightest) to black (darkest)). An example image processing pipeline including a demosaicing algorithm is described below with reference to
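A minimal bilinear sketch of such interpolation follows, assuming an RGGB layout and estimating green at red pixel locations from the four nearest green neighbors; the layout and function name are assumptions of the sketch, not the disclosed demosaicing algorithm:

```python
import numpy as np

def interpolate_green_at_red(mosaic):
    """Estimate the green intensity at each red pixel location (RGGB layout
    assumed) by averaging the four green neighbors immediately above,
    below, left and right of the red pixel.
    """
    padded = np.pad(mosaic, 1, mode="edge")
    green = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:])
    out = mosaic.astype(float).copy()
    out[0::2, 0::2] = green[0::2, 0::2]  # red sites in an RGGB mosaic
    return out

mosaic = np.random.rand(8, 8)
print(interpolate_green_at_red(mosaic).shape)  # (8, 8)
```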
In an example, each pixel location is associated with one of a first color channel and a further color channel. For example, where a color filter pattern is a Bayer filter pattern, the first color channel may be a red color channel and the further color channel may be a green color channel. The processing of
According to an embodiment, coefficients of a kernel applied in convolution operation 130 may be determined such that a bandpass filter detects edges in the image represented by the image signals 122, but suppresses the high-frequency Bayer pattern component. For example, edges detected in an image by applying convolution operation 130 may represent edges of a scene represented by the image, and not high-frequency edges resulting from a color filter pattern of the color filter array.
In particular examples where a Bayer filter array is used to capture an image, a color of signals 134 may be grayscale because signals 134 may include equal contributions from pixel locations associated with each of the red, green and blue color channels. Combining a grayscale image with the image signal intensity values for each of the red, green and blue color channels (e.g., derived from demosaicing the image signals 122) may effectively desaturate an image around detected edges, which may reduce a chance of false colors being present around edges in the output signals 138. False colors, also sometimes referred to as colored artifacts, may occur when performing convolution operations that combine contributions from image signal intensity values associated with different color channels. An output of such a convolution operation may be an image signal intensity value that differs significantly from an original image signal intensity value at the same pixel location representing the actual color of a scene represented by the image. This difference may be visually noticeable in an output image as a color not present in an original input image. False colors may hence appear as erroneous to a viewer, so it is desirable for their presence to be reduced, e.g. by performing convolution operation 130.
In a case where a color filter pattern is a Bayer filter pattern, a kernel used to apply convolution operation 130 may be referred to as a “Bayer invariant kernel”. Signals 134 in this example may include equal contributions from pixel locations associated with the red, blue and green color channels, independent of a pixel location to which convolution operation 130 is applied. An example of a Bayer invariant 3×3 kernel is:
However, this is not intended to be limiting, as many other Bayer invariant kernels of a different size, shape and/or dimensionality are possible without deviating from claimed subject matter.
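Since the specific example kernel is not reproduced above, the classic 1-2-1 binomial kernel is used below as an assumed stand-in; the sketch verifies that its per-channel contribution totals are identical for all four Bayer phases, which is the invariance property described:

```python
import numpy as np

# Assumed stand-in kernel: the 1-2-1 binomial kernel, one 3x3 kernel with
# the stated Bayer-invariance property (not necessarily the disclosed one).
K = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0

# RGGB channel map: cfa[i % 2][j % 2] names the channel at pixel (i, j).
cfa = [["R", "G"], ["G", "B"]]

def channel_contributions(ci, cj):
    """Total kernel weight applied to each color channel when the kernel
    is centered on pixel (ci, cj) of the Bayer mosaic."""
    totals = {"R": 0.0, "G": 0.0, "B": 0.0}
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ch = cfa[(ci + di) % 2][(cj + dj) % 2]
            totals[ch] += K[di + 1, dj + 1]
    return totals

# The per-channel totals are identical for all four Bayer phases:
print([channel_contributions(i, j) for i in (0, 1) for j in (0, 1)])
# -> each is {'R': 0.25, 'G': 0.5, 'B': 0.25}
```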
In this example, the image signal intensity values 232 of the plurality of image signal intensity values 224 that are processed using convolution operation 230 are a first set of the plurality of image signal intensity values 232, and the image signals also include a second, different set of the plurality of image signal intensity values 142. Technique 140 of
A bypass determination may be based, at least in part, on properties of the image signals 222. In one example, bypass determiner 144 may perform the above-mentioned determination based on an amount of information related to at least one color channel in the second set of the plurality of image signal intensity values 142. For example, bypass determiner 144 may determine that there is not sufficient information related to the at least one color channel in the second set of the plurality of image signal intensity values 142 to apply convolution operation 230 without generating an undesirable level of false edges or other image artifacts.
In the image processing pipeline 146 of
In the image processing pipeline 146 of
The image signal intensity values represented by the image signals 322 may include pedestal values, which may comprise constant values added to image signal intensity values to avoid negative image signal intensity values during an image capture process. For example, a sensor pixel may still register a non-zero sensor pixel value even if exposed to no light, e.g., due to noise. To avoid reducing image signal intensity values to less than zero, pedestal values may be added. Hence, before further processing is performed, image signals 322 in the image processing pipeline 146 of
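A minimal sketch of such pedestal removal follows; the pedestal value of 64 and the array contents are purely illustrative assumptions:

```python
import numpy as np

# The sensor adds a constant offset so that noise cannot drive values
# negative; it is subtracted (with a floor at zero) before further
# processing. The value 64 is illustrative only.
PEDESTAL = 64

raw = np.array([[70, 63, 300], [64, 512, 90]], dtype=np.int32)
corrected = np.clip(raw - PEDESTAL, 0, None)
print(corrected)  # values at or below the pedestal clamp to 0
```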
In the image processing pipeline 146 of
In the image processing pipeline 146 of
0 1 0 1 0
0 1 0 1 0
0 1 0 1 0
0 1 0 1 0
0 1 0 1 0

where 0 represents black, and 1 represents white. In this example, a corresponding 5×5 Bayer filter pattern used by an image sensor to capture the image may be shown as:

R G R G R
G B G B G
R G R G R
G B G B G
R G R G R
where R represents the red filter elements, B represents the blue filter elements and G represents the green filter elements. In this case, capturing the above 5×5 image of black and white stripes using this Bayer filter pattern may generate the following 5×5 array of image signal intensity values:

0 G12 0 G14 0
0 B22 0 B24 0
0 G32 0 G34 0
0 B42 0 B44 0
0 G52 0 G54 0
where Gij and Bij represent image signal intensity values associated with the green and blue color channels respectively, at a pixel location with row number i and column number j. As can be seen in this example, there is no information related to the red color channel, and information related to the green color channel has been significantly reduced compared to the originally captured image. On this basis, performing the demosaicing algorithm 156 may generate a resulting image that appears blue, which would not accurately correspond to the color of the scene represented by the original image (i.e., black and white stripes).
In this example, a measured error may be obtained by using information associated with the green color channel from the image signals 322 to identify high frequency regions of the image represented by the image signals 322 for which false colors may be expected to occur due to the demosaicing algorithm 156. The threshold against which the measured error is compared may be predetermined to correspond to a minimum magnitude of the measured error for which false color correction is to be applied. In this example, false color correction includes processing a portion of the image signals representing the region of the image to desaturate the region of the image. Saturation refers to the colorfulness of an area judged in proportion to its brightness, in which colorfulness refers to the perceived chromatic nature of a color. Desaturating the region of the image may include adding shades of gray to the region of the image, thereby reducing the colorfulness of the region of the image and suppressing any false colors resulting from applying the demosaicing algorithm 156. In this way, the false color detection and correction 158 reduces a presence of false colors in the image represented by the image signals 322.
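One minimal sketch of such desaturation is shown below, blending flagged pixels toward their gray value; the mask, blend strength, and all names are illustrative assumptions rather than the disclosed correction:

```python
import numpy as np

def desaturate_region(rgb, mask, strength=0.5):
    """Blend pixels flagged by `mask` (e.g., where the measured error
    exceeds its threshold) toward their gray value, reducing colorfulness
    while largely preserving brightness.
    """
    gray = rgb.mean(axis=-1, keepdims=True)          # per-pixel gray level
    blended = (1.0 - strength) * rgb + strength * gray
    return np.where(mask[..., None], blended, rgb)

rgb = np.random.rand(4, 4, 3)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # hypothetical high-error region
out = desaturate_region(rgb, mask)
```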
It is to be appreciated that the image processing pipeline of
In the image processing pipeline 146 of
Inputting the green color channel signals 160 to the CV system 162 may be preferable to inputting interpolated values output from the demosaicing algorithm 156 to the CV system 162 because errors from the demosaicing algorithm 156, e.g., false colors or colored artifacts, may be carried forward and therefore impact an accuracy of the CV functionality implemented by the CV system 162. In this example, however, the green color channel signals 160, which may be obtained from raw image signals captured by an image sensor and not derived using the demosaicing algorithm 156 for example, have a lower contribution from such false colors, thereby improving the accuracy of the CV functionality. In some examples, though, interpolated values 163 from the output signals 338 may be input to the CV system 162 in addition to or instead of the green color channel signals. The interpolated values 163, for example, include red and blue image signal intensity values (i.e., red and blue pixel intensity values) for each pixel location in the Bayer filter array associated with the green color channel. These interpolated values 163 are obtained by performing the demosaicing algorithm 156 as described above. In this way, for each pixel location in the Bayer filter array associated with the green color channel, the CV system 162 may obtain image signal intensity values associated with each of the green, red and blue color channels. It is to be appreciated that whether to input green color channel signals 160 and/or the interpolated values 163 to the CV system 162 may be predetermined based on the CV functionality implemented by the CV system 162. In other examples, red color channel signals or blue color channel signals from the image signals 322 may instead be input to the CV system 162, and the interpolated values 163 in this case include image signal intensity values for the other of red or blue, and green image signal intensity values (i.e., green pixel intensity values) from the output signals 338.
According to an embodiment, techniques 100, 110, 114, 118, 120 and 140, pipeline 146 and/or process 950 may be formed by and/or expressed in transistors and/or lower metal interconnects (not shown) in processes (e.g., front end-of-line and/or back-end-of-line processes) such as processes to form complementary metal oxide semiconductor (CMOS) circuitry, just as an example. It should be understood, however, that this is merely an example of how circuitry may be formed in a device in a front end-of-line process, and claimed subject matter is not limited in this respect.
It should be noted that the various circuits disclosed herein may be described using computer aided design tools and expressed (or represented), as data and/or instructions embodied in various computer-readable media, in terms of their behavioral, register transfer, logic component, transistor, layout geometries, and/or other characteristics. Formats of files and other objects in which such circuit expressions may be implemented include, but are not limited to, formats supporting behavioral languages such as C, Verilog, and VHDL, formats supporting register level description languages like RTL, and formats supporting geometry description languages such as GDSII, GDSIII, GDSIV, CIF, MEBES and any other suitable formats and languages. Storage media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) over the Internet and/or other computer networks via one or more data transfer protocols (e.g., HTTP, FTP, SMTP, etc.).
If received within a computer system via one or more machine-readable media, such data and/or instruction-based expressions of the above described circuits may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs including, without limitation, net-list generation programs, place and route programs and the like, to generate a representation or image of a physical manifestation of such circuits. Such representation or image may thereafter be used in device fabrication, for example, by enabling generation of one or more masks that are used to form various components of the circuits in a device fabrication process (e.g., wafer fabrication process).
In the context of the present patent application, the term “between” and/or similar terms are understood to include “among” if appropriate for the particular usage and vice-versa. Likewise, in the context of the present patent application, the terms “compatible with,” “comply with” and/or similar terms are understood to respectively include substantial compatibility and/or substantial compliance.
For one or more embodiments, systems described herein may be implemented in a device, such as a computing device and/or networking device, that may comprise, for example, any of a wide range of digital electronic devices, including, but not limited to, desktop and/or notebook computers, high-definition televisions, digital versatile disc (DVD) and/or other optical disc players and/or recorders, game consoles, satellite television receivers, cellular telephones, tablet devices, wearable devices, personal digital assistants, mobile audio and/or video playback and/or recording devices, Internet of Things (IoT) type devices, in-vehicle electronics or advanced driver-assistance systems (ADAS), or any combination of the foregoing. Further, unless specifically stated otherwise, a process as described, such as with reference to flow diagrams and/or otherwise, may also be executed and/or affected, in whole or in part, by a computing device and/or a network device. A device, such as a computing device and/or network device, may vary in terms of capabilities and/or features. Claimed subject matter is intended to cover a wide range of potential variations. For example, a device may include a numeric keypad and/or other display of limited functionality, such as a monochrome liquid crystal display (LCD) for displaying text, for example. In contrast, however, as another example, a web-enabled device may include a physical and/or a virtual keyboard, mass storage, one or more accelerometers, one or more gyroscopes, global positioning system (GPS) and/or other location-identifying type capability, and/or a display with a higher degree of functionality, such as a touch-sensitive color 2D or 3D display, for example.
In the context of the present patent application, the term “connection,” the term “component” and/or similar terms are intended to be physical but are not necessarily always tangible. Whether or not these terms refer to tangible subject matter, thus, may vary in a particular context of usage. As an example, a tangible connection and/or tangible connection path may be made, such as by a tangible, electrical connection, such as an electrically conductive path comprising metal or other conductor, that is able to conduct electrical current between two tangible components. Likewise, a tangible connection path may be at least partially affected and/or controlled, such that, as is typical, a tangible connection path may be open or closed, at times resulting from influence of one or more externally derived signals, such as external currents and/or voltages, such as for an electrical switch. Non-limiting illustrations of an electrical switch include a transistor, a diode, etc. However, a “connection” and/or “component,” in a particular context of usage, likewise, although physical, can also be non-tangible, such as a connection between a client and a server over a network, particularly a wireless network, which generally refers to the ability for the client and server to transmit, receive, and/or exchange communications, as discussed in more detail later.
In a particular context of usage, such as a particular context in which tangible components are being discussed, therefore, the terms “coupled” and “connected” are used in a manner so that the terms are not synonymous. Similar terms may also be used in a manner in which a similar intention is exhibited. Thus, “connected” is used to indicate that two or more tangible components and/or the like, for example, are tangibly in direct physical contact. Thus, using the previous example, two tangible components that are electrically connected are physically connected via a tangible electrical connection, as previously discussed. However, “coupled,” is used to mean that potentially two or more tangible components are tangibly in direct physical contact. Nonetheless, “coupled” is also used to mean that two or more tangible components and/or the like are not necessarily tangibly in direct physical contact, but are able to co-operate, liaise, and/or interact, such as, for example, by being “optically coupled.” Likewise, the term “coupled” is also understood to mean indirectly connected. It is further noted, in the context of the present patent application, since memory, such as a memory component and/or memory states, is intended to be non-transitory, the term physical, at least if used in relation to memory necessarily implies that such memory components and/or memory states, continuing with the example, are tangible.
Unless otherwise indicated, in the context of the present patent application, the term “or” if used to associate a list, such as A, B, or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B, or C, here used in the exclusive sense. With this understanding, “and” is used in the inclusive sense and intended to mean A, B, and C; whereas “and/or” can be used in an abundance of caution to make clear that all of the foregoing meanings are intended, although such usage is not required. In addition, the term “one or more” and/or similar terms is used to describe any feature, structure, characteristic, and/or the like in the singular, “and/or” is also used to describe a plurality and/or some other combination of features, structures, characteristics, and/or the like. Likewise, the term “based on” and/or similar terms are understood as not necessarily intending to convey an exhaustive list of factors, but to allow for existence of additional factors not necessarily expressly described.
Also, for one or more embodiments, an electronic document and/or electronic file may comprise a number of components. As previously indicated, in the context of the present patent application, a component is physical, but is not necessarily tangible. As an example, components with reference to an electronic document and/or electronic file, in one or more embodiments, may comprise text, for example, in the form of physical signals and/or physical states (e.g., capable of being physically displayed). Typically, memory states, for example, comprise tangible components, whereas physical signals are not necessarily tangible, although signals may become (e.g., be made) tangible, such as if appearing on a tangible display, for example, as is not uncommon. Also, for one or more embodiments, components with reference to an electronic document and/or electronic file may comprise a graphical object, such as, for example, an image, such as a digital image, and/or sub-objects, including attributes thereof, which, again, comprise physical signals and/or physical states (e.g., capable of being tangibly displayed). In an embodiment, digital content may comprise, for example, text, images, audio, video, and/or other types of electronic documents and/or electronic files, including portions thereof, for example.
Also, in the context of the present patent application, the term “parameters” (e.g., one or more parameters), “coefficients” (e.g., one or more coefficients), “values” (e.g., one or more values), “symbols” (e.g., one or more symbols) “bits” (e.g., one or more bits), “elements” (e.g., one or more elements), “characters” (e.g., one or more characters), “numbers” (e.g., one or more numbers), “numerals” (e.g., one or more numerals) or “measurements” (e.g., one or more measurements) refer to material descriptive of a collection of signals, such as in one or more electronic documents and/or electronic files, and exist in the form of physical signals and/or physical states, such as memory states. For example, one or more parameters, values, symbols, bits, elements, characters, numbers, numerals or measurements, such as referring to one or more aspects of an electronic document and/or an electronic file comprising an image, may include, as examples, time of day at which an image was captured, latitude and longitude of an image capture device, such as a camera, for example, etc. In another example, one or more parameters, values, symbols, bits, elements, characters, numbers, numerals or measurements, relevant to digital content, such as digital content comprising a technical article, as an example, may include one or more authors, for example. Claimed subject matter is intended to embrace meaningful, descriptive parameters, values, symbols, bits, elements, characters, numbers, numerals or measurements in any format, so long as the one or more parameters, values, symbols, bits, elements, characters, numbers, numerals or measurements comprise physical signals and/or states, which may include, as parameter, value, symbol bits, elements, characters, numbers, numerals or measurements examples, collection name (e.g., electronic file and/or electronic document identifier name), technique of creation, purpose of creation, time and date of creation, logical path if stored, coding formats (e.g., type of computer instructions, such as a markup language) and/or standards and/or specifications used so as to be protocol compliant (e.g., meaning substantially compliant and/or substantially compatible) for one or more uses, and so forth.
Signal packet communications and/or signal frame communications, also referred to as signal packet transmissions and/or signal frame transmissions (or merely “signal packets” or “signal frames”), may be communicated between nodes of a network, where a node may comprise one or more network devices and/or one or more computing devices, for example. As an illustrative example, but without limitation, a node may comprise one or more sites employing a local network address, such as in a local network address space. Likewise, a device, such as a network device and/or a computing device, may be associated with that node. It is also noted that in the context of this patent application, the term “transmission” is intended as another term for a type of signal communication that may occur in any one of a variety of situations. Thus, it is not intended to imply a particular directionality of communication and/or a particular initiating end of a communication path for the “transmission” communication. For example, the mere use of the term in and of itself is not intended, in the context of the present patent application, to have particular implications with respect to the one or more signals being communicated, such as, for example, whether the signals are being communicated “to” a particular device, whether the signals are being communicated “from” a particular device, and/or regarding which end of a communication path may be initiating communication, such as, for example, in a “push type” of signal transfer or in a “pull type” of signal transfer. In the context of the present patent application, push and/or pull type signal transfers are distinguished by which end of a communications path initiates signal transfer.
In the context of the present patent application, a network protocol, such as for communicating between devices of a network, may be characterized, at least in part, substantially in accordance with a layered description, such as the so-called Open Systems Interconnection (OSI) seven layer type of approach and/or description. A network computing and/or communications protocol (also referred to as a network protocol) refers to a set of signaling conventions, such as for communication transmissions, for example, as may take place between and/or among devices in a network.
In one example embodiment, as shown in
Example devices in
Referring now to
For one or more embodiments, a device, such as a computing device and/or networking device, may comprise, for example, any of a wide range of digital electronic devices, including, but not limited to, desktop and/or notebook computers, high-definition televisions, digital versatile disc (DVD) and/or other optical disc players and/or recorders, game consoles, satellite television receivers, cellular telephones, tablet devices, wearable devices, personal digital assistants, mobile audio and/or video playback and/or recording devices, Internet of Things (IoT) type devices, or any combination of the foregoing. Further, unless specifically stated otherwise, a process as described, such as with reference to flow diagrams and/or otherwise, may also be executed and/or affected, in whole or in part, by a computing device and/or a network device. A device, such as a computing device and/or network device, may vary in terms of capabilities and/or features. Claimed subject matter is intended to cover a wide range of potential variations. For example, a device may include a numeric keypad and/or other display of limited functionality, such as a monochrome liquid crystal display (LCD) for displaying text, for example. In contrast, however, as another example, a web-enabled device may include a physical and/or a virtual keyboard, mass storage, one or more accelerometers, one or more gyroscopes, GNSS receiver and/or other location-identifying type capability, and/or a display with a higher degree of functionality, such as a touch-sensitive color 2D or 3D display, for example.
In
Memory 1822 may comprise any non-transitory storage mechanism. Memory 1822 may comprise, for example, primary memory 1824 and secondary memory 1826; additional memory circuits, mechanisms, or combinations thereof may also be used. Memory 1822 may comprise, for example, random access memory, read only memory, etc., such as in the form of one or more storage devices and/or systems, such as, for example, a disk drive including an optical disc drive, a tape drive, a solid-state memory drive, etc., just to name a few examples.
Memory 1822 may be utilized to store a program of executable computer instructions. For example, processor 1820 may fetch executable instructions from memory and proceed to execute the fetched instructions. Memory 1822 may also comprise a memory controller for accessing device-readable medium 1840, which may carry and/or make accessible digital content, including code and/or instructions, for example, executable by processor 1820 and/or some other device, such as a controller, as one example, capable of executing computer instructions. Under direction of processor 1820, a program of executable computer instructions stored in a non-transitory memory, such as memory cells storing physical states (e.g., memory states), may be executed by processor 1820 and may generate signals to be communicated via a network, for example, as previously described. Generated signals may also be stored in memory, as previously suggested.
Memory 1822 may store electronic files and/or electronic documents, such as relating to one or more users, and may also comprise a computer-readable medium that may carry and/or make accessible content, including code and/or instructions, for example, executable by processor 1820 and/or some other device, such as a controller, as one example, capable of executing computer instructions. As previously mentioned, the term electronic file and/or the term electronic document are used throughout this document to refer to a set of stored memory states and/or a set of physical signals associated in a manner so as to thereby form an electronic file and/or an electronic document. That is, it is not meant to implicitly reference a particular syntax, format and/or approach used, for example, with respect to a set of associated memory states and/or a set of associated physical signals. It is further noted that an association of memory states, for example, may be in a logical sense and not necessarily in a tangible, physical sense. Thus, although signal and/or state components of an electronic file and/or electronic document are to be associated logically, storage thereof, for example, may reside in one or more different places in a tangible, physical memory, in an embodiment.
Algorithmic descriptions and/or symbolic representations are examples of techniques used by those of ordinary skill in the signal processing and/or related arts to convey the substance of their work to others skilled in the art. An algorithm, in the context of the present patent application and generally, is considered to be a self-consistent sequence of operations and/or similar signal processing leading to a desired result. In the context of the present patent application, operations and/or processing involve physical manipulation of physical quantities. Typically, although not necessarily, such quantities may take the form of electrical and/or magnetic signals and/or states capable of being stored, transferred, combined, compared, processed and/or otherwise manipulated, for example, as electronic signals and/or states making up components of various forms of digital content, such as signal measurements, text, images, video, audio, etc.
It has proven convenient at times, principally for reasons of common usage, to refer to such physical signals and/or physical states as bits, values, elements, parameters, symbols, characters, terms, samples, observations, weights, numbers, numerals, measurements, content and/or the like. It should be understood, however, that all of these and/or similar terms are to be associated with appropriate physical quantities and are merely convenient labels. Unless specifically stated otherwise, as apparent from the preceding discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “establishing,” “obtaining,” “identifying,” “selecting,” “generating,” and/or the like may refer to actions and/or processes of a specific apparatus, such as a special purpose computer and/or a similar special purpose computing and/or network device. In the context of this specification, therefore, a special purpose computer and/or a similar special purpose computing and/or network device is capable of processing, manipulating and/or transforming signals and/or states, typically in the form of physical electronic and/or magnetic quantities, within memories, registers, and/or other storage devices, processing devices, and/or display devices of the special purpose computer and/or similar special purpose computing and/or network device. In the context of this particular patent application, as mentioned, the term “specific apparatus” therefore includes a general purpose computing and/or network device, such as a general purpose computer, once it is programmed to perform particular functions, such as pursuant to program software instructions.
In some circumstances, operation of a memory device, such as a change in state from a binary one to a binary zero or vice-versa, for example, may comprise a transformation, such as a physical transformation. With particular types of memory devices, such a physical transformation may comprise a physical transformation of an article to a different state or thing. For example, but without limitation, for some types of memory devices, a change in state may involve an accumulation and/or storage of charge or a release of stored charge. Likewise, in other memory devices, a change of state may comprise a physical change, such as a transformation in magnetic orientation. Likewise, a physical change may comprise a transformation in molecular structure, such as from crystalline form to amorphous form or vice-versa. In still other memory devices, a change in physical state may involve quantum mechanical phenomena, such as superposition, entanglement, and/or the like, which may involve quantum bits (qubits), for example. The foregoing is not intended to be an exhaustive list of all examples in which a change in state from a binary one to a binary zero or vice-versa in a memory device may comprise a transformation, such as a physical, but non-transitory, transformation. Rather, the foregoing is intended to provide illustrative examples.
Referring again to
One particular embodiment disclosed herein is directed to an article comprising: a storage medium comprising computer-readable instructions stored thereon that are executable by one or more processors of a computing device to: convolve image signal intensity values associated with at least a portion of pixel locations in an image frame with kernel coefficients to provide an output image signal intensity value mapped to an output pixel location in the image frame, wherein: kernel coefficients to be applied to image signal intensity values for pixels in a first region of the at least a portion of pixel locations in the image frame are to be selected from a set of coefficient values such that the same coefficient value is to be applied to image signal intensity values of multiple pixel locations in the first region. In one particular implementation, pixel locations in the first region are mapped to the same coefficient value of the set of coefficient values based, at least in part, on full-granularity coefficient values computed for the pixel locations in the first region. For example, the same coefficient value is computed based, at least in part, on full-granularity coefficient values including at least some of the full-granularity coefficient values computed for the pixel locations. In another example, the same coefficient value is computed as an average of the full-granularity coefficient values. In yet another example, the pixel locations in the first region are mapped to the same coefficient value based, at least in part, on an association of the full-granularity coefficient values with a range of values including the same coefficient value. In another particular implementation, the same coefficient value is selected to be applied to the image signal intensity values of the multiple pixel locations based, at least in part, on a location of the first region relative to the output pixel location. For example, the first region may be peripheral to a second region of the at least a portion of pixel locations in the image frame, the second region containing the output pixel location. In another example, the first region may be at a vertical periphery of the second region; a single kernel coefficient may be applied to image signal intensity values of multiple pixel locations in the first region extending in a vertical direction; and kernel coefficients may be applied with full granularity in a horizontal dimension. In yet another example, the first region may be at a lateral periphery of the second region; a single kernel coefficient may be applied to image signal intensity values of multiple pixel locations in the first region extending in a horizontal direction; and kernel coefficients may be applied with full granularity in a vertical dimension. In yet another particular implementation, convolution of image signal intensity values associated with pixel locations in the first region may comprise: computation of a sum of image signal intensity values associated with two or more pixel locations in the first region; computation of a product of the summed image signal intensity values and the same coefficient value; and determination of the output image signal intensity value based, at least in part, on the computed product.
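By way of non-limiting illustration only, the following sketch, expressed in Python with the NumPy library, shows one possible reading of the foregoing sum-then-multiply computation; the function name, the choice of a 5x5 kernel footprint, and the selection of the footprint's top two rows as the first region are assumptions for illustration and are not drawn from the disclosure:

import numpy as np

def reduced_granularity_convolve(frame, kernel, out_r, out_c):
    # Sketch: evaluate a 5x5 kernel at one output pixel location,
    # collapsing the coefficients of a peripheral first region (here
    # the top two rows of the footprint) to a single shared value.
    # Assumes the footprint lies fully inside the frame.
    kh, kw = kernel.shape
    rr, cc = out_r - kh // 2, out_c - kw // 2
    window = frame[rr:rr + kh, cc:cc + kw]

    # Same coefficient value for the first region, computed here as an
    # average of the full-granularity coefficient values it replaces.
    shared = kernel[:2, :].mean()

    # Sum the region's image signal intensity values, then perform a
    # single multiplication by the shared coefficient value.
    acc = window[:2, :].sum() * shared

    # Second region (containing the output pixel location): kernel
    # coefficients are applied at full granularity.
    acc += (window[2:, :] * kernel[2:, :]).sum()
    return acc

For a uniform kernel (e.g., a normalized 5x5 box filter), the reduced-granularity result matches the full-granularity convolution exactly, since the average of equal coefficients is the coefficient itself; for a non-uniform kernel, the result is an approximation obtained at a reduced multiplication count.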
In yet another particular implementation, the instructions may be further executable by the one or more processors to: map image signal intensity values for at least some pixel locations in the first region of the image frame to a single image signal intensity value; multiply the single image signal intensity value by a coefficient value selected from the set of coefficient values to compute a product; and determine the output image signal intensity value based, at least in part, on the computed product. For example, the instructions may be executable to average the image signal intensity values to obtain the map. In another particular implementation, the instructions are further executable by the one or more processors to select between and/or among multiple modes to convolve the image signal intensity values associated with the at least a portion of pixel locations in the image frame, the multiple modes to convolve including at least a first mode comprising application of the same coefficient value to image signal intensity values of the multiple pixel locations in the first region. For example, the multiple modes to convolve may further include a second mode comprising skipping image signal intensity values in a color channel for at least some pixel locations in the first region.
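Continuing the non-limiting illustration above, the sketch below contrasts the two modes just described; the mode names and the choice of the footprint's top two rows as the first region are, again, assumptions for illustration rather than terms of the disclosure:

import numpy as np

def first_region_contribution(window, kernel, mode="share"):
    # Contribution of the first region (assumed: top two rows of the
    # kernel footprint) to one output pixel, under a selected mode.
    region_px, region_k = window[:2, :], kernel[:2, :]
    if mode == "share":
        # First mode: map the region's image signal intensity values
        # to a single value (here an average), then multiply once by a
        # coefficient value selected from the set; the sum of the
        # replaced full-granularity coefficients is used here so that
        # the product stays on the scale of a full convolution.
        return region_px.mean() * region_k.sum()
    if mode == "skip":
        # Second mode: image signal intensity values for pixel
        # locations in the first region are skipped entirely.
        return 0.0
    raise ValueError(f"unknown mode: {mode}")

A complete output value would add the second region's full-granularity term, for example (window[2:, :] * kernel[2:, :]).sum(), to the contribution returned above.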
Another particular embodiment disclosed herein is directed to an apparatus comprising: a memory storage device; one or more processors coupled to the memory storage device, the one or more processors to: convolve image signal intensity values associated with at least a portion of pixel locations in an image frame with kernel coefficients to provide an output image signal intensity value mapped to an output pixel location in the image frame, wherein: kernel coefficients to be applied to image signal intensity values for pixels in a first region of the at least a portion of pixel locations in the image frame are to be selected from a set of coefficient values such that the same coefficient value is to be applied to image signal intensity values of multiple pixel locations in the first region. In one particular implementation, pixel locations in the first region are mapped to the same coefficient value of the set of coefficient values based, at least in part, on full-granularity coefficient values computed for the pixel locations in the first region. For example, the same coefficient value is computed based, at least in part, on full-granularity coefficient values including at least some of the full-granularity coefficient values computed for the pixel locations. In another example, the same coefficient value is computed as an average of the full-granularity coefficient values. In yet another example, the pixel locations in the first region are mapped to the same coefficient value based, at least in part, on an association of the full-granularity coefficient values with a range of values including the same coefficient value. In another particular implementation, the same coefficient value is selected to be applied to the image signal intensity values of the multiple pixel locations based, at least in part, on a location of the first region relative to the output pixel location. For example, the first region may be peripheral to a second region of the at least a portion of pixel locations in the image frame, the second region containing the output pixel location. In another example, the first region may be at a vertical periphery of the second region; a single kernel coefficient may be applied to image signal intensity values of multiple pixel locations in the first region extending in a vertical direction; and kernel coefficients may be applied with full granularity in a horizontal dimension. In yet another example, the first region may be at a lateral periphery of the second region; a single kernel coefficient may be applied to image signal intensity values of multiple pixel locations in the first region extending in a horizontal direction; and kernel coefficients may be applied with full granularity in a vertical dimension. In yet another particular implementation, convolution of image signal intensity values associated with pixel locations in the first region may comprise: computation of a sum of image signal intensity values associated with two or more pixel locations in the first region; computation of a product of the summed image signal intensity values and the same coefficient value; and determination of the output image signal intensity value based, at least in part, on the computed product.
In yet another particular implementation, the one or more processors are further to: map image signal intensity values for at least some pixel locations in the first region of the image frame to a single image signal intensity value; multiply the single image signal intensity value by a coefficient value selected from the set of coefficient values to compute a product; and determine the output image signal intensity value based, at least in part, on the computed product. For example, the one or more processors may be further to average the image signal intensity values to obtain the map. In another particular implementation, the one or more processors may be further to select between and/or among multiple modes to convolve the image signal intensity values associated with the at least a portion of pixel locations in the image frame, the multiple modes to convolve including at least a first mode comprising application of the same coefficient value to image signal intensity values of the multiple pixel locations in the first region. For example, the multiple modes to convolve may further include a second mode comprising skipping image signal intensity values in a color channel for at least some pixel locations in the first region.
Yet another particular embodiment disclosed herein is directed to an article comprising: a storage medium comprising computer-readable instructions stored thereon that are executable by one or more processors of a computing device to: map original image signal intensity values of a plurality of contiguous pixel locations in a portion of an image frame to a single image signal intensity value to be representative of the contiguous pixel locations in an augmented portion of the image frame; and convolve image signal intensity values associated with pixel locations in the augmented portion of the image frame by applying one or more kernel coefficients to the image signal intensity values associated with the pixel locations in the augmented portion of the image frame. In one particular implementation, the instructions are further executable by the one or more processors to determine the single image signal intensity value based, at least in part, on an average of the original image signal intensity values. In another example, the instructions may be further executable by the one or more processors to determine the single image signal intensity value based, at least in part, on selection of a representative image signal intensity value from among the original image signal intensity values. In yet another particular implementation, kernel coefficients to be applied to image signal intensity values for pixels in the augmented portion of the image frame may be selected from a set of coefficient values such that the same coefficient value is to be applied to image signal intensity values of multiple pixel locations in the augmented portion of the image frame.
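As a non-limiting illustration of mapping contiguous pixel locations to a single representative value before convolution, the following sketch averages (or selects a representative value from) each 2x2 block of a frame to form the augmented portion, and then convolves it; the block size, the function and parameter names, and the sliding-window arrangement are illustrative assumptions, not features of the disclosure:

import numpy as np

def augment_and_convolve(frame, kernel, block=(2, 2), reduce="mean"):
    # Map each contiguous block of pixel locations to a single
    # representative image signal intensity value, forming an
    # augmented portion of the frame.  Assumes the frame dimensions
    # divide evenly by the block size.
    bh, bw = block
    h, w = frame.shape
    blocks = frame.reshape(h // bh, bh, w // bw, bw)
    if reduce == "mean":
        augmented = blocks.mean(axis=(1, 3))   # average of each block
    else:
        augmented = blocks[:, 0, :, 0]         # representative (top-left) pixel

    # Convolve (strictly, correlate) the augmented portion with the
    # kernel at every position where the kernel fits entirely.
    kh, kw = kernel.shape
    ah, aw = augmented.shape
    out = np.empty((ah - kh + 1, aw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (augmented[r:r + kh, c:c + kw] * kernel).sum()
    return out

Because the augmented portion contains fewer pixel locations than the original portion, subsequent application of kernel coefficients involves correspondingly fewer multiply-accumulate operations; coefficient sharing as described above may additionally be applied within the augmented portion.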
Yet another particular embodiment disclosed herein is directed to an apparatus comprising: a memory storage device; one or more processors coupled to the memory storage device, the one or more processors to: map original image signal intensity values of a plurality of contiguous pixel locations in a portion of an image frame to a single image signal intensity value to be representative of the contiguous pixel locations in an augmented portion of the image frame; and convolve image signal intensity values associated with pixel locations in the augmented portion of the image frame by applying one or more kernel coefficients to the image signal intensity values associated with the pixel locations in the augmented portion of the image frame. In one particular implementation, the one or more processors are further to determine the single image signal intensity value based, at least in part, on an average of the original image signal intensity values. In another example, the one or more processors may be further to determine the single image signal intensity value based, at least in part, on selection of a representative image signal intensity value from among the original image signal intensity values. In yet another particular implementation, kernel coefficients to be applied to image signal intensity values for pixels in the augmented portion of the image frame may be selected from a set of coefficient values such that the same coefficient value is to be applied to image signal intensity values of multiple pixel locations in the augmented portion of the image frame.
In the preceding description, various aspects of claimed subject matter have been described. For purposes of explanation, specifics, such as amounts, systems, and/or configurations, were set forth as examples. In other instances, well-known features were omitted and/or simplified so as not to obscure claimed subject matter.
While certain features have been illustrated and/or described herein, many modifications, substitutions, changes and/or equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all modifications and/or changes as fall within claimed subject matter.