The present disclosure relates generally to processing systems and, more particularly, to one or more techniques for image processing in processing systems.
Computing devices often utilize an image signal processor (ISP), a central processing unit (CPU), a graphics processing unit (GPU), an image processor, or a video processor to accelerate the generation of image, video, or graphical data. Such computing devices may include, for example, computer workstations, mobile phones such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles. ISPs or CPUs can execute image, video, or graphics processing systems that include multiple processing stages operating together to execute image, video, or graphics processing commands and output one or more frames. In some aspects, a CPU may control the operation of one or more additional processors by issuing one or more image, video, or graphics processing commands. Modern-day CPUs are typically capable of concurrently executing multiple applications, each of which may need to utilize another processor during execution. A device that provides content for visual presentation on a display may include an ISP, a GPU, or a CPU.
ISPs, GPUs, or CPUs can be configured to perform multiple processes in an image, video, or graphics processing system. With the advent of faster communication and an increase in the quality of content, e.g., any content that is generated using an ISP, GPU, or CPU, a need has developed for improved image, video, or graphics processing.
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may be an image processor. In one aspect, the apparatus may obtain a first image including multiple first pixels, where each first pixel has a first pixel width. The apparatus can also determine a scale factor for scaling the first image from the multiple first pixels to multiple second pixels. In some aspects, a number of the multiple second pixels can be less than a number of the multiple first pixels, where each second pixel has a second pixel width. Additionally, the apparatus can determine a value for each second pixel based on a weighted average, where each component of the weighted average can be a function of an overlapping area associated with each second pixel and values of corresponding first pixels of the multiple first pixels. In some aspects, the overlapping area can be centered on each second pixel and have a width greater than the second pixel width. Moreover, the apparatus can generate a second image based on the determined values for each second pixel, where the second image can be a downscaled image from the first image. In some aspects, the apparatus can be a wireless communication device.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
Various aspects of systems, apparatuses, computer program products, and methods are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of this disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of this disclosure is intended to cover any aspect of the systems, apparatuses, computer program products, and methods disclosed herein, whether implemented independently of, or combined with, other aspects of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. Any aspect disclosed herein may be embodied by one or more elements of a claim.
Although various aspects are described herein, many variations and permutations of these aspects fall within the scope of this disclosure. Although some potential benefits and advantages of aspects of this disclosure are mentioned, the scope of this disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of this disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description. The detailed description and drawings are merely illustrative of this disclosure rather than limiting, the scope of this disclosure being defined by the appended claims and equivalents thereof.
Several aspects are presented with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, and the like (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors (which may also be referred to as processing units). Examples of processors include image signal processors (ISPs), central processing units (CPUs), graphics processing units (GPUs), image processors, video processors, microprocessors, microcontrollers, application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The term application may refer to software. As described herein, one or more techniques may refer to an application (i.e., software) being configured to perform one or more functions. In such examples, the application may be stored on a memory (e.g., on-chip memory of a processor, system memory, or any other memory). Hardware described herein, such as a processor, may be configured to execute the application. For example, the application may be described as including code that, when executed by the hardware, causes the hardware to perform one or more techniques described herein. As an example, the hardware may access the code from a memory and execute the code accessed from the memory to perform one or more techniques described herein. In some examples, components are identified in this disclosure. In such examples, the components may be hardware, software, or a combination thereof. The components may be separate components or sub-components of a single component.
Accordingly, in one or more examples described herein, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can be a random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
As used herein, instances of the term “content” may refer to image content, high dynamic range (HDR) content, video content, graphical content, or display content. In some examples, as used herein, the phrases “image content” or “video content” may refer to content generated by a processing unit configured to perform image or video processing. For example, the phrases “image content” or “video content” may refer to content generated by one or more processes of an image or video processing system. In some examples, as used herein, the phrases “image content” or “video content” may refer to content generated by an ISP or a CPU. In some examples, as used herein, the term “display content” may refer to content generated by a processing unit configured to perform display processing. In some examples, as used herein, the term “display content” may refer to content generated by a display processing unit. Image or video content may be processed to become display content. For example, an ISP or CPU may output image or video content, such as a frame, to a buffer, e.g., which may be referred to as a frame buffer. A display processing unit may read the image or video content, such as one or more frames from the buffer, and perform one or more display processing techniques thereon to generate display content. For example, a display processing unit may be configured to perform composition on one or more generated layers to generate a frame. As another example, a display processing unit may be configured to compose, blend, or otherwise combine two or more layers together into a single frame. A display processing unit may be configured to perform scaling, e.g., upscaling or downscaling, on a frame. In some examples, a frame may refer to a layer. In other examples, a frame may refer to two or more layers that have already been blended together to form the frame, i.e., the frame includes two or more layers, and the frame that includes two or more layers may subsequently be blended.
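As a brief, hedged illustration of the layer composition mentioned above, the following sketch blends two layers into a single frame with a constant alpha. Real display processing units operate on multi-channel frames in hardware; the single-channel, list-of-rows representation, the constant-alpha model, and all names here are illustrative assumptions rather than anything specified by this disclosure.

```python
# Hypothetical sketch of layer composition: blend two single-channel
# layers into one frame using a constant alpha. The representation and
# the constant-alpha model are assumptions for illustration only.
def blend_layers(bottom: list[list[float]], top: list[list[float]],
                 alpha: float) -> list[list[float]]:
    return [[(1.0 - alpha) * b + alpha * t for b, t in zip(brow, trow)]
            for brow, trow in zip(bottom, top)]
```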
In some aspects, CPU 108 can run or perform a variety of algorithms for system 100. CPU 108 may also include one or more components or circuits for performing various functions described herein. For instance, the CPU 108 may include a processing unit, a content encoder, a system memory, and/or a communication interface. The processing unit, the content encoder, or the system memory may each include an internal memory. In some aspects, the processing unit or content encoder may be configured to receive a value for each component, e.g., each color component of one or more pixels of image or video content. As an example, a pixel in the red (R), green (G), blue (B) (RGB) color space may include a first value for the red component, a second value for the green component, and a third value for the blue component. The system memory or internal memory may include one or more volatile or non-volatile memories or storage devices. In some examples, the system memory or the internal memory may include RAM, static RAM (SRAM), dynamic RAM (DRAM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a magnetic data medium, an optical storage medium, or any other type of memory.
The system memory or internal memory may also be a non-transitory storage medium according to some examples. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that the system memory or internal memory is non-movable or that its contents are static. As one example, the system memory or internal memory may be removed from the CPU 108 and moved to another component. As another example, the system memory or internal memory may not be removable from the CPU 108.
CPU 108 may also include a processing unit, which may be an ISP, a GPU, an image processor, a video processor, or any other processing unit that may be configured to perform image or video processing. In some examples, the processing unit may be integrated into a component of the CPU 108, e.g., a motherboard, or may be otherwise incorporated within a peripheral device configured to interoperate with the CPU 108. The processing unit of CPU 108 may also include one or more processors, such as one or more microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), arithmetic logic units (ALUs), digital signal processors (DSPs), discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the processing unit may store instructions for the software in a suitable, non-transitory computer-readable storage medium, e.g., system memory or internal memory, and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered to be one or more processors.
In some aspects of system 100, once ISP 104 processes the multiple frames, ISP 104 can produce a frame buffer 114 for each frame. In some instances, the frame buffer 114 can be stored or saved in a memory, e.g., the system memory or internal memory. In other instances, the frame buffer can be stored, saved, or processed in the ASIC 120. ASIC 120 can process the images or frames after the ISP 104. Additionally, ASIC 120 can process data stored in the frame buffer 114. In other aspects, the ASIC 120 can be a programmable engine, e.g., a processing unit or GPU.
In another aspect of system 100, image processing unit 122 or video processing unit 124 can receive the images or frames from ASIC 120. For instance, in some aspects, image processing unit 122 or video processing unit 124 can process or combine the multiple frames from ASIC 120. Image processing unit 122 or video processing unit 124 can then send the frames to display 126. In some aspects, the display 126 may include a display processor to perform display processing on the multiple frames. More specifically, the display processor may be configured to perform one or more display processing techniques on the one or more frames generated by the camera 102, e.g., via image processing unit 122 or video processing unit 124.
In some aspects, the display 126 may be configured to display content that was previously generated. For instance, the display 126 may be configured to display or otherwise present frames that were previously processed. In some aspects, the display 126 may include a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, a projection display device, an augmented reality display device, a virtual reality display device, a head-mounted display, and/or any other type of display device. Display 126 may also include a single display or multiple displays, such that any reference to display 126 may refer to one or more displays 126. For example, the display 126 may include a first display and a second display. In some instances, the first display may be a left-eye display and the second display may be a right-eye display. In these instances, the first and second display may receive different frames for presentment thereon. In other examples, the first and second display may receive the same frames for presentment thereon.
Some aspects of the present disclosure can convert or adjust the size of an image, such as by increasing, i.e., upscaling, or decreasing, i.e., downscaling, the size of the original image. In order to downscale or decrease the size of an image, some aspects of the present disclosure can utilize a downscaler. In some aspects, a downscaler can be used for a number of different purposes by an image signal processor (ISP). For example, the downscaler can be an algorithm used to downscale images. Further, downscalers herein can be part of a camera processing pipeline, where the algorithm can be used to downscale images from a camera in a processing pipeline. In further aspects, downscalers can be part of the ISP within the hardware, e.g., a chip or a part thereof in a camera pipeline. In some aspects, a downscaler can be an algorithm used to downscale a stored image (e.g., post-ISP processing and/or not part of the camera processing pipeline).
Downscaling can be used for a variety of reasons. For instance, an image may need to be resized, e.g., to fit another display format or to alter the viewing magnification. In these instances, downscalers can be used to reduce the size of an image while maintaining the image quality. Some examples of applications that utilize downscaling are web browsers, image magnifiers and other zooming applications, and/or image editors.
Downscalers can have a number of different benefits when converting or downscaling images, such as having low implementation costs compared to other image conversion methods. Further, downscalers can be relatively simple to implement compared to other image conversion methods, e.g., only one line buffer may be needed to calculate a downscaled image. Downscalers herein can also have a flexible scale factor, i.e., the amount that each image is downscaled or upscaled, such that images can be reduced or increased in size by any desired amount. In some aspects, downscalers can convert or decrease the size of an image by a scale factor greater than 1. The scale factor can be any number, such that images can be decreased in size by any amount. Moreover, a large downscaling ratio may not affect other aspects of the downscaling operation, e.g., the number of line buffers needed to calculate a downscaled image may not increase based on a large downscaling ratio. However, in some aspects, downscalers can have issues, as discussed in detail below, when the downscaling ratio is within a certain range, e.g., from 1 to 1.3. Additionally, downscalers according to the present disclosure can have a high downscaling quality, such that the downscaling can be equivalent to a high quality of downscaling, e.g., bilinear downscaling. Further, downscalers according to the present disclosure may be able to support any type of downscaling ratio with a simple calculation.
Some aspects of downscaling can produce unwanted side effects. For instance, when an image is downscaled, a number of noise artifacts can be produced. In some instances, these noise artifacts can disrupt the uniformity of the downscaled image. For example, the noise artifacts produced may be in the form of a grid that matches the grid used to produce the downscaled image from the originally sized image. These grid pattern artifacts may be the result of random noise in the image. For example, when there is no noise in the image, then the grid patterns may not be present. In some aspects, the aforementioned noise can be the result of neighboring pixel values being independent of one another (i.e., different color values).
In other aspects, these unwanted noise artifacts may be produced when downscaling with a certain range of scale factors. As such, in some aspects, when the downscaling ratio is closer to 1, noticeable grid patterns may be produced. For example, in some aspects, downscaling when using a scale factor of 1 to 1.3 may produce more noise artifacts than when using a different scale factor. Accordingly, some downscalers may not be able to support a small downscaling ratio. However, when downscaling with a higher scale factor, these grid pattern artifacts may not be present. For example, a scale factor or downscale ratio of 1.5 or above may not produce the aforementioned grid pattern artifacts. Moreover, these noise artifacts may be produced when using any number of downscaling methods, e.g., bilinear, bicubic, etc. In some instances, downscalers may merely crop an image in order to reduce its size. However, merely cropping the image can have a number of unwanted side effects, such as negatively influencing the image or decreasing the field of view (FOV). In contrast, downscaling an image can avoid many of these negative side effects.
As discussed above, cropping has unwanted side effects, such as negatively influencing the image or decreasing the FOV. Thus, reducing the size of an image without downscaling, e.g., cropping an image, may not be appropriate in many circumstances, e.g., when the full FOV is desired in the adjusted image.
Furthermore, there is a growing use of high quality images and a corresponding need to downscale at a high quality and with a small scale factor. In some instances, a downscaled image, even by a small scale factor, can save bandwidth and power to transmit and memory to store, particularly for high quality images. For example, the savings on both bandwidth and power when transmitting a downscaled image can be as much as 40%.
Downscaled pixel values according to the present disclosure can be calculated in a number of different ways. In some aspects, a downscaled pixel value can be calculated based on the average, sum, or combination of overlapping or covered pixel values from the original image. For instance, a downscaled pixel value may be the average of the original pixel values that overlap the downscaled pixel based on its phase, i.e., sampling point. In some aspects, the phase or sampling point can be the location of the center of the downscaled pixel expressed in the coordinates of the original image. Based on the location of each downscaled pixel, the phase values compared to the original image pixels may change on a pixel-by-pixel basis, e.g., as the location of each different downscaled pixel will change.
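As a concrete illustration of the phase computation, the following is a minimal sketch, not taken from the disclosure itself; it assumes a uniform grid in which original pixel k spans the interval [k, k + 1) along one axis, and all names are illustrative.

```python
# Minimal sketch (illustrative names, assumed uniform grid): compute the
# phase, or sampling point, of each downscaled pixel in original-image
# coordinates, where original pixel k spans [k, k + 1) along one axis.
def sampling_points(num_output_pixels: int, scale: float) -> list[float]:
    """Center of each downscaled pixel, in original-pixel units."""
    return [(i + 0.5) * scale for i in range(num_output_pixels)]
```

For example, with a scale factor of 1.25 the centers fall at 0.625, 1.875, 3.125, and so on, so the phase relative to the nearest original pixel changes from one downscaled pixel to the next.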
As mentioned above, downscaling can produce unwanted artifacts in the downscaled image. The reason for these artifacts is that different downscaled pixels sample a different percentage of the original image pixels. For instance, in a particularly noisy image, the percentages sampled from each original pixel to determine a downscaled pixel may change with each different downscaled pixel. Accordingly, the noise level of each downscaled pixel may keep changing. For example, if one downscaled pixel value is calculated based on a sampling percentage of four original image pixels, another downscaled pixel value may use a different sampling percentage of four different original image pixels. In some aspects, the amount of noise in each downscaled image pixel may change based on the percentages taken from each of the original image pixels, which may generate a pattern of artifacts in the final downscaled image. For example, if a downscaled pixel value is determined based on four equally weighted original pixel values, the downscaled pixel value will appear to have less noise than if a downscaled pixel value is determined based on four original pixel values where one or two original pixel values are weighted higher than the remaining pixels.
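The connection between weighting and noise can be made precise with a standard result for independent noise; the notation here is ours, not the disclosure's. If the original pixel values $P_i$ carry independent noise of variance $\sigma^2$ and the weights $w_i$ sum to one, the noise variance of the weighted average is

$$\operatorname{Var}\!\left(\sum_i w_i P_i\right) = \sigma^2 \sum_i w_i^2,$$

which is smallest when all weights are equal. For example, four equal weights of 1/4 give a noise variance of $\sigma^2/4$, whereas weights of 0.7, 0.1, 0.1, and 0.1 give $0.52\,\sigma^2$, more than twice as much, which is why unevenly weighted downscaled pixels appear noisier.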
One of the causes of the unwanted grid pattern of artifacts may be the difference in noise between the original image pixels. In some aspects, the grid pattern of noise artifacts may be more pronounced based on the scaling factor or downscaling ratio. For instance, the grid pattern of noise artifacts may increase when using a smaller scaling factor or downscaling ratio, e.g., a scaling factor of 1.00001 to 1.3. Likewise, the grid pattern of noise artifacts may decrease, may not be visible, or may not be present when using a larger scaling factor or downscaling ratio, e.g., a scaling factor greater than 1.5.
As mentioned above, the downscaled pixels that are calculated most closely based on averaging original pixels may include less noise. Accordingly, when four original pixel values are roughly averaged, the downscaled pixel value is not very similar to any single original pixel value. Indeed, in these downscaled pixel locations, because the calculation roughly averages original pixel values, there may not be much noise, but the result will differ from any original pixel value, so the location will appear blurry. In contrast, when an individual original pixel has more weight in calculating a downscaled pixel, it will be a dominant pixel and may result in more noise at that location. These noisy locations look more like an individual original non-scaled pixel, so they will be closest to one original pixel and hence may not appear blurry. When a downscaled image includes areas with increased noise combined with areas with decreased noise, it can be visually unpleasant. This disparity in the amount of noise within the same image is one of the issues presented when downscaling images.
The present disclosure can solve the aforementioned noise issues based on a number of approaches. In some aspects, the present disclosure can add an overlapping pixel range when calculating the downscaled pixel values. For instance, rather than using a pixel area equal in size to a downscaled pixel, in the present disclosure a pixel area greater in size than a downscaled pixel may be used. In these instances, for each direction surrounding a downscaled pixel, a uniformly spaced overlapping area of original pixel values can be added to the calculation. By adding these overlapping areas, the present disclosure can ensure that a wider range of original pixel values will be used in the downscaled pixel calculation by sampling from a greater amount of original pixel data. As the percentage or weight for each original pixel used will be more equal and/or include more components, the amount of noise in the downscaled pixel may be reduced.
In some aspects, when performing the pixel interpolation, e.g., adjusting the pixels in an image due to the image being resized or downscaled, the present disclosure can add the overlapping area to increase the covering pixel range in order to compensate for noise non-uniformity in the original image. Additionally, the noise distribution may correspond to using a larger scale factor or downscale ratio, which does not produce as many grid pattern artifacts. In some aspects, as the noise distribution may not manifest itself in grid pattern artifacts, the output image size may be downscaled using a small scale factor or downscale ratio. Moreover, in some instances, adding this overlapping area may result in an increased cost in other aspects of the calculation, e.g., an increase in the number of line buffers needed during calculation in order to account for the increased pixel coverage.
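A simplified one-dimensional sketch of this overlapping-window idea follows. The extension amount is treated as a free parameter, and the representation and names are assumptions chosen for illustration rather than the disclosure's implementation.

```python
# Simplified 1-D sketch of downscaling with an enlarged, overlapping
# window (illustrative; the extension amount `extend` is a free
# parameter, not a value from the disclosure). Each output pixel
# nominally covers `scale` original pixels; `extend` widens the window
# symmetrically, so neighboring output pixels sample overlapping sets
# of original pixels and the per-pixel weights become more uniform.
def downscale_1d(src: list[float], scale: float, extend: float) -> list[float]:
    dst = []
    for i in range(int(len(src) / scale)):
        center = (i + 0.5) * scale             # phase of output pixel i
        lo = center - scale / 2.0 - extend     # enlarged window start
        hi = center + scale / 2.0 + extend     # enlarged window end
        acc = weight_sum = 0.0
        for k in range(max(0, int(lo)), min(len(src), int(hi) + 1)):
            # Overlap length between the window [lo, hi] and pixel [k, k + 1).
            w = max(0.0, min(hi, k + 1.0) - max(lo, k))
            acc += w * src[k]
            weight_sum += w
        dst.append(acc / weight_sum)
    return dst
```

With extend = 0 this reduces to ordinary area-average downscaling; increasing extend trades a small amount of blur for more uniform noise across the output pixels.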
In one aspect, center downscaled pixel 622 can have a second pixel area equal to the second pixel width 652 multiplied by the second pixel height 654. Overlapping region 640 can have an overlapping area equal to overlapping width 642 multiplied by overlapping height 644. The overlapping area can be greater than the second pixel area.
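The individual areas entering the weighted average described below can be computed as rectangle intersections. A minimal sketch follows, with each rectangle given as an assumed (left, top, right, bottom) tuple in original-image coordinates; the representation is illustrative only.

```python
# Minimal sketch: area of intersection between the overlapping region and
# one original pixel, each given as an assumed (left, top, right, bottom)
# tuple in original-image coordinates.
def overlap_area(region: tuple, pixel: tuple) -> float:
    width = min(region[2], pixel[2]) - max(region[0], pixel[0])
    height = min(region[3], pixel[3]) - max(region[1], pixel[1])
    return max(0.0, width) * max(0.0, height)
```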
In further aspects, a number of components of the weighted average can be based on a scale factor. For instance, in some aspects, the number of components of the weighted average can be greater than or equal to four when the scale factor is less than 1.5. In other aspects, the range of the scale factor can be between 1.00001 and 1.3. In yet other aspects, the scale factor can be determined based on the amount of noise in the second image or based on a user input.
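As a rough, assumed illustration of how the component count relates to the scale factor: an enlarged window spans scale + 2*extend original pixels along each axis, and a segment of that length placed arbitrarily on a unit grid can touch at most ceil(scale + 2*extend) + 1 pixels per axis, and at least two whenever its length exceeds one. Since the scale factor is greater than 1, the two-dimensional weighted average therefore has at least four components, consistent with the statement above. The following counts the upper bound; the formula and names are ours, not the disclosure's.

```python
import math

# Assumed upper bound on the number of weighted-average components: the
# enlarged window spans scale + 2*extend original pixels per axis, so it
# can touch at most ceil(scale + 2*extend) + 1 pixels along each axis,
# and the square of that count in two dimensions.
def max_components(scale: float, extend: float) -> int:
    per_axis = math.ceil(scale + 2.0 * extend) + 1
    return per_axis * per_axis
```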
As mentioned above, some aspects of the present disclosure can utilize a weighted average to calculate the values of individual pixels. In one aspect, the value of the center downscaled pixel 622 may be determined based on a weighted average that is a function of the area 681 multiplied by the value of the pixel 631, the area 682 multiplied by the value of the pixel 632, the area 683 multiplied by the value of the pixel 633, the area 684 multiplied by the value of the pixel 634, the area 685 multiplied by the value of the pixel 635, the area 686 multiplied by the value of the pixel 636, the area 687 multiplied by the value of the pixel 637, the area 688 multiplied by the value of the pixel 638, and the area 689 multiplied by the value of the pixel 639. For example, the value of center downscaled pixel 622, e.g., P622, may be calculated based on the following equation: P622 = [A681*P631 + A682*P632 + A683*P633 + A684*P634 + A685*P635 + A686*P636 + A687*P637 + A688*P638 + A689*P639]/A640. In the above equation, P631, P632, P633, P634, P635, P636, P637, P638, and P639 are the color values of original pixels 631-639, respectively, A681, A682, A683, A684, A685, A686, A687, A688, and A689 are the portions of the overlapping area that overlap original pixels 631-639, respectively, and A640 is the area of overlapping region 640.
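Stated generically, the equation above is a normalized weighted sum. A direct transcription might look like the following sketch, which assumes overlapping region 640 is fully covered by the nine original pixels so that the nine overlap areas sum to A640:

```python
# Direct transcription of the equation for P622: the overlap-area-weighted
# average of the covered original pixel values. Assumes the overlapping
# region lies fully inside the image, so the overlap areas sum to its
# total area (A640 in the example above).
def weighted_pixel_value(areas: list[float], values: list[float]) -> float:
    total_area = sum(areas)  # equals A640 under the assumption above
    return sum(a * p for a, p in zip(areas, values)) / total_area
```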
As mentioned above, when using the overlapping techniques described herein, the least noisy areas of a downscaled image can be calculated using a near-equal average of the original pixels. In some instances, the least noisy areas of an original image may have a similar noise amount in the downscaled image. For relatively noisy areas, the noise amount may decrease because the overlapping region adds more original pixel areas to average during the calculation. Essentially, the use of an overlapping area such as overlapping region 640 includes more original pixels for averaging when calculating downscaled pixels.
As indicated previously, when overlapping areas such as overlapping region 640 are added, the area of coverage used to calculate the downscaled pixels is expanded, which can result in the reduction or elimination of the aforementioned grid artifacts. As mentioned above, when the scale factor or downscale ratio is small, e.g., between 1.00001 and 1.3, there is a tendency to produce noise pattern artifacts when using traditional downsizing methods. However, the use of overlapping regions, e.g., overlapping region 640, will reduce the likelihood of obtaining these noise pattern artifacts. Indeed, by adding an overlapping area and increasing the downsizing calculation area, the difference in original pixels used during the calculation is diluted, such that the grid pattern artifacts will be reduced. Accordingly, adding areas of overlap to the downscaling calculation will increase the percentage of areas that are being weighted, e.g., especially for smaller sampled areas, which may result in a more even distribution of original pixels used in the calculation.
In the example method illustrated in the accompanying drawings, the apparatus can first obtain a first image including multiple first pixels, where each first pixel has a first pixel width, and can then determine a scale factor for scaling the first image from the multiple first pixels to multiple second pixels. In some aspects, a number of the multiple second pixels can be less than a number of the multiple first pixels, where each second pixel has a second pixel width.
At 906, the apparatus can determine a value for each second pixel based on a weighted average, as described in connection with the examples above.
Additionally, the second pixel can have a second pixel area and the overlapping area can be greater than the second pixel area, as described in connection with the examples above.
Additionally, the overlapping area for each second pixel can surround the second pixel and extend past the second pixel, as described in connection with the examples above.
In some aspects, a number of components of the weighted average can be based on a scale factor value. For instance, the number of components of the weighted average can be greater than or equal to four when the scale factor is less than 1.5, as described in connection with the example above.
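Putting the steps together, the method can be sketched end to end as follows. This is an illustrative sketch under assumptions stated in the comments, not the disclosure's implementation: the image is assumed to be a list of rows of single-channel values, the window extension defaults to a quarter of an original pixel, and all names are ours.

```python
# Illustrative end-to-end sketch of the method: obtain a first image,
# apply a scale factor, compute each second (downscaled) pixel as an
# overlap-area-weighted average over an enlarged window centered on its
# sampling point, and assemble the second image. The list-of-rows,
# single-channel representation and the default extension are assumptions.
def downscale_image(src: list[list[float]], scale: float,
                    extend: float = 0.25) -> list[list[float]]:
    src_h, src_w = len(src), len(src[0])
    out_h, out_w = int(src_h / scale), int(src_w / scale)

    def axis_weights(center: float, limit: int):
        # 1-D overlaps between the enlarged window and original pixels;
        # the 2-D overlap area factors into a product of these.
        lo = center - scale / 2.0 - extend
        hi = center + scale / 2.0 + extend
        for k in range(max(0, int(lo)), min(limit, int(hi) + 1)):
            w = max(0.0, min(hi, k + 1.0) - max(lo, k))
            if w > 0.0:
                yield k, w

    dst = [[0.0] * out_w for _ in range(out_h)]
    for j in range(out_h):
        cy = (j + 0.5) * scale
        for i in range(out_w):
            cx = (i + 0.5) * scale
            acc = wsum = 0.0
            for ky, wy in axis_weights(cy, src_h):
                for kx, wx in axis_weights(cx, src_w):
                    acc += wy * wx * src[ky][kx]
                    wsum += wy * wx
            dst[j][i] = acc / wsum
    return dst
```

For instance, downscale_image(image, 1.2) reduces each dimension by a factor of 1.2 while each second pixel averages a window slightly wider than its nominal footprint, which is the overlapping-area behavior described above.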
In one configuration, a method or apparatus for image processing is provided. The apparatus may be an image processor or some other processor in a GPU. In some aspects, the apparatus may be the ISP 104, the CPU 108, the ASIC 120, the image processing unit 122, the video processing unit 124, or some other processor or hardware within system 100 or another device. In some aspects, the apparatus can be a wireless communication device. The apparatus may include means for obtaining a first image including a set of first pixels, where each first pixel has a first pixel width. The apparatus can also include means for determining a scale factor for scaling the first image from the set of first pixels to a set of second pixels. In some aspects, a number of the set of second pixels can be less than a number of the set of first pixels, where each second pixel has a second pixel width. The apparatus can also include means for determining a value for each second pixel based on a weighted average. In some aspects, each component of the weighted average can be a function of an overlapping area associated with each second pixel and values of corresponding first pixels of the set of first pixels. Further, the overlapping area can be centered on each second pixel and have a width greater than the second pixel width. Also, the apparatus can include means for generating a second image based on the determined values for each second pixel, where the second image can be a downscaled image from the first image.
The subject matter described herein can be implemented to realize one or more benefits or advantages. For instance, the described techniques herein can be used by image processors or other processors to help reduce or eliminate unwanted noise artifacts, such as through the use of overlapping areas of original image pixels when calculating a downscaled pixel. These overlapping areas can be adjustable as a parameter. In addition, the cost of adding these overlapping areas can be low, and they can be relatively simple to implement during the calculation of a downscaled pixel. Accordingly, the present disclosure can reduce grid pattern noise artifacts by adding overlapping areas, and these overlapping areas can be adjustable to achieve different effects based on different cases.
In accordance with this disclosure, the term “or” may be interpreted as “and/or” where context does not dictate otherwise. Additionally, while phrases such as “one or more” or “at least one” or the like may have been used for some features disclosed herein but not others, the features for which such language was not used may be interpreted to have such a meaning implied where context does not dictate otherwise.
In one or more examples, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. For example, although the term “processing unit” has been used throughout this disclosure, such processing units may be implemented in hardware, software, firmware, or any combination thereof. If any function, processing unit, technique described herein, or other module is implemented in software, the function, processing unit, technique described herein, or other module may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media may include computer data storage media or communication media including any medium that facilitates transfer of a computer program from one place to another. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can be RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. A computer program product may include a computer-readable medium.
The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), arithmetic logic units (ALUs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in any hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.