A method and apparatus are disclosed for identifying an object with specific characteristics and automatically obfuscating part or all of a digital image corresponding to that object. The obfuscation comprises pixelation, color alteration, and/or contrast alteration. The obfuscation optionally can be performed only when the digital image is being viewed by certain client devices.
As computing technology continually improves, the ability to quickly generate and render digital images on a display is becoming more and more sophisticated. Computer-generated images have become extremely realistic and often comprise layers of different details.
At the same time, realistic images are not always desirable for all viewers. For example, if a minor is operating a client device, the content provider may not want that minor to be able to see images containing adult content, such as images containing nudity, violence, or disturbing depictions. Numerous other reasons exist for wanting to shield certain users from certain content. For example, there may be privacy or intellectual property concerns with certain images, or the content provider may wish for only certain individuals, and not the general public, to be able to see the images.
To date, content has been shielded from viewers through access controls, for example, by preventing certain users from accessing certain content altogether, such as by denying access to a video file. This is an overly restrictive approach, as it prevents users from seeing the entire content even though the objectionable portion may be only a small portion of the overall content in terms of pixels or time displayed on the screen.
What is needed is a mechanism for automatically identifying an object for which obfuscation is desired, identifying the specific structure that should be obfuscated, and then obfuscating the structure prior to display on a screen. What is further needed is a mechanism for achieving this result in a way that does not detract from the viewing of the overall image containing the specific structure. What is further needed is the ability to perform such obfuscation only for certain client computing devices and not others.
A method and apparatus are disclosed for identifying an object with specific characteristics and automatically obfuscating part or all of a digital image corresponding to that object. The obfuscation comprises pixelation, color alteration, and/or contrast alteration. The obfuscation optionally can be performed only when the digital image is being viewed by certain client devices.
Processing unit 110 optionally comprises a microprocessor with one or more processing cores. Memory 120 optionally comprises DRAM or SRAM volatile memory. Non-volatile storage 130 optionally comprises a hard disk drive or flash memory array. Positioning unit 140 optionally comprises a GPS unit or GNSS unit that communicates with GPS or GNSS satellites to determine latitude and longitude coordinates for client device 100, usually output as latitude data and longitude data. Network interface 150 optionally comprises a wired interface (e.g., Ethernet interface) or wireless interface (e.g., 3G, 4G, GSM, 802.11, protocol known by the trademark “Bluetooth,” etc.). Image capture unit 160 optionally comprises one or more standard cameras (as is currently found on most smartphones and notebook computers). Graphics processing unit 170 optionally comprises a controller or processor for generating graphics for display. Display 180 displays the graphics generated by graphics processing unit 170, and optionally comprises a monitor, touchscreen, or other type of display.
With reference to
Client devices 100a, 100b, and 100c each communicate with server 300 using network interface 150. Server 300 runs server application 320. Server application 320 comprises lines of software code that are designed specifically to interact with client application 220.
Client application 220 and/or server application 320 comprise obfuscation engine 400, scaler 440, and object identification engine 450. Obfuscation engine 400 comprises pixelation engine 410, color engine 420, and/or contrast engine 430. Obfuscation engine 400, pixelation engine 410, color engine 420, contrast engine 430, scaler 440, and object identification engine 450 each comprise lines of software code executed by processing unit 110 and/or graphics processing unit 170, and/or comprise additional integrated circuitry, to perform certain functions. For example, scaler 440 might comprise software executed by processing unit 110 and/or graphics processing unit 170 and/or might comprise hardware scaling circuitry comprising integrated circuits.
Obfuscation engine 400 receives an input, typically comprising pixel data, and performs an obfuscation function using one or more of pixelation engine 410, color engine 420, contrast engine 430, and/or other engines on the input to generate an output, where the output can then be used to generate an image that is partially or wholly obfuscated.
Pixelation engine 410 performs an obfuscation function by receiving input pixel data and pixelating the received input pixel data to generate output pixel data, where the output pixel data generally contains fewer pixels than the input pixel data and each individual pixel in the output pixel data is based on one or more pixels in the input pixel data.
Color engine 420 performs an obfuscation function by receiving input pixel data and altering the color of one or more pixels in the input pixel data to generate output pixel data.
Contrast engine 430 performs an obfuscation function by receiving input pixel data and altering the contrast between two or more pixels in the input pixel data to generate output pixel data.
Scaler 440 performs a scaling function by receiving input pixel data and scaling the input pixel data to generate output pixel data. Scaler 440 can be used, for example, if the input pixel data is arranged in a different size configuration (e.g., y rows of x pixels per row) than the size configuration of display 180 of client device 100 on which the image is to be displayed (e.g., c rows of d pixels per row).
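The scaling function can be sketched as follows. This is a minimal Python illustration of nearest-neighbor scaling (the function name and the list-of-lists grayscale pixel representation are illustrative assumptions, not from the original; a hardware scaler would typically apply filtered interpolation rather than nearest-neighbor selection):

```python
def scale(pixels, out_rows, out_cols):
    """Nearest-neighbor scaling sketch: each output pixel takes the
    value of the input pixel at the same relative position in the
    input array."""
    in_rows, in_cols = len(pixels), len(pixels[0])
    return [[pixels[r * in_rows // out_rows][c * in_cols // out_cols]
             for c in range(out_cols)]
            for r in range(out_rows)]

# Example: upscale a 2x2 array to the 4x4 size of a hypothetical display.
print(scale([[1, 2], [3, 4]], 4, 4))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```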
Object identification engine 450 identifies one or more objects or sub-objects upon which obfuscation is to be performed.
With reference to
An example of object 500 might be a character in a video game or virtual world, and examples of sub-objects 501 and 504 might be a shirt and pants that the character wears. Another example of object 500 might be a digital photograph, and examples of sub-objects 501 and 504 might be a face and body. Another example of object 500 might be landscape imagery, and examples of sub-objects 501 and 504 might be sunlight and a mountain. One of ordinary skill in the art will appreciate that these examples are not limiting, and object 500 can be any number of possible objects.
Optionally, one or more of characteristics 502, 503, 505, 506, and 507 can be a characteristic for which obfuscation is desired. For example, the characteristic might indicate that an item is secret or private (such as a person's face/identity, or financial information) or that the item is not appropriate for viewing by all audiences (such as an item with sexual content, violent content, etc.). In the example where object 500 is a character in a video game or virtual world and sub-object 501 is a shirt, characteristic 502 might be “adult only,” “see-through,” or “invisible.” Object identification engine 450 examines all portions of object 500 and identifies sub-objects or objects for which obfuscation is desired, such as sub-object 501 (e.g., a see-through shirt). Once such items are identified, object identification engine 450 sends the object 500, sub-object 501, or their associated pixel data to obfuscation engine 400.
In another embodiment, object identification engine 450 comprises image recognition engine 540, which analyzes pixel data 520 or image 530 and compares it to a set of known pixel data or images contained in database 550. If a match is found, then object identification engine 450 identifies object 500 or a relevant sub-object as an object to be obfuscated and sends object 500, the relevant sub-object 501, or their associated pixel data to obfuscation engine 400. This embodiment is useful for identifying known images for which obfuscation is desired, for example, images protected by a copyright or trademark for which no license has been obtained, or images known to be offensive.
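The matching step can be sketched as follows. This Python sketch assumes, purely for illustration, an exact-match fingerprint; image recognition engine 540 could instead use a perceptual hash or feature matching to tolerate small differences between the candidate image and the stored images. The function names and the modeling of database 550 as a set of fingerprints are illustrative assumptions:

```python
import hashlib

def fingerprint(pixels):
    """Exact-match fingerprint of grayscale pixel data (illustrative;
    a production recognition engine would tolerate minor variations)."""
    raw = bytes(value for row in pixels for value in row)
    return hashlib.sha256(raw).hexdigest()

def should_obfuscate(pixels, known_fingerprints):
    # Database 550 is modeled here as a set of known fingerprints.
    return fingerprint(pixels) in known_fingerprints

database = {fingerprint([[0, 255], [255, 0]])}
print(should_obfuscate([[0, 255], [255, 0]], database))  # True
print(should_obfuscate([[0, 255], [255, 1]], database))  # False
```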
With reference now to
In
There are numerous approaches for determining the value of each qcolumn,row. In one embodiment, qcolumn,row is a weighted average of all pixels in pixel data 620 that are within the same relative location within the array. For example, when pixelated data 720 is a 16×16 array, the second pixel in the top row can be considered to occupy a space equal to 1/16 of the width of the array by 1/16 of the height of the array, starting at a location that is 1/16 in from the left edge in the horizontal direction and at the top edge in the vertical direction. With that relative size and location in mind, one can then determine the same relative size and location in the 32×32 array represented by pixel data 620. Because pixel data 620 has a larger array size than pixelated data 720, each pixel qcolumn,row will correspond, in whole or in part, to more than one pixel pcolumn,row. The value of qcolumn,row can then be calculated as a weighted average of those p values, weighted by the portion of each p pixel that is covered by the q pixel.
Contained below is exemplary source code that can be used by pixelation engine 410 to perform the pixelation function. This code obtains samples at many positions within pixel data 620 on a given texture and averages those values to generate a pixel value. In this exemplary code, the variable “color” is qcolumn,row.
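The averaging approach described above can also be sketched as follows. This is a hypothetical Python reconstruction of the technique, not the shader listing itself, and it assumes the simple case where the input array size is an exact multiple of the output array size, so every output pixel covers whole input pixels and the weighted average reduces to a plain average:

```python
def pixelate(pixels, out_size):
    """Downsample a square grayscale pixel array to out_size x out_size;
    each output pixel q is the average of the input pixels p that fall
    within the same relative region of the array."""
    in_size = len(pixels)
    block = in_size // out_size  # assumes in_size is a multiple of out_size
    output = []
    for row in range(out_size):
        out_row = []
        for col in range(out_size):
            # Average all input pixels covered by this output pixel.
            total = 0
            for r in range(row * block, (row + 1) * block):
                for c in range(col * block, (col + 1) * block):
                    total += pixels[r][c]
            out_row.append(total / (block * block))
        output.append(out_row)
    return output

# Example: a 4x4 array pixelated to 2x2.
data = [[0, 0, 10, 10],
        [0, 0, 10, 10],
        [20, 20, 30, 30],
        [20, 20, 30, 30]]
print(pixelate(data, 2))  # [[0.0, 10.0], [20.0, 30.0]]
```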
Because pixelated data 720 will not have the same array size as pixel data 620, the resulting pixelated image 730 will be smaller than image 630. However, the end result will be scaled by scaler 440 into the appropriate size for display 180, resulting in scaled, pixelated image 735.
In
Any number of different filters can be applied. For example, a grayscale filter can be applied to translate each pixel data value pcolumn, row into a gray-scale value, such that the resulting color-altered image 830 is a gray-scale image. As another example, a bright color filter can be applied to translate each pixel data value pcolumn, row into a bright color selected from a specific set of bright colors (e.g., fuchsia, bright green, etc.). As another example, a sepia filter can be applied to translate each pixel data value pcolumn, row into a sepia-colored value.
Contained below is exemplary source code that can be used by color engine 420 for performing the color alteration function to generate a sepia-colored value. This code will transform the given color into a sepia tone color. Here, sepiaColor.r is the “r” value, sepiaColor.g is the “g” value, sepiaColor.b is the “b” value, and sepiaColor.a is the “a” value for rcolumn,row.
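The sepia transformation can be sketched as follows. This is a hypothetical Python version using a commonly used sepia weighting matrix on 0–255 channel values; the exact coefficients in the shader listing may differ, and the alpha channel is passed through unchanged:

```python
def to_sepia(r, g, b, a):
    """Transform an RGBA color (0-255 channels) into a sepia tone
    using a commonly used sepia weighting matrix; alpha is unchanged."""
    sr = min(255, int(0.393 * r + 0.769 * g + 0.189 * b))
    sg = min(255, int(0.349 * r + 0.686 * g + 0.168 * b))
    sb = min(255, int(0.272 * r + 0.534 * g + 0.131 * b))
    return (sr, sg, sb, a)

# White saturates the red and green channels; black stays black.
print(to_sepia(255, 255, 255, 255))  # (255, 255, 238, 255)
print(to_sepia(0, 0, 0, 128))        # (0, 0, 0, 128)
```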
In
Any number of different contrast filters can be applied. For example, a filter can be applied to increase the contrast between pixels, or a filter can be applied to decrease the contrast between pixels. The latter is typically more useful in obfuscating images for the human eye.
Contained below is exemplary source code that can be used by contrast engine 430 for performing the contrast alteration function to alter the contrast between pixels. In this example, the code decreases the contrast of the given color by making an interpolation towards white, controlled by contrastFactor. Here, the variable color.rgb is scolumn, row.
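The interpolation toward white can be sketched as follows. This is a hypothetical Python version of the technique: the shader listing operates on a normalized color.rgb value, whereas this sketch uses 0–255 channels. A contrastFactor of 0.0 leaves the color unchanged and 1.0 maps every color to pure white:

```python
def decrease_contrast(r, g, b, contrast_factor):
    """Reduce contrast by linearly interpolating each channel toward
    white (255), controlled by contrast_factor in [0.0, 1.0]."""
    def lerp(channel):
        return int(channel + (255 - channel) * contrast_factor)
    return (lerp(r), lerp(g), lerp(b))

print(decrease_contrast(0, 0, 0, 0.5))    # (127, 127, 127)
print(decrease_contrast(10, 20, 30, 0.0)) # (10, 20, 30)
print(decrease_contrast(10, 20, 30, 1.0)) # (255, 255, 255)
```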
It is to be understood that pixelation engine 410, color engine 420, and contrast engine 430 can be applied in varying combinations and in different orders. For example, only one of them might be applied or two or three of them can be applied, and the order in which they are applied can vary. Obfuscation engine 400 optionally will allow the administrator of application server 320 to select which engine to apply in a given situation.
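The composable application of the engines can be sketched as a simple pipeline. The stand-in engine functions below are hypothetical placeholders for pixelation engine 410, color engine 420, and contrast engine 430; the point of the sketch is that each engine maps pixel data to pixel data, so engines can be selected and ordered freely:

```python
def obfuscate(pixels, engines):
    """Apply each selected obfuscation engine to the pixel data in
    the order given; each engine maps pixel data to pixel data."""
    for engine in engines:
        pixels = engine(pixels)
    return pixels

# Stand-in engines for illustration only:
invert = lambda px: [[255 - p for p in row] for row in px]
halve = lambda px: [[p // 2 for p in row] for row in px]

print(obfuscate([[10, 20]], [invert, halve]))  # [[122, 117]]
print(obfuscate([[10, 20]], [halve, invert]))  # order matters: [[250, 245]]
```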
In
The value of the invention can be seen in comparing scaled, pixelated, color-altered, contrast-altered image 1035 to image 630 in
In
In the example where object 500 is a character and sub-object 501 is a see-through shirt, the character would appear on client device 100a in a see-through shirt, but the character would appear on client device 100b in an obfuscated shirt.
References to the present invention herein are not intended to limit the scope of any claim or claim term, but instead merely make reference to one or more features that may be covered by one or more of the claims. Materials, processes and numerical examples described above are exemplary only, and should not be deemed to limit the claims. It should be noted that, as used herein, the terms “over” and “on” both inclusively include “directly on” (no intermediate materials, elements or space disposed there between) and “indirectly on” (intermediate materials, elements or space disposed there between). Likewise, the term “adjacent” includes “directly adjacent” (no intermediate materials, elements or space disposed there between) and “indirectly adjacent” (intermediate materials, elements or space disposed there between). For example, forming an element “over a substrate” can include forming the element directly on the substrate with no intermediate materials/elements there between, as well as forming the element indirectly on the substrate with one or more intermediate materials/elements there between.