Embodiments pertain to user input devices. Some embodiments relate to detecting touch inputs using a digital video camera of a mobile device.
Computing devices are being equipped with an ever increasing number of sensors, for example, front and/or rear facing cameras, motion sensors, Global Positioning System (GPS) sensors, rotation sensors, light sensors, and the like. These computing devices also have touch sensitive displays for providing input in lieu of a traditional keyboard and mouse.
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
Software applications running on computing devices with a touchscreen usually need to draw buttons on the screen to let the user perform actions. Buttons on a touch screen generally occlude content underneath, or at least limit available space. For example, in a drawing application, a button on a touchscreen might hide the canvas. For computing devices with smaller screens, such as a phone, this may mean a significant percentage of the screen is devoted to just displaying buttons as opposed to showing relevant content. A button drawn on the screen also forces the user to lay her/his hand on the display to press it, additionally reducing visibility of the screen by the user and smudging the screen with fingerprints. Additionally, a button on a touch screen may not be as fast in recording a touch event as a camera. Generally, touch screens take on average 100 milliseconds (ms) to record the event of a finger touching the screen.
Disclosed in some examples are computing devices, methods, systems, and machine readable mediums that detect touch events on a camera lens of a digital video camera of a computing device. The computing device monitors the images produced by the digital video camera and detects when a user has placed their finger (and in some examples, another object) onto the camera lens. This action is registered as a touch event. The detected touch events may be simple events, such as a press event (e.g., the user puts their finger on the lens and the camera lens is treated as a “virtual” button), or more advanced events, such as gestures. Example gestures include scrolling, pinching, zooming, and the like. Advantages of using a camera for a touch event over using a touch screen may include the ability to enter input that does not block the display screen, a lower latency input device (a camera running at 30 frames per second can respond to input in less than 20 ms on average), increased durability (a digital video camera behind a lens is less liable to break than a touch screen), and the like.
Turning now to
In some examples, the computing device may distinguish between different touch events based upon the user's interaction with the digital video camera 110. For example, if the user covers the digital video camera lens, a first touch event (a press touch event) may be registered. If the user swipes their finger over the digital video camera lens of the digital video camera, a swipe event may be generated, and so on. Thus, multiple different touch events may be generated depending on the motion of the user's finger(s). Each touch event may have an associated reaction on the part of the application—e.g., a swipe event may scroll a list of items or a page of content; a button press event may cause a menu to be displayed or a menu option to be selected; and the like.
In the example of
As shown in
Turning now to
At operation 515 the digital video camera may be calibrated. In some examples, baseline moving averages for smoothness factor, brightness factor, and color factor may be calculated for a predetermined period of time. These factors are explained in more detail later in the specification. That is, a predetermined amount of data (e.g., 1 second's worth) may be utilized to calculate the moving average that is used in later steps (e.g., see
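By way of illustration only, such a calibration may be sketched as computing simple averages of per-frame factor values over the calibration window (e.g., about one second of frames). The function name and the assumption that the per-frame factors have already been computed are illustrative and not limiting:

/* Illustrative sketch: establish baseline averages from roughly one second
   of frames (e.g., 30 frames at 30 fps) before scanning for touch events. */
void calibrate_baselines(const float *smoothness, const float *brightness,
                         const float *red, int num_frames,
                         float *smoothness_avg, float *brightness_avg,
                         float *red_avg)
{
    float s = 0.0f, b = 0.0f, r = 0.0f;
    for (int i = 0; i < num_frames; i++) {
        s += smoothness[i];
        b += brightness[i];
        r += red[i];
    }
    *smoothness_avg = s / (float)num_frames;
    *brightness_avg = b / (float)num_frames;
    *red_avg = r / (float)num_frames;
}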
At operation 520 the application may scan for touch events by analyzing video frames received from the digital video camera to determine if a touch event has been detected. The application may check frames at a certain rate (e.g., at a certain frames-per-second) that may be predetermined. In some examples, in order to conserve battery, the fps may be less than a maximum fps that can be captured by the digital video camera. In some examples, the application may adjust the rate at which frames are scanned in response to activity by the user, or a lack of activity by the user. The frame rate may be adjusted by changing the frame capture rate of the digital video camera, or by only processing certain frames (e.g., only processing every nth frame). For example, if the user is not actively engaging with the computing device (e.g., entering inputs) for a predetermined period of time, the application may throttle back the capture rate of the digital video camera, and/or the number of frames checked by the application for a touch event. More details on this dynamic checking are shown in
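By way of illustration only, the following C-style sketch shows one possible way of throttling scanning by processing only every nth frame while the user is idle. The function name, the frame counter, and the skip values are illustrative assumptions:

#include <stdbool.h>

/* Illustrative sketch: scan every frame while the user is active, but only
   every 4th frame once no input activity has been seen for a while. */
static unsigned int frame_counter = 0;

bool should_scan_frame(bool user_recently_active)
{
    unsigned int skip = user_recently_active ? 1u : 4u;
    frame_counter++;
    return (frame_counter % skip) == 0u;
}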
At operation 525, if a touch event was detected at operation 520, then one or more applications may be notified and may take action at operation 530. For example, the application that detects the touch event may cause an action to be performed responsive to detecting the touch event (e.g., display a menu, scroll content, select an option, and the like), may send a notification to another application that has registered to receive such events, and the like. After the action has been taken, the system may return to operation 520 and continue scanning for touch events. In some examples, applications interested in receiving touch events of the camera may register to receive the events and provide callback functions to execute on detection of these events. Upon detecting the event, the computing device may execute these callback functions. For example, a driver or services of an operating system may register the callbacks, detect the events, and send notifications to registered callbacks upon detecting the events. If no event is detected, the system may return to operation 520 and continue scanning for touch events.
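By way of illustration only, the following sketch shows one possible callback registration scheme for applications interested in camera touch events. The type names, event identifiers, and the fixed-size registry are illustrative assumptions and not limiting:

#define MAX_LISTENERS 8

/* Illustrative sketch: applications register callbacks that a driver or
   operating system service invokes when a camera touch event is detected. */
typedef enum { TOUCH_PRESS, TOUCH_SWIPE_UP, TOUCH_SWIPE_DOWN } touch_event_t;
typedef void (*touch_callback_t)(touch_event_t event);

static touch_callback_t listeners[MAX_LISTENERS];
static int listener_count = 0;

int register_touch_callback(touch_callback_t cb)
{
    if (listener_count >= MAX_LISTENERS)
        return -1; /* registry full */
    listeners[listener_count++] = cb;
    return 0;
}

void notify_touch_event(touch_event_t event)
{
    for (int i = 0; i < listener_count; i++)
        listeners[i](event); /* execute each registered callback */
}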
Turning now to
As noted, in each state (the active scanning state 610 and the relaxed scanning state 620), the system scans frames captured by the digital video camera for touch events. Turning now to
At operation 720 a smoothness factor may be calculated for the current video frame from the LumaPlane. The smoothness factor is a measure of how homogenous the source image is. A finger on the digital video camera creates an image whose pixel values are very similar, that is, a smoothness factor close to zero. When the digital video camera is unobstructed, it is likely that the environment in the image captured by the digital video camera is noisy. This results in a smoothness factor that is much greater than zero. In some examples, the smoothness factor may be calculated as:
In the above formula, kVisionResolutionWidth is the width in pixels of the downsampled image and kVisionResolutionHeight is the height in pixels of the downsampled image.
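By way of illustration only, and not as a reproduction of the formula referenced above, one assumed formulation of the smoothness factor computes the mean absolute difference between horizontally adjacent samples of an 8-bit downsampled luma plane; such an image-homogeneity measure is near zero when a finger covers the lens:

#include <stdlib.h>

/* Illustrative sketch: smoothness as the mean absolute difference between
   horizontally adjacent luma samples; near zero for a homogeneous image
   (e.g., a finger covering the lens), larger for a noisy scene. */
float smoothness_factor(const unsigned char *luma_plane,
                        int kVisionResolutionWidth,
                        int kVisionResolutionHeight)
{
    long total_diff = 0;
    for (int y = 0; y < kVisionResolutionHeight; y++) {
        for (int x = 0; x < kVisionResolutionWidth - 1; x++) {
            int i = y * kVisionResolutionWidth + x;
            total_diff += abs((int)luma_plane[i] - (int)luma_plane[i + 1]);
        }
    }
    return (float)total_diff /
           (float)(kVisionResolutionWidth * kVisionResolutionHeight);
}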
At operation 730, a color factor (e.g., red) may be calculated for the current video frame from the ChromaPlane. When a finger is placed on the camera, the resulting image may not be black, but may have a particular color profile. This is due to the autofocus and auto-exposure adjustments in digital video cameras of many portable electronic devices. These devices adjust the exposure levels to compensate for low lighting (e.g., when a finger is covering the lens) by increasing the ISO. This makes these digital video cameras very sensitive to low light. Environmental light filters through the human body and is captured by the digital video camera. This light may have a particular color profile. For example, this light may be a red color (which may be due to the blood in the finger or other biological properties).
The color factor may be, for example, a red factor that may be calculated as:
float redFactor = (sum of all red pixel values of the image) / (number of pixels);
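By way of illustration only, and assuming the red pixel values have already been extracted from the chroma data into an 8-bit array, this computation may be sketched as:

/* Illustrative sketch: mean red value over all pixels of the frame. */
float red_factor(const unsigned char *red_values, int num_pixels)
{
    long sum = 0;
    for (int i = 0; i < num_pixels; i++)
        sum += red_values[i];
    return (float)sum / (float)num_pixels;
}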
At operation 735, a brightness factor may be calculated from EXIF data of the video frame. The digital video camera has metadata associated with its settings such as the ISO used, the exposure, the lens type, and the like. This metadata is stored in a data format called EXIF. One of the values in the EXIF information is brightness. Modern digital video cameras try to adjust to variable lighting conditions like the human eye does. The brightness factor may indicate the quantity of light coming in through the lens. When a finger is placed on the digital video camera, the brightness factor is lower.
At operation 740, the system may determine whether there is a touch event indicated by the current frame based upon a comparison between the smoothness factor, color factor, and brightness factor of the current frame and the calculated moving averages of these values. For example, as noted previously, the moving average is calibrated at operation 515 of
In one example, each of the brightness factor, smoothness factor, and color factor (e.g., red factor) may be utilized in determining whether a touch event has occurred. In other examples, one or more of those factors may not be utilized. In some examples, the color factor may be correlated with the brightness factor. For example, the more light there is, the more vibrant red will be present. In a dark environment, the image may be expected to have less red. For example,
In one example, the following may determine whether a touch event has occurred:
where:
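By way of illustration only, and not as a reproduction of the pseudocode referenced above, one assumed form of such a check compares the current factors to their calibrated moving averages; the function name and the threshold constants are hypothetical:

#include <stdbool.h>

/* Illustrative sketch: a frame indicates a touch when it is much smoother
   and darker than the calibrated averages and sufficiently red. The 0.5
   scaling factors and the red threshold are hypothetical values. */
bool is_touch_event(float smoothness, float smoothness_avg,
                    float brightness, float brightness_avg,
                    float red, float red_threshold)
{
    bool smooth_enough = smoothness < smoothness_avg * 0.5f;
    bool dark_enough = brightness < brightness_avg * 0.5f;
    bool red_enough = red > red_threshold;
    return smooth_enough && dark_enough && red_enough;
}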
At operation 745, operation 750, and operation 755 the moving averages of the smoothness factor, brightness factor, and color factor may be updated. These steps are shown in dotted lines to highlight that in some examples, if a touch event is detected, the moving averages are not updated with the new brightness, smoothness and color values until the touch event has ended (e.g., the brightness, smoothness, and color values return to within threshold values of the respective moving average before the touch event). Not updating the moving averages may be done to avoid contaminating the normal average values for the particular environment with values observed during a touch event.
The moving average of the smoothness factor may be updated using the formula:
New Smoothness Factor Moving Average = Old Smoothness Factor Moving Average * weight + (smoothness factor calculated at operation 720) * (1 - weight)
Where the weight (also referred to as alpha) is a factor used to determine whether to weight the present value (alpha close to 0) or the previous values (alpha close to 1) more in determining the new average.
At operation 750, the moving average of the brightness factor may similarly be updated (with a same or different alpha value as the smoothness factor). For example, using the formula:
New Brightness Factor Moving Average = Old Brightness Factor Moving Average * weight + (brightness factor calculated at operation 735) * (1 - weight)
Where the weight is a factor used to determine whether to weight the present value (alpha close to 0) or the previous values (alpha close to 1) more in determining the new average. The weight used to calculate the brightness factor moving average may be the same weight or a different weight as used to calculate the smoothness factor moving average.
In some examples, a moving average is not used for the color value (as shown in the pseudocode above), but in other examples, a moving average may be utilized. In examples in which a moving average of the color factor is utilized, at operation 755, the moving average of the color factor may be similarly updated (with a same or different alpha value as the smoothness factor and/or brightness factor). For example, using the formula:
New Color Factor Moving Average = Old Color Factor Moving Average * weight + (color factor calculated at operation 730) * (1 - weight)
Where the weight is a factor used to determine whether to weight the present value (alpha close to 0) or the previous values (alpha close to 1) more in determining the new average. The weight used to calculate the color factor moving average may be the same weight or a different weight as used to calculate the brightness and/or smoothness factor moving averages.
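By way of illustration only, these moving average updates may be sketched with a single helper, where the weight is a value between 0 and 1 as described above:

/* Illustrative sketch: exponential moving average update used for the
   smoothness, brightness, and (optionally) color factors. A weight close
   to 1 favors the previous values; a weight close to 0 favors the new value. */
float update_moving_average(float old_average, float new_value, float weight)
{
    return old_average * weight + new_value * (1.0f - weight);
}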
In some examples, to detect gestures, the system may divide the image captured by the digital video camera into regions. Various gestures may be detected based upon the regions that register a touch input. For example, if the bottom regions register a finger press first, and then the top regions, this may indicate a scroll down gesture. Similarly, if the top regions register the finger press first, and then the bottom regions, this may indicate a scroll up gesture.
As previously described, gestures may be detected by subdividing the area of coverage of the camera lens into sections and looking for button press events in each section over several frames to detect motion. For example, a button press event on a top section that spreads to bottom sections, followed by a return to normal (e.g., the system detects that a finger is no longer present) on the top sections, suggests a swipe motion from top to bottom. Similar detections may be employed for horizontal swipes, pinches, spreads, and other gestures.
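By way of illustration only, vertical swipe detection over two regions (the top and bottom halves of the image) may be sketched as follows, assuming per-region touch states are produced for consecutive frames by the touch check described earlier; the function and type names are illustrative:

#include <stdbool.h>

/* Illustrative sketch: infer a vertical swipe from per-region touch states
   across consecutive frames; touched_top[i] and touched_bottom[i] indicate
   whether a finger covered that region in frame i. */
typedef enum { GESTURE_NONE, GESTURE_SCROLL_UP, GESTURE_SCROLL_DOWN } gesture_t;

gesture_t detect_vertical_swipe(const bool *touched_top,
                                const bool *touched_bottom,
                                int num_frames)
{
    int first_top = -1, first_bottom = -1;
    for (int i = 0; i < num_frames; i++) {
        if (touched_top[i] && first_top < 0) first_top = i;
        if (touched_bottom[i] && first_bottom < 0) first_bottom = i;
    }
    if (first_top < 0 || first_bottom < 0)
        return GESTURE_NONE;
    if (first_bottom < first_top)
        return GESTURE_SCROLL_DOWN; /* bottom regions first, then top */
    if (first_top < first_bottom)
        return GESTURE_SCROLL_UP;   /* top regions first, then bottom */
    return GESTURE_NONE;
}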
While single-camera gesture detection has been discussed to this point, in other examples, the system may detect user input using multiple digital video cameras. For example, a front and a rear facing digital video camera may be used to detect user input, such as the user placing a finger over either digital video camera. The second video camera may add another button for detecting touch events. Additionally, complex gestures can be formed from the user's interaction with the front and rear facing digital video cameras simultaneously or in sequence. For example, a flipping gesture may be created if the user swipes a finger across the user facing digital video camera in a first direction and swipes a second finger on an opposite facing digital video camera in a second direction (opposite of the first direction). The flipping gesture may be used to rotate an object horizontally, vertically, diagonally, or some combination thereof.
Turning now to
Camera touch input controller 1115 may be a separate application, an operating system service, or may be integrated with application 1110. Camera touch input controller 1115 may have a digital video camera interface 1130 that may communicate with the digital video camera driver 1125 (e.g., through operating system 1120). Camera controller 1135 may implement the method diagram of
Frames received from the digital video camera through the digital video camera interface 1130 may be processed by the touch scanner 1140 to determine if a touch event is recognized. For example, the touch scanner may implement the operations of
Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms (hereinafter “modules”). For example, the components of
Accordingly, the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
Machine (e.g., computer system) 1200 may include a hardware processor 1202 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 1204 and a static memory 1206, some or all of which may communicate with each other via an interlink (e.g., bus) 1208. The machine 1200 may further include a display unit 1210, an alphanumeric input device 1212 (e.g., a keyboard), and a user interface (UI) navigation device 1214 (e.g., a mouse). In an example, the display unit 1210, input device 1212 and UI navigation device 1214 may be a touch screen display. The machine 1200 may additionally include a storage device (e.g., drive unit) 1216, a signal generation device 1218 (e.g., a speaker), a network interface device 1220, and one or more sensors 1221, such as a global positioning system (GPS) sensor, compass, accelerometer, light sensor (such as a digital video camera) or other sensor. The machine 1200 may include an output controller 1228, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
The storage device 1216 may include a machine readable medium 1222 on which is stored one or more sets of data structures or instructions 1224 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 1224 may also reside, completely or at least partially, within the main memory 1204, within static memory 1206, or within the hardware processor 1202 during execution thereof by the machine 1200. In an example, one or any combination of the hardware processor 1202, the main memory 1204, the static memory 1206, or the storage device 1216 may constitute machine readable media.
While the machine readable medium 1222 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 1224.
The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 1200 and that cause the machine 1200 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. Specific examples of machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); Solid State Drives (SSD); and CD-ROM and DVD-ROM disks. In some examples, machine readable media may include non-transitory machine readable media. In some examples, machine readable media may include machine readable media that is not a transitory propagating signal.
The instructions 1224 may further be transmitted or received over a communications network 1226 using a transmission medium via the network interface device 1220. The machine 1200 may communicate with one or more other machines utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.16 family of standards known as WiMax®), the IEEE 802.15.4 family of standards, a Long Term Evolution (LTE) family of standards, a Universal Mobile Telecommunications System (UMTS) family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device 1220 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 1226. In an example, the network interface device 1220 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. In some examples, the network interface device 1220 may wirelessly communicate using Multiple User MIMO techniques.
Example 1 is a computing device, comprising: a processor; a digital video camera communicatively connected to the processor; a memory, storing instructions, which when performed by the processor, cause the processor to perform operations comprising: determining that a lens of the digital video camera was at least partially touched by a human finger; and responsive to determining that the lens of the digital video camera was touched by the human finger, recognizing a touch event in an application running on the processor.
In Example 2, the subject matter of Example 1 includes, wherein the operations further comprise: responsive to recognizing the touch event in the application running on the processor, causing a menu to be displayed.
In Example 3, the subject matter of Examples 1-2 includes, wherein the operations further comprise: determining that a user has swiped their finger across the lens of the digital video camera by comparing two different video frames captured at two different times and determining that an area of the lens that is touched changes directionally, and wherein the operations of recognizing the touch event comprise recognizing the touch event as a swipe gesture.
In Example 4, the subject matter of Example 3 includes, wherein the area of the lens that is touched changes directionally in a horizontal direction, and wherein the operations of recognizing the touch event comprise recognizing the touch event as a horizontal swipe gesture.
In Example 5, the subject matter of Examples 3-4 includes, wherein the area of the lens that is touched changes directionally in a vertical direction, and wherein the operations of recognizing the touch event comprise recognizing the touch event as a vertical swipe gesture.
In Example 6, the subject matter of Examples 1-5 includes, wherein the operations of determining that the lens of the digital video camera was at least partially touched by the human finger comprise: determining a base parameter of a first image generated by the digital video camera; receiving a second image generated by the digital video camera, the second image generated at a later time than the first image; and determining that the lens of the digital video camera was at least partially touched by the human finger based upon a comparison of the base parameter with a same parameter of the second image.
In Example 7, the subject matter of Example 6 includes, wherein the base parameter is a color uniformity.
In Example 8, the subject matter of Examples 6-7 includes, wherein the base parameter is an amount of red color.
In Example 9, the subject matter of Examples 6-8 includes, wherein the base parameter is a brightness.
In Example 10, the subject matter of Examples 1-9 includes, wherein the operations of determining that the lens of the digital video camera was at least partially touched by the human finger comprises: determining a color uniformity base parameter of a first image generated by the digital video camera; determining a color component base parameter of the first image; determining a brightness base parameter of the first image; receiving a second image generated by the digital video camera, the second image generated at a later time than the first image; determining that the lens of the digital video camera was at least partially touched by the human finger based upon: a comparison of the color uniformity base parameter with a color uniformity of the second image; a comparison of the brightness base parameter with a brightness of the second image; and a comparison of the color component base parameter with a color component of the second image.
Example 11 is a non-transitory machine-readable medium, comprising instructions, which when performed by a machine, cause the machine to perform operations of: determining that a lens of a digital video camera was at least partially touched by a human finger; and responsive to determining that the lens of the digital video camera was touched by the human finger, recognizing a touch event in an application.
In Example 12, the subject matter of Example 11 includes, wherein the operations further comprise, responsive to recognizing the touch event in the application, causing a menu to be displayed on a display.
In Example 13, the subject matter of Examples 11-12 includes, wherein the operations further comprise: determining that a user has swiped their finger across the lens of the digital video camera by comparing two different video frames captured at two different times and determining that an area of the lens that is touched changes directionally, and wherein the operations of recognizing the touch event comprise recognizing the touch event as a swipe gesture.
In Example 14, the subject matter of Example 13 includes, wherein the area of the lens that is touched changes directionally in a horizontal direction, and wherein the operations of recognizing the touch event comprise recognizing the touch event as a horizontal swipe gesture.
In Example 15, the subject matter of Examples 13-14 includes, wherein the area of the lens that is touched changes directionally in a vertical direction, and wherein the operations of recognizing the touch event comprise recognizing the touch event as a vertical swipe gesture.
In Example 16, the subject matter of Examples 11-15 includes, wherein the operations of determining that the lens of the digital video camera was at least partially touched by the human finger comprise: determining a base parameter of a first image generated by the digital video camera; receiving a second image generated by the digital video camera, the second image generated at a later time than the first image; and determining that the lens of the digital video camera was at least partially touched by the human finger based upon a comparison of the base parameter with a same parameter of the second image.
In Example 17, the subject matter of Example 16 includes, wherein the base parameter is a color uniformity.
In Example 18, the subject matter of Examples 16-17 includes, wherein the base parameter is an amount of red color.
In Example 19, the subject matter of Examples 16-18 includes, wherein the base parameter is a brightness.
In Example 20, the subject matter of Examples 11-19 includes, wherein the operations of determining that the lens of the digital video camera was at least partially touched by the human finger comprises: determining a color uniformity base parameter of a first image generated by the digital video camera; determining a color component base parameter of the first image; determining a brightness base parameter of the first image; receiving a second image generated by the digital video camera, the second image generated at a later time than the first image; determining that the lens of the digital video camera was at least partially touched by the human finger based upon: a comparison of the color uniformity base parameter with a color uniformity of the second image; a comparison of the brightness base parameter with a brightness of the second image; and a comparison of the color component base parameter with a color component of the second image.
Example 21 is a method, performed by a processor of a computing device, the method comprising: determining that a lens of a digital video camera was at least partially touched by a human finger; and responsive to determining that the lens of the digital video camera was touched by the human finger, recognizing a touch event in an application.
In Example 22, the subject matter of Example 21 includes, wherein the method further comprises: responsive to recognizing the touch event in the application, causing a menu to be displayed on a display.
In Example 23, the subject matter of Examples 21-22 includes, wherein the method further comprises: determining that a user has swiped their finger across the lens of the digital video camera by comparing two different video frames captured at two different times and determining that an area of the lens that is touched changes directionally, and wherein recognizing the touch event comprises recognizing the touch event as a swipe gesture.
In Example 24, the subject matter of Example 23 includes, wherein the area of the lens that is touched changes directionally in a horizontal direction, and wherein recognizing the touch event comprises recognizing the touch event as a horizontal swipe gesture.
In Example 25, the subject matter of Examples 23-24 includes, wherein the area of the lens that is touched changes directionally in a vertical direction, and wherein recognizing the touch event comprises recognizing the touch event as a vertical swipe gesture.
In Example 26, the subject matter of Examples 21-25 includes, wherein determining that the lens of the digital video camera was at least partially touched by the human finger comprises: determining a base parameter of a first image generated by the digital video camera; receiving a second image generated by the digital video camera, the second image generated at a later time than the first image; and determining that the lens of the digital video camera was at least partially touched by the human finger based upon a comparison of the base parameter with a same parameter of the second image.
In Example 27, the subject matter of Example 26 includes, wherein the base parameter is a color uniformity.
In Example 28, the subject matter of Examples 26-27 includes, wherein the base parameter is an amount of red color.
In Example 29, the subject matter of Examples 26-28 includes, wherein the base parameter is a brightness.
In Example 30, the subject matter of Examples 21-29 includes, wherein determining that the lens of the digital video camera was at least partially touched by the human finger comprises: determining a color uniformity base parameter of a first image generated by the digital video camera; determining a color component base parameter of the first image; determining a brightness base parameter of the first image; receiving a second image generated by the digital video camera, the second image generated at a later time than the first image; determining that the lens of the digital video camera was at least partially touched by the human finger based upon: a comparison of the color uniformity base parameter with a color uniformity of the second image; a comparison of the brightness base parameter with a brightness of the second image; and a comparison of the color component base parameter with a color component of the second image.
Example 31 is a computing device comprising: a digital video camera; means for determining that a lens of the digital video camera was at least partially touched by a human finger; and responsive to determining that the lens of the digital video camera was touched by the human finger, means for recognizing a touch event in an application.
In Example 32, the subject matter of Example 31 includes, wherein the device further comprises: means for causing a menu to be displayed on a display responsive to recognizing the touch event in the application.
In Example 33, the subject matter of Examples 31-32 includes, wherein the device further comprises: means for determining that a user has swiped their finger across the lens of the digital video camera by comparing two different video frames captured at two different times and determining that an area of the lens that is touched changes directionally, and wherein the means for recognizing the touch event comprise means for recognizing the touch event as a swipe gesture.
In Example 34, the subject matter of Example 33 includes, wherein the area of the lens that is touched changes directionally in a horizontal direction, and wherein the means for recognizing the touch event comprises means for recognizing the touch event as a horizontal swipe gesture.
In Example 35, the subject matter of Examples 33-34 includes, wherein the area of the lens that is touched changes directionally in a vertical direction, and wherein the means for recognizing the touch event comprises means for recognizing the touch event as a vertical swipe gesture.
In Example 36, the subject matter of Examples 31-35 includes, wherein the means for determining that the lens of the digital video camera was at least partially touched by the human finger comprises: means for determining a base parameter of a first image generated by the digital video camera; means for receiving a second image generated by the digital video camera, the second image generated at a later time than the first image; and means for determining that the lens of the digital video camera was at least partially touched by the human finger based upon a comparison of the base parameter with a same parameter of the second image.
In Example 37, the subject matter of Example 36 includes, wherein the base parameter is a color uniformity.
In Example 38, the subject matter of Examples 36-37 includes, wherein the base parameter is an amount of red color.
In Example 39, the subject matter of Examples 36-38 includes, wherein the base parameter is a brightness.
In Example 40, the subject matter of Examples 31-39 includes, wherein the means for determining that the lens of the digital video camera was at least partially touched by the human finger comprises: means for determining a color uniformity base parameter of a first image generated by the digital video camera; means for determining a color component base parameter of the first image; means for determining a brightness base parameter of the first image; means for receiving a second image generated by the digital video camera, the second image generated at a later time than the first image; means for determining that the lens of the digital video camera was at least partially touched by the human finger based upon: a comparison of the color uniformity base parameter with a color uniformity of the second image; a comparison of the brightness base parameter with a brightness of the second image; and a comparison of the color component base parameter with a color component of the second image.
Example 41 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-40.
Example 42 is an apparatus comprising means to implement any of Examples 1-40.
Example 43 is a system to implement any of Examples 1-40.
Example 44 is a method to implement any of Examples 1-40.