Computing devices often include displays that utilize capacitive sensors to enable touch and multi-touch functionality. More specifically, state-of-the-art computing devices utilize firmware that distills raw measurements from the capacitive sensors into a limited collection of resultant individual touch points. Each touch point, although derived from a complex dataset of capacitance values, is typically distilled to a two-dimensional screen coordinate (e.g., a single horizontal coordinate and a single vertical coordinate defining the location of a finger touch on the display).
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
A computing system includes a capacitive touch-display including a plurality of touch-sensing pixels, a digitizer configured to generate a capacitive grid map including a capacitance value for each of the plurality of touch-sensing pixels, and an operating system configured to receive the capacitive grid map directly from the digitizer.
Some computing devices include capacitive sensors to enable touch and multi-touch functionality. More specifically, such touch-sensitive computing devices typically utilize firmware that distills raw measurements from the capacitive sensors into a limited collection of resultant individual touch points. Each touch point, although derived from a complex dataset of capacitance values, is typically distilled to a two-dimensional screen coordinate (e.g., a single horizontal coordinate and a single vertical coordinate defining the location of a finger touch on the display). In some implementations, a width, height, and/or orientation may be associated with each two-dimensional coordinate. Only these resultant individual touch points are exposed to the Operating System (OS) and/or applications. This limits the types of user interactions that can be supported to only those interactions that map to simplistic touch point coordinates.
When a touch input area is not identified/exposed to the OS, the OS is not aware that the user is touching that area of the display because the firmware simply does not report any touch input information for that area (e.g., to avoid operation based on unintentional touch input). However, such information relating to unintentional (e.g., non-finger) touch input may be useful. For example, the OS may determine contextual information about the type of touch input being provided to the capacitive touch sensor based on such information.
Accordingly, the present disclosure relates to an approach for controlling operation of a computing device using an operating system that is exposed to and informed by a full capacitive grid map of a capacitive touch sensor. The capacitive grid map includes capacitance values for each touch-sensing pixel of a set of touch-sensing pixels of the capacitive touch sensor. The capacitive grid map is provided to the operating system directly from the touch-sensing digitizer (i.e., without firmware first distilling the raw touch data into touch points). By exposing the full touch data set to the operating system without unnecessary processing delays, the operating system is able to provide more rewarding user experiences. More particularly, the operating system may be configured to visually present a user interface object and/or adjust presentation of the user interface object based on analysis of the capacitance values of the capacitive grid map.
By analyzing the capacitive grid map and not just individual touch points, the operating system may improve a variety of different user interactions. For example, analysis of the capacitive grid map may enable various gestures to be recognized that otherwise would not be recognized from individual touch points. In another example, the capacitive grid map may be used to differentiate between different sources of touch input (e.g., finger, stylus, and other types of objects), and provide different source-specific responses based on recognizing the different touch-input sources. In still another example, user interactions may be optimized by virtue of understanding how a user is holding or interacting with the computing device based on analysis of the capacitive grid map.
Capacitive touch sensor 104 may be configured to sense one or more sources of input, such as touch input imparted via fingers 106 and/or input supplied by an input device 108, shown in FIG. 1.
Display 102 may be operatively coupled to an image source 110, which may be, for example, a computing device external to, or housed within, the display. Image source 110 may receive input from display 102, process the input, and in response generate appropriate graphical output in the form of user interface objects 112 for the display. In this way, display 102 may provide a natural paradigm for interacting with a computing device that can respond appropriately to touch input. Details regarding an example computing system are described below with reference to FIG. 14.
Display 102 is operable to emit light, such that perceptible images can be formed at a surface of the display or at other apparent location(s). For example, display 102 may assume the form of a liquid crystal display (LCD), organic light-emitting diode display (OLED), or any other suitable display. To effect display operation, image source 110 may control pixel operation, refresh rate, drive electronics, operation of a backlight if included, and/or other aspects of the display. In this way, image source 110 may provide graphical content for output by display 102.
Capacitive touch sensor 104 is operable to receive input, which may assume various suitable form(s). As examples, capacitive touch sensor 104 may detect (1) touch input applied by the human finger 106 in contact with a surface of display 102; (2) a force and/or pressure applied by the finger 106 to the surface; (3) hover input applied by the finger 106 proximate to but not in contact with the surface; (4) a height of the hovering finger 106 from the surface, such that a substantially continuous range of heights from the surface can be determined; and/or (5) input from a non-finger touch source, such as from active stylus 108. “Touch input” as used herein refers to both finger and non-finger (e.g., stylus) input, and to input supplied by input devices both in contact with, and spaced away from but proximate to, display 102. Capacitive touch sensor 104 may be configured to receive input from multiple input sources (e.g., digits, styluses, other input devices) simultaneously, and thus may be referred to as a “multi-touch” display device. To enable input reception, capacitive touch sensor 104 may be configured to detect changes associated with the capacitance of a plurality of electrodes 114 of the touch sensor 104, as described in further detail below. Touch inputs (and/or other information) received by touch sensor 104 are operable to affect any suitable aspect of display 102 and/or computing device 100, and may include two or three-dimensional finger inputs and/or gestures.
Capacitive touch sensor 104 may take any suitable form. In some examples capacitive touch sensor 104 may be integrated within display 102 in a so-called “in-cell” touch sensor implementation. In this example, one or more components of display 102 may be operated to perform both display output and touch input sensing functions. As a particular example, the same physical electrical structure may be used both for capacitive touch sensing and for determining the field in the liquid crystal material that rotates polarization to form a displayed image. Alternative or additional components of display 102 may be employed for display and input sensing functions, however.
Other touch sensor configurations are possible. For example, capacitive touch sensor 104 may alternatively be implemented in a so-called “on-cell” configuration, in which the touch sensor 104 is disposed directly on display 102. In an example on-cell configuration, touch sensing electrodes 114 may be arranged on a color filter substrate of display 102. Implementations in which the capacitive touch sensor 104 is configured neither as an in-cell nor on-cell sensor are possible, however.
Capacitive touch sensor 104 may be configured in various structural forms. For example, the plurality of electrodes (also referred to as touch-sensing pixels) 114 may assume a variety of suitable forms, including but not limited to (1) elongate traces, as in row/column electrode configurations, where the rows and columns are arranged at substantially perpendicular or oblique angles to one another; (2) substantially contiguous pads/pixels, as in mutual capacitance configurations in which the pads/pixels are arranged in a substantially common plane and partitioned into drive and receive electrode subsets, or as in in-cell or on-cell configurations; (3) meshes; and (4) an array of isolated (e.g., planar and/or rectangular) electrodes each arranged at respective x/y locations, as in in-cell or on-cell configurations.
Capacitive touch sensor 104 may be configured for operation in different modes of capacitive sensing. In a self-capacitance mode, the capacitance and/or other electrical properties (e.g., voltage, charge) between touch sensing electrodes and ground may be measured to detect inputs. In other words, properties of the electrode itself are measured, rather than in relation to another electrode in the capacitance measuring system. In a mutual capacitance mode, the capacitance and/or other electrical properties between electrodes of differing electrical state may be measured to detect inputs. When configured for mutual capacitance sensing, and similar to the above examples, the capacitive touch sensor 104 may include a plurality of vertically separated row and column electrodes that form capacitive, plate-like nodes at row/column intersections when the touch sensor is driven. The capacitance and/or other electrical properties of the nodes can be measured to detect inputs.
For self-capacitance implementations, the capacitive touch sensor 104 may analyze one or more electrode characteristics to identify the presence of an input source. Typically, this is implemented via driving an electrode with a drive signal, and observing the electrical behavior with receive circuitry attached to the electrode. For example, charge accumulation at the electrodes resulting from drive signal application can be analyzed to ascertain the presence of the input source. In these example methods, input sources of the types that influence measurable properties of electrodes can be identified and differentiated from one another, such as human digits, styluses, and other physical objects that may affect electrode conditions by providing a capacitive path to ground for electromagnetic fields. Other methods may be used to identify different input source types, such as those with active electronics.
As will be discussed in further detail below, a digitizer may be configured to output a capacitive grid map based on capacitance measurements at each touch-sensing pixel 114 of the touch sensor 104. The digitizer may represent the capacitance of each pixel with a binary number having a selected bit depth. For example, an eight-bit number may be used to represent 256 different capacitance values. The capacitive grid map may be used to present appropriate graphical output and improve a variety of different user interactions.
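As a nonlimiting illustration, the following Python sketch models how a digitizer might quantize raw per-pixel capacitance measurements into an eight-bit capacitive grid map. The function name, calibration limits, and example values are hypothetical assumptions chosen only to demonstrate the bit-depth concept described above; this is a sketch, not the disclosed implementation.

```python
import numpy as np

def quantize_grid_map(raw_capacitance, c_min, c_max, bit_depth=8):
    """Quantize raw per-pixel capacitance measurements into an unsigned
    integer capacitive grid map with the selected bit depth.

    raw_capacitance: 2D array with one measurement per touch-sensing pixel.
    c_min, c_max: hypothetical calibration limits of the measurable range.
    """
    levels = 2 ** bit_depth                     # eight bits -> 256 values
    normalized = (np.asarray(raw_capacitance) - c_min) / (c_max - c_min)
    codes = np.clip(np.round(normalized * (levels - 1)), 0, levels - 1)
    return codes.astype(np.uint8 if bit_depth <= 8 else np.uint16)

# Example: a 4x4 patch of touch-sensing pixels with a touch in the middle.
raw = np.full((4, 4), 1.0e-12)                  # ~1 pF baseline per pixel
raw[1:3, 1:3] = 3.0e-12                         # finger raises capacitance
print(quantize_grid_map(raw, c_min=0.5e-12, c_max=4.0e-12))
```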
The capacitive grid map 206 presents a view of what is actually touching the display, rather than distilled individual touch points. For example, capacitive grid map 300 of FIG. 3 depicts the capacitance value measured at each touch-sensing pixel of a touch-display.
Once received, the OS 204 may analyze the capacitive grid map 206 via a processing framework 208 to create user experiences. At the most basic level, the OS 204 may output the capacitive grid map 206 to the application(s) 218 executed by the computing system such that the application(s) 218 also may create user experiences based on the full capacitive grid map 206. Further, the OS 204/processing framework 208 may resolve touch points from the capacitive grid map 206 to allow applications 218 to respond to conventional touch and multi-touch scenarios. In some examples, the OS 204 may output separate touch points for the different digitizers 202. For example, the OS 204 may output virtual touch points 212 corresponding to finger touch input to the touch-display, virtual stylus touch points 214 corresponding to stylus touch input to the touch-display, and optionally virtual touchpad touch points 216 corresponding to touch input to an optional touchpad that may be included in the computing system. By allowing the application(s) 218 to access such information, the applications 218 can provide improved user experiences. Moreover, by analyzing the capacitive grid map 206 at the operating system level to extract information about the touch input, the application(s) 218 do not have to perform the same full-blown processing of the capacitive grid map 206. Further, the processing framework 208 may holistically consider the capacitive grid map 206 to support other experiences as discussed in further detail below.
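To make the preceding concrete, the following Python sketch shows one plausible way to resolve conventional touch points from a capacitive grid map: above-threshold pixels are grouped into connected regions, and each region is reported as a capacitance-weighted centroid. The threshold value and function names are assumptions for illustration, not the disclosed implementation.

```python
from collections import deque

def resolve_touch_points(grid_map, threshold=128):
    """Group above-threshold pixels into 4-connected regions and return one
    (row, col) centroid per region, mimicking how firmware traditionally
    distills a capacitive grid map into individual touch points."""
    rows, cols = len(grid_map), len(grid_map[0])
    seen = [[False] * cols for _ in range(rows)]
    touch_points = []
    for r in range(rows):
        for c in range(cols):
            if grid_map[r][c] < threshold or seen[r][c]:
                continue
            queue, region = deque([(r, c)]), []
            seen[r][c] = True
            while queue:                     # breadth-first region growth
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and grid_map[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            total = sum(grid_map[y][x] for y, x in region)
            cy = sum(y * grid_map[y][x] for y, x in region) / total
            cx = sum(x * grid_map[y][x] for y, x in region) / total
            touch_points.append((cy, cx))    # capacitance-weighted centroid
    return touch_points
```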
The processing framework 208 may be configured to identify various characteristics of the capacitive grid map 206. For example, the processing framework 208 may be configured to identify a touch profile characterizing a shape of touch input to the capacitive touch sensor 202 based on the capacitance values of the capacitive grid map 206. In another example, the processing framework 208 may be configured to identify different sources of touch input based on the capacitance values of the capacitive grid map 206 and/or the identified touch profile. For example, a stylus and a finger may generate different capacitance values in the capacitive grid map that may be identified and used to differentiate touch input from the different sources. In another example, a touch source may be identified based on the shape of the touch profile. For example, a finger touch may be differentiated from a stylus based on having a larger contact region than the stylus. The processing framework 208 may be configured to determine any suitable characteristic of the capacitive grid map 206 that may be used by the OS 204 to create user experiences, such as controlling appropriate graphical output via the display of the computing system.
In some examples, the processing framework 208 may be incorporated with the OS 204 such that the OS 204 may provide some or all of the functionality of the processing framework 208.
In some implementations, the processing framework 208 may include a machine-learning capacitive grid map analysis tool 210 configured to classify touch input into different classes defined by different sets of characteristics. The analysis tool 210 may include one or more previously-trained, machine-learning classifiers. The analysis tool 210 may be previously trained using a training set including numerous different previously-generated capacitive grid maps corresponding to different types of touch input. The previously-generated capacitive grid maps may have distinctive characteristics that may be used to distinguish between different capacitive grid maps. During the training process, the analysis tool 210 may develop various profiles or classes of characteristics that may be used to recognize different types of touch input from a capacitive grid map that is being analyzed. In some examples, the analysis tool 210 may be trained to determine that a capacitive grid map has characteristics that match characteristics of the previously-generated capacitive grid maps. The machine-learning analysis tool 210 may recognize any suitable characteristic of a capacitive grid map. Moreover, the analysis tool 210 may match any suitable number of characteristics to determine that a capacitive grid map includes a particular type of touch input. The analysis tool 210 may be configured to classify different portions of the capacitive grid map as being specific types of touch input (e.g., intentional, unintentional, finger, stylus). The analysis tool 210 may be configured according to any suitable machine-learning approach including, but not limited to, decision-tree learning, artificial neural networks, support vector machines, and clustering.
When the analysis tool 210 is utilized to interpret the capacitive grid map 206, the analysis tool 210 may include a plurality of classifiers optionally arranged in a hierarchy. As a nonlimiting example, FIG. 5 shows a classifier hierarchy 500 including multiple previously-trained classifiers arranged in different levels.
The analysis tool 210 may be configured to analyze the capacitive grid map 604 and identify an intentional-touch portion 612 and an unintentional-touch portion 614 based on the capacitance values of each of the touch-sensing pixels. In some examples, the analysis tool 210 may be configured to identify the intentional-touch portion 612 and the unintentional-touch portion 614 based on the shapes of the portions of the capacitive grid map that have capacitance values greater than one or more thresholds indicating touch input.
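As a simplified sketch of such shape-based identification, the following heuristic labels each above-threshold region, assuming only that a fingertip produces a small, roughly round region while a resting palm produces a large or elongated one. The pixel counts and aspect limits are illustrative assumptions, not values from this disclosure.

```python
def classify_touch_portion(region_pixels):
    """Heuristically label one above-threshold region of a capacitive grid
    map as intentional (fingertip-sized) or unintentional (palm-sized).

    region_pixels: list of (row, col) pixels in the region, e.g., as found
    by the connected-region search sketched earlier.
    """
    area = len(region_pixels)
    rows = [r for r, _ in region_pixels]
    cols = [c for _, c in region_pixels]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    aspect = max(height, width) / min(height, width)
    # Illustrative thresholds: a fingertip covers few pixels and is roughly
    # round; a resting palm or forearm covers many pixels or is elongated.
    if area <= 12 and aspect < 2.0:
        return "intentional"
    return "unintentional"
```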
In another example, as shown in FIG. 7, a stylus 700 held in a user's right hand 702 provides touch input to a touch-display.
The analysis tool 210 may be configured to analyze the capacitive grid map 706 and identify an intentional-touch portion 708 provided by the stylus 700 and an unintentional-touch portion 710 provided by the right hand 702 based on the capacitance values of each of the touch-sensing pixels and/or one or more attributes derived from the capacitance values.
Returning to FIG. 5, the classifier hierarchy 500 may include a top-level classifier 502 that is previously trained to determine whether a touch input is intentional or unintentional.
If top-level classifier 502 determines a touch is intentional, a second-level classifier 506 is invoked. Second-level classifier 506 is previously trained to determine if the intentional touch is a finger touch, thumb touch, side-of-hand touch, stylus touch, or another type of touch. In some implementations, the second-level classifier 506 may include additional sub-hierarchies of multiple classifiers that are each previously trained to determine whether a touch input is a particular type of touch input or from a particular source. The different types of intentional touches may be used by the OS 204 to determine different user interactions and provide appropriate responses. For example, the OS 204 may provide different responses based on whether a finger touch or a stylus touch is provided as input. As another example, the OS 204 may recognize different types of gestures that are specific to the identified type of intentional touch input.
If the second-level classifier 506 determines that the intentional touch is an intentional finger touch, then a third-level classifier 508 is invoked. The third-level classifier 508 is previously trained to determine if the intentional finger touch is a left-handed finger touch or a right-handed finger touch. The OS 204 may use the handedness of the touch to provide an appropriate response to the touch input. For example, the OS 204 may shift user interface objects on the display to not be occluded by a palm of the hand providing the touch.
The classifier hierarchy may increase compute efficiency, because only classifiers in a specific branch will run, thus avoiding unnecessary computations/classifications.
The illustrated example classifier hierarchy 500 is not limiting. The hierarchy 500 may include any suitable number of different levels, and any suitable number of classifiers at each level. For example, alternative or additional classifiers may be implemented at any level of the hierarchy 500.
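The following Python sketch illustrates this branch-at-a-time evaluation. The node structure, feature names, and the lambda stand-ins for trained models are all hypothetical; the sketch shows only how selecting one branch avoids running the classifiers in the others.

```python
class ClassifierNode:
    """One node of a classifier hierarchy: a trained model plus the child
    node to invoke for each label the model can emit. Only the classifiers
    along the branch actually taken are ever run."""
    def __init__(self, model, children=None):
        self.model = model              # callable: features -> label
        self.children = children or {}  # label -> ClassifierNode

    def classify(self, features):
        label = self.model(features)
        child = self.children.get(label)
        # Descend only into the branch selected at this level.
        return [label] + (child.classify(features) if child else [])

# Hypothetical hierarchy mirroring classifiers 502/506/508: intent first,
# then touch type, then handedness. Each lambda stands in for a trained model.
hierarchy = ClassifierNode(
    model=lambda f: "intentional" if f["area"] < 20 else "unintentional",
    children={
        "intentional": ClassifierNode(
            model=lambda f: "finger" if f["peak"] < 200 else "stylus",
            children={
                "finger": ClassifierNode(
                    model=lambda f: "left-hand" if f["slope"] < 0 else "right-hand")
            },
        )
    },
)
print(hierarchy.classify({"area": 9, "peak": 150, "slope": -0.4}))
# -> ['intentional', 'finger', 'left-hand']
```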
A full capacitive grid map enables new gestures that depend on the size and/or shape of the touch contact, as well as the capacitive properties of the source providing the touch input. In an example shown in FIG. 8, a single finger touch input is rotated in place on the touch-display, and the OS 204 recognizes a rotation gesture, determines the direction of rotation, and rotates a user interface object in that direction.
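One plausible way to recognize such a rotation gesture is to track the orientation of the elongated finger-contact patch across successive grid-map frames using second-order image moments. The following sketch assumes this approach; the threshold, the minimum angular change, and the sign convention for direction are illustrative assumptions.

```python
import math

def contact_orientation(grid_map, threshold=128):
    """Orientation (radians) of the principal axis of the above-threshold
    contact patch, computed from capacitance-weighted second-order moments."""
    pts = [(r, c, v) for r, row in enumerate(grid_map)
           for c, v in enumerate(row) if v >= threshold]
    total = sum(v for _, _, v in pts)
    if total == 0:
        return None                              # no contact in this frame
    cy = sum(r * v for r, _, v in pts) / total   # weighted centroid
    cx = sum(c * v for _, c, v in pts) / total
    mu20 = sum((r - cy) ** 2 * v for r, _, v in pts) / total
    mu02 = sum((c - cx) ** 2 * v for _, c, v in pts) / total
    mu11 = sum((r - cy) * (c - cx) * v for r, c, v in pts) / total
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

def detect_rotation(prev_frame, cur_frame, min_delta=math.radians(5)):
    """Report a rotation gesture and its direction when the contact patch
    orientation changes by more than min_delta between frames."""
    a, b = contact_orientation(prev_frame), contact_orientation(cur_frame)
    if a is None or b is None:
        return None
    delta = b - a
    if abs(delta) < min_delta:
        return None
    # The direction mapping is an assumption; it depends on the sensor axes.
    return "clockwise" if delta > 0 else "counterclockwise"
```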
Exposure to the full capacitive grid map also allows the OS and/or applications to support more nuanced experience optimizations by virtue of understanding how a user is interacting with a device. In an example shown in FIG. 9, a user rests an arm 904 on a touch-display 900 while providing touch input with a finger 902, and both contacts are captured in a capacitive grid map 906.
In the illustrated example, the finger 902 touches a user interface object in the form of a drop-down menu 908. The OS 204 may identify the unintentional touch portion of the user's arm 904 resting on the touch-display 900 from the capacitive grid map 906 and adjust presentation of the drop-down menu 908 to a position on the touch-display 900 that is not occluded by the unintentional-touch portion of the user's arm 904. In particular, the drop-down menu 908 displays a list of menu options to the right of the user's arm 904.
Further, the OS and/or applications can more intelligently place user interface elements based on the directionality of the user's finger. In the illustrated example, the user invokes the drop-down menu 908 with a left-hand finger, and the OS 204 may adjust the user interface and display the menu options to the right of the interaction so as not to display important user interface elements under the user's hand. In other words, the OS 204 may be configured to determine a handedness of the finger providing the touch input based on the capacitive grid map, and adjust presentation of the drop-down menu based on the handedness of the finger touch input.
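A minimal sketch of such handedness-aware placement follows, assuming screen coordinates in pixels and a fixed offset; the function name and values are hypothetical and serve only to show the placement decision.

```python
def place_menu(touch_x, menu_width, screen_width, handedness):
    """Choose the horizontal position of a drop-down menu so its options are
    not occluded by the hand providing the touch: a left-hand touch opens
    the menu to the right of the interaction, and vice versa."""
    margin = 16                                  # illustrative gap in pixels
    if handedness == "left":
        x = touch_x + margin                     # hand lies to the left
    else:
        x = touch_x - menu_width - margin        # hand lies to the right
    return max(0, min(x, screen_width - menu_width))  # keep menu on-screen

print(place_menu(touch_x=300, menu_width=200, screen_width=1920,
                 handedness="left"))             # -> 316
```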
The full capacitive grid map may also be used to understand how a user is gripping a touch-display. In an example shown in FIG. 10, a user grips the touch-display with one hand, and the OS 204 may identify the grip hand based on the capacitive grid map and adjust presentation of user interface objects based on the grip hand.
As another example, exposure to a full capacitive grid map allows the OS 204 to detect when a user has placed the side of her hand on a touch-display as intentional touch input. The OS 204 may recognize different gestures and may perform various types of actions responsive to these types of gestures. In an example shown in FIG. 11, a user places the side of a hand on the touch-display, and the OS 204 recognizes the side-of-hand touch input and performs a corresponding action.
As another example, exposure to the full capacitive grid map allows different touch input sources to be differentiated from one another. For example, different objects (e.g., finger or stylus) can predictably cause different capacitance measurements, which may be detailed in the capacitive grid map and recognized by the OS 204. As such, the operating system and/or applications may be programmed to behave differently based on whether a finger, capacitive stylus, or other object is touching the screen. In an example shown in FIG. 12, a finger and a capacitive stylus simultaneously touch the touch-display, and the OS 204 differentiates the stylus-touch portion from the finger-touch portion of the capacitive grid map and responds to each differently.
In general, the rich information provided by a capacitive grid map allows the OS and/or applications to differentiate between various capacitive objects placed on the screen. As another example, an educational application can be programmed to differentiate between different alphabet objects that are placed on the screen. As yet another example, objects with unique and/or variable capacitive signatures, such as a capacitive paintbrush, may be supported. Using the capacitive grid map data, a realistic interpretation of such a paintbrush's interaction with the screen can be determined, thus allowing richer experiences. In another example, the capacitive grid map enables detecting when a user's entire hand is flat on the screen or the ball of a user's fist is pressed against the screen, and the OS may perform various operations based on recognizing these types of touch input and/or gestures, such as invoking a system menu, muting sound, turning the screen off, etc.
The hardware and scenarios described herein are not limited to capacitive touch-displays, as capacitive touch sensors without display functionality may also provide full capacitive grid maps to an operating system or application. The same principles of receiving and processing a capacitive grid map apply to a touchpad. A full capacitive grid map enables better algorithms to be crafted for rejecting palms, preventing accidental activations, and supporting advanced gestures.
At 1302, the method 1300 includes generating, via a digitizer of the computing system, a capacitive grid map including a capacitance value for each of a plurality of touch-sensing pixels of a capacitive touch-display. At 1304, the method 1300 includes receiving, at an operating system of the computing system directly from the digitizer, the capacitive grid map.
In some implementations, at 1306, the method 1300 optionally may include outputting the capacitive grid map from the operating system to one or more applications executed by the computing system.
In some implementations, at 1308, the method 1300 optionally may include presenting, via a capacitive touch-display, a user interface object. In some implementations, at 1310, the method 1300 optionally may include providing capacitive grid map data as input to a previously-trained, machine-learning analysis tool configured to classify portions of the capacitive grid map as specific types of touch input. In some implementations, at 1312, the method 1300 optionally may include adjusting, via the capacitive touch-display, presentation of a user interface object based on the capacitive grid map. In some implementations, at 1314, the method 1300 optionally may include adjusting, via the capacitive touch-display, presentation of a user interface object based on the specific types of touch input of the portions of the capacitive grid map output from the previously-trained, machine-learning analysis tool.
In some implementations, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), an OS framework, library, and/or other computer-program product.
Computing system 1400 includes a logic machine 1402 and a storage machine 1404. Computing system 1400 may optionally include a touch-display subsystem, touch input subsystem, communication subsystem, and/or other components not shown in FIG. 14.
Logic machine 1402 includes one or more physical devices configured to execute instructions. For example, the logic machine may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic machine may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic machine may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the logic machine may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic machine optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic machine may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage machine 1404 includes one or more physical devices configured to hold instructions executable by the logic machine to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage machine 1404 may be transformed—e.g., to hold different data.
Storage machine 1404 may include removable and/or built-in devices. Storage machine 1404 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage machine 1404 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that storage machine 1404 includes one or more physical devices. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration.
Aspects of logic machine 1402 and storage machine 1404 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The terms “module,” “program,” and “engine” may be used to describe an aspect of computing system 1400 implemented to perform a particular function. In some cases, a module, program, or engine may be instantiated via logic machine 1402 executing instructions held by storage machine 1404. It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms “module,” “program,” and “engine” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
It will be appreciated that a “service”, as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server-computing devices.
When included, the display subsystem may be used to present a visual representation of data held by storage machine 1404. This visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the storage machine, and thus transform the state of the storage machine, the state of the display subsystem may likewise be transformed to visually represent changes in the underlying data. The display subsystem may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic machine 1402 and/or storage machine 1404 in a shared enclosure, or such display devices may be peripheral display devices.
When included, the input subsystem may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, touch pad, or game controller. In some implementations, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
When included, the communication subsystem may be configured to communicatively couple computing system 1400 with one or more other computing devices. The communication subsystem may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some implementations, the communication subsystem may allow computing system 1400 to send and/or receive messages to and/or from other devices via a network such as the Internet.
In an example, a computing system comprises a capacitive touch-display including a plurality of touch-sensing pixels, a digitizer configured to generate a capacitive grid map including a capacitance value for each of the plurality of touch-sensing pixels, and an operating system configured to receive the capacitive grid map directly from the digitizer.

In this example and/or other examples, the plurality of touch-sensing pixels may include each touch-sensing pixel of the capacitive touch-display.

In this example and/or other examples, the plurality of touch-sensing pixels may include touch-sensing pixels having a capacitance value that is either less than a negative noise threshold or greater than a positive noise threshold.

In this example and/or other examples, the operating system may be configured to output the capacitive grid map from the operating system to one or more applications executed by the computing system.

In this example and/or other examples, the capacitive grid map may be defined by a data structure formatted in accordance with a human interface device (HID) format recognizable by the operating system, and the data structure may include an index pixel that identifies a first touch-sensing pixel in a sequence, a total number of touch-input pixels in the sequence, and a capacitance value for each touch-input pixel in the sequence (one possible serialization is sketched below).

In this example and/or other examples, the capacitive touch-display may be configured to present a user interface object, and the operating system may be configured to adjust, via the capacitive touch-display, presentation of the user interface object based on the capacitive grid map.

In this example and/or other examples, the operating system may be configured to provide capacitive grid map data as input to a previously-trained, machine-learning analysis tool configured to classify portions of the capacitive grid map as specific types of touch input and adjust presentation of the user interface object based on the specific types of touch input.

In this example and/or other examples, the operating system may be configured to identify a single finger touch input based on the capacitive grid map, recognize a rotation gesture based on the single finger touch input, determine a direction of rotation of the rotation gesture, and rotate the user interface object in the direction of rotation based on the rotation gesture.

In this example and/or other examples, the operating system may be configured to identify an intentional-touch portion and an unintentional-touch portion of the capacitive grid map, and adjust presentation of the user interface object to a position on the capacitive touch-display that is not occluded by the unintentional-touch portion.

In this example and/or other examples, the operating system may be configured to identify a finger touch input based on the capacitive grid map, determine a handedness of the finger touch input, and adjust presentation of the user interface object based on the handedness of the finger touch input.

In this example and/or other examples, the operating system may be configured to identify a grip hand that is gripping the capacitive touch-display based on the capacitive grid map, and adjust presentation of the user interface object based on the grip hand.
In this example and/or other examples, the operating system may be configured to identify a stylus-touch portion and a finger-touch portion of the capacitive grid map, adjust presentation of the user interface object based on the stylus-touch portion and adjust presentation of the user interface object differently based on the finger-touch portion.
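As referenced above, the run-length style HID data structure (an index pixel identifying the first touch-sensing pixel in a sequence, a count of touch-input pixels, then one capacitance value per pixel) might be serialized as in the following Python sketch. The field widths, byte order, and function names are assumptions for illustration and are not the actual HID descriptor.

```python
import struct

def pack_sequence(index_pixel, values):
    """Pack one sequence: a 16-bit index of the first touch-sensing pixel,
    a 16-bit count of touch-input pixels, then one 8-bit capacitance value
    per pixel in the sequence."""
    return struct.pack(f"<HH{len(values)}B", index_pixel, len(values), *values)

def unpack_sequence(report):
    """Recover (index_pixel, values) from a packed sequence."""
    index_pixel, count = struct.unpack_from("<HH", report, 0)
    values = list(struct.unpack_from(f"<{count}B", report, 4))
    return index_pixel, values

# Round-trip example: five touch-input pixels starting at flattened index 1037.
blob = pack_sequence(1037, [210, 245, 255, 240, 198])
print(unpack_sequence(blob))   # -> (1037, [210, 245, 255, 240, 198])
```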
In an example, a method for controlling operation of a computing system comprises generating, via a digitizer of the computing system, a capacitive grid map including a capacitance value for each of a plurality of touch-sensing pixels of a capacitive touch-display, and receiving, at an operating system of the computing system directly from the digitizer, the capacitive grid map. In this example and/or other examples, the method may further comprise presenting, via the capacitive touch-display, a user interface object, and adjusting, via the capacitive touch-display, presentation of the user interface object based on the capacitive grid map. In this example and/or other examples, the method may further comprise providing capacitive grid map data as input to a previously-trained, machine-learning analysis tool configured to classify portions of the capacitive grid map as specific types of touch input, and adjusting, via the capacitive touch-display, presentation of the user interface object based on the specific types of touch input. In this example and/or other examples, the method may further comprise identifying, via the operating system, an intentional-touch portion and an unintentional-touch portion of the capacitive grid map, and adjusting, via the capacitive touch-display, presentation of the user interface object based on the capacitive grid map such that a position of the user interface object does not overlap with the unintentional-touch portion on the capacitive touch-display. In this example and/or other examples, the method may further comprise identifying a stylus-touch portion and a finger-touch portion of the capacitive grid map, adjusting, via the capacitive touch-display, presentation of the user interface object based on the stylus-touch portion, and adjusting, via the capacitive touch-display, presentation of the user interface object differently based on the finger-touch portion.
In an example, a computing system, comprises a capacitive touch-display including a plurality of touch-sensing pixels, a digitizer configured to generate a capacitive grid map including a capacitance value for each of the plurality of touch-sensing pixels, and an operating system configured to receive the capacitive grid map directly from the digitizer, identify an intentional-touch portion and an unintentional-touch portion of the capacitive grid map, and present, via the capacitive touch-display, a user interface object based on the intentional-touch portion such that a position of the user interface object does not overlap with the unintentional-touch portion on the capacitive touch-display. In this example and/or other examples, the operating system may be configured to provide capacitive grid map data as input to a previously-trained, machine-learning analysis tool configured to classify portions of the capacitive grid map as the unintentional-touch portion and the intentional-touch portion. In this example and/or other examples, the operating system may be configured to identify a stylus-touch portion and a finger-touch portion of the capacitive grid map, adjust presentation of the user interface object based on the stylus-touch portion and adjust presentation of the user interface object differently based on the finger-touch portion.
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific implementations or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
This application claims priority to U.S. Provisional Patent Application No. 62/399,224, filed Sep. 23, 2016, the entirety of which is hereby incorporated herein by reference.