The touch-screen display is a state-of-the-art user-interface (UI) modality for various electronic devices. Touch-screen display technology may employ resistive, capacitive, or optical touch sensing, for example. Of these variants, capacitive touch sensing is especially suitable for multi-touch tracking on modern liquid-crystal and organic light-emitting diode (OLED) displays. A capacitive touch screen reliably tracks contact from one or more fingers of a user or from a stylus held in the user's hand. In contrast to a passive stylus, which mimics the capacitive coupling of the user's finger on the touch screen, an active stylus employs active charge-sensing and charge-injection to reduce latency and enable more precise sensing of the touch point. No matter how precisely the touch point is sensed, however, optical parallax that the user experiences on sighting the tip of a stylus may create an illusion of tracking error. This effect may be frustrating to the user and may degrade the overall UI experience.
This disclosure is directed in part to a touch-sensing display system comprising a contactable display surface in addition to touch-screen, pupil-estimation, user-pointer, and display logic. The touch-screen logic is configured to sense normal coordinates directly behind a point of user contact on the contactable display surface. The pupil-estimation logic is configured to estimate a vantage vector pointing from a vantage point of the user and through the point of user contact. The user-pointer logic is configured to compute adjusted coordinates of the contactable display surface, the adjusted coordinates being shifted from the normal coordinates based on the estimated vantage vector. The display logic is configured to render a visible feature on the contactable display surface at the adjusted coordinates.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Positioning the tip of a ballpoint pen at a predetermined location on a piece of paper is trivial for most people. In contrast, placing a stylus tip at a predetermined location on a touch-screen display is not trivial, for the intended point of contact is often missed. As noted above, error in positioning the stylus tip on the touch-screen display may be perceived by the user as a precision defect of the display. In a well-calibrated system, however, most of the positioning error may actually be due to optical parallax that the user perceives because the light-emitting plane of the display is separated from the contactable display surface in front of it. Described herein is a series of approaches to remedy the problem of optical parallax for touch-screen display users. The remedy is intended to provide a more satisfying and intuitive UI experience, akin to touching pen to paper.
Aspects of this disclosure will now be described by example, and with reference to the drawing figures listed above. Components, process steps, and other elements that may be substantially the same in one or more embodiments are identified coordinately and described with minimal repetition. It will be noted, however, that elements identified coordinately may also differ to some degree. It will be further noted that the drawing figures are schematic and generally not drawn to scale. Rather, the various drawing scales, aspect ratios, and numbers of components shown in the figures may be purposely distorted to make certain features or relationships easier to see.
Prior to addressing the problem of optical parallax on a touch-screen display device, an example touch-sensing display system will first be described. It should be understood, however, that the solutions presented herein are equally applicable to various other touch-sensing display systems.
In the embodiment of
Column-sense logic 36 includes M column amplifiers, each coupled to a corresponding column electrode 30. Row-driver logic 34 includes a row counter 38 in the form of an N-bit shift register with outputs driving each of N row electrodes 28. The row counter is clocked by row-driver clock 40. The row counter includes a blanking input to temporarily force all output values to zero independent of the values stored. Excitation of one or many rows may be provided by filling the row counter with ones at every output to be excited, and zeroes elsewhere, and then toggling the blanking signal with the desired modulation from modulation clock 42. In the illustrated embodiment, the output voltage may take on only two values, corresponding to the one or zero held in each bit of the row counter; in other embodiments, the output voltage may take on a greater range of values, to reduce the harmonic content of the output waveforms, or to decrease radiated emissions, for example.
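By way of a non-limiting illustration, the following Python sketch models the row-excitation scheme described above: a shift register is loaded with ones at the outputs to be excited, and a blanking input imposes the modulation. The class and method names are illustrative assumptions rather than elements of row-driver logic 34.

```python
# Minimal Python model of the row-driver excitation scheme described above.
# Class and method names (RowDriver, load_pattern, modulate) are illustrative
# assumptions, not element names from this disclosure.

class RowDriver:
    def __init__(self, n_rows):
        self.n_rows = n_rows
        self.register = [0] * n_rows   # N-bit row counter (shift register)
        self.blanking = False          # blanking input forces all outputs to zero

    def load_pattern(self, excited_rows):
        """Fill the row counter with ones at every output to be excited."""
        self.register = [1 if r in excited_rows else 0 for r in range(self.n_rows)]

    def outputs(self):
        """Row-electrode drive values for the current clock period."""
        if self.blanking:
            return [0] * self.n_rows
        return list(self.register)

    def modulate(self, n_periods):
        """Toggle the blanking signal to impose the modulation clock."""
        waveform = []
        for period in range(n_periods):
            self.blanking = (period % 2 == 1)
            waveform.append(self.outputs())
        return waveform


driver = RowDriver(n_rows=8)
driver.load_pattern(excited_rows={3})      # excite only row 3
print(driver.modulate(n_periods=4))        # row 3 toggles with the modulation clock
```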
Row-driver logic 34 applies an excitation pulse to each row electrode 28 in sequence. During a period in which contactable display surface 26 is untouched, none of the column amplifiers registers an above-threshold output. However, when the user places a fingertip on the contactable display surface, the fingertip capacitively couples one or more row electrodes 28 intersecting the touch point 22 to one or more column electrodes 30 also intersecting the touch point. The capacitive coupling induces an above-threshold signal from the column amplifiers associated with the column electrodes beneath (i.e., adjacent) the touch point, which provides sensing of the touch point. Column-sense logic 36 returns, as the X coordinate of the touch point, the numeric value of the column providing the greatest signal. The touch-screen logic also determines which row was being excited when the greatest signal was received, and returns the numeric value of that row as the Y coordinate of the touch point.
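The coordinate read-out described above may be illustrated with the following Python sketch, in which the X coordinate is taken as the column producing the greatest above-threshold signal and the Y coordinate as the row being excited when that signal was received. The nested-list signal matrix stands in for the column-amplifier outputs and is an assumption made for illustration only.

```python
# Hedged sketch of the coordinate read-out described above. signal[y][x] holds the
# column-amplifier output observed for column x while row y is excited.

def locate_touch(signal, threshold):
    """Return (x, y) of the strongest above-threshold response, or None if untouched."""
    best = None
    for y, row in enumerate(signal):
        for x, value in enumerate(row):
            if value > threshold and (best is None or value > best[0]):
                best = (value, x, y)
    if best is None:
        return None           # contactable display surface is untouched
    _, x, y = best
    return x, y

# Example frame: a touch near column 2, row 1 yields an above-threshold signal there.
frame = [[0, 1, 2, 0],
         [0, 3, 9, 1],
         [0, 0, 2, 0]]
print(locate_touch(frame, threshold=5))    # -> (2, 1)
```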
In some examples, a passive stylus having a tip of relatively high dielectric-constant material may be used in lieu of the user's fingertip to capacitively couple row and column electrodes under the touch point. A passive stylus may provide better touch accuracy than the fingertip, and may prevent smudging of the display by the fingertip. Instead of a passive stylus, however, touch-screen display device 10 may be associated with an active stylus 44, as shown in
Active stylus 44 provides advantages above and beyond those of a passive stylus. For instance, the tip 46 of the active stylus may be very small in comparison to a fingertip. The smaller size of the tip allows the user to more precisely position the touch point on the touch screen. Moreover, the active stylus supports a faster and more accurate mode of touch sensing, as described further below.
Active stylus 44 includes a probe electrode 48 at tip 46. The probe electrode is operatively coupled to associated sensory logic 50 and injection logic 52. The sensory and injection logic are operatively coupled to, and may be embodied partially within, microprocessor 16′. Configured for digital signal processing (DSP), microprocessor 16′ is operatively coupled to associated computer memory 18′, as described further below. Sensory logic 50 includes linear analog componentry configured to maintain the probe electrode 48 at a constant voltage and convert any current into or out of the probe electrode 48 into a proportional current-sense voltage. The sensory logic includes an analog-to-digital (A/D) converter 54 that converts the current-sense voltage into digital data to facilitate subsequent processing. In one embodiment, the current-sense voltage may have a bandwidth of approximately 100 kHz, and may be A/D-converted at a sampling rate of 1 million samples per second (MS/s).
Instead of capacitively coupling row and column electrodes of touch screen 12 via a dielectric, sensory logic 50 of active stylus 44 senses the arrival of an excitation pulse from row electrode 28, beneath (i.e., adjacent) touch point 22, and in response, injects charge into column electrode 30, also beneath the touch point 22. To this end, the active stylus 44 includes injection logic 52 associated with the probe electrode 48 and configured to control charge injection from the probe electrode 48 to the column electrode directly beneath (i.e., adjacent) the probe electrode. The injected charge appears, to column-sense logic 36 of the touch screen, similar to an electrostatic pulse delivered via capacitive coupling of the column electrode 30 to an energized row electrode 28 intersecting at touch point 22. In some embodiments, accordingly, the touch-screen logic is not limited to touch-screen display device 10, but extends also to the active stylus.
In some embodiments, sensory logic 50 and injection logic 52 are active during non-overlapping time windows of each touch-sensing frame, so that charge injection and charge sensing may be enacted at the same probe electrode 48. In this embodiment, touch-screen logic 32 excites the series of row electrodes 28 during the time window in which the sensory logic is active, but suspends row excitation during the time window in which the active stylus 44 may inject charge. This strategy provides an additional advantage, in that it enables touch-screen logic 32 to distinguish touch points effected by active stylus 44 from touch points effected by a fingertip or palm. If column-sense logic 36 detects charge from a column electrode 30 during the charge-injection time window of the active stylus 44 (when none of the row electrodes 28 are excited), then touch point 22 detected must be a touch point of the active stylus. However, if the column-sense logic detects charge during the charge-sensing window of the active stylus (when row electrodes 28 are being excited), then the touch point detected may be a touch point of a fingertip, hand, or passive stylus, for example.
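The window-based discrimination described above may be illustrated with a short Python sketch, under the assumption that each detected charge event is tagged with whether the row electrodes were being excited at the time of detection. The function name and the returned labels are illustrative assumptions.

```python
# Sketch of the window-based discrimination described above: charge detected while
# row excitation is suspended can only have been injected by the active stylus.

def classify_touch(detected_during_row_excitation):
    """Classify a detected charge event by the time window in which it arrived."""
    if detected_during_row_excitation:
        # Rows were being excited: capacitive coupling from a fingertip, palm,
        # or passive stylus is the likely source.
        return "finger, palm, or passive stylus"
    # Rows were quiescent (charge-injection window): only the active stylus injects charge.
    return "active stylus"

print(classify_touch(detected_during_row_excitation=False))   # -> active stylus
print(classify_touch(detected_during_row_excitation=True))    # -> finger, palm, or passive stylus
```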
Active sensing followed by charge injection enables a touch point 22 of a very small area to be located precisely, and without requiring long integration times that would increase the latency of touch sensing. For example, when receiving the signal from row electrode 28, the active stylus 44 may inject a charge pulse with amplitude proportional to the received signal strength. Thus, touch sensor 56 may receive the electrostatic signal from active stylus 44 and calculate the Y coordinate, which may be the row providing the greatest signal from the active stylus, or a function of the signals received at that row and adjacent rows. Nevertheless, this approach introduces various challenges. The major challenge is that the sensory logic 50 and injection logic 52 must operate simultaneously. Accordingly, probe electrode 48 may operate in full-duplex mode. Various methods, for example code-division or frequency-division multiple access, may be applied to cancel the strong interference that the transmitting direction imposes on the receiving direction. The touch sensor may be required to receive two signals simultaneously (one from the row electrode 28, and the other from the stylus probe electrode 48). The system may also work by time-division, but at a cost in available integration time.
One solution to the above problem requires active stylus 44 to assume a more active role in determining the touch point coordinates. In the illustrated embodiment, sensory logic 50 of the active stylus 44 includes a local row counter 58, which is maintained in synchronization with row counter 38 (hereinafter, the remote row counter) of touch-screen logic 32. This feature gives the active stylus and the touch screen a shared sense of time, but without being wired together. In some embodiments, the local row counter 58 may be embodied as discrete hardware—e.g., a clocked register having a series of interconnected flip flops as described above. In other embodiments, the local row counter 58 may be embodied as a register within microprocessor 16′ of the touch-screen logic, or as a data structure held in computer memory 18′ associated with microprocessor 16′.
When probe electrode 48 touches contactable display surface 26 of touch screen 12, sensory logic 50 receives a waveform that lasts as long as the touch is maintained. The waveform acquires maximum amplitude at the moment in time when row electrode 28, directly beneath (i.e., adjacent) the probe electrode 48, has been energized. Sensory logic 50 is configured to sample the waveform at each increment of the local row counter 58 and determine when the maximum amplitude was sensed. This determination can be made once per frame, for example.
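By way of a non-limiting illustration, the following Python sketch shows how the stylus-side determination described above may reduce to reporting the local row-counter value at which the maximum amplitude was sampled in a frame. The variable names and the sample values are assumptions made for illustration only.

```python
# Minimal sketch of the Y-coordinate determination described above: the stylus samples
# its received waveform once per increment of the local row counter and reports the
# counter state at which the maximum amplitude was sensed.

def row_at_max_amplitude(samples):
    """samples[i] is the amplitude sensed while the local row counter holds value i."""
    best_row, best_amplitude = None, float("-inf")
    for row, amplitude in enumerate(samples):
        if amplitude > best_amplitude:
            best_row, best_amplitude = row, amplitude
    return best_row

# One frame of samples: row 5 was energized directly beneath the probe electrode.
frame_samples = [0.1, 0.2, 0.3, 0.9, 2.4, 6.7, 2.1, 0.4]
print(row_at_max_amplitude(frame_samples))   # -> 5 (the Y coordinate to report back)
```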
Because active stylus 44 and touch screen 12 enjoy a shared sense of timing (having synchronized row counters 38 and 58), the local row-counter 58 state at maximum sensed amplitude reports directly on the row coordinate—i.e., the Y coordinate—of touch point 22. In order to make use of this information, the Y coordinate must be communicated back to touch-screen logic 32. To this end, the active stylus includes communication componentry configured to wirelessly communicate the computed row coordinate to the touch-screen logic. This disclosure embraces various modes of communicating data, including the Y coordinate, from the active stylus to the touch screen.
The foregoing description of active stylus 44 is not intended to be limiting in any sense, for numerous variations, extensions, and omissions are also envisaged. For instance, a different type of active stylus may be configured to transmit charge pulses, but without the sensory logic referenced above. In still other examples, where some positioning uncertainty can be tolerated, a passive stylus may be used.
As noted above, the problem addressed herein is the optical parallax that a touch-screen display device user experiences on sighting the tip of a stylus on the contactable display surface of a touch screen. In general, the positioning error due to the optical parallax depends on the vantage point from which the stylus tip is sighted; it increases with increasing distance between the light-emissive structure and the contactable front surface of the touch screen—i.e., the thickness of refractive layers 84 in
Making the thickness T as small as possible—by using thinner cover glass 80, a thinner touch sensor, etc.—will reduce the error amount Δ. However, there is a practical lower limit to the thickness of the display stack due to manufacturing constraints and the need for a mechanically stable and robust contactable surface.
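By way of a non-limiting illustration, the following Python sketch estimates the lateral parallax shift Δ noted above for a simplified display stack modeled as a single refractive layer of thickness T and index n, sighted at a given angle from the surface normal. The function name, the single-layer simplification, and the numeric values are assumptions made for illustration only; they are not elements of the display stack described in this disclosure.

```python
# Geometric sketch of the parallax error: the user's sight line refracts at the
# contactable display surface and travels laterally through the stack before reaching
# the light-emitting pixel surface.

import math

def parallax_shift(thickness_mm, refractive_index, viewing_angle_deg):
    """Lateral offset (mm) between the touch point on the cover surface and the pixel
    that appears, to the user, to lie directly under the stylus tip."""
    theta_air = math.radians(viewing_angle_deg)
    # Snell's law: the sight line bends toward the normal inside the display stack.
    theta_glass = math.asin(math.sin(theta_air) / refractive_index)
    # Inside the stack the ray travels laterally by T * tan(theta_glass).
    return thickness_mm * math.tan(theta_glass)

# Example: a 1.5 mm stack of index 1.5 viewed 30 degrees off normal shifts the
# apparent position by roughly half a millimetre.
print(round(parallax_shift(1.5, 1.5, 30.0), 3))
```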
If the user's pupillary positions are known relative to the display coordinates, then the parallax error can be estimated and corrected by appropriate adjustment of the sensed normal coordinate X1. The quantitative estimate of the correction Δ is based on the geometry and refractive indices of the display stack, the location of stylus tip 46 on contactable display surface 26, and the vantage point from which the stylus tip is sighted. In the scenario illustrated in
In order to operationally determine the user's vantage point, touch-screen display device 10 of
Returning briefly to
Despite the benefit of accurate estimation of the pupil positions, user-facing camera 88 may be omitted in some embodiments. Pupil-estimation logic 108 may then estimate the pupil positions based on a series of heuristics. For example, the user may be expected to view the display screen from a side opposite to the side that the operating system of computer 14 renders as the top. In addition, the palm location relative to the tip location can be used to predict the likely vantage point of the user. For example, the user may be expected to view the display screen from the side of the tip which is opposite to the side where the palm is located. This scenario is illustrated in
In some embodiments, touch-screen display device 10 may include non-user-imaging sensory components to increase the reliability of the heuristic analysis outlined above. These components are configured to enable the touch-screen display device to reckon its position and orientation. In the embodiment shown in
In some examples, the tilt of touch-screen display device 10 may be ascertained via the accelerometer of IMU 110 and/or image data from world-facing camera 43. With the same grip on a touch-screen display device 10, the user in different postures will have different head positions, depending on whether she is seated, standing, or lying down. The georelative orientation of the touch-screen display device may narrow down the user's posture, which in turn will enable a more accurate estimate of the vantage vector. Such data may provide a high-confidence indicator of the direction in which the user is located, but does not indicate how far away the eyes are. To supply the missing eye-to-screen distance, an estimate based on a statistical model may be used. Alternatively, a calibration routine (vide infra) may be enacted separately for each user of the touch-screen display device, to further increase the pupil-estimation accuracy.
Continuing now in
At 124 of method 122 a user-pointer positioning error is identified heuristically, based on the user's interaction with the touch screen. In some examples, the user-pointer positioning error may include error in connecting inferentially connectable line segments drawn on the contactable display surface. In a handwriting recognition app, for instance, a user that has just drawn the two vertical lines of the capital letter ‘H’ may attempt to target but miss the first vertical line in attempting to draw the horizontal crossbar of the ‘H’. This type of error can be recognized by the user-pointer logic. Analogous scenarios are envisaged for users attempting to cross a ‘T’ in the handwriting recognition app, or to press a radio button or check a checkbox of a graphical user interface (GUI) presented on the touch-screen display.
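By way of a non-limiting illustration, the following Python sketch detects one such near-miss: a stroke that ends close to, but not on, the feature it was evidently meant to join, the shortfall being taken as an observed user-pointer positioning error. The near-miss threshold and the helper names are illustrative assumptions, not disclosed values.

```python
# Sketch of the heuristic described above: if a newly drawn stroke stops just short of
# an existing stroke, the miss distance can be taken as an observed positioning error.

import math

def positioning_error(stroke_end, target_points, near_miss_mm=3.0):
    """Return the (dx, dy) miss from stroke_end to the nearest target point, or None
    if the stroke either connected cleanly or was not aiming at any target."""
    nearest = min(target_points, key=lambda p: math.dist(stroke_end, p))
    dx, dy = nearest[0] - stroke_end[0], nearest[1] - stroke_end[1]
    if 0.0 < math.hypot(dx, dy) <= near_miss_mm:
        return dx, dy
    return None

# The crossbar of an 'H' stops 1.5 mm short of the first vertical line.
print(positioning_error(stroke_end=(50.0, 20.0), target_points=[(48.5, 20.0), (62.0, 20.0)]))
```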
At 126 of method 122, the dominant eye of the user is identified. In some examples, identifying the dominant eye may include identifying based on the user-pointer positioning error (at 124). For instance, the magnitude of the user-pointer positioning error may be greater in regions of the display that are farther from the dominant eye. In some examples, identifying the dominant eye may include presenting on the display a user-interface element specifically configured to test which eye is dominant. Presented during a calibration phase of the touch-sensing display system, the user-interface element may include a graphic that the user is obliged to view. The user-interface element may further include a query element that queries the user's perception of the graphic. The user's response received via the query element enables the user-pointer logic to determine which eye is dominant.
At 128 a user of the touch-sensing display system is identified from among a plurality of potential users of the touch-sensing display system. The user may be identified based on the user profile currently being accessed via the operating system of the touch-sensing display device. At 130 a parameter value 131 based on the identified user is stored by the pupil-estimation logic.
At 132 various forms of user contact on the contactable display surface of the touch-sensing display system are sensed by the touch-sensing logic. The user contact may include contact with the user's finger or stylus, whether active or passive. At a minimum, the sensed user contact includes normal coordinates directly behind a point of user contact on the contactable display surface. In examples that include presentation of a UI element to determine which eye is dominant (at 126, above), the sensed user contact may include contact made in response to presentation of the afore-mentioned UI element. In some examples, the forms of user contact sensed at 132 may include the locus of palm contact on the contactable display surface.
At 134 a vantage vector pointing from a vantage point of the user and through the point of user contact is estimated. In embodiments in which the dominant eye of the user is identified, the estimated vantage vector may be a vector that passes specifically through the dominant eye of the user. In other embodiments, the estimated vantage vector may pass through an identified or inferred location between the eyes, on the interocular axis, or at the head of the user.
In some embodiments, the vantage vector may be estimated heuristically, based on the region of the user contact on the contactable display surface. If the sensed user contact includes a locus of user palm contact, then the vantage vector may be estimated based on the locus of user palm contact. In systems having an orientation sensor responsive to an orientation of the contactable display surface, the vantage vector may be estimated based on output of the orientation sensor. In systems having a camera, the pupil-estimation logic may be configured to estimate the vantage vector based on output of the camera. As noted above, the camera may be a user-facing camera that actually images the user's eyes or head, or a world-facing camera providing image data from which the touch-sensing display device can reckon its position and orientation.
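By way of a non-limiting illustration, the following Python sketch combines two of the heuristics noted above, placing an assumed vantage point on the side of the tip opposite the palm at a default eye-to-screen distance, and returning a unit vantage vector through the point of contact. The default distance, the head-offset factor, and the coordinate conventions are assumptions made for illustration only.

```python
# Hedged sketch of a heuristic vantage-vector estimate: the vantage point is assumed to
# lie beyond the stylus tip, opposite the palm, at a default eye-to-screen distance.

import math

def estimate_vantage_vector(tip_xy, palm_xy, eye_distance_mm=350.0):
    """Return a unit vector pointing from an assumed vantage point toward the tip.
    Coordinates are in mm on the contactable display surface; +z is out of the screen."""
    # Direction from palm toward tip, extended past the tip to place the head.
    dx, dy = tip_xy[0] - palm_xy[0], tip_xy[1] - palm_xy[1]
    norm = math.hypot(dx, dy) or 1.0
    head_x = tip_xy[0] + (dx / norm) * eye_distance_mm * 0.5
    head_y = tip_xy[1] + (dy / norm) * eye_distance_mm * 0.5
    vantage = (head_x, head_y, eye_distance_mm)
    # Vantage vector points from the vantage point through the point of contact (z = 0).
    vx, vy, vz = tip_xy[0] - vantage[0], tip_xy[1] - vantage[1], -vantage[2]
    length = math.sqrt(vx * vx + vy * vy + vz * vz)
    return (vx / length, vy / length, vz / length)

print(estimate_vantage_vector(tip_xy=(120.0, 80.0), palm_xy=(140.0, 40.0)))
```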
In embodiments in which a user-pointer positioning error is identified (at 124) the vantage vector may be estimated further based on the user-pointer positioning error. In embodiments that include identification of the user (at 128), the vantage vector may be estimated further based on the identity of the user. In embodiments in which a parameter value based on the identified user is stored (at 130), the vantage vector may be estimated further based on the stored parameter value.
In one example, the pupil-estimation logic may provide a larger eye-to-screen distance for adult users than for children. In other examples, different users may tend to hold the touch-screen display device differently. One user may use a two-handed grip to hold a tablet in front of himself, with arms resting on a table; a different user may use the same grip to hold the device on his lap, providing a lower angle of observation (relative to the display surface normal). The pupil-estimation logic may intuit these differences based on the user identity. In cases where there is no stored profile linked to the current user, generic metrics from a generic profile may be applied.
At 136 adjusted coordinates of user touch on the contactable display surface are computed. The adjusted coordinates are coordinates shifted from the normal coordinates based on the estimated vantage vector. In some embodiments, the adjusted coordinates are coordinates of intersection of the pixel surface and the estimated vantage vector upon refraction through the display layer. In embodiments in which the user-pointer positioning error is determined, the adjusted coordinates may be coordinates chosen to null the positioning error.
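By way of a non-limiting illustration, the following Python sketch computes adjusted coordinates under the same single-layer stack simplification used earlier: the normal coordinates are shifted, along the lateral direction of the vantage vector, by the lateral offset the sight line accumulates in refracting through the stack. The parameter defaults and names are illustrative assumptions.

```python
# Sketch of the coordinate adjustment: shift the rendered feature from the normal
# coordinates toward where the refracted vantage ray actually meets the pixel surface.

import math

def adjusted_coordinates(normal_xy, vantage_vector, thickness_mm=1.5, refractive_index=1.5):
    vx, vy, vz = vantage_vector            # unit vector pointing toward the display (vz < 0)
    lateral = math.hypot(vx, vy)
    if lateral == 0.0:                     # viewing along the normal: no parallax
        return normal_xy
    theta_air = math.atan2(lateral, abs(vz))
    theta_glass = math.asin(math.sin(theta_air) / refractive_index)
    shift = thickness_mm * math.tan(theta_glass)
    # Continue along the lateral direction of the sight line, into the stack.
    return (normal_xy[0] + (vx / lateral) * shift,
            normal_xy[1] + (vy / lateral) * shift)

print(adjusted_coordinates((120.0, 80.0), (0.26, 0.16, -0.95)))
```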
At 138 a visible feature on the contactable display surface is rendered at the adjusted coordinates. The visible feature may include an ink mark representing some portion of a drawing object, and/or a cursor element of a graphical user interface.
No aspect of the description above should be interpreted in a limiting sense, for numerous variations and departures are contemplated as well. For instance, although the computed positioning error Δ may be used to offset user-pointer coordinates, as described above, it may alternatively be applied in reverse to the coordinates of any object presented on the display.
Furthermore, at least some correction to the normal coordinates sensed by the touch-screen logic may be enacted prior to contact of the stylus tip, or without detailed knowledge of the point of contact. In some embodiments, a vantage vector originating at the user's pupils and terminating at any point on the contactable display surface (e.g., the current cursor location, the location of a newly rendered graphic, the midpoint of the display screen, etc.) may be used in lieu of the vantage vector described above. In general, an observed, estimated or predicted pupil position relative to the display may be used to preemptively compute and correct for positioning error Δ in all regions of the touch-screen.
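By way of a non-limiting illustration, the following Python sketch precomputes such corrections over a coarse grid of display points for a given pupil-position estimate, again under the single-layer stack simplification; the per-point offsets could then be interpolated for any subsequent touch location. The grid spacing, default stack parameters, and names are assumptions made for illustration only.

```python
# Sketch of a preemptive, whole-screen correction map computed from an estimated pupil
# position, before any particular point of contact is known.

import math

def correction_map(pupil_xyz, width_mm, height_mm, step_mm=20.0,
                   thickness_mm=1.5, refractive_index=1.5):
    corrections = {}
    px, py, pz = pupil_xyz
    y = 0.0
    while y <= height_mm:
        x = 0.0
        while x <= width_mm:
            dx, dy, dz = x - px, y - py, -pz          # sight line from pupil to (x, y, 0)
            lateral = math.hypot(dx, dy)
            theta_air = math.atan2(lateral, abs(dz))
            theta_glass = math.asin(math.sin(theta_air) / refractive_index)
            shift = thickness_mm * math.tan(theta_glass)
            if lateral > 0.0:
                corrections[(x, y)] = ((dx / lateral) * shift, (dy / lateral) * shift)
            else:
                corrections[(x, y)] = (0.0, 0.0)
            x += step_mm
        y += step_mm
    return corrections

table = correction_map(pupil_xyz=(150.0, -100.0, 400.0), width_mm=200.0, height_mm=120.0)
print(table[(0.0, 0.0)])    # correction near the display corner farthest off-axis
```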
At 134′ of method 140, a vantage vector of the user is estimated, substantially as described hereinabove. The vantage vector is a vector pointing from a vantage point of the user and through an observable point on the contactable display surface, which may or may not be a point that the user has already touched with the stylus.
At 142 the positioning error due to parallax at the observable point on the contactable display surface is computed based on the estimated vantage vector. Naturally, vantage vectors of different angles may be estimated for different observable points, points of contact, etc., leading to different computed positioning errors and adjusted coordinates. With reference again to
At 138′ a visible feature is rendered on the contactable display surface at coordinates adjusted according to the positioning error. Typically the visible feature may be a cursor or new ink mark deposited on the display surface in response to user touch. For example, when a stylus contacts a point on the contactable display surface, a pixel offset from the point of contact by the positioning error is used to render an ink mark to avoid the parallax error that would occur if the pixel directly behind the point of contact were used to render the ink mark. In some implementations, this effect can be achieved by shifting the touch sense locations relative to the display pixel locations across the entirety of the display. Conversely, it is also envisaged that a preexisting display image may be shifted and/or transformed based on the computed positioning error, so that subsequent touch points are accurately registered to the display image.
As noted above, the methods and processes described herein may be tied to a computer system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-system application program or service, an application-programming interface (API), a library, and/or other computer-system program product.
Processor 16 includes one or more physical devices configured to execute instructions. For example, the processor may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
Processor 16 may be one of a plurality of processors configured to execute software instructions. Additionally or alternatively, the processor may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of device 10 may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the computer system optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the computer system may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Electronic memory 18 includes one or more physical devices configured to hold instructions executable by processor 16 to implement the methods and processes described herein. When such methods and processes are implemented, the state of electronic memory 18 may be transformed—e.g., to hold different data.
Electronic memory 18 may include removable and/or built-in devices. Electronic memory 18 may include semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Electronic memory 18 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that electronic memory 18 includes one or more physical devices. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration.
Aspects of processor 16 and electronic memory 18 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The terms ‘module,’ ‘program,’ and ‘engine’ may be used to describe an aspect of device 10 implemented to perform a particular function. In some cases, a module, program, or engine may be instantiated via processor 16 executing instructions held by electronic memory 18. It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms ‘module,’ ‘program,’ and ‘engine’ may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
It will be appreciated that a ‘service’, as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server-computing devices.
Display 20 may be used to present a visual representation of data held by electronic memory 18. This visual representation may take the form of a graphical user interface (GUI). As the herein-described methods and processes change the data held by electronic memory 18, and thus transform the state of the electronic memory, the state of display 20 may likewise be transformed to visually represent changes in the underlying data. Display 20 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with processor 16 and/or electronic memory 18 in a shared enclosure, or such display devices may be peripheral display devices.
In addition to touch screen 12, the input subsystem may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition.
A communication subsystem may be configured to communicatively couple device 10 with one or more other computing devices. The communication subsystem may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow device 10 to send and/or receive messages to and/or from other devices via a network such as the Internet.
One aspect of this disclosure is directed to a touch-sensing display system comprising a contactable display surface operatively coupled to touch-screen, pupil-estimation, user-pointer, and display logic. The touch-screen logic is configured to sense normal coordinates directly behind a point of user contact on the contactable display surface. The pupil-estimation logic is configured to estimate a vantage vector pointing from a vantage point of the user and through the point of user contact. The user-pointer logic is configured to compute adjusted coordinates of the contactable display surface, the adjusted coordinates being shifted from the normal coordinates based on the estimated vantage vector. The display logic is configured to render a visible feature on the contactable display surface at the adjusted coordinates.
In some implementations, the touch-sensing display system further comprises an active stylus. In some implementations, the touch-screen logic is arranged in the active stylus. In some implementations, the contactable display surface is an outer surface of a display stack having a light-releasing pixel surface set behind the contactable display surface, and the adjusted coordinates are coordinates of intersection of the pixel surface and the estimated vantage vector upon refraction through the display stack. In some implementations, the pupil-estimation logic is configured to estimate the vantage vector pointing from a dominant eye of the user through the point of user contact. In some implementations, the pupil-estimation logic is configured to estimate the vantage vector heuristically, based on the region of the user contact on the contactable display surface. In some implementations, the user contact includes a locus of user palm contact, and the pupil-estimation logic is configured to estimate the vantage vector based on the locus of user palm contact. In some implementations, the touch-sensing display system further comprises an orientation sensor to measure an orientation of the contactable display surface, and the pupil-estimation logic is configured to estimate the vantage vector based on output of the orientation sensor. In some implementations, the touch-sensing display system further comprises a camera, wherein the pupil-estimation logic is configured to estimate the vantage vector based on output of the camera. In some implementations, the visible feature includes a virtual ink mark. In some implementations, the visible feature includes a cursor.
Another aspect of this disclosure is directed to a method enacted in a touch-sensing display system having a contactable display surface as an outer surface of a display stack, with a light-releasing pixel surface set behind the contactable display surface. The method comprises estimating a vantage vector pointing from a vantage point of the user and through an observable point on the contactable display surface; computing a positioning error due to parallax at the observable point, the positioning error being a distance travelled laterally, across the display stack, by a ray originating at the light-releasing pixel surface and refracting out from the contactable display surface at the observable point before continuing along the vantage vector; and rendering a visible feature on the contactable display surface at coordinates adjusted according to the positioning error.
In some implementations, the method further comprises sensing normal coordinates directly behind the point of user contact. In some implementations, the method further comprises identifying a user of the touch-sensing display system, wherein the vantage vector is estimated based on an identity of the user. In some implementations, the method further comprises storing a parameter value influencing estimation of the vantage vector for each of a plurality of users of the touch-sensing display system, and retrieving the parameter value for the identified user. In some implementations, the method further comprises identifying a dominant eye of the user, wherein the estimated vantage vector passes through the dominant eye. In some implementations, identifying the dominant eye includes identifying based on the positioning error. In some implementations, identifying the dominant eye includes presenting a user-interface element on the contactable display surface, the method further comprising sensing user contact made in response to presentation of the user-interface element.
Another aspect of this disclosure is directed to a user-pointer adjustment method enacted in a touch-sensing display system having a contactable display surface. The method comprises sensing normal coordinates directly behind a point of user contact on the contactable display surface; identifying a user-pointer positioning error; estimating a vantage vector pointing from a vantage point of the user and through the point of user contact, responsive to the user-pointer positioning error; computing adjusted coordinates on the contactable display surface, the adjusted coordinates being shifted from the normal coordinates based on the estimated vantage vector; and rendering a visible feature on the contactable display surface at the adjusted coordinates.
In some implementations, the user-pointer positioning error includes error in connecting inferentially connectable line segments drawn on the contactable display surface.
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.