This invention relates to sensor arrangements and signal processing architectures for touch-based user interfaces, and more specifically to heterogeneous tactile sensing via multiple sensor types.
By way of general introduction, touch screens implementing tactile sensor arrays have recently received tremendous attention with the addition of multi-touch sensing, metaphors, and gestures.
Despite the many popular touch interfaces and gestures, there remains a wide range of additional control capabilities that can yet be provided by further enhanced user interface technologies. A number of enhanced touch user interface features are described in U.S. Pat. Nos. 6,570,078 and 8,169,414 as well as pending U.S. patent application Ser. Nos. 11/761,978, 12/418,605, 12/502,230, 12/541,948, and a significant number of related pending U.S. patent applications by the present and associated inventors. These patents and patent applications also address popular contemporary gesture and touch features.
In another embodiment, taught in the specification of issued U.S. Pat. No. 7,557,797 and associated pending continuation applications, more than two touchpads can be included in the advanced mouse embodiment, for example as suggested in the arrangement of
Overview of Touch-Based User Interface Sensor Technology
The information in this section provides an overview of HDTP user interface technology as described in U.S. Pat. Nos. 6,570,078 and 8,169,414 as well as pending U.S. patent application Ser. Nos. 11/761,978, 12/418,605, 12/502,230, 12/541,948, and related pending U.S. patent applications.
In an embodiment, a touchpad used as a pointing and data entry device can comprise an array of sensors. The array of sensors is used to create a tactile image of a type associated with the type of sensor and method of contact by the human hand. In one embodiment, the individual sensors in the sensor array are pressure sensors and a direct pressure-sensing tactile image is generated by the sensor array. In another embodiment, the individual sensors in the sensor array are proximity sensors and a direct proximity tactile image is generated by the sensor array. Since the contacting surface of the finger or hand tissue typically deforms increasingly as pressure is applied, a sensor array comprised of proximity sensors also provides an indirect pressure-sensing tactile image. In another embodiment, the individual sensors in the sensor array can be optical sensors. In one variation of this, an optical image is generated and an indirect proximity tactile image is generated by the sensor array. In another variation, the optical image can be observed through a transparent or translucent rigid material and, because the contacting surface of the finger or hand tissue typically deforms increasingly as pressure is applied, the optical sensor array also provides an indirect pressure-sensing tactile image. In some embodiments, the array of sensors can be transparent or translucent and can be provided with an underlying visual display element such as an alphanumeric, graphics, or image display. The underlying visual display can comprise, for example, an LED array display, a backlit LCD, etc. Such an underlying display can be used to render geometric boundaries or labels for soft-key functionality implemented with the tactile sensor array, to display status information, etc. Tactile array sensors implemented as transparent touchscreens are taught in the 1999 filings of issued U.S. Pat. No. 6,570,078 and pending U.S. patent application Ser. No. 11/761,978.
In an embodiment, the touchpad or touchscreen can comprise a tactile sensor array that obtains individual measurements from every enabled cell in the sensor array and provides these as numerical values. The numerical values can be communicated in a numerical data array, as a sequential data stream, or in other ways. When regarded as a numerical data array with row and column ordering that can be associated with the geometric layout of the individual cells of the sensor array, the numerical data array can be regarded as representing a tactile image. The only requirement on the tactile sensor array for obtaining the full functionality of a touch-based user interface is that it produce a multi-level gradient measurement image as a finger, part of a hand, or other pliable object varies its proximity in the immediate area of the sensor surface.
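By way of illustration only (not drawn from the referenced patents), the following minimal sketch shows how a sequential data stream of per-cell measurements can be regarded as a row/column-ordered numerical data array, i.e., a tactile image; the array geometry, the example values, and the function name stream_to_tactile_image are assumptions made for the example.

```python
import numpy as np

# Assumed example geometry: a 6-row by 8-column sensor array whose cells are
# scanned row by row and reported as a sequential stream of numerical values.
ROWS, COLS = 6, 8

def stream_to_tactile_image(stream):
    """Regard a sequential data stream of per-cell measurements as a
    row/column-ordered numerical data array, i.e. a tactile image."""
    values = np.asarray(list(stream), dtype=float)
    if values.size != ROWS * COLS:
        raise ValueError("stream length must match the sensor-array geometry")
    return values.reshape(ROWS, COLS)

# A hypothetical scan: mostly zero, with a small multi-level "contact" region.
scan = [0.0] * (ROWS * COLS)
for r, c, v in [(2, 3, 0.4), (2, 4, 0.7), (3, 3, 0.9), (3, 4, 1.0), (4, 4, 0.5)]:
    scan[r * COLS + c] = v

tactile_image = stream_to_tactile_image(scan)
print(tactile_image)
```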
Such a tactile sensor array should not be confused with the “null/contact” touchpad which, in normal operation, acts as a pair of orthogonally responsive potentiometers. These “null/contact” touchpads do not produce pressure images, proximity images, or other image data but rather, in normal operation, produce two voltages linearly corresponding to the location of the left-right edge and forward-back edge of a single area of contact. Such “null/contact” touchpads, which are universally found in existing laptop computers, are discussed and differentiated from tactile sensor arrays in issued U.S. Pat. No. 6,570,078 and pending U.S. patent application Ser. No. 11/761,978. Before leaving this topic, it is pointed out that these “null/contact” touchpads can nonetheless be inexpensively adapted with simple analog electronics to provide at least primitive multi-touch capabilities as taught in issued U.S. Pat. No. 6,570,078 and pending U.S. patent application Ser. No. 11/761,978 (pre-grant publication U.S. 2007/0229477, therein at paragraphs [0022]-[0029], for example).
More specifically,
In various embodiments, the tactile sensor array can be connected to interface hardware that sends numerical data responsive to tactile information captured by the tactile sensor array to a processor. In various embodiments, this processor will process the data captured by the tactile sensor array and transform it in various ways, for example into a collection of simplified data, or into a sequence of tactile image “frames” (this sequence akin to a video stream), or into highly refined information responsive to the position and movement of one or more fingers and other parts of the hand.
As to further detail of the latter example, a “frame” can refer to a 2-dimensional list (number of rows by number of columns) of the tactile measurement values of every pixel in a tactile sensor array at a given instant. The time interval between one frame and the next depends on the frame rate of the system, usually expressed in frames per second. However, these features are not firmly required. For example, in some embodiments a tactile sensor array need not be structured as a 2-dimensional array but rather as row-aggregate and column-aggregate measurements (for example row sums and column sums as in the tactile sensors of the 2003-2006 Apple™ Powerbooks™, or row and column interference measurement data as can be provided by a surface acoustic wave or optical transmission modulation sensor as discussed later in the context of
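As a further illustration (an assumption-laden sketch, not any of the cited implementations), the frame-to-frame time interval and row-aggregate/column-aggregate measurements described above can be computed as follows; the 6-by-8 frame and its values are hypothetical.

```python
import numpy as np

def frame_interval(frames_per_second):
    """Time interval between one frame and the next for a given frame rate."""
    return 1.0 / frames_per_second

def row_column_aggregates(frame):
    """Reduce a full 2-D tactile frame to row-aggregate and column-aggregate
    measurements (here, simple row sums and column sums)."""
    frame = np.asarray(frame, dtype=float)
    return frame.sum(axis=1), frame.sum(axis=0)

# Hypothetical 6x8 frame with a small contact region.
frame = np.zeros((6, 8))
frame[2:5, 3:5] = [[0.4, 0.7], [0.9, 1.0], [0.0, 0.5]]
row_sums, col_sums = row_column_aggregates(frame)
print(frame_interval(60))   # 60 frames per second -> ~0.0167 s between frames
print(row_sums)
print(col_sums)
```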
The tactile sensor array employed by touch-based user interface technologies can be implemented by a wide variety of means, for example:
Below, a few specific examples of the above are provided by way of illustration; however, these are by no means limiting. The examples include:
An example implementation of a tactile sensor array is a pressure sensor array. Pressure sensor arrays are discussed in U.S. Pat. No. 6,570,078 and pending U.S. patent application Ser. No. 11/761,978.
The capacitive touch sensors described above involve a capacitance change due to spatial compression of capacitive elements; there is no direct RF or electrostatic sensing of the finger itself, and the result is typically pressure sensing. Most capacitive touch sensors, however, do involve direct RF or electrostatic sensing of the finger itself, typically resulting in proximity sensing. It is also possible to create capacitive sensor arrays responsive to both proximity and pressure, for example such as the capacitive sensor arrays taught in U.S. Pat. No. 6,323,846 by Westerman.
Capacitive proximity sensors can be used in various handheld devices with touch interfaces (see for example, among many, http://electronics.howstuffworks.com/iphone2.htm, http://www.veritasetvisus.com/VVTP-12,%20Walker.pdf). Prominent manufacturers and suppliers of such sensors, both in the form of opaque touchpads and transparent touch screens, include Balda™ AG (Bergkirchener Str. 228, 32549 Bad Oeynhausen, Germany, www.balda.de), Cypress™ (198 Champion Ct., San Jose, Calif. 95134, www.cypress.com), and Synaptics™ (2381 Bering Dr., San Jose, Calif. 95131, www.synaptics.com). In such sensors, the region of finger contact is detected by variations in localized capacitance resulting from capacitive proximity effects induced by an overlapping or otherwise nearly-adjacent finger. More specifically, the electrical field at the intersection of orthogonally-aligned conductive buses is influenced by the vertical distance or gap between the surface of the sensor array and the skin surface of the finger. Such capacitive proximity sensor technology is low-cost, reliable, long-life, stable, and can readily be made transparent.
Forrest M. Mims is credited with showing that an LED can be used as a light detector as well as a light emitter. Recently, light-emitting diodes have been used as a tactile proximity sensor array, for example as taught in U.S. Pat. No. 7,598,949 by Han and depicted in the associated video available at http://cs.nyu.edu/˜jhan/ledtouch/index.html. Such tactile proximity array implementations typically need to be operated in a darkened environment (as seen in the video at the above web link). In one embodiment provided for by the invention, each LED in an array of LEDs can be used as a photodetector as well as a light emitter, although a single LED can only either transmit or receive at any one time. Each LED in the array can be sequentially selected and set to receiving mode while those adjacent to it are placed in light-emitting mode. A particular LED in receiving mode can pick up reflected light from the finger, provided by said neighboring illuminating-mode LEDs.
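The following is a minimal sketch of the sequential scan just described, assuming a hypothetical hardware-access object whose set_mode() and read_photocurrent() methods are illustrative stand-ins rather than an actual driver API.

```python
class StubLedArray:
    """Illustrative stand-in for a hardware-access layer; not a real driver API."""
    def set_mode(self, index, mode):
        pass                 # would place LED `index` in "emit", "receive", or "off" mode
    def read_photocurrent(self, index):
        return 0.0           # would return the reflected-light reading at LED `index`

def neighbors(index, rows, cols):
    """Indices of the LEDs adjacent (4-neighborhood) to the given LED."""
    r, c = divmod(index, cols)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < rows and 0 <= cc < cols:
            yield rr * cols + cc

def scan_led_array(hw, rows, cols):
    """Sequentially place each LED in receiving mode while adjacent LEDs emit,
    collecting the reflected-light readings as a proximity image."""
    image = [0.0] * (rows * cols)
    for i in range(rows * cols):
        hw.set_mode(i, "receive")              # selected LED acts as a photodetector
        for n in neighbors(i, rows, cols):
            hw.set_mode(n, "emit")             # neighboring LEDs illuminate the finger
        image[i] = hw.read_photocurrent(i)     # reflected light picked up by LED i
        for n in neighbors(i, rows, cols):
            hw.set_mode(n, "off")
        hw.set_mode(i, "off")
    return image

proximity_image = scan_led_array(StubLedArray(), rows=6, cols=8)
```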
An important special case of this is the use of OLED arrays such as those used in OLED displays increasingly deployed in cellphones, smartphones, and Personal Digital Assistants (“PDAs”) manufactured by Samsung, Nokia, LG, HTC, Philips, Sony and others. As taught in pending U.S. patent application Ser. Nos. 13/452,461, 13/180,345 and 13/547,024, such an arrangement can be implemented in a number of ways to provide a high-resolution optical tactile sensor for touch-based user interfaces. Color OLED array displays are of particular interest, in general and as pertaining to the present invention, because:
As taught in U.S. Pat. No. 8,125,559 and pending U.S. patent application Ser. Nos. 13/452,461, 13/180,345 and 13/547,024, leveraging this in various ways, in accordance with embodiments of the invention, an array of inorganic LEDs, OLEDs, or related optoelectronic devices is configured to perform the functions of two or more of:
Another type of optical tactile sensor approach arranged to serve as both a display and a tactile sensor is taught in U.S. Pat. No. 8,049,739 by Wu et al., which uses a deformable back-lit LCD display comprising internally reflective elements and photosensitive elements associated with the LCD display that are responsive to the reflected light.
Use of video cameras for gathering control information from the human hand in various ways is discussed in U.S. Pat. No. 6,570,078 and pending U.S. patent application Ser. No. 10/683,915. Here the camera image array is employed as a touch-based user interface tactile sensor array. Images of the human hand as captured by video cameras can be used as an enhanced multiple-parameter interface responsive to hand positions and gestures, for example as taught in U.S. patent application Ser. No. 10/683,915 (Pre-Grant Publication 2004/0118268, paragraphs [314], [321]-[332], [411], [653], both stand-alone and in view of [325], as well as [241]-[263]).
In another video camera tactile controller embodiment, a flat or curved transparent or translucent surface or panel can be used as the sensor surface. When a finger is placed on the transparent or translucent surface or panel, light applied to the opposite side of the surface or panel reflects in a distinctly different manner in the contact region than in other regions where there is no finger or other tactile contact. The image captured by an associated video camera will provide gradient information responsive to the contact and proximity of the finger with respect to the surface of the translucent panel. For example, the parts of the finger that are in contact with the surface will provide the greatest degree of reflection, while parts of the finger that curve away from the surface of the sensor provide less reflection of the light. Gradients of the reflected light captured by the video camera can be arranged to produce a gradient image that appears similar to the multilevel quantized image captured by a pressure sensor. By comparing changes in gradient, changes in the position of the finger and the pressure applied by the finger can be detected.
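A minimal sketch of producing such a pressure-like gradient image from a camera frame follows, assuming a grayscale frame in which brighter pixels correspond to stronger contact; the quantization depth, frame size, and example values are arbitrary assumptions.

```python
import numpy as np

def reflection_to_gradient_image(gray_frame, levels=8):
    """Normalize the reflected-light intensities captured by the camera and
    quantize them to a small number of levels, yielding a gradient image that
    resembles the multilevel image produced by a pressure sensor array."""
    frame = np.asarray(gray_frame, dtype=float)
    span = frame.max() - frame.min()
    if span == 0:
        return np.zeros(frame.shape, dtype=int)
    normalized = (frame - frame.min()) / span        # brightest = strongest contact
    return np.round(normalized * (levels - 1)).astype(int)

# Hypothetical camera frame: bright where the finger presses the panel,
# dimmer where the finger curves away from the surface.
camera_frame = np.array([[10, 12, 15, 11],
                         [11, 80, 140, 14],
                         [12, 150, 255, 90],
                         [10, 60, 120, 13]])
print(reflection_to_gradient_image(camera_frame))
```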
Section 2.1.7.2 of both U.S. Pat. No. 6,570,078 and pending U.S. patent application Ser. No. 10/683,915 also teaches that two or more video cameras can be used, including in conjunction with at least a pressure-sensor array touchpad sensor. Further development of these and related multiple-sensor arrangements is considered in the final technical section of the present application.
Additionally, the macroscopic arrangement of sensor elements can introduce nonlinear spatial warping effects. As an example, various manufacturer implementations of capacitive proximity sensor arrays and associated interface electronics are known to exhibit often-dramatic nonlinear spatial warping effects.
Overview of 3D, 6D, and Other Capabilities of HDTP User Interface Technology
Before providing details specific to the present invention, descriptions of some embodiments of HDTP technology are provided. This will be followed by a summarizing overview of HDTP technology. With the exception of a few minor variations and examples, the material presented in this overview section is drawn from U.S. Pat. No. 6,570,078, pending U.S. patent application Ser. Nos. 11/761,978, 12/418,605, 12/502,230, 12/541,948, 12/724,413, 13/026,248, and related pending U.S. patent applications and is accordingly attributed to the associated inventors.
Each of the six parameters listed above can be obtained from operations on a collection of sums involving the geometric location and tactile measurement value of each tactile measurement sensor. Of the six parameters, the left-right geometric center, forward-back geometric center, and clockwise-counterclockwise yaw rotation can be obtained from binary threshold image data. The average downward pressure, roll, and pitch parameters are in some embodiments beneficially calculated from gradient (multi-level) image data. One remark is that because binary threshold image data is sufficient for the left-right geometric center, forward-back geometric center, and clockwise-counterclockwise yaw rotation parameters, these can also be discerned for flat regions of rigid non-pliable objects, and thus the HDTP technology can be adapted to discern these three parameters from flat regions with striations or indentations of rigid non-pliable objects.
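As an illustrative sketch only (not the specific algorithms of the cited patents and applications), the following shows how such parameters can be derived from sums over sensor locations and measurement values: the geometric centers and yaw orientation from a binary-thresholded image, and average downward pressure from the multi-level values. The threshold and the moment-based yaw formula are assumptions made for this example.

```python
import numpy as np

def touch_parameters(frame, threshold=0.1):
    """Estimate left-right and forward-back geometric centers, yaw orientation,
    and average downward pressure from a tactile frame using moment-style sums."""
    frame = np.asarray(frame, dtype=float)
    binary = frame > threshold                        # binary threshold image
    rows, cols = np.nonzero(binary)
    if rows.size == 0:
        return None                                   # no contact region present
    x_center = cols.mean()                            # left-right geometric center
    y_center = rows.mean()                            # forward-back geometric center
    # Yaw from the orientation of the contact region's principal axis,
    # computed from second central moments of the binary image.
    mu20 = ((cols - x_center) ** 2).mean()
    mu02 = ((rows - y_center) ** 2).mean()
    mu11 = ((cols - x_center) * (rows - y_center)).mean()
    yaw = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    avg_pressure = frame[binary].mean()               # from multi-level (gradient) values
    return x_center, y_center, yaw, avg_pressure
```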
Additionally, as taught in U.S. patent application Ser. No. 12/418,605, a wide range of richly-parameterized multi-touch configurations are supported by the HDTP technology.
The above and other aspects, features and advantages of the present invention will become more apparent upon consideration of the following description of preferred embodiments taken in conjunction with the accompanying drawing figures.
In the following, numerous specific details are set forth to provide a thorough description of various embodiments. Certain embodiments may be practiced without these specific details or with some variations in detail. In some instances, certain features are described in less detail so as not to obscure other aspects. The level of detail associated with each of the elements or features should not be construed to qualify the novelty or importance of one feature over the others.
In the following description, reference is made to the accompanying drawing figures which form a part hereof, and which show by way of illustration specific embodiments of the invention. It is to be understood by those of ordinary skill in this technological field that other embodiments may be utilized, and structural, electrical, as well as procedural changes may be made without departing from the scope of the present invention.
Despite the many popular touch interfaces and gestures in contemporary information appliances and computers, there remains a wide range of additional control capabilities that can yet be provided by further enhanced user interface technologies. A number of enhanced touch user interface features are described in U.S. Pat. Nos. 6,570,078 and 8,169,414 as well as pending U.S. patent application Ser. Nos. 11/761,978, 12/418,605, 12/502,230, 12/541,948, and related pending U.S. patent applications. These patents and patent applications also address popular contemporary gesture and touch features. The enhanced user interface features taught in these patents and patent applications, together with popular contemporary gesture and touch features, can be rendered by the “High Dimensional Touch Pad” (HDTP) technology taught in those patents and patent applications.
In one embodiment, each of the tactile sensing measurements 2105 and the secondary information 2145 is separately processed and the results of each are then combined and/or selectively chosen to produce improved performance. In another embodiment, both the tactile sensing measurements 2105 and the secondary information 2145 are processed together, for example, to produce improved performance.
In some embodiments (see, e.g., embodiments discussed in
In either of the approaches associated with
Some embodiments of the present invention additionally provide for the combination of at least one tactile sensor (as tactile sensing arrangement 2110) and at least one video camera (as secondary information sensing arrangement 2140) to be used as a touch-based user interface arrangement. This can be done in a large variety of physical arrangements, hardware arrangements, measurement processing arrangements, and output signal calculation arrangements, as will be indicated in the provided examples to follow. For example, as explained later, there are various structural approaches to processing the tactile sensor measurements and video camera signals to provide user interface output signals. In one structural approach, each of the tactile sensor measurements and video camera signal(s) is separately processed and the results of each are then combined and/or selectively chosen to produce user interface output signals. In another structural approach, the tactile sensor measurements and video camera signal(s) are processed together to produce user interface output signals.
Regarding use of one camera,
Some embodiments of the present invention additionally provide for the combination of at least one tactile sensor (as tactile sensing arrangement 2110) and at least two video cameras (as secondary information sensing arrangement 2140) to be used as a touch-based user interface arrangement.
As taught in section 2.1.7.2 of both U.S. Pat. No. 6,570,078 and pending U.S. patent application Ser. No. 10/683,915, two or more video cameras can be arranged in orthogonal arrangements to capture hand expressions within 3-space regions.
Regarding conversion of video camera signals to produce measurements or signals useful in producing user interface output signals, a number of approaches can be used. As an example, for a video camera oriented to view the profile of the far end of a user's fingertip, rolling the finger clockwise or counterclockwise creates a video camera image that, upon edge extraction or high-contrast imaging, yields an elliptical shape whose angle of clockwise/counterclockwise orientation of its minor axis with respect to a reference position corresponds in some manner (for example, opposite direction) to the roll angle of the finger. Similarly, a video camera oriented to view the side profile of a user's finger creates a video camera image that, upon edge extraction or high-contrast imaging, yields an elliptical shape whose angle of clockwise/counterclockwise orientation of its major axis with respect to a reference position corresponds in some manner to the pitch angle of the finger. The angle of finger rotation can then be recovered by various methods, for example employing the slope of a least-squares line-fit as taught in U.S. patent application Ser. No. 12/418,605, or the high-performance closed-form calculation of oblong-shape rotation angles taught in U.S. Pat. No. 8,170,346. The invention provides for a wide range of other approaches to be used, for example use of artificial neural networks as taught by Ser. No. 13/038,365 in conjunction with FIGS. 37-38 therein, the methods taught in section 2.1.7.2 of U.S. Pat. No. 6,570,078 (such as use of pattern recognition, image interpretation, partitioning the video image into “sub-image cells,” etc.), and/or other approaches. Additional considerations and approaches can be used to measure and/or compensate for the effect of the yaw angle of the user finger.
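A minimal sketch of the least-squares line-fit idea mentioned above follows, assuming the high-contrast finger image has already been reduced to a set of (x, y) pixel coordinates; the synthetic test points and their roughly 30-degree tilt are fabricated for illustration.

```python
import numpy as np

def finger_angle_from_image(points):
    """Estimate the orientation of the oblong finger image from the slope of a
    least-squares line fit to its (x, y) pixel coordinates."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    slope = np.polyfit(x, y, 1)[0]         # first-degree least-squares fit
    return np.degrees(np.arctan(slope))    # angle of the fitted line vs. the reference axis

# Hypothetical high-contrast pixel coordinates of a finger image tilted ~30 degrees.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
pixels = np.column_stack([t, 0.58 * t + rng.normal(0, 0.05, t.size)])
print(finger_angle_from_image(pixels))
```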
Regarding use of two cameras,
Further as to the use of two cameras, as taught in section 2.1.7.2 of both U.S. Pat. No. 6,570,078 and pending U.S. patent application Ser. No. 10/683,915, two or more video cameras can be arranged in at least a stereoscopic arrangement to capture hand expressions within 3-space regions. As an example,
Embodiments of the present invention also provide for more than two cameras (for example, 4, 8, etc.) to be used in various configurations. As a representative but by no means limiting example,
Alternatively, a pressure sensor array can additionally be added to any of the arrangements depicted in
Alternatively, a pressure sensor array can be substituted for the optical tactile sensor in any of the arrangements depicted in
Alternatively, a pressure sensor array can be substituted for the capacitive tactile sensor in any of the arrangements depicted in
In each of the arrangements discussed above wherein the sensor arrangement is compatible with a touchscreen implementation, the resultant output signals can be used to implement touchscreen functions, which in turn can be used to control applications.
In various embodiments, for each of the arrangements discussed above wherein the sensor arrangement is compatible with an HDTP implementation, the resultant output signals can be used to implement HDTP functions, which in turn can be used to control applications.
In various embodiments, for each of the arrangements discussed above wherein the sensor arrangement is compatible with both a touchscreen implementation and an HDTP implementation, the resultant output signals can be used to implement HDTP touchscreen functions, which in turn can be used to control applications.
In various embodiments, for each of the arrangements discussed above involving at least one video camera, the resultant output signals can be used to implement functions beyond HDTP functions, which in turn can be used to control applications.
This section provides a number of examples of interfacing and processing arrangements for multiple sensor configurations, for example those discussed above in conjunction with
As explained later, there are at least two general types of structural approaches to the handling and processing of the sensor measurements 2115 and/or secondary information 2145 to provide user interface output signals. In a first structural approach, each of the sensor measurements 2115 and the secondary information 2145 is separately processed and the results of each are then combined and/or selectively chosen to produce improved performance. In a second structural approach, both the sensor measurements 2115 and the secondary information 2145 are processed together to produce improved performance.
Compensation for Non-Ideal Behavior of Tactile Sensor Arrays
Individual sensor elements in a tactile sensor array may produce measurements that vary sensor-by-sensor when presented with the same stimulus. Inherent statistical averaging of the algorithmic mathematics (included for instance in instructions 2130) can damp out much of this, but for small image sizes (for example, as rendered by a small finger or light contact), as well as in cases where there are extremely large variances in sensor element behavior from sensor to sensor, embodiments of the present invention provide for each sensor to be individually calibrated in implementations where that can be advantageous. Sensor-by-sensor measurement value scaling, offset, and nonlinear warpings can be invoked for all or selected sensor elements during data acquisition scans. Similarly, embodiments of the invention provide for individual noisy or defective sensors to be tagged for omission during data acquisition scans.
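A minimal sketch of per-sensor calibration of the kind described follows, assuming per-cell gain and offset tables and a boolean mask tagging defective cells; the particular correction form and the example values are illustrative assumptions.

```python
import numpy as np

def calibrate_frame(raw_frame, gain, offset, valid_mask):
    """Apply per-sensor offset and scaling corrections during a data
    acquisition scan, and omit sensors tagged as noisy or defective."""
    frame = (np.asarray(raw_frame, dtype=float) - offset) * gain
    frame[~valid_mask] = 0.0                  # tagged sensors contribute nothing
    return frame

rows, cols = 4, 4
gain = np.ones((rows, cols));   gain[1, 2] = 1.3       # gain correction for a cell that reads low
offset = np.zeros((rows, cols)); offset[0, 0] = 0.05   # offset correction for a biased cell
valid = np.ones((rows, cols), dtype=bool); valid[3, 3] = False  # defective cell tagged for omission
raw = np.random.default_rng(0).random((rows, cols))
print(calibrate_frame(raw, gain, offset, valid))
```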
The previous section provided a number of examples of interfacing and processing arrangements for multiple sensor configurations. This section provides examples of the handling and processing of the measured contact information produced by one or more tactile sensors and the observed contact information produced by one or more video cameras located on the edges of a touch sensing region.
In addition to the at least two general types of structural approaches described and cited repeatedly above, the handling and processing can additionally include aspects of at least four approaches to transforming sensor measurements and video camera signals into user interface output signals:
An example of a functional partition approach would be to use tactile sensor measurements for calculating the left-right position (“x”), forward-back position (“y”), and yaw angle of at least one finger in contact with the tactile sensor, and to calculate the pitch and roll angles of that finger from the signals provided by one or more video cameras observing that finger in contact with the tactile sensor.
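A minimal sketch of such a functional partition follows, assuming the two processing paths have already produced their respective parameter estimates; the dictionary-based record and the example values are assumptions made for illustration.

```python
def functional_partition(tactile_params, camera_params):
    """Assemble a single user-interface output record by taking x, y, and yaw
    from the tactile-sensor processing path and pitch and roll from the
    video-camera processing path."""
    return {
        "x":     tactile_params["x"],
        "y":     tactile_params["y"],
        "yaw":   tactile_params["yaw"],
        "pitch": camera_params["pitch"],
        "roll":  camera_params["roll"],
    }

# Hypothetical results from the two separate processing paths.
print(functional_partition({"x": 12.3, "y": 40.1, "yaw": 8.0},
                           {"pitch": -15.0, "roll": 22.5}))
```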
An example of an internally-decision-based approach would be to use optical tactile sensor measurements for calculating the pitch angle of at least one finger in contact with a multiple-sensor tactile sensor surface within a first range of optical tactile sensor measurement values and use capacitive sensor measurements for calculating the pitch angle of at least one finger in contact with a multiple-sensor tactile sensor surface outside that first range of optical tactile sensor measurement values.
Implementation of internally-decision-based approaches can include threshold testing, conditional testing, vector quantization, algorithms employing parameterized calculations, algorithms employing compensation calculations and operations, artificial neural networks, etc. Other approaches can also be employed and are anticipated by the invention.
An example of an externally-decision-based approach would be to use optical tactile sensor measurements for calculating the pitch angle of at least one finger in contact with a multiple-sensor tactile sensor surface within a first range of observed video camera signals and use capacitive sensor measurements for calculating the pitch angle of at least one finger in contact with a multiple-sensor tactile sensor surface outside that first range of video camera signals.
Implementation of externally-decision-based approaches can include threshold testing, conditional testing, vector quantization, algorithms employing parameterized calculations, algorithms employing compensation calculations and operations, artificial neural networks, etc. Other approaches can also be employed and are anticipated by the invention.
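A minimal sketch of threshold-test decision logic of the kind listed above follows, covering both the internally-decision-based variant (where the decision value is itself an optical tactile measurement) and the externally-decision-based variant (where it is an observed video camera signal); the function, the decision range, and the values are illustrative assumptions.

```python
def select_pitch_estimate(optical_pitch, capacitive_pitch, decision_value, low, high):
    """Threshold-test decision logic: use the optical-sensor pitch estimate when
    the decision value falls inside [low, high], otherwise fall back to the
    capacitive-sensor pitch estimate. For an internally-decision-based approach
    the decision value is an optical tactile measurement; for an
    externally-decision-based approach it is an observed video camera signal."""
    if low <= decision_value <= high:
        return optical_pitch
    return capacitive_pitch

# Hypothetical values: the decision value lies inside the trusted range,
# so the optical-sensor pitch estimate is selected.
print(select_pitch_estimate(optical_pitch=18.0, capacitive_pitch=21.0,
                            decision_value=0.6, low=0.2, high=0.8))
```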
An example of a sensor fusion approach would be to use observed video camera signals to invoke compensations or adjustments to calculations performed on tactile sensor measurement values, and to use those compensated calculations to create user interface output signals responsive to the location and orientation of the at least one finger as observed by the cameras.
Another example of a sensor fusion approach would be to use the observed video camera signals from at least two video cameras to create a 3-space representation of the location and orientation of at least one finger, and create user interface output signals responsive to the 3-space representation of the location and orientation of the at least one finger.
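A minimal sketch of the second sensor-fusion example follows, assuming two parallel, horizontally separated, already-calibrated cameras so that a simple disparity model suffices, with image coordinates measured from the left camera's principal point; the focal length, baseline, and coordinates are fabricated for illustration, and a real arrangement would require full camera calibration.

```python
import numpy as np

def triangulate_fingertip(u_left, u_right, v, focal_px, baseline_m):
    """Recover a 3-space fingertip position from matching image coordinates in
    two parallel, horizontally separated cameras (simple disparity model)."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("fingertip must lie in front of both cameras")
    z = focal_px * baseline_m / disparity     # depth from disparity
    x = u_left * z / focal_px                 # lateral position
    y = v * z / focal_px                      # vertical position
    return np.array([x, y, z])

# Fabricated calibration: 800-pixel focal length, 6 cm camera baseline;
# a disparity of 160 pixels places the fingertip about 0.3 m from the cameras.
print(triangulate_fingertip(u_left=260.0, u_right=100.0, v=35.0,
                            focal_px=800.0, baseline_m=0.06))
```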
Implementation of sensor fusion approaches can include threshold testing, conditional testing, vector quantization, algorithms employing parameterized calculations, algorithms employing compensation calculations and operations, artificial neural networks, etc. Other approaches can also be employed and are anticipated by the invention.
While the invention has been described in detail with reference to disclosed embodiments, various modifications within the scope of the invention will be apparent to those of ordinary skill in this technological field. It is to be appreciated that features described with respect to one embodiment typically can be applied to other embodiments.
The invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Although exemplary embodiments have been provided in detail, various changes, substitutions and alterations could be made thereto without departing from the spirit and scope of the disclosed subject matter as defined by the appended claims. Variations described for the embodiments may be realized in any combination desirable for each particular application. Thus particular limitations and embodiment enhancements described herein, which may have particular advantages to a particular application, need not be used for all applications. Also, not all limitations need be implemented in methods, systems, and apparatuses including one or more concepts described with relation to the provided embodiments. Therefore, the invention properly is to be construed with reference to the claims.
This application is a continuation of U.S. application Ser. No. 13/706,214, filed Dec. 5, 2012, which claims the benefit of U.S. Provisional Application No. 61/567,623 filed Dec. 6, 2011, the disclosures of both of which are incorporated herein in their entireties by reference.
Number | Name | Date | Kind |
---|---|---|---|
4748676 | Miyagawa | May 1988 | A |
4899137 | Behrens et al. | Feb 1990 | A |
5237647 | Roberts et al. | Aug 1993 | A |
5270711 | Knapp | Dec 1993 | A |
5292999 | Tumura | Mar 1994 | A |
5341133 | Savoy | Aug 1994 | A |
5347295 | Agulnick et al. | Sep 1994 | A |
5357048 | Sgroi | Oct 1994 | A |
5378850 | Tumura | Jan 1995 | A |
5386219 | Greanias | Jan 1995 | A |
5420936 | Fitzpatrick | May 1995 | A |
5440072 | Willis | Aug 1995 | A |
5442168 | Gurner et al. | Aug 1995 | A |
5459282 | Willis | Oct 1995 | A |
5471008 | Fujita et al. | Nov 1995 | A |
5475214 | DeFranco et al. | Dec 1995 | A |
5565641 | Gruenbaum | Oct 1996 | A |
5585588 | Tumura | Dec 1996 | A |
5592572 | Le | Jan 1997 | A |
5592752 | Fu | Jan 1997 | A |
5659145 | Weil | Aug 1997 | A |
5659466 | Norris et al. | Aug 1997 | A |
5665927 | Taki et al. | Sep 1997 | A |
5668338 | Hewitt et al. | Sep 1997 | A |
5675100 | Hewlett | Oct 1997 | A |
5717939 | Bricklin et al. | Feb 1998 | A |
5719347 | Masubachi et al. | Feb 1998 | A |
5719561 | Gonzales | Feb 1998 | A |
5724985 | Snell | Mar 1998 | A |
5741993 | Kushimiya | Apr 1998 | A |
5748184 | Shieh | May 1998 | A |
5786540 | Westlund | Jul 1998 | A |
5763806 | Willis | Aug 1998 | A |
5801340 | Peter | Sep 1998 | A |
5805137 | Yasutake | Sep 1998 | A |
5824930 | Ura et al. | Oct 1998 | A |
5827989 | Fay et al. | Oct 1998 | A |
5841428 | Jaeger et al. | Nov 1998 | A |
5850051 | Machover et al. | Dec 1998 | A |
5852251 | Su et al. | Dec 1998 | A |
5889236 | Gillespie et al. | Mar 1999 | A |
5932827 | Osborne et al. | Aug 1999 | A |
5969283 | Looney et al. | Oct 1999 | A |
5977466 | Muramatsu | Nov 1999 | A |
5986224 | Kent | Nov 1999 | A |
6005545 | Nishida et al. | Dec 1999 | A |
6037937 | Beaton et al. | Mar 2000 | A |
6047073 | Norris et al. | Apr 2000 | A |
6051769 | Brown, Jr. | Apr 2000 | A |
6100461 | Hewitt | Aug 2000 | A |
6107997 | Ure | Aug 2000 | A |
6140565 | Yamauchi et al. | Oct 2000 | A |
6204441 | Asahi et al. | Mar 2001 | B1 |
6225975 | Furuki et al. | May 2001 | B1 |
6285358 | Roberts | Sep 2001 | B1 |
6288317 | Willis | Sep 2001 | B1 |
6310279 | Suzuki et al. | Oct 2001 | B1 |
6310610 | Beaton et al. | Oct 2001 | B1 |
6320112 | Lotze | Nov 2001 | B1 |
6323846 | Westerman et al. | Nov 2001 | B1 |
6360019 | Chaddha | Mar 2002 | B1 |
6363159 | Rhoads | Mar 2002 | B1 |
6373475 | Challis | Apr 2002 | B1 |
6392636 | Ferrari | May 2002 | B1 |
6392705 | Chaddha | May 2002 | B1 |
6400836 | Senior | Jun 2002 | B2 |
6404898 | Rhoads | Jun 2002 | B1 |
6408087 | Kramer | Jun 2002 | B1 |
6570078 | Ludwig | May 2003 | B2 |
6703552 | Haken | Mar 2004 | B2 |
6793619 | Blumental | Sep 2004 | B1 |
7030860 | Hsu et al. | Apr 2006 | B1 |
7408108 | Ludwig | Aug 2008 | B2 |
7557797 | Ludwig | Jul 2009 | B2 |
7598949 | Han | Oct 2009 | B2 |
7611409 | Muir et al. | Nov 2009 | B2 |
8154529 | Sleeman | Apr 2012 | B2 |
8169414 | Lim | May 2012 | B2 |
8170346 | Ludwig | May 2012 | B2 |
8179376 | Griffin | May 2012 | B2 |
8345014 | Lim | Jan 2013 | B2 |
20010036299 | Senior | Nov 2001 | A1 |
20020005108 | Ludwig | Jan 2002 | A1 |
20020093491 | Gillespie et al. | Jul 2002 | A1 |
20040074379 | Ludwig | Apr 2004 | A1 |
20040118268 | Ludwig | Jun 2004 | A1 |
20040251402 | Reime | Dec 2004 | A1 |
20060252530 | Oberberger et al. | Nov 2006 | A1 |
20070044019 | Moon | Feb 2007 | A1 |
20070063990 | Park | Mar 2007 | A1 |
20070229477 | Ludwig | Oct 2007 | A1 |
20080010616 | Algreatly | Jan 2008 | A1 |
20080143690 | Jang | Jun 2008 | A1 |
20080164076 | Orsley | Jul 2008 | A1 |
20080259053 | Newton | Oct 2008 | A1 |
20080297482 | Weiss | Dec 2008 | A1 |
20080300055 | Lutnick | Dec 2008 | A1 |
20080309634 | Hotelling et al. | Dec 2008 | A1 |
20090006292 | Block | Jan 2009 | A1 |
20090027351 | Zhang et al. | Jan 2009 | A1 |
20090124348 | Yoseloff et al. | May 2009 | A1 |
20090146968 | Narita et al. | Jun 2009 | A1 |
20090167701 | Ronkainen | Jul 2009 | A1 |
20090254869 | Ludwig | Oct 2009 | A1 |
20100013860 | Mandella | Jan 2010 | A1 |
20100044121 | Simon | Feb 2010 | A1 |
20100060607 | Ludwig | Mar 2010 | A1 |
20100079385 | Holmgren | Apr 2010 | A1 |
20100087241 | Nguyen et al. | Apr 2010 | A1 |
20100090963 | Dubs | Apr 2010 | A1 |
20100110025 | Lim | May 2010 | A1 |
20100117978 | Shirado | May 2010 | A1 |
20100177118 | Sytnikov | Jul 2010 | A1 |
20100231550 | Cruz-Hernandez et al. | Sep 2010 | A1 |
20100231612 | Chaudhri et al. | Sep 2010 | A1 |
20100232710 | Ludwig | Sep 2010 | A1 |
20100267424 | Kim et al. | Oct 2010 | A1 |
20100289754 | Sleeman et al. | Nov 2010 | A1 |
20100302172 | Wilairat | Dec 2010 | A1 |
20100328032 | Rofougaran | Dec 2010 | A1 |
20110007000 | Lim | Jan 2011 | A1 |
20110037735 | Land | Feb 2011 | A1 |
20110063251 | Geaghan | Mar 2011 | A1 |
20110086706 | Zalewski | Apr 2011 | A1 |
20110202889 | Ludwig | Aug 2011 | A1 |
20110202934 | Ludwig | Aug 2011 | A1 |
20110210943 | Zaliva | Sep 2011 | A1 |
20110260998 | Ludwig | Oct 2011 | A1 |
20110261049 | Cardno et al. | Oct 2011 | A1 |
20110285648 | Simon et al. | Nov 2011 | A1 |
20120007821 | Zaliva | Jan 2012 | A1 |
20120034978 | Lim | Feb 2012 | A1 |
20120056846 | Zaliva | Mar 2012 | A1 |
20120194461 | Lim | Apr 2012 | A1 |
20120108323 | Kelly et al. | May 2012 | A1 |
20120192119 | Zaliva | Jul 2012 | A1 |
20120194462 | Lim | Aug 2012 | A1 |
20120195522 | Ludwig | Aug 2012 | A1 |
20120223903 | Ludwig | Sep 2012 | A1 |
20120235940 | Ludwig | Sep 2012 | A1 |
20120262401 | Rofougaran | Oct 2012 | A1 |
20120280927 | Ludwig | Nov 2012 | A1 |
20120317521 | Ludwig | Dec 2012 | A1 |
20130009896 | Zaliva | Jan 2013 | A1 |
20130038554 | West | Feb 2013 | A1 |
Number | Date | Country |
---|---|---|
0 574 213 | Dec 1993 | EP |
Entry |
---|
Dulberg, M. S., et al. An Imprecise Mouse Gesture for the Fast Activation of Controls, IOS Press, Aug. 1999, [online] [retrieved on Jul. 9, 2013] URL: http://www.csc.ncsu.edu/faculty/stamant/papers/interact.pdf.gz, 10 pgs. |
Moyle, M., et al. A Flick in the Right Direction: A Case Study of Gestural Input, Conferences in Research and Practice in Information Technology, vol. 18, Jan. 2005; New Zealand, [online] [retrieved on Jul. 9, 2013] URL:http://www.cosc.canterbury.ac.nz/andrew.cockburn/papers/moyle-cockburn.pdf, 27 pgs. |
Maltoni, D., et al., “Handbook of Fingerprint Recognition,” Springer Professional Computing, 2nd ed. 2009, XVI, p. 74, p. 361, [online] [retrieved on Jul. 9, 2013] URL: http://books.google.com/books?id=1Wpx25D8qOwC&pg=PA36lpg=PA361&dq=fingerprint+minutiae, 2 pgs. |
VeriFinger Information, [online] [retrieved on Jun. 11, 2013] URL: http://www.fingerprint-it.com/_sol_verifinger.html, 2 pgs. |
Prabhakar S., et al., Learning fingerprint minutiae location and type, Pattern Recognition 2003, 36, [online] URL: http://www.cse.msu.edu/biometrics/Publications/Fingerprint/PrabhakarJainPankanti_MinaLocType_PR03.pdf, pp. 1847-1857. |
Garcia Reyes, E., An Automatic Goodness Index to Measure Fingerprint Minutiae Quality, Progress in Pattern Recognition, Image Analysis and Applications, Lecture Notes in Computer Science vol. 3773, 2005, pp. 578-585, [online] [retrieved on Jun. 2, 2013] URL: http://www.researchgate.net/publication/226946511_An_Automatic_Goodness_Index_to_Measure_Fingerprint_Minutiae_Quality/file/d912f50ba5e96320d5.pdf. |
Kayaoglu, M., et al., Standard Fingerprint Databases: Manual Minutiae Labeling and Matcher Performance Analyses, arXiv preprint arXiv:1305.1443, 2013, 14 pgs, [online] [retrieved on Jun. 2, 2013] URL: http://arxiv.org/ftp/arxiv/papers/1305/1305.1443.pdf. |
Alonso-Fernandez, F., et al., Fingerprint Recognition, Chapter 4, Guide to Biometric Reference Systems and Performance Evaluation, (Springer, London, 2009, pp. 51-90, [online] [retrieved on Jun. 2, 2013] URL: http://www2.hh.se/staff/josef/public/publications/alonso-fernandez09chapter.pdf. |
Image moment, Jul. 12, 2010, 3 pgs, [online] [retrieved on Jun. 13, 2013] URL: http://en.wikipedia.org/wiki/Image_moment. |
Nguyen, N., et al., Comparisons of sequence labeling algorithms and extensions, Proceedings of the 24th International Conference on Machine Learning, 2007, [online] [retrieved on Jun. 2, 2013] URL: http://www.cs.cornell.edu/˜nhnguyen/icmI07structured.pdf, pp. 681-688. |
Nissen, S., Implementation of a Fast Artificial Neural Network Library (FANN), Department of Computer Science University of Copenhagen (DIKU)}, Oct. 31, 2003, [online] [retrieved on Jun. 21, 2013] URL: http://mirror.transact.net.au/sourceforge/f/project/fa/fann/fann_doc/1.0/fann_doc_complete_1.0.pdf, 92 pgs. |
Igel, C., et al., Improving the Rprop Learning Algorithm, Proceedings of the Second International ICSC Symposium on Neural Computation (NC 2000), 2000, 2000, [online] [retrieved on Jun. 2, 2013] URL: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.3899&rep=rep1&type=pdf, pp. 115-121. |
Bishop, C.M., Pattern Recognition and Machine Learning, Springer New York, 2006, pp. 561-593. |
Euler Angles, 2011, [online] [retrieved on Jun. 30, 2011] URL: http://en.wikipedia.org/w/index.php? title=Euler_angles&oldid=436460926, 8 pgs. |
Electronic Statistics Textbook, StatSoft, Inc., 2011, [online] [retrieved on Jul. 1, 2011] URL: http://www.statsoft.com/textbook, 1 pg. |
Central Moment, Dec. 16, 2009, [online] [retrieved on Oct. 26, 2010] URL: http://en.wikipedia.org/w/index.php?title=Central_moment&oldid=332048374. |
Local regression, Nov. 16, 2010, [online] [retrieved on Jun. 28, 2011] URL: http://en.wikipedia.org/w/index.php? title=Local_regression&oldid=416762287. |
USPTO Notice of Allowance dated Jun. 6, 2013 issued in U.S. Appl. No. 13/846,830, filed Mar. 18, 2013. |
Hernandez-Leon, R., et al., Classifying using Specific Rules with High Confidence, 9th Mexican International Conference on Artificial Intelligence, IEEE, Nov. 2010, pp. 75-80. |
Fang, Y., et al., Dynamics of a Winner-Take-All Neural Network, Neural Networks, 9(7), Oct. 1996, pp. 1141-1154. |
Viberg, M., Subspace Fitting Concepts in Sensor Array Processing, Department of Electrical Engineering, Linkoping University, 1989, Sweden, 15 pgs. |
Moog, R. A., The Human Finger-A Versatile Electronic Music Instrument Component, Audio Engineering Society Preprint, 1977, New York, NY, 4 pgs. |
Johnson, C., Image sensor tracks moving objects in hardware, Electronic Engineering Times, Apr. 5, 1999, 1 pg. |
Kaoss pad dynamic effect/controller, Korg Proview Users' magazine Summer 1999, 2 pgs. |
Leiberman, D., Touch screens extend grasp Into consumer realm, Electronic Engineering Times, Feb. 8, 1999. |
Lim, et al., A Fast Algorithm for Labelling Connected Components in Image Arrays, Technical Report Series, No. NA86-2, Thinking Machines Corp., 1986 (rev. 1987), Cambridge, Mass., USA, 17 pgs. |
Pennywitt, K., Robotic Tactile Sensing, Byte, Jan. 1986, 14 pgs. |
Review of KORG X-230 Drum (later called Wave Drum), Electronic Musician, Apr. 1994, 1 pg. |
Rich, R., Buchla Lightning MIDI Controller, Electronic Musician, Oct. 1991, 5 pgs. |
Rich, R., Buchla Thunder, Electronic Musician, Aug. 1990, 4 pgs. |
USPTO Notice of Allowance dated May 24, 2013 issued in U.S. Appl. No. 13/442,815, filed Apr. 9, 2012. |
USPTO Notice of Allowance dated Dec. 24, 2002 issued in U.S. Appl. No. 09/812,870, filed Mar. 19, 2001. |
Otsu's method, [online] [retrieved on Jun. 26, 2013] URL: http://en.wikipedia.org/wiki/Otsu_method, Sep. 13, 2010, 2 pgs. |
Principal component analysis, [online] [retrieved on Jun. 26, 2013] URL: http://en.wikipedia.org/wiki/Principal_component_analysis, Feb. 25, 2011, 9 pgs. |
USPTO Notice of Allowance dated May 30, 2013 issued in U.S. Appl. No. 13/442,806, filed Apr. 9, 2012. |
DIY Touchscreen Analysis, MOTO, [online] [retrieved on May 12, 2013] URL: http://labs.moto.com/diy-touchscreen-analysis/, Jul. 15, 2010, 23 pgs. |
Wilson, T.V., How the iPhone Works, howstuffworks, [online] [retrieved on May 12, 2013] URL: http://electronics.howstuffworks.com/iphone2.htm, Jan. 8, 2011, 11 pgs. |
Walker, G., Touch and the Apple iPhone, Veritas et Visus, [online] [retrieved on May 12, 2013] URL: http://www.veritasetvisus.com/VVTP-12,%20Walker.pdf, Feb. 2007, pp. 50-54. |
Han, J., Multi-Touch Sensing through LED Matrix Displays (video), [online] [retrieved on May 12, 2013] “http://cs.nyu.edu/˜jhan/ledtouch/index.html,” Feb. 18, 2011, 1 pg. |
Roberts Cross, [online] [retrieved on May 12, 2013] URL: http://en.wikipedia.org/wiki/Roberts_Cross, Jul. 20, 2010, visited Feb. 28, 2011, 3 pgs. |
Sobel Operator, [online] [retrieved on May 12, 2013] URL: http://en.wikipedia.org/wiki/Sobel_operator, Mar. 12, 2010, visited Feb. 28, 2011, 5 pgs. |
Dario P., et al., Tactile sensors and the gripping challenge, IEEE Spectrum, vol. 5, No. 22, Aug. 1985, pp. 46-52. |
Snell, J. M., Sensors for Playing Computer Music with Expression, Proceedings of the Intl. Computer Music Conf. at Eastman, 1983, pp. 113-126. |
Verner J., Artif Starr Switch Company Ztar 624-D, Electronic Musician, Nov. 1994, 5 pgs. |
Haken, L., An Indiscrete Music Keyboard, Computer Music Journal, Spring 1998, pp. 30-48. |
USPTO Notice of Allowance dated May 8, 2013 issued in U.S. Appl. No. 12/541,948, filed Aug. 15, 2009. |
Buxton, W. A. S., Two-Handed Document Navigation, Xerox Disclosure Journal, 19(2), Mar./Apr. 1994 [online] [retreived on May 28, 2013] URL: http://www.billbuxton.com/2Hnavigation.html, pp. 103-108. |
USPTO Notice of Allowance dated Mar. 20, 2012 issued in U.S. Appl. No. 12/724,413, filed Mar. 15, 2010. |
USPTO Notice of Allowance dated Jan. 10, 2008 issued in U.S. Appl. No. 10/683,914, filed Oct. 10, 2003. |
USPTO Notice of Allowance dated Nov. 9, 2012 issued in U.S. Appl. No. 12/502,230, filed Jul. 13, 2009. |
USPTO Notice of Allowance dated Mar. 12, 2012 issued in U.S. Appl. No. 12/511,930, filed Jul. 29, 2009. |
USPTO Notice of Allowance dated May 16, 2013 issued in U.S. Appl. No. 13/441,842, filed Apr. 7, 2012. |
Balda AG, Feb. 26, 2011, [online] [retrieved on May 12, 2013] URL: http://www.balda.de, 1 pg. |
Cypress Semiconductor, Feb. 28, 2011, [online] [retrieved on May 12, 2013] URL: http://www.cypress.com, 1 pg. |
Synaptics, Jan. 28, 2011, [online] [retrieved on May 12, 2013] URL: http://www.synaptics.com, 1 pg. |
Venolia, D., et al., T-Cube: A Fast, Self-Disclosing Pen-Based Alphabet, CHI '94 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 24-28, 1994, pp. 265-270. |
Davis, R. C., et al., NotePals: Lightweight Note Taking by the Group, for the Group, University of California, Berkeley, Computer Science Division, 1998, 8 pgs. |
Rekimoto, Jun, Pick-and-Drop: A Direct Manipulation Technique for Multiple Computer Environments, Sony computer Science Laboratory Inc., Tokyo, Japan, 1997, [online] [retrieved on May 30, 2013] URL: http://www.sonycsl.co.ip/person/rekimoto/papers/uist97.pdf, 8 pgs. |
Davis, R. C., et al., NotePals: Lightweight Note Sharing by the Group, for the Group, [online] [retrieved on Jun. 2, 2013] URL: http://dub.washington.edu:2007/projects/notepals/pubs/notepals-chi99-final.pdf, 9 pgs. |
Want, R., et al., The PARCTAB ubiquitous computing experiment, 1995-1996, [online] [retrieved on Jun. 10, 2013] URL: http://www.ece.rutgers.edu/˜parashar/Classes/02-03/ece572/perv-reading/the-parctab-ubiquitous-computing.pdf, 44 pgs. |
Prewitt, [online] [retrieved on May 12, 2013] URL: http://en.wikipedia.org/wiki/Prewitt, Mar. 15, 2010, visited Feb. 28, 2011, 2 pgs. |
Coefficient of variation, [online] [retrieved on May 12, 2013] URL: http://en.wikipedia.org/wiki/Coefficient_of_variation, Feb. 15, 2010, visited Feb. 28, 2011, 2 pgs. |
Canny edge detector, [online] [retrieved on May 12, 2013] http://en.wikipedia.org/wiki/Canny_edge_detector, Mar. 5, 2010, 4 pgs. |
Polynomial regression, [online] [retrieved on May 12, 2013] URL: http://en.wikipedia.org/wiki/Polynomial_regression, Jul. 24, 2010, 4 pgs. |
Pilu,M., et al., Training PDMs on models: The Case of Deformable Superellipses, Proceedings of the 7th British Machine Vision Conference, Edinburgh, Scotland, 1996, pp. 373-382, [online] [retrieved on Feb. 28, 2011] URL: https://docs.google.com/viewera=v&pid=explorer&chrome=true&srcid=0BxWzm3JBPnPmNDI1MDIxZGUtNGZhZi00NzJhLWFhZDMtNTJmYmRiMWYyMjBh&authkey=CPeVx4wO&hl=en. |
Osian, M., et al., Fitting Superellipses to Incomplete Contours, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '04), Jun. 2004, 8 pgs. |
Hough transform, [online] [retrieved on Feb. 13, 2010] URL: http://en.wikipedia.org/wiki/Hough_transform, Feb. 13, 2010, 7 pgs. |
Tactile Pressure Measurement, Pressure Mapping Systems, and Force Sensors and Measurement Systems, [online] [retrieved on Aug. 6, 2013] URL: http://www.tekscan.com, 2 pgs. |
Tactile Surface Pressure and Force Sensors,Sensor Products LLC, Oct. 26, 2006, [online] [retrieved on Aug. 6, 2013] URL: http://www.sensorprod.com, 2 pgs. |
Pressure Profile Systems, Jan. 29, 2011, [online] [retrieved on Jan. 29, 2011] URL: http://www.pressureprofile.com, 1 pg. |
Xsensor Technology Corporation, Feb. 7, 2011, [online] [retrieved on May 12, 2013] URL: http://www.xsensor.com, 1 pg. |
Number | Date | Country | |
---|---|---|---|
20180059873 A1 | Mar 2018 | US |
Number | Date | Country | |
---|---|---|---|
61567623 | Dec 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13706214 | Dec 2012 | US |
Child | 15804778 | US |