The present disclosure relates generally to proximity sensing systems, including touch sensing systems, and more particularly to capacitance proximity/touch sensing systems and methods.
In object sensing systems, including but not limited to touch capacitance sensing systems, linearity and accuracy are desirable response characteristics. Accuracy can be the difference between an actual object position (e.g., a finger touch location) and a location reported to a system (e.g., a location on a touch display). Linearity can be the straightness of a linear path over a sensor region (e.g., a line drawn on a panel surface).
Conventionally, object sensing systems can have reduced linearity and accuracy at the edges of a sensing area (including corners). In some conventional approaches, a “curvature” of a sensing signal (caused by the object passing off the edge) was determined, and then compensated for when an object was at an edge location. Such compensation could include generating values for “virtual sensors” corresponding to regions beyond the edge.
A drawback to such conventional approaches can arise when sensed objects vary in shape. In particular, in capacitance sensing touch systems, contacting finger shapes may vary (i.e., can be various oval shapes), while tested shapes (used to compensate for curvature) can have a uniform shape (i.e., be circular). Further, such conventional approaches can constantly determine finger size and/or shape while determining position. At edge locations, it can be difficult to accurately determine finger size. Consequently, a touch response can exhibit a “scalloping” response at edge locations (poor linearity) and/or jumps in finger size, which can result in poor accuracy.
Various embodiments will now be described that show object position sensing methods and systems that can provide more accurate and linear responses at edge locations (including corners) of a sensing area. Very particular embodiments are directed to touch capacitance sensing methods and/or systems having a sensing area formed by a number of adjacent sensors.
According to some embodiments, an object size can be “locked” to a previously determined size when a final edge position is determined. Such an approach can eliminate wide variations in object size that can occur in conventional approaches.
According to some embodiments, determination of a final edge position can rely on two metrics, one of which can dynamically vary according to object size. Such an approach can provide accurate object position despite differences in object size and/or shape.
In the various embodiments below, like items are referred to by the same reference characters, but with the leading digit(s) corresponding to the figure number.
A method 100 can include making an initial determination if an object is at an edge position (102). Such an action can include using initial sensing data to determine if an object is proximate to an edge of a sensing region. In very particular embodiments, a sensing region can include multiple sensors, each of which generates a sense value. Local maximums (or minimums) of such sensor values can be used to make an initial determination of an object position. In some embodiments, a sensing surface can include edges and corners, and such an action can determine if an object is near an edge (including a corner). In some embodiments, proximity to all edges of a surface can be determined. However, in other embodiments, such determination can be applied to less than all edges, including one edge.
If an initial position of an object is not an edge position (N from 102), a method 100 can determine a final core position (104). Such an action can include any suitable method for sensing an object position over a sense surface. In particular embodiments, a sensing region can include multiple sensors, and a position can be calculated using a centroid from a local group of sensors (identified by maxima or minima).
If an initial position of an object is an edge position (Y from 102), a method 100 can determine if an object is being detected for a first time (106). Such an action can utilize any suitable object tracking method for associating an object with a previous object position or trajectory. If an object is being detected for a first time (Y from 106), a method 100 can determine a size of the object (108). Such an action can determine an object size based on edge sensor values. According to some embodiments, such an action can utilize calculations that are different from those that determine an object size within a core (not shown).
If a method 100 determines that an object proximate an edge is not being detected for a first time (N from 106), a method can lock an object size to a previously determined size. In particular embodiments, this can include any suitable object tracking method determining that the object is the same as that previously sensed or part of a trajectory. This is in contrast to conventional approaches that may continue to update object size with each position determination calculation.
A method 100 can then determine a final edge position (112). In the particular embodiment shown, such a determination can dynamically vary according to finger size. Such an action is also in contrast to conventional approaches, such as those noted above.
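By way of illustration only, the flow of method 100 can be summarized with the following C sketch. The helper functions and state structure are hypothetical placeholders, not items of the embodiments above:

```c
#include <stdbool.h>

typedef struct {
    int x, y;           /* reported position */
} position_t;

typedef struct {
    int  size;          /* last determined object size */
    bool size_valid;    /* true once a size has been locked */
} touch_state_t;

/* Hypothetical helpers; actual implementations are system dependent. */
extern bool is_edge_position(const position_t *initial);     /* step 102 */
extern bool is_first_detection(const touch_state_t *state);  /* step 106 */
extern int  detect_object_size_edge(void);                   /* step 108 */
extern position_t final_edge_position(int object_size);      /* step 112 */
extern position_t final_core_position(void);                 /* step 104 */

position_t method_100(const position_t *initial, touch_state_t *state)
{
    if (!is_edge_position(initial))
        return final_core_position();               /* 104: core case */

    if (is_first_detection(state)) {
        state->size = detect_object_size_edge();    /* 108: size from edge sensors */
        state->size_valid = true;
    }
    /* else: size stays "locked" to the previously determined value */

    return final_edge_position(state->size);        /* 112: varies with size */
}
```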
If an initial position is proximate to an edge (Y from 202), a method 200 can determine if a touch is a new touch (206). If a touch is a new touch (Y from 206), a method 200 can update a touch location with a calculated finger size value (208). However, in sharp contrast to conventional approaches such as those noted above, if a touch is not a new touch (N from 206), a touch location is not updated with a calculated finger size value. That is, a finger size has been locked, and a final edge position calculation can use a previous finger size value.
In the embodiment shown, different position determination functions can be utilized according to an initial position determination. A Position_Edge function 320 can determine a final object (e.g., finger) position for edge locations, a Position_Corner function 318 can determine a final object position for corner locations, and a Position_COM function 316 can determine a final position for other locations (including core locations).
According to some embodiments, Position_Edge and Position_Corner functions can utilize virtual sensor values to calculate a final finger position. More particularly, a final finger position can be calculated based on a grid of sensor values, where a portion of the grid is composed of virtual sensors having positions located beyond the edge of the sensor area. In very particular embodiments, virtual sensor values can vary dynamically according to finger size.
Higher level functions 336 can utilize math functions 338 to arrive at a final object position. Math functions 338 can include position calculation functions 338-0 and finger size calculation functions 338-1. Position calculation functions 338-0 can include a final position calculation function (Position_COM_Calc) 322. In addition, such functions can include two edge position calculation functions (Position_Edge_Outer and Position_Edge_Inner). Position_Edge_Outer can correspond to an initial touch position within a first distance from an edge, and Position_Edge_Inner can correspond to an initial touch position more inset from the edge than the first distance. In very particular embodiments, functions Position_Edge_Outer and Position_Edge_Inner can populate “virtual sensor” locations to enable the higher level functions 336 to derive a final object position.
Finger size calculation functions 338-1 can include different functions according to finger position. In the embodiment shown, a first function can be utilized for touch locations in a core (DetectFingerSize_Core 328), a second function can be utilized for touch locations proximate to an edge (DetectFingerSize_Edge 330), and a third function can be utilized for touch locations proximate to a corner (DetectFingerSize_Corner 332).
Having described system processing functions, a method using such functions will now be described.
A method 400 can include an initial determination of whether a touch is at an edge location and not a corner location (402-0). If a touch is at an edge (and not a corner) (Y from 402-0), a method 400 can determine if a touch is a first touch of a single finger (440-0). If so (Y from 440-0), a method can detect a finger size using a function DetectFingerSize_Edge (i.e., a finger size detection calculation particular to an edge location). A final edge position can then be determined with a function Position_Edge (420). As noted above, in particular embodiments, such a function can dynamically vary a position calculation based on a finger size.
If the initial edge position is not a first touch (N from 440-0), a method 400 can determine if multiple fingers are detected (442-0). If multiple fingers are detected (Y from 442-0), a finger size can be set to a default value (444-0), and a final edge position can then be determined with a function Position_Edge (420).
If the initial edge touch is not a first touch or a multiple touch (N from 442-0), a method 400 can determine a final edge position with a function Position_Edge (420). It is understood that such an action will utilize a previously calculated finger size, and not update a finger size for such a calculation. Said another way, such an action can rely on locking a finger size prior to determining the final edge position.
If an initial position is not an edge (N from 402-0), a method 400 can determine if an initial touch is within a core region (402-1). If the initial touch is within the core region (Y from 402-1), a method 400 can determine if a touch is a single finger (440-1). If so (Y from 440-1), a method can detect a finger size using a function DetectFingerSize_Core (i.e., a finger size detection calculation particular to a core). A final core position can then be determined with a function Position_COM (416). If an initial core determination corresponds to multiple fingers (N from 440-1), a method can set a finger size to a default value (444-1), and then determine a final core position with a function Position_COM (416).
If an initial position is a corner position (N from 402-1), a method 400 can determine if a touch is a single finger (440-2). If so (Y from 440-2), a method can detect a finger size using a function DetectFingerSize_Corner (i.e., a finger size detection calculation particular to a corner). A final corner position can then be determined with a function Position_Corner (418). If a corner determination corresponds to multiple fingers (Y from 442), a method can set a finger size to a default value (444-2), and then determine a final corner position with a function Position_Corner (418).
If the initial corner touch is not a first touch or a multiple touch (N from 442), a method 400 can determine a final corner position with a function Position_Corner (418). As in the case of the edge position determination noted above, it is understood that such an action will utilize a previously calculated finger size, and not update a finger size for such a calculation (i.e., finger size is locked going into the final corner position determination).
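By way of illustration only, the region dispatch of method 400 can be sketched as follows. All helper names and the default size value are illustrative assumptions; for brevity, the first touch gating is shown uniformly for all regions, while the embodiments above apply it to edge and corner locations:

```c
#include <stdbool.h>

#define DEFAULT_FINGER_SIZE 100   /* assumed default for multi-finger touches */

typedef enum { REGION_EDGE, REGION_CORE, REGION_CORNER } region_t;

/* Hypothetical helpers standing in for steps 402-x, 440-x, 442-x. */
extern region_t classify_initial_position(void);
extern bool is_first_touch(void);
extern bool multiple_fingers_detected(void);
extern int  DetectFingerSize_Edge(void);
extern int  DetectFingerSize_Core(void);
extern int  DetectFingerSize_Corner(void);
extern void Position_Edge(int finger_size);     /* 420 */
extern void Position_COM(int finger_size);      /* 416 */
extern void Position_Corner(int finger_size);   /* 418 */

void method_400(int *locked_size)
{
    region_t region = classify_initial_position();

    if (is_first_touch()) {
        switch (region) {                /* size calculation varies by region */
        case REGION_EDGE:   *locked_size = DetectFingerSize_Edge();   break;
        case REGION_CORE:   *locked_size = DetectFingerSize_Core();   break;
        case REGION_CORNER: *locked_size = DetectFingerSize_Corner(); break;
        }
    } else if (multiple_fingers_detected()) {
        *locked_size = DEFAULT_FINGER_SIZE;   /* 444-x */
    }
    /* otherwise the previously locked size is used unchanged */

    switch (region) {
    case REGION_EDGE:   Position_Edge(*locked_size);   break;
    case REGION_CORE:   Position_COM(*locked_size);    break;
    case REGION_CORNER: Position_Corner(*locked_size); break;
    }
}
```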
According to embodiments, which can include those noted above, final position determinations for edges and/or corner locations can utilize virtual sensor values. According to some embodiments, virtual sensor values can vary dynamically according to finger size. Very particular embodiments of such virtual sensor value calculations will now be described.
Referring to the figure, a value B for a virtual sensor (A) can be calculated as follows:
B=(C−D)*Outer_gain/Gain_Scaling, where
C is an actual sensor value closest to the virtual sensor (A); D is the next closest actual sensor value; Outer_gain can be a gain value; and Gain_Scaling can be a scaling value (which is 16 in the example shown).
According to other embodiments, a virtual sensor value can incorporate a scalar that varies dynamically with finger size:
B=(C−D)*Outer_gain*Scalar/(Gain_Scaling*Scalar_Scaling), where
“Scalar” can be a value that can dynamically vary according to finger size (e.g., C−D); and Scalar_Scaling can be a second scaling value (which is 256 in the example shown).
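By way of illustration only, both forms of the calculation can be expressed with integer math as in the following C sketch. The gain and scalar values used in main() are arbitrary examples, and the scaling constants follow the example values noted above:

```c
#include <stdio.h>

#define GAIN_SCALING   16   /* example scaling value noted above */
#define SCALAR_SCALING 256  /* example second scaling value noted above */

/* B = (C - D) * Outer_gain / Gain_Scaling */
static int virtual_sensor_fixed(int c, int d, int outer_gain)
{
    return (c - d) * outer_gain / GAIN_SCALING;
}

/* B = (C - D) * Outer_gain * Scalar / (Gain_Scaling * Scalar_Scaling) */
static int virtual_sensor_dynamic(int c, int d, int outer_gain, int scalar)
{
    return (int)(((long long)(c - d) * outer_gain * scalar) /
                 (GAIN_SCALING * SCALAR_SCALING));
}

int main(void)
{
    /* arbitrary example values: closest sensor C = 200, next closest D = 80 */
    printf("fixed:   %d\n", virtual_sensor_fixed(200, 80, 24));        /* 180 */
    printf("dynamic: %d\n", virtual_sensor_dynamic(200, 80, 24, 300)); /* 210 */
    return 0;
}
```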
A11+A12=B1+G1, where A11, A12, B1 and G1 are values shown in the figure. In some embodiments, a scalar that dynamically varies according to finger size can be used to modify the value G1, as described above.
As noted above, embodiments can include translating sets of sensors to a common orientation to enable one calculation to populate virtual sensor values. Such translation, according to one very particular embodiment, is described below.
Particular operations will now be described.
Operation 652-0 shows a corner position sensing operation. Initial sensor values can be acquired. Because the sensor grid already has an edge aligned with a left side, no translation need take place. For the corner case, virtual sensor values (shown in this case as 664) can be generated for a left side and a top row. Because the sensor set was not translated to a different orientation, it does not need to be translated back.
Operation 652-1 shows an inner top edge sensing operation. A translation operation can align the top edge with a left edge. Virtual sensor values can be generated, and then the grid can be translated back.
Operation 652-2 shows an outer top edge sensing operation. A translation operation can align the top edge with a left edge. Virtual sensor values can be generated. Because the sense region is an outer edge, a number of virtual sensors can be greater than in the inner edge case (e.g., 652-1). The grid can then be translated back.
The remaining operations are understood from the description of operations 652-0 to 652-2. In very particular embodiments, translation operations can include various mirroring operations along horizontal, vertical and/or diagonal axes. More particular examples of such mirror operations are described in more detail below.
Referring to the figure, a function 700 can first determine if a local maximum is proximate a left edge of a sense area (702-0).
If the local maximum is not proximate the left edge (N from 702-0), the function can determine if a local maximum is proximate another edge (e.g., bottom edge) of a sense area, by determining if a Y-axis maximum is detected by a last number of receive (rx) electrodes (in this case, either of the last two electrodes) (702-1). If this is the case (Y from 702-1), a function 700 can vertically mirror the grid values (768-0). It is understood that later actions can align the vertical sensors along the left edge.
A function can then determine if the initial position represents an outer edge or inner edge case. An outer edge case can correspond to an initial position occurring at a far edge of a surface. An inner edge case can correspond to an initial position proximate to the edge, but more inset from the edge than the outer edge case. Consequently, in the outer edge case, a greater number of virtual sensors can be populated (assigned values) than the inner edge case.
In the particular embodiment shown, a first outer edge case can occur when a local X-axis maximum occurs at a last or first tx electrode (CDC_txNum-1 or 0) (702-2). If so, a function can execute an Edge Outer operation (770-0) that populates virtual sensors with values (in this case, two columns). It is understood that such virtual sensor values can be generated with a dynamic gain that varies according to calculated finger size, as described herein, or equivalents.
A second outer edge case can occur when a local Y-axis maximum occurs at a first or last rx electrode (CDC_rxNum-1 or 0) (702-3). If so, a function can diagonally mirror grid values (772-0), to align virtual sensors along a left edge. An Edge Outer operation (770-1) can then populate virtual sensors (again, in this case two columns). A function may then diagonally mirror the grid values again (772-1), to return the grid to its prior orientation.
Referring now to the figure, a first inner edge case can occur when a local X-axis maximum occurs at a second or second to last tx electrode (CDC_txNum-2 or 1) (702-4). If so, a function can execute an Edge Inner operation (774-0) that populates virtual sensors with values (in this case, one column).
A second inner edge case can occur when a local Y-axis maximum occurs at a second or second to last rx electrode (CDC_rxNum-2 or 1) (702-5). If so, a function can diagonally mirror grid values (772-2), to align virtual sensors along a left edge. An Edge Inner operation (774-1) can then populate virtual sensors (again, in this case one column). Such values can be generated with a dynamic gain value, as described herein, or equivalents. A function may then diagonally mirror the grid values again (772-3), to return the grid to its prior orientation.
A function 700 can then execute additional mirroring operations, if necessary, to return the grid to its original orientation. In the embodiment shown, if the initial touch occurred on a right edge (702-6) it is horizontally mirrored (766-1) back from a left edge alignment. Similarly, if the initial touch occurred on a bottom edge (702-7) it is vertically mirrored back from a top edge alignment (768-1).
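By way of illustration only, the overall flow of function 700 can be condensed into a C sketch like the following; the mirror and populate helpers are hypothetical stand-ins for the operations referenced above, not the listing referenced below:

```c
#define GRID 5

/* Hypothetical helpers corresponding to the referenced operations. */
extern void mirror_horizontal(int g[GRID][GRID]);            /* e.g., 766-x */
extern void mirror_vertical(int g[GRID][GRID]);              /* e.g., 768-x */
extern void mirror_diagonal(int g[GRID][GRID]);              /* e.g., 772-x */
extern void edge_outer(int g[GRID][GRID], int finger_size);  /* 770-x: two columns */
extern void edge_inner(int g[GRID][GRID], int finger_size);  /* 774-x: one column */

void position_edge_virtuals(int g[GRID][GRID], int finger_size,
                            int on_right, int on_bottom,
                            int x_axis_case, int outer_case)
{
    /* translate to the common (left edge) orientation */
    if (on_right)     mirror_horizontal(g);
    if (on_bottom)    mirror_vertical(g);
    if (!x_axis_case) mirror_diagonal(g);   /* Y-axis maximum: bring edge left */

    if (outer_case) edge_outer(g, finger_size);  /* populate two columns */
    else            edge_inner(g, finger_size);  /* populate one column */

    /* translate back to the original orientation */
    if (!x_axis_case) mirror_diagonal(g);
    if (on_bottom)    mirror_vertical(g);
    if (on_right)     mirror_horizontal(g);
}
```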
Section 874-1 of a corresponding implementation (function 800) can calculate virtual sensors for an “outer edge” case, which can include a greater number of virtual sensor values than an “inner edge” case. Such a section can correspond to 702-2, 770-0, 702-3, 772-0, 770-1 and 772-1 of function 700.
Section 874-2 can calculate virtual sensors for an “inner edge” case, which can include a smaller number of virtual sensor values than an “outer edge” case. Such a section can correspond to 702-4, 774-0, 702-5, 772-2, 774-1 and 772-3 of function 700.
Section 874-3 can return a grid (centroid) back to its previous orientation, if necessary. Such a section can correspond to 702-6, 766-1, 702-7 and 768-1 of function 700.
In very particular embodiments, a function 800 can be instructions executable by a processor.
Referring to the figure, a function 900 can execute mirroring operations, as needed, to translate a grid so that a corner being processed takes a common orientation.
Once a grid has been translated as needed, a function 900 can perform an Edge Outer operation (970-0) that populates virtual sensors with values (in this case, two columns). A function can then store a portion of the virtual sensors (976). Such an action can retain virtual sensor values for one edge of a corner. A function 900 can then clear those virtual sensors from which values were saved (978).
A function 900 can then diagonally mirror the grid (972-0), to align the other edge of the corner along the left side. A second Edge Outer operation (970-1) can then populate virtual sensors with values (in this case, two columns) for this second edge of the corner. A function can then diagonally mirror the grid (972-1) once again, to return the cleared virtual sensors to the left edge.
Referring now to the figure, a function 900 can recall the previously stored virtual sensor values back into the grid (980).
A function 900 can then execute additional mirroring operations, if necessary, to return the grid to its original orientation. In the embodiment shown, if the initial touch occurred on a right edge (902-2) it is horizontally mirrored (966-1) back from a left edge alignment. Similarly, if the initial touch occurred on a bottom edge (902-3) it is vertically mirrored back from a top edge alignment (968-1).
Unlike the Position Edge functions described above, the Position Corner functions can calculate final Y-axis and X-axis positions separately.
In the embodiment shown, once a Y-axis position has been determined, the grid can be diagonally mirrored (972-3) back to its previous position. A final X-axis position can then be calculated (984).
Section 1074-1 can calculate virtual sensors for the first corner edge “Position_Edge_Outer”, store these values “StoreGrid( )”, then clear the virtual sensor locations. The function can then align the second corner edge on the left side, calculate virtual sensor values for the second corner edge, and then orient the grid so the second edge values are aligned on a top edge. Such a section can correspond to 970-0, 976, 978, 972-0, 970-1 and 972-1 of function 900.
Section 1074-2 can recall the first edge values back into the grid, and then mirror the values as necessary to return them to an original orientation. Such a section can correspond to 980, 902-2, 966-1, 902-3 and 968-1 of function 900.
Section 1074-3 can include orienting the grid to calculate a Y-axis position “yPos=Position_COM_Calc”. The grid can be oriented back to calculate an X-axis position “xPos=Position_COM_Calc”. Such a section can correspond to 972-2, 982, 972-3 and 984 of function 900.
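By way of illustration only, a corner flow like that of function 900 can be sketched as follows; the helper names and store/recall signatures are illustrative assumptions:

```c
#define GRID 5

extern void mirror_diagonal(int g[GRID][GRID]);                        /* 972-x */
extern void edge_outer(int g[GRID][GRID], int finger_size);            /* 970-x */
extern void store_virtuals(const int g[GRID][GRID], int save[GRID]);   /* 976 */
extern void clear_virtuals(int g[GRID][GRID]);                         /* 978 */
extern void recall_virtuals(int g[GRID][GRID], const int save[GRID]);  /* 980 */
extern int  Position_COM_Calc(int g[GRID][GRID]);                      /* 982/984 */

void position_corner(int g[GRID][GRID], int finger_size, int *x, int *y)
{
    int saved[GRID];

    edge_outer(g, finger_size);   /* first corner edge (left-aligned) */
    store_virtuals(g, saved);     /* retain the first edge's virtual values */
    clear_virtuals(g);

    mirror_diagonal(g);           /* align second corner edge on the left */
    edge_outer(g, finger_size);   /* populate the second corner edge */
    mirror_diagonal(g);           /* cleared virtuals return to the left edge */

    recall_virtuals(g, saved);    /* restore the first edge's values */

    mirror_diagonal(g);           /* orient for the Y-axis calculation */
    *y = Position_COM_Calc(g);
    mirror_diagonal(g);           /* orient back for the X-axis calculation */
    *x = Position_COM_Calc(g);
}
```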
Referring to the figure, a function 1100 can determine if a local maximum occurs at a top edge or a bottom edge (1102-0).
A function 1100 can then determine if a local X-axis maximum occurs in a first column (i.e., left edge). If this is the case (Y from 1102-1), a function can clear left virtual sensor values (1178-0).
If a local maximum is at a top edge or bottom edge, but not a left edge (N from 1102-1), a function 1100 can clear right virtual sensor values (1178-1). A grid can then be translated diagonally (1172-0) and a final Y-axis position (1182-0) can be calculated. The grid can then be translated back to its previous orientation (1172-1). A function 1100 can then end.
Referring now to the figure, if a local maximum is not at a top or bottom edge (N from 1102-0), a function 1100 can process the touch as a left or right edge case.
A function 1100 can then determine if a local X-axis maximum occurs in a first column (i.e., left edge). If this is the case (Y from 1102-2), a function can clear top virtual sensor values (1178-2). If this is not the case (N from 1102-2), bottom virtual sensor values can be cleared (1178-3). A final X-axis position can then be calculated (1184-1). A function 1100 can then end.
As noted above, according to embodiments, a finger size value can be locked (if previously determined) or calculated to determine a final edge (including corner) position. Further, a virtual sensor value used for such a final determination can dynamically vary for different finger sizes. While embodiments can include any suitable finger size determination calculation, very particular finger size calculation methods will now be described.
In very particular embodiments, any of such finger size calculation functions can be instructions executable by a processor.
As noted above, embodiments can include functions that populate virtual sensor values for calculated final edge (and corner) object positions. In particular embodiments, there can be different functions according to how close an object is to an edge (determined with an initial position determination). The closer an object is to an edge, the greater the number of virtual sensor values. Further, in some embodiments, virtual sensor values can vary dynamically according to finger size. Very particular implementations of such a function are described below.
Equation 9 shows the generation of a ScalarRange value, which incorporates the FingerScale value (and a scaling factor 2^SCALAR_SHFT), and thus varies according to finger size. Equation 10 shows the generation of a Scalar value, which can be applied to vary a gain value used to calculate virtual sensor values. As shown, the Scalar incorporates the high and low pivot points (HighPivot, LowPivot) as well as the FingerScale value (and a scaling factor 2^SCALAR_SHFT), and thus will vary dynamically according to finger size.
Equation 11 shows the population of virtual sensor values on an edge closest to the actual sensor values (i.e., column 1 of a grid). Such values are generated using the dynamic gain value Scalar, a set gain value (outergain), and a value of an adjacent actual sensor (centroid[2][j]) (as well as two scaling factors 2^GAIN_SHFT, 2^SCALAR_SHFT).
Equation 12 shows the population of virtual sensor values on an edge further away from the actual sensor values (i.e., column 0 of the grid). Such values are generated using a set gain value (innergain), and a value of an adjacent virtual sensor (centroid[1][j]) (as well as a scaling factor 2^GAIN_SHFT).
Section 1674-2 shows how a dynamic value can be established by comparing a summation value zSum to pivot point values (HighPivot, LowPivot). Section 1674-3 shows the setting of a scalar value, using the dynamic value from section 1674-2, which can adjust gain values of virtual sensors.
Section 1674-4 shows the population of virtual sensor values (along columns 0 and 1 of a grid) using gain values and the scalar value noted above.
In very particular embodiments, either of such functions can be instructions executable by a processor.
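As one possible rendering of Equations 9 to 12, the following C sketch populates columns 0 and 1 of a 5×5 centroid grid. Because Equations 9 and 10 are not reproduced in the text above, the ScalarRange/Scalar arithmetic shown is an assumed linear interpolation between the pivot points, and the shift values are likewise assumptions:

```c
#include <stdint.h>

#define GRID        5
#define GAIN_SHFT   4   /* assumed: Gain_Scaling = 2^4 = 16 */
#define SCALAR_SHFT 8   /* assumed: Scalar_Scaling = 2^8 = 256 */

/* Populates virtual sensor columns 1 and 0 per Equations 11 and 12.
 * All values are assumed non-negative so right shifts act as division. */
void populate_virtual_columns(int32_t centroid[GRID][GRID],
                              int32_t zSum, int32_t HighPivot, int32_t LowPivot,
                              int32_t FingerScale,
                              int32_t outergain, int32_t innergain)
{
    /* Eq. 9 (assumed form): range the scalar can span for this finger size */
    int32_t ScalarRange = ((int32_t)1 << SCALAR_SHFT) - FingerScale;

    /* clamp the summed signal to the pivot window (section 1674-2) */
    int32_t z = zSum;
    if (z > HighPivot) z = HighPivot;
    if (z < LowPivot)  z = LowPivot;

    /* Eq. 10 (assumed form): scalar varies between the pivots with finger size */
    int32_t Scalar = FingerScale +
        (ScalarRange * (HighPivot - z)) / (HighPivot - LowPivot);

    for (int j = 0; j < GRID; j++) {
        /* Eq. 11: column 1 from adjacent actual column 2, with dynamic gain */
        centroid[1][j] = (int32_t)(((int64_t)centroid[2][j] * outergain * Scalar)
                                   >> (GAIN_SHFT + SCALAR_SHFT));
        /* Eq. 12: column 0 from adjacent virtual column 1, with set gain */
        centroid[0][j] = (centroid[1][j] * innergain) >> GAIN_SHFT;
    }
}
```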
As noted above, embodiments can arrive at a final position by calculations based on a grid of values (which may or may not include virtual sensor values). In particular embodiments, a same final position calculation can be used regardless of whether the initial position is a core position or an edge position.
A final position calculation (i.e., centroid calculation) can then be made using the above values, as well as scaling values (RES_SHIFT, COM_SHIFT).
It is understood that in some embodiments, functions like those described above can be used for both core position and edge position determinations.
In very particular embodiments, either of such functions can be instructions executable by a processor.
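By way of illustration only, a Position_COM_Calc-style centroid along one axis of a 5×5 grid can be sketched as follows; the use of RES_SHIFT is simplified, COM_SHIFT scaling is omitted, and the grid is indexed as grid[column][row]:

```c
#include <stdint.h>

#define GRID      5
#define RES_SHIFT 10  /* assumed sub-sensor position resolution shift */

/* Returns a fixed point position along the column axis: the weighted
 * average of column indices (scaled by 2^RES_SHIFT), weighted by value. */
int32_t position_com_calc(int32_t grid[GRID][GRID])
{
    int64_t weighted = 0;   /* sum of value * scaled column index */
    int64_t total    = 0;   /* sum of values */

    for (int c = 0; c < GRID; c++)
        for (int r = 0; r < GRID; r++) {
            weighted += (int64_t)grid[c][r] * ((int64_t)c << RES_SHIFT);
            total    += grid[c][r];
        }

    return total ? (int32_t)(weighted / total) : 0;
}
```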
As noted above, embodiments can include mirroring operations that can translate sensor values (including virtual sensor values) to provide a common orientation to populate virtual sensor values and/or calculate final object positions. Examples of such functions will now be described. The various functions described below operate on a 5×5 grid of sensor values, but alternate embodiments can include larger or smaller grid sizes, as well as non-rectangular grids.
In very particular embodiments, any of such mirroring functions can be instructions executable by a processor.
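By way of illustration only, horizontal, vertical, and diagonal mirroring of a 5×5 grid (indexed as g[column][row], per the centroid notation above) can be implemented as follows:

```c
#define GRID 5

void mirror_horizontal(int g[GRID][GRID])   /* swap columns left/right */
{
    for (int c = 0; c < GRID / 2; c++)
        for (int r = 0; r < GRID; r++) {
            int t = g[c][r];
            g[c][r] = g[GRID - 1 - c][r];
            g[GRID - 1 - c][r] = t;
        }
}

void mirror_vertical(int g[GRID][GRID])     /* swap rows top/bottom */
{
    for (int c = 0; c < GRID; c++)
        for (int r = 0; r < GRID / 2; r++) {
            int t = g[c][r];
            g[c][r] = g[c][GRID - 1 - r];
            g[c][GRID - 1 - r] = t;
        }
}

void mirror_diagonal(int g[GRID][GRID])     /* transpose about the main diagonal */
{
    for (int c = 0; c < GRID; c++)
        for (int r = c + 1; r < GRID; r++) {
            int t = g[c][r];
            g[c][r] = g[r][c];
            g[r][c] = t;
        }
}
```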
In addition to mirroring operations, embodiments can include other functions such as clearing functions, which can clear sensor values in a particular range, as well as summation functions, which can add up sensor values of a particular range. Very particular implementations of such functions will now be described.
In very particular embodiments, any of such functions can be instructions executable by a processor.
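By way of illustration only, column-range clearing and summation helpers over the same g[column][row] grid can be sketched as follows; the inclusive-range signatures are assumptions:

```c
#include <stdint.h>

#define GRID 5

/* Clears all values in columns first_col..last_col (inclusive). */
void clear_columns(int32_t g[GRID][GRID], int first_col, int last_col)
{
    for (int c = first_col; c <= last_col; c++)
        for (int r = 0; r < GRID; r++)
            g[c][r] = 0;
}

/* Sums all values in columns first_col..last_col (inclusive), e.g., for zSum. */
int32_t sum_columns(int32_t g[GRID][GRID], int first_col, int last_col)
{
    int32_t sum = 0;
    for (int c = first_col; c <= last_col; c++)
        for (int r = 0; r < GRID; r++)
            sum += g[c][r];
    return sum;
}
```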
A capacitance sense network 2286 can include a number of capacitance sensors that can provide values reflecting the proximity of one or more objects. In the embodiment shown, a capacitance sense network 2286 can be a touch mutual capacitance sensing network that includes transmit (TX) and receive (RX) electrodes, and can generate capacitance values that vary according to the proximity of fingers. In one embodiment, TX and RX electrodes can be perpendicular to one another, with TX electrodes being driven to induce a change on RX electrodes. Such changes can be sensed to detect variations in capacitance, and hence the proximity of an object. A position of the object can be determined based on the TX electrode driven, and the RX electrode exhibiting the change in capacitance.
A sense section 2284 can drive TX electrodes of network 2286 and sense values on RX electrodes. In some embodiments “raw” sense values (e.g., counts) can be provided to processing section 2280. In other embodiments, sense values from network 2286 can be pre-processed before being supplied to a processing section 2280.
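By way of illustration only, a mutual capacitance scan that produces such raw counts can be sketched as follows; the electrode counts and driver functions are hypothetical:

```c
#include <stdint.h>

#define TX_NUM 16   /* assumed number of transmit electrodes */
#define RX_NUM 10   /* assumed number of receive electrodes */

extern void    drive_tx(int tx);   /* assumed hardware driver */
extern int32_t sample_rx(int rx);  /* assumed raw count readout */

void scan_sense_network(int32_t counts[TX_NUM][RX_NUM])
{
    for (int tx = 0; tx < TX_NUM; tx++) {
        drive_tx(tx);                            /* induce change on RX lines */
        for (int rx = 0; rx < RX_NUM; rx++)
            counts[tx][rx] = sample_rx(rx);      /* capacitance-dependent count */
    }
}
```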
A processing section 2280 can include a processor 2280-0 that can execute functions stored as instructions 2280-1. Instructions can enable a processing section to execute various functions, as described herein, or equivalents. In the embodiment shown, instructions 2280-1 can include area transform functions 2294 and a final position calculation function 2296.
Area transform functions 2294 can allow a set of sensor values to be translated to a common orientation (e.g., aligned along a particular edge), to enable a same function call to populate sensor locations with virtual sensor values, if needed.
Final position calculation function 2296 can determine a final position of an object. Final position calculation function 2296 can include a virtual sensor section, which can generate virtual sensor values for touches occurring on an edge-type location of a sense network 2286. Such virtual sensor values can be a function of finger size, and in particular embodiments, can dynamically change according to finger size.
A memory section 2282 can store various data values for a processing section 2280. In the embodiment shown, a memory section 2282 can store a finger size value 2282-0 (e.g., lock a finger size value) to enable such a value to be reused in a position calculation at an edge-type location (i.e., a finger size is not re-calculated if already determined).
While memory section 2282 and instructions 2280-1 may exist in different memories (e.g., one in a volatile memory, the other as firmware in a nonvolatile memory), in alternate embodiments such data can occupy different locations in a same memory.
In the particular embodiment shown, a processing section 2280, a memory section 2282, and a sense section 2284 can be parts of a same integrated circuit (IC) device 2298. For example, such sections can be formed in a same IC substrate, or may be formed in a same package (e.g., multi-chip package). In one very particular embodiment, an IC device can be from the PSoC®, CapSense® and/or TrueTouch® family of devices manufactured by Cypress Semiconductor Corporation of San Jose, Calif., U.S.A.
It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the invention.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
This application claims the benefit of U.S. provisional patent applications having Ser. No. 61/716,917, filed on Oct. 22, 2012, and Ser. No. 61/776,554, filed on Mar. 11, 2013, the contents of both of which are incorporated by reference herein.