The present application is related to commonly-owned U.S. patent application Ser. No. 11/261,316, entitled “Two-Dimensional Motion Sensor,” filed Oct. 28, 2005, by inventors Jahja I. Trisnadi, Clinton B. Carlisle, and Robert J. Lang, which application is hereby incorporated by reference.
The present application is also related to commonly-owned U.S. patent application Ser. No. 11/355,551, entitled “Signal Processing Method for Use with an Optical Navigation System,” filed Feb. 16, 2006, by inventors Brian D. Todoroff and Yansun Xu, which application is hereby incorporated by reference.
The present application is also related to commonly-owned U.S. patent application Ser. No. 11/446,694, entitled “Method and Apparatus for Robust Velocity Prediction,” filed Jun. 5, 2006, by inventors Brian D. Todoroff and Yansun Xu, which application is hereby incorporated by reference.
The present disclosure relates generally to optical navigation apparatus and methods of sensing movement using the same.
Data input devices, such as computer mice, touch screens, trackballs and the like, are well known for inputting data into and interfacing with personal computers and workstations. Such devices allow rapid relocation of a cursor on a monitor, and are useful in many text, database and graphical programs. A user controls the cursor, for example, by moving the mouse over a surface to move the cursor in a direction and over a distance proportional to the movement of the mouse.
Computer mice come in both optical and mechanical versions. Mechanical mice typically use a rotating ball to detect motion, and a pair of shaft encoders in contact with the ball to produce a digital signal used by the computer to move the cursor. One problem with mechanical mice is that they are prone to inaccuracy and malfunction after sustained use due to dirt accumulation, etc. In addition, the movement and resultant wear of the mechanical elements, particularly the shaft encoders, necessarily limit the useful life of the device.
One solution to the above-discussed problems with mechanical mice has been the development of mice using an optical navigation system. These optical mice have become very popular because they provide better pointing accuracy and are less susceptible to malfunction due to the accumulation of dirt.
The dominant technology used today for optical mice relies on a light source, such as a light emitting diode (LED), illuminating a surface at or near grazing incidence, a two-dimensional (2D) CMOS (complementary metal-oxide-semiconductor) detector which captures the resultant images, and a signal processing unit that correlates thousands of features or points in successive images to determine the direction, distance and speed at which the mouse has been moved. This technology provides high accuracy but suffers from a complex design and relatively high image processing requirements.
As an improvement, the use of a coherent light source, such as a laser, to illuminate a rough surface creates a complex interference pattern, called speckle, which has several advantages, including efficient laser-based light generation and high contrast images even under illumination at normal incidence. Laser-based light generation has a high electrical-to-light conversion efficiency, and a high directionality that enables a small, efficient illumination footprint tailored to match a footprint of the array of photodiodes. Moreover, speckle patterns allow tracking operation on virtually any rough surface (broad surface coverage), while maintaining maximum contrast even under unfavorable imaging conditions, such as being "out of focus".
An alternative approach for measuring linear displacements uses an optical sensor having one-dimensional (1D) arrays of photosensitive elements, such as photodiodes, commonly referred to as a comb-array. The photodiodes within a 1D array may be directly wired in groups to enable analog, parallel processing of the received signals, thereby reducing the signal processing required and facilitating motion detection. For two-dimensional (2D) displacement measurements using this approach, multi-axis linear arrays have been proposed in which two or more 1D arrays are arranged along non-parallel axes.
Although a significant simplification over prior correlation-type optical mice, these 1D comb-array devices have not been wholly satisfactory for a number of reasons. In particular, one drawback of these devices is their limited accuracy along directions that deviate significantly from the 1D array orientations. This is especially a problem where the optical mouse is moved in an off-axis direction, causing the speckle pattern or optical image to enter and leave the field of view of the 1D array too quickly, before the image has a chance to build up an unambiguous signal. This deficiency can be partially remedied by increasing the number of axes, but at the price of reducing the simplicity of the linear comb-array approach.
The approach disclosed in U.S. patent application Ser. No. 11/261,316 (“Two-Dimensional Motion Sensor”) avoids the shortcomings of 1D comb detectors while permitting simpler, more efficient signal processing to provide estimates of 2D displacements.
It is highly desirable to further improve optical navigation apparatus and methods of sensing movement using the same.
The present disclosure relates generally to optical navigation systems, and more particularly to optical sensors for sensing relative lateral movement between the sensor and a surface on or over which it is moved. Optical navigation systems can include, for example, an optical computer mouse, trackballs and the like, and are well known for inputting data into and interfacing with personal computers and workstations.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be evident, however, to one skilled in the art that the invention may be practiced without these specific details. In other instances, well-known structures and techniques are not shown in detail or are shown in block diagram form in order to avoid unnecessarily obscuring an understanding of this description.
In accordance with an embodiment of the invention, the optical sensor senses movement based on displacement of a complex intensity distribution pattern of light, which can be provided by a pattern created from LED (light emitting diode) illumination or from the laser-interference pattern known as speckle. Speckle is essentially the complex interference pattern generated by scattering of coherent light off of a rough surface and detected by an intensity photosensitive element, such as a photodiode, with a finite angular field-of-view (or numerical aperture). More particularly, the optical sensor may comprise a two-dimensional (2D) array that combines the displacement measurement accuracy of a 2D correlator with the signal processing simplicity and efficiency of a linear or one-dimensional (1D) comb-array. The 2D array may be a periodic 2D comb-array, which includes a number of regularly spaced photosensitive elements having 1D or 2D periodicity; a quasi-periodic 2D array (such as a Penrose tiling); or a non-periodic 2D array, which has a regular pattern but does not include periodicities. By a 2D comb-array is meant a planar array of a number of regularly spaced and electrically connected photosensitive elements extending substantially in at least two non-parallel directions, and having periodicity in two dimensions.
I. Image-Correlation vs. Comb-Array Processing
It is instructive to compare the signal processing for image correlation to a comb-array technique in one-dimension (1D).
A. 1D Correlation
The correlation between two signals f and g can be expressed as:
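(The discrete correlation is written here in one standard form, assumed to be consistent with the original display equation; the index n runs over the N samples of the array.)
c_m = Σ_n f_{n+m} g_n    (1)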
It is assumed that both f and g have zero means. Otherwise the signals can always be redefined by offsetting with their respective means.
If g has some resemblance to f, the correlation peaks at the particular value of shift m for which the common features of both signals are generally best aligned, or "correlated." For a "1D" mouse, it is sufficient to consider the case where g is, to a large degree, a displaced version of f, i.e., g_n = f_{n+x}. The correlation becomes:
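(Under the same assumed convention, substituting g_n = f_{n+x}:)
c_m = Σ_n f_{n+m} f_{n+x}    (2)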
where x is the displacement.
The peak of the correlation function (Eq. 2) occurs at m = x. Therefore, knowledge of the peak position determines the displacement.
In the conventional optical mouse, a captured signal f is used as a short-term template to be correlated with a subsequent capture. Once the displacement is determined, the new capture replaces the old template and so on. This dynamic template is desirable for arbitrary signals. If the class of signals is predetermined, such as a periodic signal, a fixed template can be employed, thereby removing the necessity of continuously updating the signal templates. This greatly simplifies the correlation operation as well as the device implementation. Indeed a comb-array is such a device as described in greater detail below.
To this end the signals can be represented as discrete Fourier transform (DFT) expansions as follows:
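(A standard DFT convention is assumed here, with Fourier coefficients F_k and G_k:)
f_n = (1/N) Σ_k F_k e^{2πikn/N},   g_n = (1/N) Σ_k G_k e^{2πikn/N}    (3)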
Thus, correlation (1) becomes:
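(assuming real-valued signals, so that G_{−k} = G_k*:)
c_m = (1/N) Σ_k F_k G_k* e^{2πikm/N}    (4)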
and correlation (2) becomes:
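(using G_k = F_k e^{2πikx/N} for the displaced signal:)
c_m = (1/N) Σ_k F_k F_k* e^{2πik(m−x)/N}    (5)
A single term (k = A) of this sum has, up to normalization, the form of the comb-array signal in equation (6) below.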
B. 1D Comb-Array
A linear or 1D comb-array is an array having multiple photosensitive elements that are connected in a periodic manner, so that the array acts as a fixed template that interrogates one spatial frequency component of the signal. An embodiment of one such 1D comb-array is shown in
Vx = F_A F_A* e^{2πiA(m−x)/N} = C e^{iK(m−x)}    (6)
where C is a slowly varying amplitude and K ≡ 2πA/N is the selected spatial frequency. The factor e^{iKm} can be thought of as the phase that encodes the initial alignment of the selected spatial frequency component and the template.
Thus, it can be concluded that a 1D comb-array is essentially a 1D correlation at one spatial frequency.
II. Two-Dimensional Comb-Array Detector
A 2D comb-array may be constructed and configured to provide a 2D correlation at one spatial frequency vector K = (Kx, Ky).
A. Introduction
The 2D correlation of an image f and a displaced version of itself [(x, y) is the displacement] is:
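(Written in the convention assumed above, with F_{A,B} the 2D Fourier coefficients of f over an N×N frame:)
c_{m,n} = Σ_{j,k} f_{j+m,k+n} f_{j+x,k+y} = (1/N²) Σ_{A,B} |F_{A,B}|² e^{2πi[A(m−x)+B(n−y)]/N}    (7)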
In analogy to equation 6 above, the 2D comb-array signal is:
Vx,y = C e^{iKx(m−x)} e^{iKy(n−y)}    (8)
As above, (Kx, Ky)≡(2πA/N, 2πB/N) is the selected 2D spatial frequency. The comb signal is simply the product of harmonic functions of the x and y displacements. Notice that the comb-array signal is periodic and peaks whenever the template is spatially in-phase with the image spatial frequency.
Setting m, n=0 for simplicity, the exponential products in equation 8 can be expanded into four trigonometric products:
CC = cos(Kx x) cos(Ky y)
CS = cos(Kx x) sin(Ky y)
SC = sin(Kx x) cos(Ky y)
SS = sin(Kx x) sin(Ky y)    (9)
The next step is to determine the 2D array configuration that generates the four signals shown in (9) above.
It is instructive to first review the generation of the in-phase and the quadrature signals in a 1D comb-array configuration with 4 elements per period.
Referring to
The above cosine and sine assignments can now be applied to the 2D case. The result is four matrices shown in
B. Example 2D Comb-array with 4×4 Elements-per-cell
A 2D comb array may now be constructed from the above matrices, as shown in
The eight wired-sum signals are further combined with differential amplifiers 308 to give the following four signals:
CC = A1 − A2
CS = B1 − B2
SC = C1 − C2
SS = D1 − D2    (10)
These four signals contain the in-phase and quadrature information in the x and y directions. Using trigonometric identities, the harmonic products can be converted to simple harmonics (of sum and difference):
cos(Kx x + Ky y) = CC − SS
sin(Kx x + Ky y) = SC + CS
cos(Kx x − Ky y) = CC + SS
sin(Kx x − Ky y) = SC − CS    (11)
Optionally, the coordinate system or the array may be rotated by 45° to obtain expressions in pure x and y. In either orientation, the 2D displacement can then be determined. In practice, Kx and Ky can be taken to be equal.
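As an illustration of equations (9) through (11), the following minimal numerical sketch models the comb wiring as cosine- and sine-weighted sums over a simulated photodiode array and recovers a known sub-pixel displacement from the frame-to-frame changes of the sum and difference phases. It is not the disclosed hardware or firmware; the array size, spatial frequency, test pattern and displacement are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                      # simulated N x N photodiode array (hypothetical size)
K = 2 * np.pi / 4           # selected spatial frequency: four elements per period

# Comb "templates": cosine/sine weights modeling the CC, CS, SC, SS combinations of equation (9)
r, c = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
w_cc = np.cos(K * c) * np.cos(K * r)
w_cs = np.cos(K * c) * np.sin(K * r)
w_sc = np.sin(K * c) * np.cos(K * r)
w_ss = np.sin(K * c) * np.sin(K * r)

def comb_signals(img):
    """Wired-sum/differential outputs, modeled as weighted sums over the array."""
    return (np.sum(img * w_cc), np.sum(img * w_cs),
            np.sum(img * w_sc), np.sum(img * w_ss))

def phases(cc, cs, sc, ss):
    """Phase angles of the sum and difference harmonics of equation (11)."""
    return (np.arctan2(sc + cs, cc - ss),   # ~ K*(x + y) + constant
            np.arctan2(sc - cs, cc + ss))   # ~ K*(x - y) + constant

def shift_periodic(img, dx, dy):
    """Exact sub-pixel circular shift via the Fourier shift theorem (test harness only)."""
    u = np.fft.fftfreq(img.shape[0])[:, None]   # cycles/pixel along rows (y)
    v = np.fft.fftfreq(img.shape[1])[None, :]   # cycles/pixel along columns (x)
    return np.fft.ifft2(np.fft.fft2(img) * np.exp(-2j * np.pi * (u * dy + v * dx))).real

# Speckle-like periodic test pattern and a displaced second frame
frame1 = np.fft.ifft2(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))).real
true_dx, true_dy = 0.7, -0.3                # displacement in pixels between the two frames
frame2 = shift_periodic(frame1, true_dx, true_dy)

p1_plus, p1_minus = phases(*comb_signals(frame1))
p2_plus, p2_minus = phases(*comb_signals(frame2))
d_plus = np.angle(np.exp(1j * (p2_plus - p1_plus)))     # phase change of the sum channel
d_minus = np.angle(np.exp(1j * (p2_minus - p1_minus)))  # phase change of the difference channel

x_est = (d_plus + d_minus) / (2 * K)
y_est = (d_plus - d_minus) / (2 * K)
print(x_est, y_est)                          # approximately (0.7, -0.3)
```

Because Kx = Ky = K in the sketch, the two phase changes are K(x + y) and K(x − y), so their sum and difference recover the x and y displacements, as equation (11) indicates.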
The 2D comb-array detector provides a simplicity of design and several further advantages over the conventional 2D correlation and/or multi-axis 1D comb-array detector, including: (i) faster signal processing; (ii) reduced power consumption; (iii) high angular accuracy; and (iv) performance that is independent of the direction of movement relative to the array orientation.
The 2D comb-array detector has significantly faster signal processing than a correlation-based approach because the 2D comb-array generates much less data to process, and consequently much simpler algorithms to execute. For example, a zero-crossing detection algorithm can be employed to determine the displacements. To specify a displacement in a plane, two real numbers are needed, namely the x and y translations. In a conventional correlation-based optical mouse, these two real numbers are determined from successive image correlation. Because each image in the correlation-based approach typically comprises about 10^3 pixels, a large amount of data needs to be processed just to determine the two x- and y-translation values. In contrast, the 2D comb-array produces only four (4) positive real numbers, which are equivalent to just two (2) signed real numbers. In a sense, parallel processing is built into the interconnection architecture of the 2D comb-array. By "wiring" the processing into the architecture, the remaining external computation becomes relatively simple and can be accomplished quickly. Simple computation translates to smaller signal processing circuitry, while faster processing allows high velocity tracking and increased resources to implement sophisticated digital signal processing (DSP) algorithms that can boost tracking performance of an optical navigation system using the optical sensor even further.
The 2D comb-array detector is expected to consume less electric power than a correlation-based device because it has much less data to process, and consequently much simpler algorithms to implement. This is a highly desirable feature for power-sensitive applications such as a wireless optical mouse. The electric power consumption can be further reduced by combination with efficient laser illumination, such as in laser speckle based mice.
The angular accuracy of a 2D comb-array based optical navigation sensor may be scaled much more easily than that of a conventional 2D correlator based optical navigation sensor. The minimum angle that can be detected by a 2D sensor is inversely proportional to the number of photosensitive elements in a row or a column. Improving angular accuracy depends generally on an increase in the number of photosensitive elements of the array. This constitutes a heavy penalty for a 2D correlator sensor, because the quantity of data to be processed goes up quadratically with the number of elements in a row or a column. In contrast, the quantity of data or number of signals to be processed in a 2D comb-array sensor is independent of the number of elements. That is, the number of differential signals output from the 2D comb-array is always equal to four in a 2D comb-array having a configuration similar to that shown in
Finally, compared to the multi-axis 1D comb-array detector, the performance of the 2D comb-array detector is independent of the direction of movement relative to the array. Referring to
C. Exemplary Embodiment and Experimental Validation
An exemplary embodiment of an optical navigation system having a 2D comb-array is shown in
For the purpose of validating advantages of an optical navigation system 502 having a 2D comb-array 514, a square, symmetrical 2D comb-array similar to that shown in
D. Array Generalizations
Numerous generalizations for linear or 1D comb-arrays have been described, for example, in co-pending, commonly assigned U.S. patent application Ser. Nos. 11/129,967, 11/123,525, and 11/123,326, each of which is incorporated herein by reference in its entirety. Many of these generalizations are similarly applicable to the 2D comb-array, including: (i) 2D comb-arrays having other than 4×4 elements-per-cell; (ii) 2D comb-arrays having multiple sub-arrays of a given spatial frequency; (iii) 2D comb-arrays having multiple sub-arrays of different spatial frequencies; and (iv) 2D comb-arrays with dynamically reconfigurable comb connections between the photosensitive elements to enable the spatial frequency to be dynamically changed, for example, to optimize the strength of signals from the array. It will further be appreciated that a 2D comb-array may also include a combination of the above generalizations or embodiments.
Certain alternative embodiments of a 2D comb-array including one or more of the above generalizations will now be described in greater detail with reference to
One alternative embodiment of the 2D comb-array has other than 4×4 elements-per-cell. For example, as shown in
In other alternative embodiments, the optical sensor can include multiple 2D comb-arrays or sub-arrays of a given spatial frequency or of different spatial frequencies. For example,
As in the examples described above, elements within each cell 812 in a quadrant 804, 806, 808 and 810, as well as corresponding elements of all cells in the array-pair, are coupled to form sixteen (16) wired-sum signals 814. The 16 wired-sum signals 814 are further combined with differential amplifiers 816 to produce eight (8) signals: CC1, CS1, SC1, SS1 from the first 2D comb-array, and CC2, CS2, SC2, SS2 from the second 2D comb-array. In operation, the strengths of the signals from either of the 2D comb-arrays or array-pairs may decrease because the selected spatial frequency component is weak at some particular location on the surface, or because contributions from various parts of the array add coherently to zero. However, it will be appreciated that fading in any one array-pair is unlikely to result in fading in the other pair; therefore, such a multiple array or sub-array configuration is often desirable to mitigate signal fading. Moreover, the square symmetry arrangement of the optical sensor 802 enables simple and efficient illumination of all photosensitive elements 818 in the optical sensor.
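One plausible way to exploit the redundancy of the two array-pairs, sketched below purely for illustration, is to weight each pair's displacement estimate by its signal magnitude, so that a pair experiencing fading contributes little. The magnitude-squared weighting, and the assumption that both pairs operate at the same spatial frequency, are illustrative choices rather than part of the disclosure.

```python
def combine_estimates(est_1, mag_1, est_2, mag_2):
    """Magnitude-weighted combination of one axis's displacement estimate from two array-pairs."""
    w1, w2 = mag_1 ** 2, mag_2 ** 2       # weight each array-pair by its signal magnitude squared
    if w1 + w2 == 0.0:
        return 0.0                        # both pairs faded: report no detectable motion
    return (w1 * est_1 + w2 * est_2) / (w1 + w2)

# Example: the second pair has faded (weak signals), so the combined estimate follows the first pair.
print(combine_estimates(0.30, 5.0, 0.80, 0.2))   # ~0.30
```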
E. Velocity Prediction Using 2D Comb-array Detector
As discussed above, the technique of tracking 2D motion using a 2D comb-array detector has been developed for optical navigation sensors. This technique needs much less signal processing power than conventional optical navigation sensor technology, which is based on 2D image correlation over successive surface images. The reduced processing requirements and power consumption of the 2D comb-array detector make it advantageously suitable for power-sensitive applications, such as wireless optical navigation sensors.
A 2D comb-array detector comprises a 2D array of photo-detector cells in which the individual detectors in the array are wired together in a repeating 2D pattern spanning M detectors along each of two orthogonal axes. For example, in the embodiment illustrated in
Consider such a 2D comb-array detector with M=4. As shown in
The four quasi-sinusoidal output signals (CC, CS, SC, and SS) represent separate in-phase and quadrature signals. These four quasi-sinusoidal output signals may be processed for motion along each of two orthogonal axes so as to track the 2D movement of the surface relative to the detector array. In particular, the four quasi-sinusoidal outputs may be processed according to the following equations.
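One form consistent with the sum and difference harmonics of equation (11) is the following; the assignment of the sum channel to x and the difference channel to y (with Kx = Ky and the 45° rotation noted above) is an assumption made here for concreteness.
φx = tan⁻¹[(CS + SC)/(CC − SS)],   Rx = √[(CC − SS)² + (CS + SC)²]
φy = tan⁻¹[(SC − CS)/(CC + SS)],   Ry = √[(CC + SS)² + (SC − CS)²]    (12)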
At each sample frame, the phase angle values φx and φy and the radius values Rx and Ry may be computed in accordance with the above equations. The radius values Rx and Ry indicate the contrast of the detected quasi-sinusoidal signals. The phase angle changes (Δφx and Δφy) relative to the previous sample frame are basically proportional to the 2D displacements along the two orthogonal axes between the current and previous sample frames. The phase angle changes (Δφx and Δφy) at frame i may be determined in accordance with the following equations.
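(In the straightforward frame-differencing form assumed here:)
Δφx,i = φx,i − φx,i−1,   Δφy,i = φy,i − φy,i−1    (13)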
In the above equations, the current frame is denoted by i, such that phase angles for a current frame are denoted by the subscript i, and the phase angles for the previous frame are denoted by the subscript i-1.
While Δφx and Δφy are indicative of motion along the x and y axes, they do not completely reflect the actual two-dimensional motion. This is because the values of Δφx and Δφy are restricted to the range from −π to +π due to the inverse tangent function. In other words, the values of Δφx and Δφy are “wrapped” in the range [−π, +π].
Consider the functions ΔΦx and ΔΦy to be “unwrapped” versions of Δφx and Δφy, respectively. Hence, Δφx is a modulo function of ΔΦx, and Δφy is a modulo function of ΔΦy, where the values of Δφx and Δφy each “wraps” within the range [−π, +π]. ΔΦx and ΔΦy are indicative of the actual (unwrapped) motion of the sensor relative to the surface.
Since Δφx and Δφy are computed from the differential signals output by the 2D comb array, they may be “unwrapped” to determine the functions ΔΦx and ΔΦy. Such unwrapping of an example 1D function Δφ(t) to generate the corresponding 1D function ΔΦ(t) is illustrated in
With such an initial assumption, the "actual velocities" ΔΦx and ΔΦy may be computed, for example, by tracking the average velocities over the past K frames (where K>2) and assuming that the next velocity will be within ±π of the average velocity. This computation may be implemented according to the following equations.
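(One form consistent with the INTEGER function described below and with the ±π assumption is:)
ΔΦx,i = Δφx,i + 2π·INTEGER[(<ΔΦx> − Δφx,i + π)/(2π)]
ΔΦy,i = Δφy,i + 2π·INTEGER[(<ΔΦy> − Δφy,i + π)/(2π)]    (14)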
In the above equations, the INTEGER function takes the largest integer value that is not greater than its argument. <ΔΦx> and <ΔΦy> are the average velocities over the past K frames (K>2) and may be computed according to the following equations.
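(a simple moving average over the past K frames is assumed:)
<ΔΦx> = (1/K) Σ_{j=1..K} ΔΦx,i−j,   <ΔΦy> = (1/K) Σ_{j=1..K} ΔΦy,i−j    (15)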
<ΔΦx> and <ΔΦy> are utilized within the INTEGER functions so as to track the number of “2π” rotations that have occurred within a frame. These average velocity values <ΔΦx> and <ΔΦy> may be considered to be “velocity predictors.”
In other words, to compute the actual x and y displacements (ΔΦx and ΔΦy) between two successive frames, the phase angle changes Δφx and Δφy need to be “unwrapped” to account for the number of full 2π phase rotations that may have occurred between the two sample frames. The actual x and y displacements (ΔΦx and ΔΦy) may be determined in accordance with Equations (14) given above.
III. Variable Tracking Resolution
A. Resolution of 2D Comb-array Detector
In the conventional 2D correlation-based technique, the precision of the displacement measurement is inherently limited by the photodiode resolution. However, using the above-discussed 2D comb-array based technique, the precision of the displacement measurement is limited by the precision of the measurement of the phase angle from the analog CS, SC, CC and SS signals. These analog signals may provide for a high resolution displacement measurement without increasing the size or complexity of the photodiode array.
Consider a 2D comb-array detector where the period of the 2D comb-array is Λcomb, the phase step is Δφ, and the imaging optic magnification is m. In that case, the theoretical resolution of the 2D comb-array detector is given by the following equation.
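One form consistent with the numerical example that follows is assumed here: a surface displacement d produces a phase change of 2π·m·d/Λcomb, so a phase step Δφ corresponds to the smallest resolvable displacement, giving
Resolution ≈ (2π/Δφ)·(m/Λcomb) counts per unit of surface displacement,    (16)
or, equivalently, a minimum detectable displacement of (Δφ/2π)·(Λcomb/m).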
Typical values may be, for example, Δφ ~ 3 degrees, m/Λcomb ~ 1000 per inch, and 5-bit measurements. In that case, the theoretical displacement resolution reaches on the order of 10^5 (100,000) counts per inch. With such a high theoretical displacement resolution, an actual system may be limited primarily by noise in the analog circuits. Furthermore, such high theoretical resolution may be considered to provide continuous variability in the linear resolution of the estimated displacements in x and y.
B. Variable Tracking Resolution Using Divisor
One technique for variable tracking resolution according to an embodiment of the invention involves using a divisor. This technique is now discussed in relation to the flow charts of
In block 1002, a displacement is measured in “phase angle changes” for a current frame (frame i). This measured displacement may be denoted Mx,i.
In block 1004, a remainder phase angle from a previous frame (frame i-1) is added to the measured displacement Mx,i. This remainder phase angle for the previous frame may be denoted Rx,i-1. The result is an adjusted displacement in phase angle changes for the current frame. This adjusted displacement may be denoted M′x,i. In equation form, M′x,i=Mx,i+Rx,i-1.
Note the step in block 1004 advantageously avoids losing residual counts (i.e. the remainder phase angle) from a prior frame. Without this step, applicants believe that the effective resolution would drop at a rate of 1.5 Dx, and noise in the output would increase with Dx/2 due to the lost measurement counts in each calculation.
In block 1006, the adjusted displacement M′x,i is divided by the divisor Dx. This division operation generates the current frame's output displacement in counts Ox,i and a remainder Rx,i for the current frame. In equation form,
Ox,i=M′x,i/Dx
Rx,i=M′x,i mod(Dx) (17)
where the mod ( ) operator represents a modulus or remainder operator.
The displacement Ox,i is output for use by the optical navigation apparatus per block 1008, and the remainder Rx,i is stored per block 1010.
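The x-axis procedure of blocks 1002 through 1010 may be summarized in the following minimal sketch (a hypothetical helper, not the disclosed firmware); the floor-division and modulus convention used for negative displacements is an assumption, since the text does not specify one.

```python
class AxisDivider:
    """Per-axis variable-resolution divider (blocks 1002-1010); instantiate one for x and one for y."""

    def __init__(self, divisor):
        self.divisor = divisor        # Dx: a larger divisor yields a coarser reported resolution
        self.remainder = 0            # R_{x,i-1}: residual phase-angle counts carried forward

    def update(self, measured_counts):
        """measured_counts is M_{x,i}, the current frame's displacement in phase-angle counts."""
        adjusted = measured_counts + self.remainder     # M'_{x,i} = M_{x,i} + R_{x,i-1}
        output = adjusted // self.divisor               # O_{x,i} = M'_{x,i} / Dx
        self.remainder = adjusted % self.divisor        # R_{x,i} = M'_{x,i} mod(Dx)
        return output                                   # displacement in counts reported to the host

# Example: with Dx = 8, small motions accumulate across frames instead of being discarded.
axis_x = AxisDivider(divisor=8)
print([axis_x.update(m) for m in (5, 5, 5, 5)])   # -> [0, 1, 0, 1]; no residual counts are lost
```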
Similarly,
In block 1022, a displacement is measured in “phase angle changes” for a current frame (frame i). This measured displacement may be denoted My,i.
In block 1024, a remainder phase angle from a previous frame (frame i-1) is added to the measured displacement My,i. This remainder phase angle for the previous frame may be denoted Ry,i-1. The result is an adjusted displacement in phase angle changes for the current frame. This adjusted displacement may be denoted M′y,i. In equation form, M′y,i=My,i+Ry,i-1.
Note the step in block 1024 advantageously avoids losing residual counts (i.e. the remainder phase angle) from a prior frame. Without this step, applicants believe that the effective resolution would drop at a rate of 1.5 Dy, and noise in the output would increase with Dy/2 due to the lost measurement counts in each calculation.
In block 1026, the adjusted displacement M′y,i is divided by the divisor Dy. This division operation generates the current frame's output displacement in counts Oy,i and a remainder Ry,i for the current frame. In equation form,
Oy,i=M′y,i/Dy
Ry,i=M′y,i mod(Dy) (18)
where the mod ( ) operator represents a modulus or remainder operator.
The displacement Oy,i is output for use by the optical navigation apparatus per block 1028, and the remainder Ry,i is stored per block 1030.
The processes shown in
Note that the technique discussed above in relation to
The various circuitry shown includes analog-to-digital conversion (ADC) circuits 1102, data acquisition buffers 1104, and data processing circuitry 1106. The data processing circuitry 1106 includes memory 1108 for storing and retrieving instructions and data. The ADC circuits 1102 are configured to receive the analog output signals (CC, CS, SC, and SS) from the 2D comb array 302 and convert them to digital form. The data acquisition buffers 1104 are configured to receive the digital output signals from the ADC circuits 1102 and to buffer the digital data such that the data may be appropriately processed by the data processing circuitry 1106.
The memory 1108 may be configured to include various computer-readable code, including computer-readable code configured to implement the steps described above in relation to
C. Controlling Maximum Achievable Tracking Resolution
According to an embodiment of the invention the least significant bit (LSB) size of analog-to-digital conversion (ADC) measurements may be varied to control the maximum achievable tracking resolution. The LSB size of an ADC measurement relates to the quantization error due to the finite resolution of the ADC. In other words, varying the LSB size effectively varies the ADC resolution.
More particularly, the technique involves changing the LSB size of ADC measurements of the CS, SC, CC, and SS signals. By changing these resolutions, the accuracy of the phase change measurement is affected. A reduction in ADC resolution increases the phase change measurement error and reduces the maximum achievable resolution of the tracking system. Conversely, an increase in ADC resolution reduces the phase change measurement error and increases the maximum achievable resolution of the tracking system.
For example, consider an embodiment where the apparatus includes slope converter type ADC circuits to convert the CS, SC, CC and SS signals from analog to digital form. In such slope converter type ADC circuits, a slope signal is generated using a current, and the slope signal is compared to the measured analog signal. In this case, the current used to generate the slope signal may be varied while a constant clock rate is maintained. As the slope is increased, the ADC resolution is effectively decreased. Conversely, as the slope is decreased, the ADC resolution is effectively increased.
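As a rough numerical illustration of this relationship (the capacitor value, clock rate, and full-scale range below are hypothetical and not taken from the disclosure), a single-slope converter's ramp rises at I/C volts per second, so at a fixed clock rate the voltage resolved per clock period, i.e. the effective LSB size, scales directly with the slope current:

```python
import math

def slope_adc_lsb(i_slope_amps, cap_farads, clock_hz):
    """Effective LSB (volts per count) of a single-slope ADC: ramp slope I/C divided by the clock rate."""
    return i_slope_amps / (cap_farads * clock_hz)

V_FULL_SCALE = 1.0                          # assumed analog signal range (volts)
for i_ua in (1.0, 2.0, 4.0):                # doubling the slope current halves the effective resolution
    lsb = slope_adc_lsb(i_ua * 1e-6, cap_farads=10e-12, clock_hz=20e6)
    bits = math.log2(V_FULL_SCALE / lsb)
    print(f"I = {i_ua:.0f} uA -> LSB = {lsb * 1e3:.1f} mV, ~{bits:.1f} effective bits")
```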
Controlling the maximum achievable tracking resolution using ADC LSB size is well suited to analog measurements, such as the analog measurements from a 2D comb-array detector. However, this approach would not work well with inherently discrete measurements, such as the discrete measurements from an image-correlation-based detection system.
In block 1201, an electrical current level (or other control parameter) is set. The electrical current level (or other control parameter) may be selected by a user or in a more automated fashion by system software. The electrical current level (or other control parameter) may be utilized to vary the LSB size of the ADC circuitry.
In block 1202, the analog signals (e.g., the CC, CS, SC and SS signals) are received from the 2D comb array. In block 1204, these analog signals are converted to digital signals using the ADC circuitry, where the resolution of the conversion is controlled by the electrical current level (or other control parameter). The digital signals are output in block 1206, and the data is processed in block 1208 so as to result in the navigation tracking.
The various circuitry shown includes analog-to-digital conversion (ADC) circuits 1102, data acquisition buffers 1104, data processing circuitry 1106, and circuitry to control maximum achievable tracking resolution 1302. The ADC circuits 1102 are configured to receive the analog output signals (CC, CS, SC, and SS) from the 2D comb array 302 and convert them to digital form. The data acquisition buffers 1104 are configured to receive the digital output signals from the ADC circuits 1102 and to buffer the digital data such that the data may be appropriately processed by the data processing circuitry 1106. The data processing circuitry 1106 includes memory 1108 for storing and retrieving instructions and data. The memory 1108 may be configured to include various computer-readable code, including computer-readable code configured to implement the navigation tracking.
In accordance with an embodiment of the invention, the circuitry for controlling the maximum achievable tracking resolution 1302 provides the electrical current level (or other control parameter) to the ADC circuitry 1102. By controllably varying the electrical current level (or other control parameter), the resolution of the ADC circuitry 1102 may be varied. Varying the resolution of the ADC circuitry 1102 changes the maximum achievable tracking resolution of the optical navigation apparatus.
IV. Conclusion
In a conventional optical mouse, the resolution of the tracking is typically set by the pitch of the pixels in the CMOS image-capture camera, and by the configuration of the optics and the image-correlation signal processing algorithm used to determine motion. While lower values of image resolution may be obtained by “binning” (combining in logic the signal values from the individual pixels), the capability to adjust the resolution value up or down in a continuous or quasi-continuous manner cannot be readily achieved by the conventional optical navigation methodology.
Unlike the conventional optical navigation technique, the present disclosure provides technology for varying the tracking resolution in a continuous or quasi-continuous manner. The variation in tracking resolution may be considered as continuous or quasi-continuous when the resolution is adjustable in steps which are sufficiently small so as to be perceived as continuous by a human user. In other words, the small steps appear to a user to be a continuous adjustment without perceptible increments.
In accordance with one embodiment, the continuous or quasi-continuous variable tracking resolution is provided by processing the data signals from a 2D comb-array detector using a divisor-based algorithm. In accordance with another embodiment, the upper limit of the tracking resolution may be controlled by the resolution of ADC circuitry used with the 2D comb-array detector.
Being able to vary tracking resolution in a continuous or quasi-continuous manner has various advantageous applications. In general, this capability allows for improved customization and better feel for various applications, such as in the gaming and computer aided design (CAD) markets.
For example, the resolution of an optical mouse may be automatically adjusted “on the fly” by control software so as to be a function of an estimated speed at which the mouse is being moved. For instance, the tracking resolution may be adjusted to be higher at slower speeds, and to be lower at faster speeds. The desired resolution at each velocity range may be pre-defined, and the sensor may change the resolution scale based on these pre-defined settings and the velocity that the sensor is currently predicting.
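A minimal sketch of such "on the fly" switching is given below; the speed thresholds and divisor values are hypothetical placeholders for the pre-defined settings mentioned above, and the speed estimate is assumed to be derived from the velocity predictors <ΔΦx> and <ΔΦy> of Section II.E.

```python
# (upper speed bound in phase-angle counts per frame, divisor applied at or below that speed)
RESOLUTION_TABLE = [
    (10.0, 1),              # slow, precise motion: finest tracking resolution
    (50.0, 4),              # intermediate speeds
    (float("inf"), 16),     # fast sweeps: coarsest tracking resolution
]

def select_divisor(avg_dphi_x, avg_dphi_y):
    """Pick a divisor from the predicted speed (magnitude of the average phase changes)."""
    speed = (avg_dphi_x ** 2 + avg_dphi_y ** 2) ** 0.5
    for max_speed, divisor in RESOLUTION_TABLE:
        if speed <= max_speed:
            return divisor

print(select_divisor(3.0, -2.0))    # -> 1 (slow motion, high resolution)
print(select_divisor(80.0, 60.0))   # -> 16 (fast motion, low resolution)
```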
Note that while a preferred embodiment of the techniques disclosed herein utilizes laser speckle, the techniques disclosed herein may be applicable to various 2D optical navigation sensors, including both laser-based sensors and LED-based sensors.
The foregoing description of specific embodiments and examples of the present invention has been presented for the purpose of illustration and description, and it is not to be construed as limiting. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications, improvements and variations within the scope of the invention are possible in light of the above teaching. It is intended that the scope of the present invention encompass the generic area as herein disclosed and be defined by the claims appended hereto and their equivalents.