Touch sensing apparatus and method of operating the same

Information

  • Patent Grant
  • Patent Number
    10,474,249
  • Date Filed
    Thursday, June 14, 2018
  • Date Issued
    Tuesday, November 12, 2019
Abstract
A touch sensing apparatus includes a group of emitters arranged to emit light to illuminate at least part of the touch surface, a light detector arranged to receive light from the group of emitters, and a processing element. Each emitter is controlled to transmit a code by way of the emitted light such that the code identifies the respective emitter. The codes may at least partly be transmitted concurrently. The codes may be selected such that a value of an autocorrelation of each code is significantly higher than a value of a cross-correlation between any two codes of different emitters. The processing element processes an output signal from the light detector to separate the light received from the individual emitters based on the transmitted codes, and to determine the position of the object/objects based on the light received from the individual emitters.
Description
TECHNICAL FIELD

The present invention relates to techniques for determining the location of one or more objects on a touch surface.


BACKGROUND

To an increasing extent, touch-sensitive panels are being used for providing input data to computers, electronic measurement and test equipment, gaming devices, etc.


In one category of touch-sensitive panels, known from e.g. U.S. Pat. No. 3,673,327, a plurality of optical emitters and optical receivers are arranged around the periphery of a touch surface to create a grid of intersecting light paths above the touch surface. Each light path extends between a respective emitter/receiver pair. An object that touches the touch surface will block certain ones of the light paths. Based on the identity of the receivers detecting a blocked light path, a processor can determine the location of the intercept between the blocked light paths. This type of system is only capable of detecting the location of one object (single-touch detection). Further, the required number of emitters and receivers, and thus cost and complexity, increases rapidly with increasing surface area and/or spatial resolution of the touch panel.


In a variant, e.g. shown in WO2006/095320, each optical emitter emits a beam of light that diverges across the touch surface, and each beam is detected by more than one optical receiver positioned around the periphery of the touch surface. Thus, each emitter creates more than one light path across the touch surface. A large number of light paths are created by sequentially activating different emitters around the periphery of the touch surface, and detecting the light received from each emitter by a plurality of optical receivers. Thereby, it is possible to reduce the number of emitters and receivers for a given surface area or spatial resolution, or to enable simultaneous location detection of more than one touching object (multi-touch detection). However, this is achieved at the cost of a reduced temporal resolution, since the emitters are activated in sequence. This may be a particular drawback when the number of emitters is large. To increase the temporal resolution, each emitter may be activated during a shortened time period. However, this may result in a significant decrease in signal-to-noise ratio (SNR).


SUMMARY

It is an object of the invention to at least partly overcome one or more of the above-identified limitations of the prior art.


This and other objects, which will appear from the description below, are at least partly achieved by means of a touch sensing apparatus, a method of operating a touch sensing apparatus and a computer-readable medium according to the independent claims, embodiments thereof being defined by the dependent claims.


According to a first aspect, there is provided a touch sensing apparatus, comprising: a touch surface; a group of emitters arranged to emit light to illuminate at least part of the touch surface; a light detector arranged to receive light from the group of emitters; and a processing element configured to process an output signal from the light detector to determine the position of one or more objects interacting with the touch surface; wherein each emitter is controlled to transmit a code by way of the emitted light such that the code identifies the respective emitter, and wherein the processing element is configured to separate the light received from individual emitters based on the transmitted codes.


According to a second aspect, there is provided a method of operating a touch sensing apparatus, which comprises a touch surface, a group of emitters arranged to emit light to illuminate at least part of the touch surface, and a light detector arranged to receive light from the group of emitters, said method comprising: controlling each emitter to transmit a code by way of the emitted light such that the code identifies the respective emitter; processing an output signal from the light detector to separate the light received from the individual emitters based on the transmitted codes; and determining the position of one or more objects interacting with the touch surface based on the light received from the individual emitters.


According to a third aspect, there is provided a computer-readable medium storing processing instructions that, when executed by a processor, perform the method according to the second aspect.


It is also an objective to provide an alternative to the touch sensing techniques of the prior art, and in particular a touch sensing technique that is capable of accurately determining a touch location irrespective of the shape of the touching object. This objective is at least partly achieved by means of a further inventive concept.


According to a first aspect of the further inventive concept, there is provided a touch sensing apparatus, comprising: a light transmissive element that defines a touch surface; a set of emitters arranged around the periphery of the touch surface to emit beams of light into the light transmissive element, wherein the beams of light propagate inside the light transmissive element while illuminating the touch surface such that an object touching the touch surface causes an attenuation of the propagating light, wherein each beam of light diverges in the plane of the touch surface as the beam propagates through the light transmissive element; a set of light detectors arranged around the periphery of the touch surface to receive light from the set of emitters on a plurality of light paths, wherein each light detector is arranged to receive light from more than one emitter; and a processing element configured to determine, based on output signals of the light detectors, a light energy value for each light path; to generate a transmission value for each light path based on the light energy value; and to operate an image reconstruction algorithm on at least part of the thus-generated transmission values so as to determine the position of the object on the touch surface.


According to a second aspect of the further inventive concept, there is provided a method in a touch sensing apparatus. The touch sensing apparatus comprises a light transmissive element that defines a touch surface, a set of emitters arranged around the periphery of the touch surface to emit beams of light into the light transmissive element, wherein the beams of light propagate inside the light transmissive element while illuminating the touch surface such that an object touching the touch surface causes an attenuation of the propagating light, and wherein each beam of light diverges in the plane of the touch surface as the beam propagates through the light transmissive element, said apparatus further comprising a set of light detectors arranged around the periphery of the touch surface to receive light from the set of emitters on a plurality of light paths and generate a set of output signals that represents the light energy received by each detector, wherein each light detector is arranged to receive light from more than one emitter. The method comprises the steps of: determining, based on the set of output signals, a light energy value for each light path; generating a transmission value for each light path by dividing the light energy value by a background value; and operating an image reconstruction algorithm on at least part of the thus-generated transmission values so as to determine the position of the object on the touch surface.


According to a third aspect of the further inventive concept, there is provided a computer-readable medium storing processing instructions that, when executed by a processor, perform the method according to the second aspect.


Still other objectives, features, aspects and advantages of the present invention will appear from the following detailed description, from the attached claims as well as from the drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

Example embodiments of the invention will now be described in more detail with reference to the accompanying schematic drawings.



FIG. 1 is a top plan view of a touch sensing apparatus with detection of light beams above a touch surface.



FIG. 2 is a side view of the apparatus in FIG. 1.



FIGS. 3(A)-3(C) are top plan views of another embodiment, with FIG. 3(A) illustrating light paths between a single emitter and plural detectors, FIG. 3(B) illustrating a detection grid formed by all light paths, and FIG. 3(C) illustrating the light paths affected by a touching object.



FIGS. 4(A)-4(E) are top plan views of the apparatus in FIG. 1, illustrating activation of emitters in a sequence of time intervals during a code-generation cycle.



FIG. 5 is a timing diagram for the activation of the individual emitters in FIGS. 4(A)-4(E).



FIG. 6 is a top plan view of an alternative embodiment.



FIG. 7 is a top plan view of a touch sensing apparatus with detection of light beams propagating inside a light transmissive panel.



FIG. 8 is a side view of the apparatus in FIG. 7.



FIG. 9 is a side view of another touch sensing apparatus with detection of light beams propagating inside a light transmissive panel.



FIG. 10 is a top plan view of the apparatus in FIG. 9.



FIG. 11 is a top plan view of a touch sensing apparatus with detection of light scattered from a touching object.



FIGS. 12-15 are top plan views illustrating exemplary arrangements of emitters and detectors around the periphery of a touch surface.



FIG. 16 is a side view of an exemplary arrangement of an emitter and a panel.



FIG. 17 is a flow chart of an exemplary method for determining touch locations.



FIGS. 18-19 are graphs of signals obtained in a touch sensing apparatus.



FIGS. 20-21 are timing diagrams to illustrate alternative ways of embedding codes by modulation of light.



FIGS. 22-23 are top plan views of different embodiments using frequency modulation for embedding codes.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

The description starts out by presenting an embodiment of a touch sensing apparatus that creates a grid of light beams above a touch surface. Then follows a description of codes to be transmitted by a group of emitters in a touch sensing apparatus according to embodiments of the invention, together with examples of criteria for selecting and optimizing the codes and for combining codes between different groups of emitters. Thereafter, embodiments of alternative types of touch sensing apparatuses are described, as well as exemplifying arrangements of emitters and detectors. The description is concluded by a data processing example, and a general discussion about components of a touch sensing apparatus according to embodiments of the invention. Throughout the description, the same reference numerals are used to identify corresponding elements.



FIG. 1 is a top plan view of a touch surface 1 which is illuminated by a plurality of emitters 2. The emitters 2 are arranged around the periphery of the touch surface 1. Each emitter 2 can be activated by a control unit 3 to generate a diverging beam of light above the touch surface 1, as seen in a top plan view. The beam of light is suitably collimated to propagate parallel to the touch surface 1, as seen in the side view of FIG. 2. A plurality of optical detectors 4 are arranged around the periphery to detect the light emitted by the emitters 2, and a processing element 5 is electrically connected to the detectors 4 to receive a respective output or measurement signal that represents the light energy received by each detector 4.


Thus, light paths are formed between each emitter 2 and a number of detectors 4.


The light paths, which are indicated by dashed lines, collectively define a detection grid. As shown in FIG. 1, each detector 4 receives light from a group of emitters 2, along a plurality of light paths, wherein each light path has a given angle of incidence to the detector 4.


An object 7 that is brought into the vicinity of the touch surface 1 within the detection grid may at least partially block one or more light paths, as indicated in the side view of FIG. 2. Whenever the object 7 at least partially blocks two or more light paths, i.e. when the object 7 is brought close to any intersection between the dotted lines in FIG. 1, it is possible to determine the location of the object 7. The processing element 5 processes the output signals from the detectors 4 to identify the blocked light paths. Each blocked light path corresponds to an angle of incidence at a specific detector 4, and thus the processing element 5 can determine the location of the object 7 by triangulation.


The location of the object 7 is determined during a so-called sensing instance, and the temporal resolution of the apparatus in FIG. 1 is given by the duration of each sensing instance. The duration of a sensing instance is set by the time required for generating a complete detection grid and/or the time required for sampling the output signals from all detectors 4 at an acceptable signal-to-noise ratio (SNR).


The spatial resolution of the touch sensing apparatus of FIG. 1 is dependent on the density of the detection grid. For example, it may be desirable to attain a high and possibly uniform density of light path intersections. This may be achieved by proper selection of the number and location of emitters 2 and detectors 4, as well as by proper selection of the beam angle of the emitters 2 and the field of view of the detector 4 (i.e. the range of angles at which the detector is capable of receiving incoming light).


As noted above, each detector 4 receives light from a group of emitters 2. Thus, the output signal from each detector 4 will represent the received light energy from a number of light paths. The apparatus is designed to allow the processing element 5 to distinguish between the contributions of different emitters 2 to the output signal of a specific detector 4. To this end, each emitter 2 is controlled to transmit a code by way of the emitted light such that the code identifies the respective emitter 2 to the detector 4, and the processing element 5 is configured to separate the light received by the detector 4 from individual emitters 2 based on the transmitted codes. As will be further explained below, this allows two or more emitters 2 to generate a beam at the same time, even if these beams overlap on one and the same detector 4. This in turn enables the temporal resolution and/or the SNR to be improved, compared to a scenario in which the individual emitters 2 are activated in sequence one after the other during a sensing instance.


In the context of the present application, a “code” denotes any time-varying function that can be embedded in the transmitted light. For example, the code may be a sequence of discrete values, e.g. binary values. Alternatively, the code may be a periodic function, e.g. a cosine function.


Each code is typically emitted during a code-generating cycle. The code-generating cycles of different emitters 2 may or may not be concurrent in time. It should be understood that a detection grid for a sensing instance is set up when all emitters 2 of the touch sensing apparatus have completed one code-generating cycle.


Typically, the code is embedded into the emitted light by modulation. Thus, the processing element 5 is able to discriminate between simultaneous transmissions of modulated light from different emitters 2 based on the time-resolved output signal of a single detector 4. Thereby, the processing element 5 may identify each of the emitters 2 in the output signal and measure the energy of the modulated light from the identified emitter 2 in the output signal.


In one embodiment, the codes are selected such that a value of an autocorrelation of each code is significantly higher than a value of a cross-correlation between any two codes of different emitters 2. The processing element 5 may, e.g., measure the energy of the individual emitters 2 by correlating the output signal with a set of known signal patterns that represent the available codes.
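
As a rough illustration of this kind of code-based separation, the following sketch (Python with NumPy; the code length, emitter count and energy values are hypothetical illustration choices) builds mutually orthogonal ±1 codes from a small Hadamard matrix, sums the coded contributions of four emitters into one detector signal, and recovers the individual energies by correlating the signal with each known code pattern:

```python
import numpy as np

# Build a 4x4 Hadamard matrix by the Sylvester recursion; its rows are
# mutually orthogonal +/-1 codes: the autocorrelation of each code equals
# the code length, while the cross-correlation between different codes is 0.
H = np.array([[1]])
for _ in range(2):
    H = np.block([[H, H], [H, -H]])
codes = H                                   # one +/-1 code per emitter (4 emitters, 4 time intervals)

energies = np.array([0.9, 0.2, 0.0, 0.7])   # hypothetical light energies reaching the detector
rng = np.random.default_rng(0)
output = codes.T @ energies + 0.01 * rng.standard_normal(4)   # time-resolved detector output

# Correlate the output with each known code pattern to separate the emitters.
estimates = codes @ output / 4.0
print(np.round(estimates, 3))               # approximately [0.9, 0.2, 0.0, 0.7]
```

Because the codes are orthogonal, the cross-correlation terms vanish and each correlation isolates one emitter, which is the property that the selection criterion above aims for.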


If the code is a sequence of discrete values, the code-generating cycle may include a sequence of time intervals, wherein each time interval includes one value of the code.


Before discussing the selection of codes in more detail, some general advantages of using wide-angle beams will be briefly discussed. FIG. 3 shows an embodiment in which a large number of emitters 2 and detectors 4 are alternately arranged around the periphery of the touch surface 1. FIG. 3(A) illustrates the light paths that are set up between one of the emitters 2 and a number of detectors 4 when the emitter emits a beam of light. FIG. 3(B) illustrates the complete detection grid that is generated during a sensing instance, when all emitters 2 have been activated. Clearly, a dense grid is generated, allowing a high spatial resolution.



FIG. 3(C) illustrates an example of the light paths that are affected by an object 7 that is brought close to or touches the touch surface 1 during a sensing instance. The large number of affected light paths gives redundancy to the determination of the touch location. This redundancy allows for a high precision in the determined location.


Alternatively or additionally, it may allow the processing element 5 to determine the size and/or shape of the object 7. Furthermore, the redundancy allows the processing element 5 to determine the locations of more than one touching object 7 during one sensing instance. Conventional touch sensing using an orthogonal grid of light paths above a touch surface is normally limited to detection of a single touching object 7, since the touching object shadows a section of the touch surface 1 and thereby prevents detection of another touching object in this shadowed section. However, it should be evident from FIG. 3 that a high density, non-orthogonal grid may be generated such that even if one touching object 7 blocks a number of light paths in the detection grid, the remaining (non-blocked) detection grid allows the processing element 5 to determine the location of further touching objects.


Code Selection


Generally, the following discussion examines different criteria for selecting the codes to be emitted by the respective emitters in the touch sensing apparatus. The following discussion is given in relation to an embodiment in which the codes, at least to the extent they are transmitted concurrently, are linearly independent. It should be noted that linearly independent codes also have the characteristic that a value of an autocorrelation of each code is significantly higher than a value of a cross-correlation between any two codes.


As will be shown below, the use of linearly independent codes generally enables efficient processing of the output signal to measure the energy received from the individual emitters. The linearly independent codes may form a multiplexing matrix, and the processing element can separate the energy from different emitters by operating the inverse of the multiplexing matrix on the output signal.


Further, in the following discussion, each code is made up of a sequence of binary values generated by on/off modulation of the emitter at the time intervals of the aforesaid code-generating cycle.


Thus, the amount of light emitted from the emitters is modulated with linearly independent functions in a multiplexing scheme. In one example, the amount of light detected by one detector that has five light paths to different emitters is given by η=M·E, i.e.







$$
\begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \\ \eta_5 \end{bmatrix}
=
\begin{bmatrix}
1 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix} e_1 \\ e_2 \\ e_3 \\ e_4 \\ e_5 \end{bmatrix}
$$






where ηi is the light detected at the detector at a given time interval during the code-generating cycle, M is the multiplexing matrix, and ek is the amount of light that can reach the detector from emitter k.


In this example, each of the codes for the emitters is given by a sequence of five bits. For the first emitter the bit sequence is 10001, which corresponds to the first emitter being switched on, off, off, off, on.


As described in detail below, the SNR may be improved if each emitter is controlled to emit light during longer times in each code-generation cycle, i.e. during more than one time interval. In the example above, two emitters emit light during each time interval. Each emitter will then emit light twice during a code-generation cycle.


To separate the detected signal into a measured energy from each emitter, the multiplexing matrix M is inverted, and the resulting inverse M−1 is operated on the light detected at the detector according to: E=M−1·η.


In this example, the inversion process becomes:







$$
\begin{bmatrix} e_1 \\ e_2 \\ e_3 \\ e_4 \\ e_5 \end{bmatrix}
=
\frac{1}{2}
\begin{bmatrix}
1 & -1 & 1 & -1 & 1 \\
1 & 1 & -1 & 1 & -1 \\
-1 & 1 & 1 & -1 & 1 \\
1 & -1 & 1 & 1 & -1 \\
-1 & 1 & -1 & 1 & 1
\end{bmatrix}
\cdot
\begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \\ \eta_5 \end{bmatrix}
$$






In this way, the processing element can compute the amount of light (energy) that reaches the detector from every single emitter.
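
A minimal sketch of this inversion-based de-multiplexing, using the example matrix above (Python with NumPy; the emitter energies are arbitrary illustration values):

```python
import numpy as np

# Multiplexing matrix from the example above: in each of the five time
# intervals of the code-generation cycle, two emitters are switched on.
M = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1],
              [1, 0, 0, 0, 1]], dtype=float)

E = np.array([0.8, 0.5, 0.0, 0.9, 0.3])   # light from emitters e1..e5 that can reach the detector
eta = M @ E                               # detector readings, one per time interval

E_hat = np.linalg.inv(M) @ eta            # de-multiplexing: E = M^-1 * eta
print(np.round(E_hat, 3))                 # recovers [0.8, 0.5, 0.0, 0.9, 0.3]
```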


In another example, the code of the first emitter is modified to include only one light pulse:






$$
M =
\begin{bmatrix}
1 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
$$





Thereby, the matrix M may be easier to invert. For this multiplexing scheme the inversion process becomes:







$$
\begin{bmatrix} e_1 \\ e_2 \\ e_3 \\ e_4 \\ e_5 \end{bmatrix}
=
\begin{bmatrix}
1 & -1 & 1 & -1 & 1 \\
0 & 1 & -1 & 1 & -1 \\
0 & 0 & 1 & -1 & 1 \\
0 & 0 & 0 & 1 & -1 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \\ \eta_5 \end{bmatrix}
$$






The idea of controlling multiple emitters to emit light at the same time may be expanded to three emitters at a time, and so forth. An example of a matrix M for a multiplexing scheme in which three emitters are activated during each time interval is:






$$
M =
\begin{bmatrix}
1 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
0 & 0 & 1 & 1 & 1 \\
1 & 0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 1
\end{bmatrix}
$$






FIGS. 4(A)-(E) illustrate the embodiment of FIG. 1 at five sequential time steps during a code-generating cycle according to the last-mentioned multiplexing scheme (detectors not shown, emitters denoted e1-e5, and activated emitters being illustrated as emitting a diverging beam). Each time step represents a code value for each emitter. FIG. 5 is a timing diagram that illustrates the time steps of the code-generation cycle (CGC) using on/off modulation for each emitter e1-e5.


In the example of FIGS. 4-5, each emitter e1-e5 is controlled to emit light at the same time as at least one of its neighbors. However, it is currently believed that a better SNR may be achieved by controlling the emitters such that the emitters that are activated at the same time are more spread out along the periphery of the touch surface. Such an arrangement may result in a multiplexing matrix of more optimal properties, as will be explained further below.


In essence, the multiplexing scheme can be based on any invertible multiplexing matrix. However, there are certain criteria that, when fulfilled, may be used to design a multiplexing matrix that serves to further improve the SNR. Such a matrix may be useful when the code-generation cycle is to be limited in time, e.g. to achieve a high temporal resolution. For a detector having N light paths to different emitters, these criteria make it possible to increase the SNR by up to a factor of √N/2 for a given duration of a sensing instance (compared to a sensing instance involving sequential activation of the emitters, denoted "non-multiplexed lighting scheme" in the following), or to decrease the duration of the sensing instance while maintaining the same SNR.


These criteria will be described and motivated in the following.


It should be emphasized, though, that these criteria are just examples of ways to improve or “optimize” the multiplexing matrices for a specific purpose. There may be other ways of improving the multiplexing matrices, for this or other purposes. Further, even an arbitrary selection of a multiplexing matrix with linearly independent columns will serve to improve the SNR compared to a non-multiplexed lighting scheme.


It should also be noted that although the following discussion may refer to on/off modulation, it is also applicable to other types of modulation of the emitters.


Optimization Criteria


Consider a system of N emitters and a single detector, dk. Each emitter may contribute the following energy to the investigated detector: E = (e1, e2, e3, …, eN)ᵀ. We want to find a multiplexing matrix, M, of size N×N that maximizes the SNR. The measured signals, η (one measured value on the detector, dk, for each time interval), are thus: η = M·E + ε, where ε is the noise level in the measurements. Each column in the multiplexing matrix, M = [m1 m2 … mN], is the multiplexing basis for a single emitter, ek.


To find the energy received from each emitter, we multiply the measured signals with the inverse of the multiplexing matrix: M⁻¹·η = E + M⁻¹·ε


We see that we can compute the measured energy from each emitter as: Ê = M⁻¹·η. The resulting noise on the measured energy of the emitters is then given by ε̂ = M⁻¹·ε. Since this algorithm uses the inverse of the multiplexing matrix, we see that we want to use a multiplexing matrix that has a low condition number.


The condition number of the matrix can be calculated as:

κ(M) = ‖M⁻¹‖·‖M‖


The condition number of a matrix measures the stability/sensitivity of the solution to a system of linear equations. In our context, it essentially means how errors in the inversion process affect the result of the de-multiplexing of the signals. When choosing a multiplexing matrix, it may be preferable that the norm of its inverse is small. Using an l2-norm the condition number becomes:








$$
\kappa(M) = \frac{\sigma_{\max}}{\sigma_{\min}},
$$





where σmax and σmin are the maximum and minimum singular values of the matrix. Choosing a multiplexing matrix that has as low a condition number as possible may be preferable in order not to increase the noise level during the inversion process. If we let M be a normal matrix (Mᵀ·M = M·Mᵀ), we can compute the condition number as








$$
\kappa(M) = \frac{|\lambda|_{\max}}{|\lambda|_{\min}},
$$





where |λ|max and |λ|min are the maximum and minimum of the magnitudes of the eigenvalues of the matrix.


To get an estimate of how the noise propagates in the inversion process, we may look at the unbiased mean squared error (MSE) estimator: MSE = E[(E − Ê)²] = cov(Ê).


The variances of the estimated parameters are the diagonal elements of the covariance matrix, which is given by: cov(Ê) = σ²·(Mᵀ·M)⁻¹.


It can be shown that the individual noise contributions from the different measurements are uncorrelated. Consequently, we can disregard the off-diagonal elements of the covariance matrix. The sum of squared errors (SSE) is thus the sum of all diagonal elements in the covariance matrix, i.e. the individual variances of the estimated parameters.


In one embodiment, the SSE parameter is used as an optimization parameter for the multiplexing matrix: SSE = σ²·trace((Mᵀ·M)⁻¹), where σ² is the variance of the noise in a non-multiplexed lighting scheme. The resulting variance (noise) in a single estimated value, êk, is then its corresponding diagonal element in the covariance matrix. The diagonal elements in the covariance matrix give the decrease in noise level (variance of the noise) in the system.


When finding an optimized solution, we try to minimize the above function. For a system where the noise is not dependent on the light incident on the detector and if the total number of emitters is fixed, we can simplify this minimization problem to:

minimize(SSE) = minimize(trace((Mᵀ·M)⁻¹)).


It can be shown that the optimum number of emitters turned on at the same time is close to N/2. Thus, this value is likely to give close to optimum inversion properties of the multiplexing matrix.


Further, it can be shown that Hadamard and Sylvester matrices fulfil the desired aspects of a multiplexing matrix as described in the foregoing. The use of codes that form Hadamard/Sylvester multiplexing matrices may improve the SNR by a significant factor, (N+1)/(2·√N), which for large N becomes √N/2.


Generally speaking, the multiplexing matrix can contain any values, as long as its determinant is non-zero, i.e. its columns are linearly independent.


The above-mentioned Hadamard matrix is a matrix that only contains values of 1 or −1 and whose columns are linearly independent. A Hadamard matrix can, for instance, be constructed by the following recursive definition:







$$
H_m = \begin{pmatrix} H_{m-1} & H_{m-1} \\ H_{m-1} & -H_{m-1} \end{pmatrix}, \qquad H_0 = +1
$$





A Hadamard matrix satisfies H·Hᵀ = Hᵀ·H = N·I, where I is the identity matrix. From the above recursive definition, it is clear that Hadamard matrices of order N = 2^p exist, where p is a non-negative integer. It can be shown that Hadamard matrices of order N = 1, 2 and N = 4·p exist.


The absolute eigenvalues of a Hadamard matrix (including its transpose and inverse) are all equal. This means that the condition number of the multiplexing inversion is 1, which thus provides low noise in the inversion process.


In the example of on/off modulation, it may be difficult to achieve negative signals. Seemingly, such modulation would be limited to binary multiplexing values, e.g. 0 (no light) and 1 (full power). It is however possible to set the zero signal level to half the maximum signal level and consider −1 to be no light and 1 full power.


To achieve the same multiplexing characteristics as the Hadamard matrix but with only zeros and ones in the multiplexing matrix, we can construct a Sylvester matrix by deleting the first row and column of a Hadamard matrix (creating a matrix Ĥ) and then substituting each 1 (in Hadamard) with 0 (in Sylvester) and each −1 (in Hadamard) with 1 (in Sylvester), i.e. S = (1 − Ĥ)/2. An example of a Sylvester matrix is:







$$
S_7 =
\begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 1 \\
1 & 0 & 1 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 & 0 & 1
\end{bmatrix}.
$$





The Sylvester versions of multiplexing matrices are normal matrices, i.e. Sᵀ·S = S·Sᵀ. All the absolute eigenvalues of a Sylvester matrix (including its transpose) are equal except for a single eigenvalue that is larger. The value of the largest eigenvalue is C, which is the number of emitters that are turned on at the same time. All the eigenvalues of the inverse of the Sylvester matrices are equal, except for one eigenvalue that is lower (1/C). Thus, the Sylvester matrices have good condition numbers and are useful in the multiplexing inversion process.
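
The construction and the claimed SNR gain can be checked numerically. The sketch below (Python with NumPy) builds an order-8 Hadamard matrix by the Sylvester recursion, derives the 7-by-7 Sylvester matrix S = (1 − Ĥ)/2, and compares its SSE against a non-multiplexed (identity) scheme; unit noise variance per measurement is assumed only for the comparison:

```python
import numpy as np

def hadamard(m):
    """Hadamard matrix of order 2**m via the Sylvester recursion."""
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

H8 = hadamard(3)                 # order-8 Hadamard matrix
H_hat = H8[1:, 1:]               # delete the first row and column
S7 = (1 - H_hat) // 2            # 0/1 Sylvester matrix of order 7, S = (1 - H_hat)/2

N = S7.shape[0]
sse_seq = float(N)                                 # SSE of the non-multiplexed scheme (identity matrix)
sse_mux = np.trace(np.linalg.inv(S7.T @ S7))       # SSE of the Sylvester scheme
snr_gain = np.sqrt(sse_seq / sse_mux)              # average per-emitter SNR improvement

print("condition number of S7:", round(float(np.linalg.cond(S7)), 3))
print("SSE non-multiplexed:", sse_seq, "  SSE Sylvester:", round(float(sse_mux), 4))
print("SNR gain:", round(float(snr_gain), 3),
      "  (N+1)/(2*sqrt(N)):", round((N + 1) / (2 * np.sqrt(N)), 3))
```

For this 7-by-7 matrix the computed gain matches the (N+1)/(2·√N) factor mentioned above.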


Multiplexing of Other Orders


The use of Hadamard/Sylvester multiplexing restricts the number of emitters: N = 4·p for Hadamard and N = 4·p − 1 for Sylvester. In a multi-touch system that is rectangular, it is quite possible to arrange emitters and detectors such that each detector receives light from a multiple of 4 emitters. However, it may be desirable to be able to do multiplexing with an arbitrary number of emitters. Since the Sylvester matrix requires 4·p − 1 emitters, we may have to use a Hadamard/Sylvester matrix that is slightly larger than what is required by the actual number of emitters, i.e. we may have to add a number of fictive emitters.


One way to construct an optimum multiplexing matrix may be to use graph theory concerning strongly regular graphs (srg), e.g. as described by R. C. Bose in "Strongly regular graphs, partial geometries and partially balanced designs", Pacific J. Math., Vol. 13, No. 2 (1963), pp 389-419. This type of graph may be defined as follows. G = (V, E) is a regular graph with v vertices, e edges, and degree k (the number of edges going out from each vertex). If there exist two integers λ and μ such that every two adjacent vertices have λ common neighbors, and every two non-adjacent vertices have μ common neighbors, then this graph is strongly regular and is denoted srg(v, k, λ, μ). It can be shown that the adjacency matrix of an srg(N, C, a, a), where C is the number of emitters turned on at the same time and a = C·(C−1)/(N−1), forms an optimum multiplexing matrix. The properties of the resulting multiplexing matrices are consistent with the properties of Hadamard/Sylvester matrices.


In a Hadamard or Sylvester matrix, as well as other optimum or near-optimum multiplexing matrices, roughly half of the emitters are turned on during each time interval. If saturation of the detectors is expected to be an issue, it might be desired to reduce the number of concurrently activated emitters. Reducing the energy that is detected by a detector may be done by reducing the order, C, of the expression srg(N, C, a, a) that is used for computing the adjacency matrix for the graph. The order is the number of connections each vertex has with the other vertices, which is equivalent to the number of emitters that are turned on during each time interval.


Multiple Detector Multiplexing


If we have several different detectors in the system, the output signals of all detectors may be de-multiplexed using the inverse of one and the same multiplexing matrix. Thus, the multiplexing matrix may be designed to account for all emitters in relation to all detectors in the system.


However, if each detector only receives light from a subset of the emitters, it may be advantageous to use inverses of several multiplexing matrices, e.g. one for each detector.


Such an embodiment will be further exemplified with reference to FIG. 6, which is a top plan view of a touch sensing apparatus with six emitters (denoted e1-e6) and six detectors (denoted d1-d6). The light paths between the emitters and detectors are indicated by dashed lines. In this example, the touch surface 1 is circular, but any other shape is possible, e.g. rectangular.


When an emitter emits light that may be detected by a subset of the detectors and another emitter emits light that may be detected by another subset of the detectors, the multiplexing matrix may be reduced to a set of multiplexing matrices that are permuted in a circular fashion. From FIG. 6, it is clear that there are only light paths between emitter e1 and detectors d3, d4 and d5, and that there are only light paths between emitter e2 and detectors d4, d5 and d6, and so forth.


Instead of using a 6-by-6 multiplexing matrix, it is possible to use a set of 3-by-3 matrices based on a main matrix S. For example, the main matrix may be given by:







$$
S =
\begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1 \\
1 & 1 & 0
\end{bmatrix},
$$





which is based on the linearly independent codes: S1 = [1 0 1]ᵀ, S2 = [0 1 1]ᵀ, S3 = [1 1 0]ᵀ. Thus, the main matrix may be written as a combination of three individual codes: S = [S1, S2, S3]. In this example, the main matrix is a Sylvester matrix. In a Hadamard/Sylvester matrix, the columns or the rows may change order (different row/column permutations) without changing the characteristics of the matrix. Thus, the emitters may be assigned a respective one of the codes S1-S3, such that a 3-by-3 multiplexing matrix is formed for each detector. In one example, emitters e1 and e4 are modulated with S1, emitters e2 and e5 are modulated with S2, and emitters e3 and e6 are modulated with S3. In this example, the respective output signal of the detectors d1-d6 will be:







$$
\begin{aligned}
d_5{:}\quad \eta &= [\,S_1 \;\; S_2 \;\; S_3\,] \cdot [\,e_1 \;\; e_2 \;\; e_3\,]^{\mathsf T} \\
d_6{:}\quad \eta &= [\,S_2 \;\; S_3 \;\; S_1\,] \cdot [\,e_2 \;\; e_3 \;\; e_4\,]^{\mathsf T} \\
d_1{:}\quad \eta &= [\,S_3 \;\; S_1 \;\; S_2\,] \cdot [\,e_3 \;\; e_4 \;\; e_5\,]^{\mathsf T} \\
d_2{:}\quad \eta &= [\,S_1 \;\; S_2 \;\; S_3\,] \cdot [\,e_4 \;\; e_5 \;\; e_6\,]^{\mathsf T} \\
d_3{:}\quad \eta &= [\,S_2 \;\; S_3 \;\; S_1\,] \cdot [\,e_5 \;\; e_6 \;\; e_1\,]^{\mathsf T} \\
d_4{:}\quad \eta &= [\,S_3 \;\; S_1 \;\; S_2\,] \cdot [\,e_6 \;\; e_1 \;\; e_2\,]^{\mathsf T}
\end{aligned}
$$






This type of simple circular construction of multiplexing matrices is possible when the ratio between the total number of emitters and the number of light paths to each detector is an integer of at least 2. If the ratio is not such an integer, a number of fictive emitters may be added for the ratio to become an integer. Further, it may be desirable for the ratio between the total number of emitters (including any fictive emitters) and the size of the main matrix to be an integer of at least 2, and thus the number of bits in the codes of the main matrix may need to be increased.
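
For the FIG. 6 example, the per-detector de-multiplexing could look like the following sketch (Python with NumPy; the light-path table mirrors FIG. 6, while the per-emitter energies are illustrative assumptions, taken to be detector-independent for simplicity):

```python
import numpy as np

# Codes from the main matrix S = [S1 S2 S3] of the FIG. 6 example.
S1, S2, S3 = np.array([1, 0, 1]), np.array([0, 1, 1]), np.array([1, 1, 0])
code = {1: S1, 2: S2, 3: S3, 4: S1, 5: S2, 6: S3}      # emitter number -> assigned code

# Emitters that have a light path to each detector (per FIG. 6).
paths = {5: (1, 2, 3), 6: (2, 3, 4), 1: (3, 4, 5), 2: (4, 5, 6), 3: (5, 6, 1), 4: (6, 1, 2)}

# Hypothetical energies from each emitter (taken as detector-independent here).
E = {k: 0.1 * k for k in range(1, 7)}

for d, emitters in sorted(paths.items()):
    M = np.column_stack([code[e] for e in emitters])    # 3x3 multiplexing matrix for this detector
    eta = M @ np.array([E[e] for e in emitters])        # three time samples of the detector output
    E_hat = np.linalg.inv(M) @ eta                      # de-multiplexed per-emitter energies
    print(f"d{d}: emitters {emitters} ->", np.round(E_hat, 2))
```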


It is to be understood that the above is merely an example, and that there are other ways of enabling the use of individual multiplexing matrices for different detectors.


Application to Alternative Touch Sensing Techniques


The above-described techniques of emitting codes and separating the light received by a detector based on the codes are equally applicable to other concepts for touch detection. Below, a number of different concepts will be described. Although not explicitly discussed in relation to each configuration, it should be understood that all of the disclosed configurations may include a processing element and a control unit that operate as discussed above in relation to the embodiment in FIGS. 1-2.


In one alternative touch-sensing apparatus, the detection grid and thus a touch surface 1 is formed at a boundary surface of a light transmissive panel, by propagating light inside the light transmissive panel. Such an embodiment is shown in FIGS. 7-8, in which a number of emitters 2 are arranged around the periphery of a light transmissive panel 8, to inject a respective beam of light into the panel, typically via the edges of the panel 8, or via one or more wedges (not shown) arranged on the top or bottom surface 9, 10 of the panel 8. Each beam of light is diverging in the plane of the touch surface 1, i.e. as seen in a top plan view, and may or may not be diverging in a plane perpendicular to the touch surface 1, i.e. as seen in a side view (cf. FIG. 8). One or more detectors 4 are arranged around the periphery of the panel 8 to measure the energy of received light. Light may e.g. be received by the detectors 4 via the side edges of the panel 8, or via one or more wedges (not shown) arranged on the top or bottom surfaces of the panel 8. Thus, each detector 4 receives light from a group of emitters 2 along a set of light paths. The panel 8 defines two opposite and generally parallel surfaces 9, 10 and may be planar or curved. A radiation propagation channel is provided between two boundary surfaces 9, 10 of the panel 8, wherein at least one of the boundary surfaces allows the propagating light to interact with a touching object 7. Typically, the light propagates by total internal reflection (TIR) in the radiation propagation channel. In this interaction, part of the light may be scattered by the object 7, part of the light may be absorbed by the object 7, and part of the light may continue to propagate unaffected. Thus, as shown in the side view of FIG. 8, when the object 7 touches a boundary surface of the panel (e.g. the top surface 9), the total internal reflection is frustrated and the energy of the transmitted light is decreased. The location of the touching object 7 may be detected by measuring the energy of the light transmitted through the panel 8 from a plurality of different directions.


It is thus understood that the above-described techniques of controlling the emitters 2 to transmit codes and of separating the light received from individual emitters 2 based on the transmitted codes may be used to identify any light paths to each detector 4 that are affected by the touching object 7.


It should be noted that, unlike the embodiment of FIGS. 1-2, the light will not be blocked by the touching object 7. Thus, if two objects happen to be placed after each other along a light path from an emitter 2 to a detector 4, part of the light will interact with both objects. Provided that the light energy is sufficient, a remainder of the light will reach the detector 4 and generate a measurement signal that allows both interactions to be identified. This means that the generation of the detection grid inside the panel 8 may improve the ability of the apparatus to detect the locations of multiple touching objects during a sensing instance.


Normally, each touch point pn has a transmission tn, which is in the range 0-1, but normally in the range 0.7-0.99. The total transmission Tij along a light path Sij is the product of the individual transmissions tn of the touch points pn on that light path: Tij = Π tn. For example, two touch points with transmissions 0.9 and 0.8, respectively, on a light path Sij yield a total transmission Tij = 0.72.


As in FIG. 1, each of the emitters 2 may emit a diverging beam of light, and one or more detectors 4 may receive light from plural emitters. However, it may not be necessary for emitters 2 to inject diverging beams into the panel. If sufficient scattering is present in the panel, the injected beams will be inherently broadened in the plane of the panel 8 as they propagate from the injection site through the panel 8. For each internal reflection, some radiation is diverted away from the main direction of the beam, and the center of the beam loses energy with distance. Scattering is particularly noticeable if an anti-glare structure/layer is provided on one or both of the boundary surfaces 9, 10. The anti-glare structure/layer provides a diffusing structure which may enhance the scattering of the beam for each internal reflection, and which may also cause radiation to escape through the surface 9, 10 for each internal reflection. Thus, the provision of an anti-glare structure/layer generally increases the broadening of the beam with distance from the injection site.


The use of an anti-glare structure/layer may be advantageous to reduce glares from external lighting on the touch surface 1 of the panel 8. Furthermore, when the touching object 7 is a naked finger, the contact between the finger 7 and the panel 8 normally leaves a fingerprint on the touch surface 1. On a perfectly flat surface, such fingerprints are clearly visible and usually unwanted. By adding an anti-glare structure/layer to the surface, the visibility of fingerprints is reduced. Furthermore, the friction between finger and panel decreases when an anti-glare is used, thereby improving the user experience.



FIG. 9 is a side view of an alternative configuration, in which light also propagates inside a light transmissive panel 8. Here, emitters 2 are arranged beneath the panel 8 to inject a respective beam of light through the lower boundary surface 10 into the panel 8. The injected beam of light propagates by total internal reflection between the boundary surfaces 9, 10, and the propagating light is intercepted by a number of detectors 4. These detectors 4 are also arranged beneath the panel 8, typically interspersed among the emitters 2. One example of such an arrangement of interspersed emitters 2 and detectors 4 is shown in the top plan view of FIG. 10. It is understood that a number of light paths may be set up between each emitter 2 and a number of adjacent detectors 4, thereby creating a detection grid at the upper boundary surface 9.



FIG. 9 illustrates a respective light path set up between two different pairs of emitters 2 and detectors 4. When an object 7 touches the top surface 9 of the panel 8, one or more of the propagating beams will be frustrated, and the detector 4 will measure a decreased energy of received light. It should be realized that if the detection grid is known, and if the measured energy at each detector 4 can be separated into different light paths, it is possible to determine the location of a touching object 7 based on the light paths that experience a decrease in measured light energy.


It is thus understood that the above-described techniques of controlling the emitters 2 to transmit codes and of separating the light received from individual emitters 2 based on the transmitted codes may be used to identify any light paths to each detector 4 that are affected by the touching object 7.


As seen in FIG. 9, part of the propagating light is scattered by the touching object 7. This scattered light may also be detected by one or more detectors 4. However, the energy of the scattered light is generally much less than the energy that is attenuated in the interaction with the touching object 7. Thus, the scattered light will generally not contribute significantly to the energy measured by the detectors 4 in the apparatus.


Typically, each emitter 2 generates a diverging beam such that at least part of the beam will have an angle of incidence to the normal of the upper boundary surface 9 that is larger than the critical angle. The emitter 2 may be arranged to emit the beam with a beam angle of at least 90°, and preferably of at least 120°. In one embodiment, the beam angle is close to 180°, such as at least 160°. The beam may or may not have a main direction which is orthogonal to the upper boundary surface 9. When using diverging beams, a significant part of the emitted radiation may pass through the panel 8 instead of being internally reflected. To this end, an element (not shown) may be provided between each emitter 2 and the lower boundary surface 10 to block a part of the emitted beam, so as to only pass rays that have an angle of incidence at the upper boundary surface 9 that sustains total internal reflection. Alternatively, the element may be configured to redirect the rays in said part of the beam so as to cause these rays to have at least the necessary angle of incidence at the upper boundary surface 9.


Alternatively, each emitter 2 may emit collimated light at a suitable angle to the normal of the upper boundary surface 9.


When light is propagated inside a transmissive panel 8, the resulting signal levels at the detectors 4 may be lower compared to when light is propagated above a touch surface 1. Thus, the above-described optimization criteria may need to be revised to also account for shot noise (photon noise) when optimizing the multiplexing matrix. In this case, we want to minimize a modified SSE function:

SSE = (σ² + C·σ₁²)·trace((Mᵀ·M)⁻¹),

where σ² is the variance of the signal-independent noise, σ₁² is the variance of the signal-dependent noise, and C is the number of emitters turned on at the same time.


When the shot noise is a significant factor, we may start by finding an optimum or near-optimum multiplexing matrix M without considering the shot noise (though we may consider saturation, see above). When the matrix M is found, we may compute the SNR improvement using the modified SSE function. We can then compute the optimal setting of C (below the saturation limit) to get an optimum or near-optimum multiplexing matrix with shot noise taken into account, i.e. the matrix yielding the best SNR improvement.
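
A rough sketch of choosing C with the modified SSE (Python with NumPy). For simplicity it sweeps C over simple circulant on/off matrices as stand-in candidates rather than over true srg-derived matrices, and the two noise variances are hypothetical illustration values:

```python
import numpy as np

def circulant_on_off(n, c):
    """n-by-n 0/1 matrix in which c (cyclically consecutive) emitters are on per time interval."""
    first = np.zeros(n)
    first[:c] = 1
    return np.array([np.roll(first, k) for k in range(n)])

N = 7
sigma2 = 1.0        # variance of the signal-independent noise (illustrative value)
sigma1_2 = 0.5      # variance of the signal-dependent (shot) noise (illustrative value)

best = None
for C in range(1, N + 1):
    M = circulant_on_off(N, C)
    if np.linalg.cond(M) > 1e12:                      # skip singular / ill-conditioned candidates
        continue
    sse = (sigma2 + C * sigma1_2) * np.trace(np.linalg.inv(M.T @ M))
    print(f"C = {C}: modified SSE = {sse:.2f}")
    if best is None or sse < best[1]:
        best = (C, sse)

print("best C under this noise model:", best[0])
```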


The alternative detection concepts presented above rely on detecting/measuring an attenuation of propagating light that is caused by one or more touching objects.


According to yet another alternative detection concept, touch locations are determined based on the light that is scattered by a touching object. FIG. 11 illustrates an example embodiment in which light is injected to propagate inside a light transmissive panel 8 as described in the foregoing. In the example of FIG. 11, emitters 2 are arranged along two opposing sides of the panel 8 to emit a respective beam of light (only two beams shown). The light beam from each emitter 2 preferably has a small beam angle, and may be collimated. Thus, in this example, each emitter 2 generates a light path across the panel 8. In the illustrated example, the detectors 4 are positioned along the other two sides of the panel 8, perpendicular to the emitters 2, typically to receive light via the side edges of the panel 8, or via one or more wedges (not shown) arranged on the top or bottom surfaces of the panel 8. An object 7 touching the panel 8 will cause light to be scattered in all directions inside the panel 8. A number of detectors 4 will detect the scattered light, but due to the bulk absorption in the plate, radial intensity dependence, and possibly surface scattering, the detector 4 positioned at the same X coordinate as the touching object 7 will detect the highest intensity of scattered light. Thus, an X coordinate of the touching object 7 can be determined from the total energy measured by the respective detector 4.


To increase the precision, the detectors 4 may be configured with a confined field of view, so that only light scattered at the X coordinate, or nearby X coordinates, of a detector may be detected by that detector. This may be achieved by any combination of lenses, pinholes, etc, between the panel 8 and the detector 4. Alternatively or additionally, an air gap may be provided between the panel 8 and the detectors 4, whereby total reflection of scattered light in the panel side edge will limit the field of view of the detectors.


The Y coordinate of the touching object is determined by determining the emitter(s) 2 that generated the scattered light measured by one or more detectors 4. It is thus understood that the above-described techniques of controlling the emitters 2 to transmit codes and of separating the light received from individual emitters 2 based on the transmitted codes may be used to identify any light paths that are affected by the touching object 7. In the example of FIG. 11, the light paths are parallel to the X axis, and the Y coordinate of the touching object 7 will be given by the Y coordinate(s) of the emitter(s) 2 generating the identified light path(s). In an alternative configuration, the light paths could be non-parallel to the X axis. As long as the directions of the light paths are known, and the X coordinate has been obtained, the Y coordinate can be calculated when an affected light path has been identified.


In the above configuration, one position coordinate (Y) is determined based on the affected light paths, as identified by separating the light received by the detector(s).


In an alternative (not shown), both position coordinates (X, Y) may be determined by identifying light paths based on the light received by the detector(s). In one such configuration, the emitters 2 are arranged to generate light paths that intersect within the touch surface 1. Thereby, both the X and Y coordinates may be determined by separating the light received by the detector(s), by identifying light from at least one emitter in the separated light, and by reconstructing the intersection(s) of the light paths of the thus identified emitters.


It should be understood that the detection concept discussed above in relation to FIG. 11 is equally applicable when the light is emitted to propagate above a touch surface 1.


Still further, this detection concept is not restricted to the illustrated arrangement of emitters 2 and detectors 4. For example, the emitters 2 and/or detectors 4 could be arranged along only one side of the touch surface 1. Alternatively, emitters 2 and detectors 4 may be interleaved at one or more sides of the touch surface 1. In fact, it may be advantageous to combine detection of attenuation with detection of scattered light. For example, if the embodiment of FIGS. 7-8 is implemented with the detection grid of FIG. 3(B), the detectors 4 that do not receive direct light from the emitters 2 may be used to detect the light that is scattered by objects 7 touching the panel 8. Thus, whenever a specific detector does not receive direct light it may be used for scatter detection. The scattered light may be used to improve the precision of the determined location of the touching object 7.


Peripheral Arrangements of Detectors and Emitters


The following relates to potential advantages of using different arrangements of emitters and detectors in the embodiments shown in FIGS. 1-2 and FIGS. 6-8, i.e. when emitters 2 and detectors 4 are arranged around the periphery of a touch surface 1 to define a detection grid of light paths.


In one variant, the emitters 2 and the detectors 4 may be alternated around the periphery of the touch surface 1 (cf. FIG. 3). This may, e.g., result in a more uniform detection grid.


In this and other variants, the number of emitters 2 may equal the number of detectors 4.


Alternatively, the number of emitters 2 may exceed the number of detectors 4, e.g. as shown in FIG. 12. An increased number of emitters 2 may be used to decrease the number of detectors 4 and thus reduce cost. The spatial resolution mainly depends on the number of light paths, and emitters 2 may be cheaper than detectors 4 and possibly additional detector equipment such as lenses, A/D-converters, amplification circuits or filters.


In yet another alternative configuration, the number of detectors 4 exceeds the number of emitters 2. Examples of such configurations are shown in FIGS. 13(A)-(B). One advantage of such configurations may be to reduce the size of the multiplexing matrix and thereby the sampling frequency, i.e. the frequency of sampling the output signals of the detectors 4.


In these and other variants, the emitters 2 and detectors 4 may be arranged equidistantly around the periphery of the touch surface 1, e.g. as shown in FIGS. 3, 12 and 13. Alternatively, as shown in FIG. 14, the distances between each emitter 2 and/or detector 4 may be randomized. For example, randomized distances between the emitters 2 may be used to reduce interference phenomena that may appear when a number of light sources inject light of the same wavelength into the panel.



FIG. 15 illustrates yet another embodiment where emitters 2 near or at the corners of the touch surface 1 are positioned so as to emit light with a wide light beam directed towards the center of the touch surface 1 so as to spread the emitted light over as large a portion of the touch surface 1 as possible. If near-corner emitters 2 are positioned so as to emit light centered perpendicular to the periphery of the touch surface 1, a large portion of the emitted beam will reach a detector 4 after having propagated only a short path across the touch surface 1. Hence, the resulting light paths between near-corner emitters 2 and detectors 4 may cover only a small area of the touch surface 1. It may therefore be advantageous to position near-corner emitters, as well as corner emitters, if present, so as to point towards the center of the touch surface. This embodiment is generally applicable whenever the touch surface is polygonal, and at least one emitter is arranged at a corner of the touch surface. In one variant, all emitters are positioned to point towards the center of the touch surface, thereby ensuring that as much as possible of the emitted light is used for touch detection.



FIG. 16 is a side view of an embodiment, in which emitters 2 (one shown) are arranged at the periphery to inject a respective beam of light into a light transmissive panel 8. A V-shaped light deflector 11 is placed between each emitter 2 and the panel 8. The light deflector 11 is configured to redirect, by way of angled mirror surfaces 12, 13, rays that are emitted essentially parallel to the opposite surfaces 9, 10 of the panel 8.


Specifically, the rays are redirected towards either of the boundary surfaces 9, 10 at an angle that ensures propagation by total internal reflection. In another embodiment (not shown), the deflector 11 is replaced or supplemented by an element that prevents light rays from reaching the detector without having been reflected in the touch surface 1 at least once. Any part of the light that propagates through the panel 8 without being reflected in the touch surface 1 does not contribute to the touch detection signal, as this light cannot be frustrated by a touching object. Such a blocking element may be an absorbing or reflecting element/layer, which may be arranged between the emitter 2 and the side edge of the panel 8, and/or between the side edge of the panel 8 and the detector 4. For example, the blocking element may be attached to the side edge of the panel 8.


Similar deflecting elements or blocking elements may be arranged intermediate the emitters 2 and the panel 8 when the emitters 2 are arranged beneath the panel, as discussed above in relation to FIGS. 9-10.


In any of the embodiments disclosed herein, a lens (not shown) may be inserted between the panel 8 and the detector 4 so as to focus light onto the detector surface. This may increase the SNR.


Whenever light propagates inside a transmissive panel 8, it may be advantageous to provide an air gap between the panel 8 and the detectors 4. The air gap will result in a reduced field of view of the detectors 4, which in turn may serve to reduce shot noise in the detection.


Data Processing


In all of the above described embodiments, configurations, arrangements, alternatives and variants, the processing element 5 (see FIGS. 1 and 3) may be configured to calculate the touch locations based on output or measurement signals obtained from the detectors 4. The skilled person will readily realize that there are numerous methods for determining the touch locations. FIG. 17 is a flow chart of an exemplifying method.


In step 20, measurement signals are acquired from the detectors in the system. Each measurement signal represents the sum of light received from k different angles (i.e. k different emitters), sampled at n time intervals during a sensing instance.


In step 21, each measurement signal is separated into a set of emitter signals, using the multiplexing inversion scheme. Each emitter signal thus represents the received light energy along one of the available light paths to the relevant detector. The measurement/emitter signals may also be pre-processed. For example, the measurement/emitter signals may be processed for noise reduction using standard filtering techniques, e.g. low-pass filtering, median filters, Fourier-plane filters, etc. Further, if the energy of the emitted beams is measured in the system, the measurement/emitter signals may be compensated for temporal energy fluctuations in beam energy. Still further, the touch surface may be a sub-area of the detection grid, and certain emitter signals may thus originate from light paths outside this sub-area. Thus, the pre-processing may involve removing such emitter signals from further processing. Furthermore, the emitter signals may be rectified, which essentially means that the emitter signals of each detector are interpolated to achieve the same mutual angle between all incoming light paths to the detector. Thus, the emitter signals for each detector are interpolated with a non-linear angle variable, resulting in a complete set of emitter signals that are evenly distributed over the panel. Rectification is optional, but may simplify the subsequent computation of touch locations. Rectification may alternatively be made on transmission signals (below).
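By way of illustration only, the following is a minimal sketch of such a multiplexing inversion, assuming Python/numpy and a small, hypothetical 3×3 on/off code matrix; none of these particulars are specified in the description.

```python
import numpy as np

# Columns of M hold the (hypothetical) on/off codes of three emitters over
# three time intervals of the code-generation cycle; any set of linearly
# independent codes would serve the same purpose.
M = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)

true_energy = np.array([0.9, 0.0, 0.7])   # light energy reaching the detector
measurement = M @ true_energy             # one sample per time interval

# Multiplexing inversion: recover the per-emitter signals from the measurement.
emitter_signals = np.linalg.solve(M, measurement)
print(emitter_signals)                    # -> [0.9, 0.0, 0.7]
```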


In step 22, the emitter signals are processed to identify any light paths that are affected by touching objects.


If the light is propagated above the touch surface, these light paths are blocked or occluded by the touching object(s) and are thus identified by an absence of the corresponding emitter signals.


If the light is propagated inside a panel, these light paths are identified based on an attenuation of the emitter signals. Suitably, a transmission signal is calculated for each pre-processed emitter signal, by dividing the emitter signal by a background signal, which represents an energy of the emitter signal without any object touching the touch surface. The background signal may or may not be unique to each detector or each emitter signal. The background signal may be pre-set, obtained during a separate calibration step, or obtained from the same emitter signal acquired during one or more preceding sensing instances, possibly by averaging the resulting set of emitter signals. The resulting transmission signals will indicate any light paths that are affected by touching objects.
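A minimal sketch of this transmission calculation, assuming Python/numpy and a background estimated by averaging a few preceding, untouched sensing instances (the array values are hypothetical):

```python
import numpy as np

def transmission(emitter_signal, previous_instances):
    """T = E / REF, with REF estimated as the mean of preceding, untouched frames."""
    background = np.mean(previous_instances, axis=0)
    return emitter_signal / background

history = np.array([[4.9, 5.1, 5.0],       # earlier sensing instances, no touch
                    [5.0, 5.0, 5.1]])
current = np.array([4.95, 3.6, 5.05])       # the middle light path is attenuated
print(transmission(current, history))       # ~[1.0, 0.71, 1.0]
```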


To further illustrate the calculation of transmission signals, FIG. 18A shows a subset of the emitter signals E1 obtained during one sensing instance with a single object touching the panel. Specifically, FIG. 18A is a plot of the received light energy on light paths extending between a single emitter and an ordered set of detectors along the periphery of the touch surface, e.g. as shown in FIG. 3(A). FIG. 18B shows corresponding background signals REF, also given as a plot of received light energy for the same set of light paths. In this example, the distribution of radiation across the detectors is highly non-uniform. FIG. 18C shows the resulting transmission signals T1=E1/REF, which result in an essentially uniform signal level at a (relative) transmission of about 1, with a peak T11 caused by the touching object. It is to be understood that the conversion of emitter signals into transmission signals greatly facilitates the identification of relevant peaks, and thus the affected light paths. It also makes it possible to compare emitter signal values obtained on different light paths.


As mentioned above, if there is more than one touch point on the same light path, the total transmission signal is the product of the individual transmissions of the touch points. This is true for any number of objects on any light path, provided that a remainder of the light reaches the detector. Thus, by converting the emitter signals into transmission signals, it is possible to separate the contribution from individual touching objects to a transmission signal value. FIG. 19A corresponds to FIG. 18A, but shows emitter signals E1 obtained with three touching objects, where two touching objects interact with essentially the same light paths. FIG. 19B shows that the resulting transmission signal T1 is made up of two peaks T11 and T12, wherein the magnitude of each transmission signal value within the peak T11 represents the product of the transmissions of two touching objects along the respective light path.


The skilled person realizes that the position determination may be simplified by operating on logarithms (in any base), since the logarithm of the total transmission signal Tij along a light path Sij is then equal to the sum of the logarithms of the individual transmissions tn of the touch points pn on that light path: log Tij=Σ log tn. Furthermore, the logarithm of the total transmission signal may be calculated by subtracting a logarithmic background value from the logarithm of the emitter signal: log Tij=log(E)−log(REF). In the context of the present application such a subtraction is regarded as a division operation. However, logarithms need not be used in the determination of touch locations.


In step 23, touch locations are determined based on the identified light paths.


If the light is propagated above the touch surface or inside a panel, touch locations may be determined by determining intersections between the identified light paths, i.e. triangulation.
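By way of illustration, a minimal sketch of such a triangulation step, assuming Python and 2-D emitter/detector coordinates on a unit-sized touch surface (the coordinates are hypothetical):

```python
def intersect(p1, p2, p3, p4):
    """Intersection point of the line p1-p2 with the line p3-p4 (2-D points)."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < 1e-12:                    # parallel paths never intersect
        return None
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / cross
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two affected light paths (emitter-to-detector lines) crossing at (0.5, 0.5):
print(intersect((0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)))
```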


If the light is propagated inside a panel, touch locations may alternatively be determined using the collection of identified light paths and the corresponding transmission signals. For example, the touch-sensing system may be modelled using known algorithms developed for transmission tomography with a fan beam geometry. Thus, the touch locations may be reconstructed using any available image reconstruction algorithm, which is operated on the transmission signals for the collection of light paths. The image reconstruction algorithm results in a two-dimensional distribution of transmission values (or equivalently, attenuation values) within the touch surface. The skilled person realizes that the use of an image reconstruction algorithm, compared to triangulation, may enable position determination irrespective of the shape of the touching object(s). It may also improve the ability to discriminate between multiple touching objects, and facilitate determination of other touch data such as the shape and/or size of the touching object(s).


Tomographic reconstruction, which is well-known per se, is based on the mathematics describing the Radon transform and its inverse. The general concept of tomography is to image a medium by measuring line integrals through the medium for a large set of angles and positions. The line integrals are measured through the image plane. To find the inverse, i.e. the original image, many algorithms use the so-called Projection Slice theorem. This theorem states that a 1-dimensional slice through the origin of the 2-dimensional Fourier transform of the medium is mathematically equal to the 1-dimensional Fourier transform of the projected line integrals for that particular angle. Several efficient algorithms have been developed for tomographic reconstruction, e.g. Filtered Back Projection, FFT-based algorithms, ART (Algebraic Reconstruction Technique), SART (Simultaneous Algebraic Reconstruction Technique), etc. More information about the specific implementations of the algorithms can be found in the literature, e.g. in the book "The Mathematics of Computerized Tomography" by Frank Natterer.
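By way of illustration only, the following is a minimal sketch of one ART-style (Kaczmarz) update, assuming Python/numpy, a small hypothetical pixel grid, a ray weight matrix A, and the use of −log(T) values as line integrals; it is not the reconstruction of any particular embodiment.

```python
import numpy as np

def art(A, b, iterations=200, relax=1.0):
    """Kaczmarz/ART: iteratively solve A @ x = b, one light path (row) at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        for i in range(A.shape[0]):
            a = A[i]
            norm = a @ a
            if norm > 0:
                x += relax * (b[i] - a @ x) / norm * a
    return x                                  # reconstructed attenuation per pixel

# Toy 2x2-pixel surface crossed by five light paths; b holds -log(T) per path.
A = np.array([[1., 1., 0., 0.],               # path through the two top pixels
              [0., 0., 1., 1.],               # path through the two bottom pixels
              [1., 0., 1., 0.],               # path through the two left pixels
              [0., 1., 0., 1.],               # path through the two right pixels
              [0., 1., 1., 0.]])              # diagonal path
b = A @ np.array([0.0, 0.3, 0.0, 0.0])        # a single touch in the top-right pixel
print(art(A, b).round(3))                     # ~[0. 0.3 0. 0.]
```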


It is to be understood that step 22 may be included in step 23, e.g. by operating the image reconstruction algorithm on all available transmission signals. In such an embodiment, the light paths that are affected by touching objects are inherently identified when the algorithm processes the transmission signals.


The accuracy and/or computation speed of step 23 may be increased by using a priori knowledge about the touch locations, e.g. by using information about the touch locations that were identified during preceding sensing instance(s).


In step 24, the determined touch locations are output and the method returns to step 20 for processing of a forthcoming sensing instance.


The data processing may also involve determining other touch data such as the shape and/or size of the touching object(s), e.g. using the algorithms disclosed in aforesaid WO2006/095320, which is incorporated herein by reference.


General


The touch surface 1 can have any shape, e.g. polygonal, elliptic or circular.


The emitter 2 can be any suitable light source, such as an LED (light-emitting diode), an incandescent lamp, a halogen lamp, a diode laser, a VCSEL (vertical-cavity surface-emitting laser), etc. All beams may be generated with identical wavelength. Alternatively, some or all beams may be generated in different wavelength ranges, permitting differentiation between the beams based on wavelength. The emitters 2 may generate diverging or collimated beams.


The energy of the beams may be measured by any type of radiation detector 4 capable of converting radiation into an electrical signal. For example, the detectors 4 may be simple 0-dimensional detectors, but alternatively they may be 1-dimensional or 2-dimensional detectors.


The above-described panel 8 may be made of any solid material (or combination of materials) that transmits a sufficient amount of light in the relevant wavelength range to permit a sensible measurement of transmitted energy. Such material includes glass, poly(methyl methacrylate) (PMMA) and polycarbonates (PC).


The processing element 5 and the control unit 3 may be implemented by program instructions executed by a processor. The processor may be a commercially available microprocessor such as a CPU (“Central Processing Unit”), a DSP (“Digital Signal Processor”) or some other programmable logical device, such as an FPGA (“Field Programmable Gate Array”). Alternatively, the processing element or the control unit may be implemented by dedicated circuitry, such as an ASIC (“Application-Specific Integrated Circuit”), discrete analog and digital components, or some combination of the above. It should be noted that the control unit 3 and the processing element 5 may be implemented by processes in one and the same device.


The invention has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention. The different features of the invention could be combined in other combinations than those described. The scope of the invention is defined and limited only by the appended patent claims.


For example, the above-mentioned linearly independent codes may have any length. Thus, the resulting multiplexing matrix need not be square (i.e. have equal number of rows and columns). Instead, the linearly independent codes may define an overdetermined system of linear equations, which means that the multiplexing matrix is non-square, and therefore cannot be inverted analytically. However, it is still possible to calculate an approximate inverse to such an overdetermined multiplexing matrix, e.g. by deriving and solving the corresponding normal equations, as is well-known for the person skilled in linear algebra and numerical methods.
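By way of illustration, a minimal sketch of solving such an overdetermined system via the normal equations, assuming Python/numpy and a hypothetical 5×3 multiplexing matrix:

```python
import numpy as np

M = np.array([[1, 0, 1],      # columns: codes of 3 emitters,
              [1, 1, 0],      # rows: 5 time intervals -> overdetermined system
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 0]], dtype=float)
measurement = M @ np.array([0.8, 0.3, 1.0])             # hypothetical detector data

# Normal equations: (M^T M) E = M^T measurement, an approximate inverse of M
emitter_signals = np.linalg.solve(M.T @ M, M.T @ measurement)
print(emitter_signals)                                   # ~[0.8, 0.3, 1.0]
```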


The codes of the emitters may be embedded in the emitted light by any type of amplitude modulation, which is not limited to on/off-modulation. For example, any number of different code values may be coded by any different energy values of the emitted light.



FIG. 20 illustrates yet another type of modulation, in which different pulse lengths of the emitter are used to represent different code values of the associated code. Thus, the duty cycle of the emitter is modulated by changing the duration of the activation interval in relation to a constant time interval ΔT of the code-generation cycle. In the example of FIG. 20, the pulse length t1 represents a code value of 0, whereas the pulse length t2 represents a code value of 1, and the resulting code is 0100.
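A minimal sketch of generating such a pulse-length-modulated waveform for the code 0100, assuming Python/numpy; the sample count per interval and the 25%/75% duty cycles are assumptions, not the values t1 and t2 of FIG. 20.

```python
import numpy as np

samples_per_interval = 100
duty = {0: 0.25, 1: 0.75}        # code value -> fraction of dT the emitter is on

def pulse_length_waveform(code):
    chunks = []
    for bit in code:
        on = int(duty[bit] * samples_per_interval)
        chunks.append(np.r_[np.ones(on), np.zeros(samples_per_interval - on)])
    return np.concatenate(chunks)

waveform = pulse_length_waveform([0, 1, 0, 0])   # the code 0100 from FIG. 20
```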



FIG. 21 illustrates yet another type of modulation, in which delays for the activation of the emitter are used to represent different code values of the associated code. Thus, the emitted light is modulated by changing the pulse delays within a constant time interval ΔT of the code-generation cycle. In the example of FIG. 21, the pulse delay Δt1 represents a code value 0, whereas the pulse delay Δt2 represents a code value of 1, and the resulting code is 0100.


It is also possible to combine any of the above modulations for embedding the codes in the emitted light.


In another variant, the codes are embedded in the emitted light by modulating the amplitude of the emitted light according to different functions, which are selected such that a value of an autocorrelation of each function is significantly higher than a value of a cross-correlation between any two functions of different emitters. In one such example, the different functions are given by different modulation frequencies ωk of a basic periodic function (carrier wave). Preferably, the basic function has a well-defined frequency spectrum around its modulation frequency. The basic function may, e.g., be a cosine or sine function, such as:







ek=Ek·(1-cos(ωk·t))/2.






This means that the functions (codes) of the different emitters are orthogonal, since:








(1/T)·∫[0,T] cos(ωk·t)·cos(ωi·t) dt = { π, i=k; 0, i≠k }













As in the embodiments described in the foregoing, each detector generates a measurement signal, which is a time-resolved representation of the light received along a set of light paths, i.e. from different emitters. There are different approaches for separating such a measurement signal into a set of emitter signals. The code-generation cycle is generally selected to comprise at least one period of the lowest modulation frequency.


In one approach, the measurement signal is processed by a frequency spectrum analyser to identify the light energy received from the different emitters. Such an approach is further exemplified in FIG. 22, which shows five emitters 2 that are all amplitude-modulated by a cosine function, but at separate frequencies ω1-ω5. A detector 4 receives the light from the emitters 2. The detector 4 is sampled at a frequency that is at least twice the highest coding frequency, i.e. according to the Nyquist sampling theorem, to generate a measurement signal. The measurement signal is processed by a frequency spectrum analyser 14 to generate a power spectrum, e.g. by calculating the Fourier transform of the measurement signal, e.g. using an FFT (Fast Fourier Transform) algorithm. A value of the light energy received from each emitter 2 is then given by the intensity of the power spectrum at the respective frequency. In this coding scheme, it may be advantageous to choose modulation frequencies ω1-ω5 that correspond to actual frequencies that the FFT will measure, such that the frequencies are given by ωk=2πn/N, with n being an integer index [e.g. ranging up to the total number of emitters] and N being the total number of sampling points during a code-generation cycle. The frequency spectrum analyser 14 may be implemented as part of the processing element 5 or may be a separate unit.
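By way of illustration, a minimal sketch of this FFT-based separation, assuming Python/numpy, bin-aligned modulation frequencies and hypothetical emitter energies:

```python
import numpy as np

N = 1024                                  # samples per code-generation cycle
t = np.arange(N)
bins = np.array([5, 9, 14, 20, 27])       # n in w_k = 2*pi*n/N, one per emitter
energies = np.array([1.0, 0.4, 0.8, 0.0, 0.6])

# Summed detector signal: each emitter contributes E*(1 - cos(w_k*t))/2.
detector = sum(E * (1 - np.cos(2 * np.pi * n * t / N)) / 2
               for E, n in zip(energies, bins))

spectrum = np.abs(np.fft.rfft(detector)) / N
print(2 * spectrum[bins])                 # ~E_k/2: [0.5, 0.2, 0.4, 0.0, 0.3]
```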


In a second approach, the measurement signal is passed through a set of bandpass filters, each adapted to the frequency of a respective emitter. Such an approach is further exemplified in FIG. 23. As in the embodiment of FIG. 22, a detector 4 is sampled to generate a measurement signal representing the received light from five emitters 2. A set of bandpass filters 15 are arranged to operate on the measurement signal, such that each bandpass filter removes frequencies outside a passband around the modulation frequency ω1-ω5 of the respective emitter 2. The output signal of each bandpass filter 15 will represent the light energy received from the respective emitter 2. The output signal is then passed to an amplitude detector or an integrator 16, which provides an emitter signal representative of the light energy. The bandpass filters 15 and amplitude detector/integrator 16 may be implemented by digital signal processing in the processing element 5, or by dedicated electronic circuitry that operates on analog measurement signals from the detector. The processing of analog signals obviates the need for sampling, and may thus enable the use of higher modulation frequencies. The use of higher frequencies may enable shorter code-generation cycles or increased SNR.
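By way of illustration, a minimal sketch of the bandpass approach, assuming Python with scipy.signal Butterworth filters; the filter order, bandwidth, sample rate and emitter frequencies are assumptions, not values from the description.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 10_000.0                               # detector sampling rate [Hz]
t = np.arange(0, 0.1, 1 / fs)               # one code-generation cycle of 0.1 s
freqs = [300.0, 500.0, 800.0]               # modulation frequencies of 3 emitters
energies = [1.0, 0.0, 0.5]                  # hypothetical received energies

detector = sum(E * (1 - np.cos(2 * np.pi * f * t)) / 2
               for E, f in zip(energies, freqs))

for f in freqs:
    sos = butter(4, [f - 50, f + 50], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, detector)       # isolate one emitter's modulation
    print(f, np.sqrt(2) * np.std(band))     # amplitude ~ E_k/2 for that emitter
```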


The use of frequency modulation has the additional advantage that any signal interference from ambient light or other noise sources may be removed, provided that the modulation frequencies are well-separated from the frequencies of such noise sources.


In yet another variant, the codes are embedded in the emitted light by phase modulation, such that different code values are represented by different phase shifts of a carrier wave, which may be any suitable waveform, including cosine/sine, square, triangle, sawtooth, etc.


In one embodiment, all emitters emit light modulated by a common carrier wave at a common frequency ω, and the phases of the group of emitters are modulated according to a multiplexing scheme. In the following example, the multiplexing scheme uses the code values −1 and 1, wherein −1 is given by a 180° phase shift of the carrier wave. Thus, the phase modulation is so-called BPSK (Binary Phase Shift Keying). The light emitted from an emitter ek during a time interval i of a code-generation cycle may thus be given by:

ek,i=Ek·(1+mk,i·cos(ω·t))/2,


with mk,i being the code value of the emitter ek at time interval i. Thus, the code for each emitter is given by a vector mk consisting of the code values mk,i. As explained above, a multiplexing matrix M may be formed by the vectors mk for N emitters: M=[m1 m2 . . . mN], and the codes of the different emitters may be linearly independent, or even orthogonal. In this example, the multiplexing matrix can be a Hadamard matrix, as described above.
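By way of illustration, a minimal sketch, assuming Python with scipy.linalg, of forming M from Hadamard codes and verifying that the emitter codes (columns) are mutually orthogonal:

```python
import numpy as np
from scipy.linalg import hadamard

M = hadamard(4)        # columns m_1..m_4 hold the -1/+1 code values m_k,i
print(M.T @ M)         # = 4*I, i.e. the emitter codes are mutually orthogonal
```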


The detected signal ηi at a detector during a time interval i is the sum of light that reaches the detector. The light is de-modulated by multiplication with a reference signal, typically the original carrier wave:







ηi=(1/T)·∫[t,t+T] (Σk ek)·cos(ω·t) dt = (1/(2T))·∫[t,t+T] Σk [Ek·cos(ω·t) + Ek·mk,i·(1/2)·(1+cos(2·ω·t))] dt







By choosing the integration time T to be an even multiple of the carrier wave period, all terms involving cos(ω·t) and cos(2ω·t) vanish. Further, the integration time is chosen to be equal to a time interval in the code-generation cycle. The demodulation thus yields:







ηi=(1/4)·Σk Ek·mk,i









The above multiplication and integration (de-modulation) is carried out during each of the time intervals of a code-generation cycle, resulting in a measurement signal η. As described in the foregoing, the measurement signal can be separated into a set of emitter signals, using a multiplexing inversion scheme: Ê=M−1·η. If the codes are orthogonal, this operation may be further simplified, since MT=M−1 for an orthogonal (orthonormal) multiplexing matrix.
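By way of illustration, a minimal end-to-end sketch of this BPSK de-modulation and inversion, assuming Python/numpy with a Hadamard code matrix; the sample rate, carrier frequency, interval length and emitter energies are hypothetical.

```python
import numpy as np
from scipy.linalg import hadamard

n = 4
M = hadamard(n)                                   # columns m_k with values -1/+1
E = np.array([1.0, 0.6, 0.0, 0.8])                # hypothetical emitter energies
fs, fc, T = 100_000.0, 1_000.0, 0.01              # sample rate, carrier, interval
t = np.arange(0, T, 1 / fs)                       # T = 10 full carrier periods
carrier = np.cos(2 * np.pi * fc * t)

# Detector signal during code interval i, multiplied by the carrier and
# averaged over the interval (the de-modulation integral above):
eta = np.array([
    np.mean(sum(E[k] * (1 + M[i, k] * carrier) / 2 for k in range(n)) * carrier)
    for i in range(n)
])                                                # eta_i ~ (1/4)*sum_k E_k*m_k,i

# Inversion: M^T M = n*I for a Hadamard matrix, so M^T acts as the inverse.
E_hat = 4.0 / n * (M.T @ eta)
print(E_hat)                                      # ~[1.0, 0.6, 0.0, 0.8]
```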


The de-modulation may be implemented by digital signal processing in the processing element, or by dedicated electronic circuitry that operates on analog measurement signals from the detector. The processing of analog signals obviates the need for sampling, and may thus enable the use of a higher modulation frequency. The use of higher frequencies may enable shorter code-generation cycles or increased SNR. The use of phase modulation has the additional advantage that any signal interference from ambient light or other noise sources may be removed, provided that the modulation frequency is well-separated from the frequencies of such noise sources.


It is to be noted that the code values −1/1 are merely given as an example, and that any type of code values can be embedded in the emitted light using phase modulation. Further, other types of phase-modulation techniques can be used, including but not limited to MSK (Minimum Shift Keying), Quadrature Phase-Shift Keying (QPSK) and Differential Phase-Shift Keying (DPSK).


The skilled person also realizes that certain embodiments/features are applicable to any type of emitter activation scheme, including operating the touch sensing apparatus without coding of the emitted light, e.g. by activating the emitters in sequence. For example, steps 22-24 of the decoding process (FIG. 17) can be used irrespective of the method for obtaining the emitter signals, i.e. the received light energy on the different light paths. Likewise, the embodiments described above in the section "Peripheral Arrangements of Detectors and Emitters" are applicable to all types of emitter activation schemes.

Claims
  • 1. A touch sensing apparatus, comprising: a touch surface; a group of emitters arranged to emit light to illuminate at least part of the touch surface, each respective emitter in the group of emitters configured to transmit a frequency modulated emitter signal by way of the emitted light, the frequency modulated emitter signal identifying the respective emitter; a group of light detectors arranged to receive light from the group of emitters, the group of light detectors configured to generate an output signal; and at least one processor configured to: separate light signals received from individual emitters from the output signal based on the frequency modulated emitter signals, apply an algorithm on the separated light signals and determine a position of one or more objects interacting with the touch surface based on the application of the algorithm, wherein the algorithm comprises transmission tomography and wherein the algorithm for transmission tomography generates a two-dimensional model of an interaction between the one or more objects and the touch surface.
  • 2. The touch sensing apparatus of claim 1, wherein the algorithm comprises triangulation.
  • 3. The touch sensing apparatus of claim 1, wherein the frequency modulated emitter signals are transmitted concurrently.
  • 4. The touch sensing apparatus of claim 1, wherein at least two emitters in the group of emitters are controlled to emit light simultaneously during transmission of the frequency modulated emitter signals.
  • 5. The touch sensing apparatus of claim 4, wherein each frequency modulated emitter signal includes a sequence of binary values, each of the binary values represented by a frequency of the emitted light.
  • 6. The touch sensing apparatus of claim 1, wherein the frequency modulated emitter signals are selected such that approximately 50% of emitters in said group of emitters emit light simultaneously.
  • 7. The touch sensing apparatus of claim 1, wherein each frequency modulated emitter signal includes a sequence of binary values; said frequency modulated emitter signals form columns of a modulation matrix M; and the at least one processor is further configured to operate an inverse M−1 of the modulation matrix M on the output signal to separate the light signals received from the individual emitters.
  • 8. The touch sensing apparatus of claim 7, wherein the modulation matrix M is a Hadamard matrix or a Sylvester matrix derived from a Hadamard matrix.
  • 9. The touch sensing apparatus of claim 1, wherein each respective emitter is configured to emit a diverging beam of light.
  • 10. The touch sensing apparatus of claim 1, further comprising: a total set of light detectors and a total set of emitters, wherein each light detector is configured to receive light from one or more groups of emitters, and each respective emitter in the total set of emitters is included in at least one group.
  • 11. The touch sensing apparatus of claim 10, wherein the total set of emitters and the total set of light detectors are arranged around a periphery of the touch surface.
  • 12. The touch sensing apparatus of claim 11, wherein emitters in the total set of emitters are arranged to illuminate a space immediately above the touch surface; and the at least one processor is further configured to identify occlusions in the light received from each of said emitters, and determine the position of said one or more objects based on the identified occlusions.
  • 13. The touch sensing apparatus of claim 1, further comprising: a light transmissive element that defines the touch surface; wherein said light propagates inside the light transmissive element to illuminate the touch surface such that the one or more objects interacting with the touch surface cause an attenuation of the light propagating inside the light transmissive element; and wherein the at least one processor is further configured to identify attenuations in the light received from each respective emitter, and determine the position of said one or more objects based on the identified attenuations.
  • 14. The touch sensing apparatus of claim 1, wherein the touch surface is polygonal, and at least one emitter is arranged at a corner of the touch surface.
  • 15. A method of operating a touch sensing apparatus, the touch sensing apparatus including a touch surface, a group of emitters arranged to emit light to illuminate at least part of the touch surface, and a group of light detectors arranged to receive light from the group of emitters, the group of light detectors configured to generate an output signal, said method comprising: controlling each respective emitter in the group of emitters to transmit a frequency modulated emitter signal by the emitted light, the frequency modulated emitter signal identifying the respective emitter; separating light signals received from individual emitters from the output signal based on the frequency modulated emitter signals; applying an algorithm on the separated light signals; and determining a position of one or more objects interacting with the touch surface based on the application of the algorithm, wherein the algorithm comprises transmission tomography and wherein the algorithm for transmission tomography generates a two-dimensional model of an interaction between the one or more objects and the touch surface.
  • 16. A non-transitory computer readable storage medium storing processing instructions that, when executed by at least one processor, cause the at least one processor to perform the method according to claim 15.
Priority Claims (1)
Number Date Country Kind
0802531 Dec 2008 SE national
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims the benefit of Swedish patent application No. 0802531-4, filed on Dec. 5, 2008, U.S. provisional application No. 61/193,526, filed on Dec. 5, 2008, and U.S. provisional application No. 61/193,929, filed on Jan. 9, 2009, all of which are incorporated herein by reference.

Foreign Referenced Citations (121)
Number Date Country
201233592 May 2009 CN
101644854 Feb 2010 CN
201437963 Apr 2010 CN
101019071 Jun 2012 CN
101206550 Jun 2012 CN
101075168 Apr 2014 CN
3511330 May 1988 DE
68902419 Mar 1993 DE
69000920 Jun 1993 DE
19809934 Sep 1999 DE
10026201 Dec 2000 DE
102010000473 Aug 2010 DE
0845812 Jun 1998 EP
0600576 Oct 1998 EP
0931731 Jul 1999 EP
1798630 Jun 2007 EP
0897161 Oct 2007 EP
2088501 Aug 2009 EP
1512989 Sep 2009 EP
2077490 Jan 2010 EP
1126236 Dec 2010 EP
2314203 Apr 2011 EP
2339437 Oct 2011 EP
2442180 Apr 2012 EP
2466429 Jun 2012 EP
2479642 Jul 2012 EP
1457870 Aug 2012 EP
2778849 Sep 2014 EP
2172828 Oct 1973 FR
2617619 Jan 1990 FR
2614711 Mar 1992 FR
2617620 Sep 1992 FR
2676275 Nov 1992 FR
1380144 Jan 1975 GB
2131544 Mar 1986 GB
2204126 Nov 1988 GB
2000506655 May 2000 JP
2000172438 Jun 2000 JP
2000259334 Sep 2000 JP
2000293311 Oct 2000 JP
2003330603 Nov 2003 JP
2005004278 Jan 2005 JP
2008506173 Feb 2008 JP
2011530124 Dec 2011 JP
100359400 Jul 2001 KR
100940435 Feb 2010 KR
WO 1984003186 Aug 1984 WO
WO 1999046602 Sep 1999 WO
WO 01127867 Apr 2001 WO
WO 0184251 Nov 2001 WO
WO 0235460 May 2002 WO
WO 02077915 Oct 2002 WO
WO 02095668 Nov 2002 WO
WO 03076870 Sep 2003 WO
WO 2004032210 Apr 2004 WO
WO 2004081502 Sep 2004 WO
WO 2004081956 Sep 2004 WO
WO 2005026938 Mar 2005 WO
WO 2005029172 Mar 2005 WO
WO 2005029395 Mar 2005 WO
WO 2005125011 Dec 2005 WO
WO 2006095320 Sep 2006 WO
WO 2006124551 Nov 2006 WO
WO 2007003196 Jan 2007 WO
WO 2007058924 May 2007 WO
WO 2007112742 Oct 2007 WO
WO 2008004103 Jan 2008 WO
WO 2008007276 Jan 2008 WO
WO 2008017077 Feb 2008 WO
WO 2008039006 Apr 2008 WO
WO 2008068607 Jun 2008 WO
WO 2006124551 Jul 2008 WO
WO 2008017077 Feb 2009 WO
WO 2009048365 Apr 2009 WO
WO 2009077962 Jun 2009 WO
WO 2009102681 Aug 2009 WO
WO 2009137355 Nov 2009 WO
WO 2010006882 Jan 2010 WO
WO 2010006883 Jan 2010 WO
WO 2010006884 Jan 2010 WO
WO 2010006885 Jan 2010 WO
WO 2010006886 Jan 2010 WO
WO 2010015408 Feb 2010 WO
WO 2010046539 Apr 2010 WO
WO 2010056177 May 2010 WO
WO 2010064983 Jun 2010 WO
WO 2010081702 Jul 2010 WO
WO 2010112404 Oct 2010 WO
WO 2010123809 Oct 2010 WO
WO 2010134865 Nov 2010 WO
WO 2011028169 Mar 2011 WO
WO 2011028170 Mar 2011 WO
WO 2011049511 Apr 2011 WO
WO 2011049512 Apr 2011 WO
WO 2011049513 Apr 2011 WO
WO 2011057572 May 2011 WO
WO 2011078769 Jun 2011 WO
WO 2011082477 Jul 2011 WO
WO 2011139213 Nov 2011 WO
WO 2012002894 Jan 2012 WO
WO 2012010078 Jan 2012 WO
WO 2012050510 Apr 2012 WO
WO 2012082055 Jun 2012 WO
WO 2012105893 Aug 2012 WO
WO 2012121652 Sep 2012 WO
WO 2012158105 Nov 2012 WO
WO 2012172302 Dec 2012 WO
WO 2012176801 Dec 2012 WO
WO 2013036192 Mar 2013 WO
WO 2013048312 Apr 2013 WO
WO 2013055282 Apr 2013 WO
WO 2013062471 May 2013 WO
WO 2013089622 Jun 2013 WO
WO 2013115710 Aug 2013 WO
WO 2013133756 Sep 2013 WO
WO 2013133757 Sep 2013 WO
WO 2013176613 Nov 2013 WO
WO 2013176614 Nov 2013 WO
WO 2013176615 Nov 2013 WO
WO 2014055809 Apr 2014 WO
WO 2014098744 Jun 2014 WO
Non-Patent Literature Citations (16)
Entry
Ahn, Y., et al., “A slim and wide multi-touch tabletop interface and its applications,” BigComp2014, IEEE, 2014, in 6 pages.
Chou, N., et al., "Generalized pseudo-polar Fourier grids and applications in registering optical coherence tomography images," 43rd Asilomar Conference on Signals, Systems and Computers, Nov. 2009, in 5 pages.
Fihn, M., “Touch Panel—Special Edition,” Veritas et Visus, Nov. 2011, in 1 page.
Fourmont, K., “Non-Equispaced Fast Fourier Transforms with Applications to Tomography,” Journal of Fourier Analysis and Applications, vol. 9, Issue 5, 2003, in 20 pages.
Iizuka, K., “Boundaries, Near-Field Optics, and Near-Field Imaging,” Elements of Photonics, vol. 1: in Free Space and Special Media, Wiley & Sons, 2002, in 57 pages.
International Search Report for International App. No. PCT/SE2017/050102, dated Apr. 5, 2017, in 4 pages.
Johnson, M., "Enhanced Optical Touch Input Panel", IBM Technical Disclosure Bulletin, 1985, in 3 pages.
Kak, et al., “Principles of Computerized Tomographic Imaging”, Institute of Electrical Engineers, Inc., 1999, in 333 pages.
The Laser Wall, MIT, 1997, http://web.media.mit.edu/˜joep/SpectrumWeb/captions/Laser.html.
Liu, J., et al. “Multiple touch points identifying method, involves starting touch screen, driving specific emission tube, and computing and transmitting coordinate of touch points to computer system by direct lines through interface of touch screen,” 2007, in 25 pages.
Natterer, F., “The Mathematics of Computerized Tomography”, Society for Industrial and Applied Mathematics, 2001, in 240 pages.
Natterer, F., et al. “Fourier Reconstruction,” Mathematical Methods in Image Reconstruction, Society for Industrial and Applied Mathematics, 2001, in 12 pages.
Paradiso, J.A., “Several Sensor Approaches that Retrofit Large Surfaces for Interactivity,” ACM Ubicomp 2002 Workshop on Collaboration with Interactive Walls and Tables, 2002, in 8 pages.
Tedaldi, M., et al. “Refractive index mapping of layered samples using optical coherence refractometry,” Proceedings of SPIE, vol. 7171, 2009, in 8 pages.
Supplementary European Search Report for European App. No. EP 16759213, dated Oct. 4, 2018, in 9 pages.
Extended European Search Report for European App. No. 16743795.3, dated Sep. 11, 2018, in 5 pages.
Related Publications (1)
Number Date Country
20190094990 A1 Mar 2019 US
Provisional Applications (2)
Number Date Country
61193929 Jan 2009 US
61193526 Dec 2008 US
Divisions (1)
Number Date Country
Parent 14052101 Oct 2013 US
Child 15244390 US
Continuations (2)
Number Date Country
Parent 15244390 Aug 2016 US
Child 16008616 US
Parent 12998771 US
Child 14052101 US