This application is a divisional of U.S. application Ser. No. 10/284,451, filed Oct. 31, 2002, now U.S. Pat. No. 7,133,563, of the same title.
The present invention relates to interacting with paper using a digital pen. More particularly, the present invention relates to determining the location of annotations made on paper by a digital pen.
Computer users are accustomed to using a mouse and keyboard as a way of interacting with a personal computer. While personal computers provide a number of advantages over written documents, most users continue to perform certain functions using printed paper. Some of these functions include reading and annotating written documents. In the case of annotations, the printed document assumes a greater significance because of the annotations placed on it by the user. One of the difficulties, however, with having a printed document with annotations is the later need to have the annotations entered back into the electronic form of the document. This requires the original user or another user to wade through the annotations and enter them into a personal computer. In some cases, a user will scan in the annotations and the original text, thereby creating a new document. These multiple steps make the interaction between the printed document and the electronic version of the document difficult to handle on a repeated basis. Further, scanned-in images are frequently non-modifiable. There may be no way to separate the annotations from the original text. This makes using the annotations difficult. Accordingly, an improved way of handling annotations is needed.
Aspects of the present invention provide solutions to at least one of the issues mentioned above, thereby enabling one to locate a position or positions on a viewed image. Knowledge of these positions permits a user to write annotations on a physical document and have those annotations associated with an electronic version of the physical document. Some aspects of the invention relate to the various techniques used to encode the physical document. Other aspects relate to the organization of the encoded document in searchable form.
These and other aspects of the present invention will become apparent from the following drawings and associated description.
The foregoing summary of the invention, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the accompanying drawings, which are included by way of example, and not by way of limitation with regard to the claimed invention.
Aspects of the present invention relate to determining the location of a captured image in relation to a larger image. The location determination method and system described herein may be used in combination with a multi-function pen. This multi-function pen captures handwritten annotations made on a fixed document and makes those annotations locatable relative to the information on the fixed document. The fixed document may be a printed document or may be a document rendered on a computer screen.
The following is arranged into a number of subsections to assist the reader in understanding the various aspects of the invention. The subsections include: terms; general purpose computer; locating captured image; encoding; codebook generation; and candidate search.
Terms
Pen—any writing implement that may or may not include the ability to store ink. In some examples a stylus with no ink capability may be used as a pen in accordance with embodiments of the present invention.
Camera—an image capture system.
Encoding—a process of taking an image (scanned in from a physical paper form, rendered from an electronic form, or captured by a camera) and modifying it into a searchable form.
Codebook—a data structure that stores an encoded image or encoded sub-images together with their locations.
General Purpose Computer
A basic input/output system 160 (BIOS), containing the basic routines that help to transfer information between elements within the computer 100, such as during start-up, is stored in the ROM 140. The computer 100 also includes a hard disk drive 170 for reading from and writing to a hard disk (not shown), a magnetic disk drive 180 for reading from or writing to a removable magnetic disk 190, and an optical disk drive 191 for reading from or writing to a removable optical disk 199 such as a CD ROM or other optical media. The hard disk drive 170, magnetic disk drive 180, and optical disk drive 191 are connected to the system bus 130 by a hard disk drive interface 192, a magnetic disk drive interface 193, and an optical disk drive interface 194, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the personal computer 100. It will be appreciated by those skilled in the art that other types of computer readable media that can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, random access memories (RAMs), read only memories (ROMs), and the like, may also be used in the example operating environment.
A number of program modules can be stored on the hard disk drive 170, magnetic disk 190, optical disk 199, ROM 140 or RAM 150, including an operating system 195, one or more application programs 196, other program modules 197, and program data 198. A user can enter commands and information into the computer 100 through input devices such as a keyboard 101 and pointing device 102. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner or the like. These and other input devices are often connected to the processing unit 110 through a serial port interface 106 that is coupled to the system bus, but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB). Further still, these devices may be coupled directly to the system bus 130 via an appropriate interface (not shown). A monitor 107 or other type of display device is also connected to the system bus 130 via an interface, such as a video adapter 108. In addition to the monitor, personal computers typically include other peripheral output devices (not shown), such as speakers and printers. In a preferred embodiment, a pen digitizer 165 and accompanying pen or stylus 166 are provided in order to digitally capture freehand input. Although a direct connection between the pen digitizer 165 and the serial port is shown, in practice, the pen digitizer 165 may be coupled to the processing unit 110 directly, via a parallel port or other interface and the system bus 130 as known in the art. Furthermore, although the digitizer 165 is shown apart from the monitor 107, it is preferred that the usable input area of the digitizer 165 be co-extensive with the display area of the monitor 107. Further still, the digitizer 165 may be integrated in the monitor 107, or may exist as a separate device overlaying or otherwise appended to the monitor 107.
The computer 100 can operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 109. The remote computer 109 can be a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 100, although only a memory storage device 111 has been illustrated in FIG. 1.
When used in a LAN networking environment, the computer 100 is connected to the local network 112 through a network interface or adapter 114. When used in a WAN networking environment, the personal computer 100 typically includes a modem 115 or other means for establishing communications over the wide area network 113, such as the Internet. The modem 115, which may be internal or external, is connected to the system bus 130 via the serial port interface 106. In a networked environment, program modules depicted relative to the personal computer 100, or portions thereof, may be stored in the remote memory storage device.
It will be appreciated that the network connections shown are illustrative and other techniques for establishing a communications link between the computers can be used. The existence of any of various well-known protocols such as TCP/IP, Ethernet, FTP, HTTP and the like is presumed, and the system can be operated in a client-server configuration to permit a user to retrieve web pages from a web-based server. Any of various conventional web browsers can be used to display and manipulate data on web pages.
Locating Captured Image
Aspects of the present invention include storing an encoded version of a document in a searchable form. When an annotating device (for example, a pen with a camera attached for capturing a sub-image of a document) is used to write annotations, the system permits a determination of the location of the camera. This determination of the location of the camera may be used to determine the location of the annotation. In some aspects of the present invention, the pen may be an ink pen writing on paper. In other aspects, the pen may be a stylus with the user writing on the surface of a computer display. In this latter example, the annotations written on the computer screen may be provided back to the system supporting the document displayed on the computer screen. By repeatedly capturing the location of the camera, the system can track movement of the stylus being controlled by the user.
To determine the location of a captured image, three processes may be used. In practice, however, aspects of these three processes may be combined into fewer than three processes or separated into more than three processes. The first process relates to encoding the image into a searchable form. In one example, the image is encoded into a searchable form and associated with a location of the image (for example, the center coordinates of the image). The center of a captured image may lie at any position within the document image. Any sub-image (with a center at any position of the document image) may be encoded. This provides the benefit that the various positions of the document image may be encoded so that any possible position (at which the captured image can be located) is stored in a codebook and may be searched.
The second process relates to compiling the encoded image or sub-image into a searchable structure. The third process relates to searching the encoded sets of information to determine a location of a camera's image with respect to the original document. Subsequent processing may then be used to determine the location of a stylus pen tip in relation to the image from the camera.
Referring to FIG. 2, in step 204, the image is parsed into sub-images. In step 205, the sub-images are converted into a searchable form with their corresponding locations attached; namely, position-code pairs are obtained. Accordingly, each location with integer pixel units has a corresponding code. In step 206, the encoded data sets with position-code pairs from step 205 are arranged in a searchable codebook, which is indexed by a property of the codes.
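By way of illustration only, the following Python sketch shows one way such position-code pairs might be generated by sliding a window over a document bitmap. The function name, the 32-pixel window size, and the generic `encode` callback are illustrative assumptions standing in for any of the coding methods described later, not elements of the specification.

```python
import numpy as np

def build_position_code_pairs(doc, encode, win=32):
    """Slide a win x win window over the document bitmap and pair each
    sub-image's code with the coordinates of the sub-image's center."""
    h, w = doc.shape
    half = win // 2
    pairs = []
    for cy in range(half, h - half):
        for cx in range(half, w - half):
            sub = doc[cy - half:cy + half, cx - half:cx + half]
            pairs.append((encode(sub), (cx, cy)))  # one position-code pair
    return pairs
```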
Referring to FIG. 3, the code obtained from step 303 is compared with the codes in the codebook, and the codes best matched with the code from step 303 are kept; in step 305, the location candidates of the N found codes (those exceeding the threshold) are determined. From the location of the image as determined in step 305, the location of the pen tip is determined in step 306. Optionally, as shown in broken boxes, new annotations may be processed in step 307 and the codebook updated in step 308. Adding the annotations back into the codebook may improve the ability of the system to locate a camera frame when the user is writing on or near preexisting annotations.
This determination of the location of a captured image may be used to determine the location of a user's interaction with the paper, medium, or display screen. In some aspects of the present invention, the pen may be an ink pen writing on paper. In other aspects, the pen may be a stylus with the user writing on the surface of a computer display. Any interaction may be provided back to a system having knowledge of the encoded watermark on the document, or to the system supporting the document displayed on the computer screen. By repeatedly capturing the location of the camera, the system can track movement of the stylus being controlled by the user.
The input to the pen 401 from the camera 403 may be defined as a sequence of image frames $\{I_i\}$, $i = 1, 2, \ldots, A$, where $I_i$ is captured by the pen 401 at sampling time $t_i$. The sampling rate may be fixed or may be variable based on the size of the document. The size of the captured image frame may be large or small, depending on the size of the document and the degree of exactness required. Also, the camera image size may be determined based on the size of the document to be searched.
The image captured by camera 403 may be used directly by the processing system or may undergo pre-filtering. This pre-filtering may occur in pen 401 or may occur outside of pen 401 (for example, in a personal computer).
The image size of the captured frame may be small (for example, 32 by 32 pixels, as in the radial coding example described below).
The output of camera 403 may be compared with encoded information in the codebook. The codebook may be created from a color, grayscale, or black and white scan of an image. Alternatively, the codebook may be generated from an image output by an application or a received image. The output of the comparison of the codebook with sequence $\{I_i\}$ may be represented as a sequence $\{P_i\}$, $i = 1, 2, \ldots, A$, where $P_i$ represents all possible position candidates of pen tip 402 in the document bitmap at sampling time $t_i$.
The image sensor 411 may be large enough to capture the image 410. Alternatively, the image sensor 411 may be large enough to capture an image of the pen tip 402 at location 412. For reference, the image at location 412 is referred to as the virtual pen tip. It is noted that the virtual pen tip location with respect to image sensor 411 is fixed because of the constant relationship between the pen tip 402, the lens 408, and the image sensor 411. Because the transformation from the location of the virtual pen tip 412 (represented by $L_{virtual-pentip}$) to the location of the real pen tip 402 (represented by $L_{pentip}$) is fixed, one can determine the location of the real pen tip 402 in relation to a captured image 410.
The following transformation $F_{S \to P}$ transforms position coordinates in the image captured by the camera to position coordinates in the real image on the paper:
$L_{paper} = F_{S \to P}(L_{sensor})$
During writing, the pen tip and the paper are on the same plane. Accordingly, the transformation from the virtual pen tip to the real pen tip is also $F_{S \to P}$:
$L_{pentip} = F_{S \to P}(L_{virtual-pentip})$
The transformation $F_{S \to P}$ may be referred to as a perspective transformation, and it may be simplified to an estimate $F'_{S \to P}$, in which θ, $s_x$, and $s_y$ are the rotation and scale of two orientations of the pattern captured at location 404. Further, one can refine $F'_{S \to P}$ to $F_{S \to P}$ by matching the captured image with the corresponding background image on paper. "Refine" means to obtain a more precise perspective matrix $F_{S \to P}$ (8 parameters) by an optimization algorithm referred to as a recursive method, which treats the matrix $F'_{S \to P}$ as the initial value. $F_{S \to P}$ describes the transformation between S and P more precisely than $F'_{S \to P}$.
Next, one can determine the location of the virtual pen tip by calibration.
One places the pen tip 402 on a known location $L_{pentip}$ on paper. Next, one tilts the pen, allowing the camera 403 to capture a series of images with different pen poses. For each image captured, one may obtain the transform $F_{S \to P}$. From this transform, one can obtain the location of the virtual image of the pen tip, $L_{virtual-pentip}$:
$L_{virtual-pentip} = F_{P \to S}(L_{pentip})$
where
$F_{P \to S} = F_{S \to P}^{-1}$
By averaging the $L_{virtual-pentip}$ obtained from each image, an accurate location of the virtual pen tip $L_{virtual-pentip}$ may be determined.
The location of the virtual pen tip $L_{virtual-pentip}$ is now known. One can also obtain the transformation $F_{S \to P}$ from each image captured. Finally, one can use this information to determine the location of the real pen tip $L_{pentip}$:
$L_{pentip} = F_{S \to P}(L_{virtual-pentip})$
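As a minimal Python sketch of this calibration, assuming the 8-parameter perspective transform is represented as a 3×3 homogeneous matrix (so that $F_{P \to S}$ is the matrix inverse); the function and variable names are illustrative:

```python
import numpy as np

def estimate_virtual_pen_tip(F_SP_list, L_pentip):
    """Average F_{P->S}(L_pentip) over several captured pen poses.

    F_SP_list : list of 3x3 perspective matrices F_{S->P}, one per image.
    L_pentip  : known (x, y) location of the real pen tip on paper.
    """
    p = np.array([L_pentip[0], L_pentip[1], 1.0])   # homogeneous point
    samples = []
    for F_SP in F_SP_list:
        F_PS = np.linalg.inv(F_SP)       # F_{P->S} = inverse of F_{S->P}
        v = F_PS @ p
        samples.append(v[:2] / v[2])     # back from homogeneous coords
    # Averaging over all poses reduces per-image estimation noise.
    return np.mean(samples, axis=0)
```

With the averaged $L_{virtual-pentip}$ in hand, the real pen tip location for any later frame follows from $L_{pentip} = F_{S \to P}(L_{virtual-pentip})$, as stated above.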
Encoding
The image to be encoded may come from a scanner scanning a document. Alternatively, the image to be encoded may be in a fixed, electronic form (for example, an electronic file with a fixed display size and only read-only privileges). While the document may indeed be modifiable, for locating the position of a camera, the version encoded should generally correspond to the document later viewed by camera 403. Otherwise, correlation may still occur, but the codebook may not be as accurate as when the encoded document closely resembles the images from the camera.
The image is parsed into sub-images. Each sub-image is encoded into an easily searchable form. The sub-images may be set to the same size as the output from camera 403. This provides the benefit of roughly equal search comparisons between the output from camera 403 and the information stored in the codebook. Subsequent processing may be performed on the resulting location to locate annotations on the input image and/or to control other operations of a computing system based on the scanned image.
The following presents a number of different coding options, which may be used to generate a codebook. The image captured from the camera while using pen 401 may also be encoded using one of the following encoding methods so as to facilitate comparison of the information in the codebook and the encoded information from the camera. A variety of encoding systems are possible. Three are provided here; however, other encoding techniques may be used.
A first type of coding uses the parsed sub-images from a received image themselves to act as the code.
Another encoding example includes reducing the size of each sub-image. First, image 601 is received. Image 601 may be represented as image block I. Image block I, in some cases, may be converted into a binary image $I_b$ by applying a threshold to image block I. Next, the sub-images may be represented generically by $K_x * K_y$, with each sub-image being of size m*m, as in 602-603. In the example of FIG. 6, m=4.
Further, the subdivisions m of each sub-image do not need to directly correspond to the image resolution of a camera. For example, the subdivisions m may be combinations or partial combinations of smaller subdivisions. Here, if the smaller subdivisions number 32 and m=4 (from FIG. 6), each subdivision may be a combination of eight of the smaller subdivisions.
Next, a rule-based analysis or threshold 604 (e.g., a true/false test) of the content of the subdivisions may be applied. An example of a threshold or rule to be applied to each subdivision in sub-image 603 may include a determination whether there are three contiguous white columns in each subdivision. Of course, other rule-based analyses may be used in place of or in conjunction with this example.
Based on the outcome of this analysis 604, a matrix 605 may be generated. Matrix 605 shows an example of the outcome of threshold 604 as applied to each subdivision in sub-image 603. The K matrix ($K_x$ wide by $K_y$ tall) has values 0/1, where 1 means true and 0 means false. The resulting K matrix may be expressed as $C_I^{k-win}$ of image block I, where k-win is another way of referring to the present coding method. $C_I^{k-win}$ may be represented as equation 1, with an example of a sub-image map as matrix 605.
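A minimal Python sketch of this k-win coding follows, assuming a grayscale sub-image block; the binarization threshold of 128 and the three-contiguous-white-columns rule are illustrative choices, and all names are hypothetical:

```python
import numpy as np

def has_three_white_columns(sub):
    """Example rule: does the subdivision contain three contiguous
    all-white columns?"""
    white = sub.all(axis=0)          # True where an entire column is white
    run = 0
    for c in white:
        run = run + 1 if c else 0
        if run >= 3:
            return 1
    return 0

def k_win_code(block, m=4, bin_thresh=128):
    """Compute a k-win style code: an m x m true/false matrix over the
    subdivisions of one sub-image block."""
    Ib = (block >= bin_thresh).astype(np.uint8)   # 1 = white, 0 = black
    sh, sw = Ib.shape[0] // m, Ib.shape[1] // m
    K = np.zeros((m, m), dtype=np.uint8)
    for i in range(m):
        for j in range(m):
            sub = Ib[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw]
            K[i, j] = has_three_white_columns(sub)
    return K
```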
This type of coding may be searched by determining the distance between a first matrix and a matrix formed from an image from camera 403. The distance between two matrices is the Hamming distance of the two matrices, as represented by equation 2.
$Dist_{ham}(C_1, C_2) = Ham(C_1, C_2)$,  (2)
where $Ham(a, b)$ is the Hamming distance between matrix a and matrix b.
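As a sketch, the Hamming distance of equation 2 may be computed directly on two equally sized 0/1 code matrices:

```python
import numpy as np

def dist_ham(C1, C2):
    """Hamming distance of equation 2: the number of positions at which
    two equally sized 0/1 code matrices differ."""
    return int(np.count_nonzero(C1 != C2))
```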
Another type of coding, referred to as radial coding, is described with respect to FIG. 7.
The center pixel (or region) of the sub-image in 703 is treated as the origin, the sampling angle is S, and the magnitude of the sampling vector is T. For a 32 by 32 pixel region, S=32 and T=16.
For each sampling point (t, s), $t = 0, 1, \ldots, T-1$, $s = 0, 1, \ldots, S-1$, its position in Cartesian coordinates is $(x_{t,s}, y_{t,s})$, where $x_{t,s}$ and $y_{t,s}$ are represented by equations 3 and 4, respectively.
The gray level value of point (t, s) is represented by equation 5,
$G_{t,s} = F(x_{t,s}, y_{t,s})$,  (5)
where F is a 2-D Gaussian filter represented by equation 6.
where P(x, y) means the gray level value of the pixel in position (x, y); the brackets "[ ]" mean the nearest integer of a real value; and σ and q are the filter parameters. Example filter parameters may include σ=1.15 and q=1. These values are determined by empirically testing the algorithms to determine which values work best for a given environment.
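Because equations 3, 4, and 6 are not reproduced here, the following Python sketch assumes the standard polar-sampling form for $(x_{t,s}, y_{t,s})$ and a small Gaussian-weighted neighborhood for the filter F; those assumed forms are flagged in the comments, and all names are illustrative:

```python
import numpy as np

def gauss_filter(P, x, y, sigma=1.15, q=1):
    """Gaussian-weighted average of the pixels within q of ([x], [y]);
    an assumed stand-in for the 2-D Gaussian filter of equation 6."""
    xi, yi = int(round(x)), int(round(y))       # nearest integers "[ ]"
    num = den = 0.0
    for dy in range(-q, q + 1):
        for dx in range(-q, q + 1):
            px, py = xi + dx, yi + dy
            if 0 <= px < P.shape[1] and 0 <= py < P.shape[0]:
                w = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
                num += w * P[py, px]
                den += w
    return num / den if den else 0.0

def radial_sample(block, S=32, T=16):
    """Sample a sub-image along S angles and T radii about its center,
    Gaussian-filtering each sample point to obtain the matrix {G_{t,s}}."""
    cy, cx = block.shape[0] / 2.0, block.shape[1] / 2.0
    G = np.zeros((T, S))
    for s in range(S):
        theta = 2.0 * np.pi * s / S
        for t in range(T):
            x = cx + t * np.cos(theta)   # assumed form of equation 3
            y = cy + t * np.sin(theta)   # assumed form of equation 4
            G[t, s] = gauss_filter(block, x, y)
    return G
```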
Because polar coordinates are used to analyze the sub-image block as shown in 703, the resulting analysis has a higher degree of robustness in handling rotation differences between the camera frame and the sub-images. Rotation of the captured camera image to be compared with the information in the codebook is not a complex issue, as rotation of the captured camera image translates to shifting of the image in 704.
The image in 704 may be converted to its binary representation for each angle S across vector T, as shown in table 705. The angle is the value 2π·s/S as s ranges from 0 to S−1. The image or sub-images (701, 702, 703, 704) may be converted to a binary (black and white) image at a number of points in the process, if not previously converted when initially scanned, received, or captured.
The gray level value matrix $\{G_{t,s}\}$ (where $t = 0, 1, \ldots, T-1$ and $s = 0, 1, \ldots, S-1$) may be converted to a binary matrix $C_I^{rad}$ (as shown in equation 7) by applying a threshold to the values in the gray level matrix.
This code may then be compiled into a codebook with information relating to the location of the sub-images in the larger image.
To determine the location of a camera image among the different codes in the codebook, one may determine the distance between the code of the camera image and the other code representations. The smallest distance, or set of smallest distances to candidates, may represent the best choice among the various locations. This distance may be computed as the Hamming distance between the code of the camera image and the codes of the sub-images.
As set forth above, the code of the captured image from the camera may be compared with one or more of the code segments from the codebook. At least two types of distance may be used for the radial code: the common Hamming distance and a rotation-invariant distance. Other distance determinations may be used as well.
The common Hamming distance may be used, as represented in equation 8, to determine the distance between a codebook code and a code associated with a camera image.
$Dist_{ham}(C_1, C_2) = Ham(C_1, C_2)$  (8)
Another type of distance that may be used is a rotation-invariant distance. The benefit of using a rotation-invariant distance is that the rotation of the camera is accounted for, as shown in equation 9.
$Dist_{r-i}(C_1, C_2) = \min_{d=0,\ldots,S-1}\left(Ham(C_1, Rot(C_2, d))\right)$  (9)
where $Rot(C_I^{rad}, d)$ is defined as set forth in equation 10.
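A sketch of the rotation-invariant distance of equation 9 follows, assuming the angular index s runs along the columns of the code matrix so that $Rot(C, d)$ is a cyclic column shift (consistent with the observation above that rotation of the camera image translates to shifting):

```python
import numpy as np

def dist_rot_invariant(C1, C2):
    """Rotation-invariant distance (equation 9): the smallest Hamming
    distance over all S cyclic shifts of the angular axis of C2."""
    S = C2.shape[1]                   # columns index the sampling angle s
    return min(
        int(np.count_nonzero(C1 != np.roll(C2, d, axis=1)))
        for d in range(S)
    )
```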
Codebook Generation
The codebook stores the codes relating to sub-images taken from an image, associated with their locations in the image. The codebook may be created before capturing images with camera 403. Alternatively, the codebook may be created, or at least augmented, during use. For example, the camera may pick up images during operation. If the user only writes on existing information, as shown in figure elements 501, 601, and 701, then the codebook may be used as presently described to determine the location of the image captured by the camera. However, if the user writes new annotations, the codebook will not be as accurate as it could be. Accordingly, when new annotations are added by pen 401, these annotations may be incorporated back into the codebook so that future annotations will be more accurately correlated with their on-screen representation.
Codebook generation may proceed as follows. The sub-images shown in 503, 603, and 703 are encoded by an encoding method. Next, the position-code pairs are organized to create the codebook. At least two types of organization methods may be used to create the codebook; of course, other methods may be used as well. These two methods are given as illustrative examples only.
The first method is to place the position-code pairs into a linear list in which each node contains a code and a position sequence of all positions mapped to that code. The codebook then may be represented as equation 11:
$\Omega = \{\psi_i,\ i = 1, 2, \ldots, N_\Omega\}$  (11)
where $\psi$ is defined as $\psi = \{C_\psi, P_\psi\}$, and $P_\psi$ is the set of all positions in the document bitmap whose code is $C_\psi$, as shown in equation 12:
$P_\psi = \{p_i \mid \text{the code at position } p_i \text{ is } C_\psi,\ i = 1, 2, \ldots\}$.  (12)
Next, the set Ω may be sorted alphabetically by the code of each member ψ, and the codebook of the linear list type is thereby obtained.
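A sketch of this linear-list organization, assuming position-code pairs as produced earlier; serializing each code to bytes so it can serve as a sort key is an implementation convenience, not part of the specification:

```python
from collections import defaultdict

def build_linear_codebook(pairs):
    """Group position-code pairs into nodes psi = (C_psi, P_psi) and
    sort the list by code, giving the linear-list codebook of eq. 11."""
    by_code = defaultdict(list)
    for code, pos in pairs:
        key = code.tobytes() if hasattr(code, "tobytes") else code
        by_code[key].append(pos)    # P_psi: all positions with this code
    # Sorting by code is the "alphabetical" ordering described above.
    return sorted(by_code.items())
```

A sorted list permits binary search for exact code matches; near matches still require distance comparisons against neighboring entries.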
The second method is to place the codes with their corresponding locations into a binary tree.
The binary tree may be based on the Hamming distance between codes.
The code set C is then split into two subsets, $\Omega_0$ and $\Omega_1$. The contents of $\Omega_0$ may be represented by equation 12 and the contents of $\Omega_1$ by equation 13:
$\Omega_0 = \{\psi_i \mid Dist(C_{\psi_i}, C_A) < Dist(C_{\psi_i}, C_B)\}$  (12)
$\Omega_1 = \{\psi_i \mid \psi_i \notin \Omega_0\}$  (13)
where $C_A$ is the code with the maximum distance to the centroid and $C_B$ is the code with the maximum distance to $C_A$, as described below.
Next, for subsets $\Omega_0$ and $\Omega_1$, the steps of finding the centroid, finding the code with the maximum distance to the centroid, finding the code with the maximum distance to that farthest code, and then splitting the subset are repeated until the number of members of the subset is 1.
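A sketch of this recursive splitting, assuming an L1 distance (which equals the Hamming distance on 0/1 code matrices and extends naturally to the real-valued centroid); the termination guard for identical codes is an added assumption, and all names are illustrative:

```python
import numpy as np

def l1_dist(c1, c2):
    """L1 distance; equals the Hamming distance on 0/1 code matrices."""
    return float(np.abs(np.asarray(c1, float) - np.asarray(c2, float)).sum())

def split_codes(nodes, dist=l1_dist):
    """Recursively split (code, positions) nodes: find the centroid,
    the code C_A farthest from it, the code C_B farthest from C_A,
    then partition each node to whichever of C_A / C_B is nearer."""
    if len(nodes) <= 1:
        return nodes                                   # leaf of the tree
    centroid = np.mean([np.asarray(c, float) for c, _ in nodes], axis=0)
    C_A = max(nodes, key=lambda n: dist(n[0], centroid))[0]
    C_B = max(nodes, key=lambda n: dist(n[0], C_A))[0]
    omega0, omega1 = [], []
    for n in nodes:
        (omega0 if dist(n[0], C_A) < dist(n[0], C_B) else omega1).append(n)
    if not omega0 or not omega1:                       # identical codes
        return nodes
    return (split_codes(omega0, dist), split_codes(omega1, dist))
```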
Candidate Search
The position candidates for a captured frame from the camera may be determined by searching the codebook. For each camera-captured frame $I_i$, the candidate positions of the pen tip 402 may be determined by encoding the frame and searching the codebook for the best-matched codes, as described below.
The choice among the best candidates may include a number of different analyses. First, the candidate most closely matching (minimum distance to code $E_{I_i}$) may be selected. Alternatively, the most recent set of candidates may be compared with the sequence of candidates from preceding frames over time. The resultant comparison should yield a series of position locations that are closest to each other. This result is expected, as it indicates that the pen tip moved as little as possible over time. The alternative result would indicate that the pen tip jumped around the page in a very short time, which is less probable.
For searching the codebook, the following may be used. For each captured image I with code $C_I$, the best-matched code set $S(C_I)$ is defined as equation 14:
$S(C_I) = \{\psi_i \mid Dist(C_I, C_{\psi_i}) < d_{thresh}\}$  (14)
If the radial code is used, the distance function should be $Dist_{r-i}(\cdot, \cdot)$.
Only the $N_{thresh}$ codes with the smallest distances in $S(C_I)$ are kept if $N_S > N_{thresh}$, where $N_S$ is the number of codes in $S(C_I)$. $d_{thresh}$ and $N_{thresh}$ are selected based on the camera performance.
The set of position candidates for image I may be represented by equation (15).
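A sketch combining equations 14 and 15 follows, assuming a codebook iterable of (code, positions) entries as in the linear-list sketch above and either distance function passed as `dist`; `d_thresh` and `n_thresh` correspond to $d_{thresh}$ and $N_{thresh}$, and all names are illustrative:

```python
def search_candidates(frame_code, codebook, dist, d_thresh, n_thresh):
    """Collect codebook entries within d_thresh of the captured frame's
    code (equation 14), keep only the n_thresh nearest, and return the
    union of their position sets as the position candidates (eq. 15)."""
    matched = []
    for code, positions in codebook:
        d = dist(frame_code, code)
        if d < d_thresh:
            matched.append((d, positions))
    matched.sort(key=lambda m: m[0])          # nearest codes first
    candidates = []
    for _, positions in matched[:n_thresh]:
        candidates.extend(positions)
    return candidates
```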
Although the invention has been defined using the appended claims, these claims are illustrative in that the invention is intended to include the elements and steps described herein in any combination or subcombination. Accordingly, there are any number of alternative combinations for defining the invention, which incorporate one or more elements from the specification, including the description, claims, and drawings, in various combinations or subcombinations. It will be apparent to those skilled in the relevant technology, in light of the present specification, that alternate combinations of aspects of the invention, either alone or in combination with one or more elements or steps defined herein, may be utilized as modifications or alterations of the invention or as part of the invention. It is intended that the written description of the invention contained herein covers all such modifications and alterations.