1. Field of the Invention
This invention relates to the field of warped video image processing. Specifically, embodiments of this invention include methods, apparatus, and computer program products for specifying objects in a warped video image and tracking the specified objects as they move through the video over time.
2. Background
A warped video image includes a set of video frames, each frame containing non-rectilinear data. Such a warped video image can be created by capturing an image through a wide-angle lens, a catadioptric lens, or some other distorting lens. The warped video image can be sent to a user/viewer who can then select portions of the data for transformation into a rectilinear form for presentation (such as by display).
Most video cameras only record a view within a small viewing angle. Thus, a typical video camera only captures an image in the direction that the camera is aimed. Such conventional cameras force viewers to look only at what the camera operator chooses to focus upon.
A video camera equipped with a wide-angle lens captures warped video images. For example, a panoramic camera constructed with a catadioptric lens captures a warped annular image that represents a substantially 360-degree scene that extends through a horizon line. Other wide-angle lenses (for example, a fish-eye lens) can generate warped circular images and may capture substantially a hemisphere (180-degree) view.
A portion of the warped video image can be transformed to provide a rectilinear representation of that portion of the warped video image. Thus, a viewer/user of the warped video image can individually select what portion of the warped video image is to be presented. As the user/viewer focuses on a moving object, the user/viewer must constantly adjust the presented view to track the moving object as it moves through the warped video image.
In some circumstances (for example, in a television broadcast control room), a user selects a view for presentation to other viewers. In this case, the viewer cannot independently select the view because the user has already selected it.
There are many known techniques for tracking moving objects in a video. Some of these are described by U.S. Pat. No. 5,548,659, by Okamoto, entitled Method and Apparatus for Detecting Changes in Dynamic Images; U.S. Pat. No. 5,877,804, by Otsuki et al., entitled Method and Apparatus for Moving Object Detection; U.S. Pat. No. 5,537,155, by O'Connell et al., entitled Method for Estimating Motion in a Video Sequence; U.S. Pat. No. 6,072,494, by Nguyen, entitled Method and Apparatus for Real-Time Gesture Recognition; and U.S. Pat. No. 5,434,617, by Bianchi, entitled Automatic Tracking Camera Control System. In addition, other teachings are found in Statistical Background Modeling for Tracking With a Virtual Camera, by Rowe and Blake; Divide and Conquer: Using Approximate World Models to Control View-Based Algorithms, by Bobick and Pinhanez; and Approximate World Models: Incorporating Qualitative and Linguistic Information into Vision Systems, by Pinhanez and Bobick. However, these techniques are not known to have been applied to tracking movement or objects through a warped video image, nor to other aspects of the inventions disclosed within.
Conventional images are often delivered by electronic means. For example, television and the Internet deliver conventional images across wired and wireless electronic media. However, there are no standard means of delivering real-time panoramic images or wide-angle images electronically. Since panoramic images are so large and include so much data, it is difficult to deliver these images using conventional image transmission techniques. To further compound the problem, real-time motion panoramic images require a very high bandwidth channel for electronic distribution.
It would be advantageous to provide methods, apparatus, systems and program product solutions that allow a user/viewer to select an object-of-interest and to track it in real-time as the object-of-interest moves through a warped video image. In addition it would be advantageous to allocate bandwidth to the portions of the warped video image that include the tracked object-of-interest. Furthermore, it would be advantageous to monitor when the object-of-interest moves into an area-of-interest and to respond to such an occurrence.
Apparatus, methods and computer program products are disclosed that track movement or a moving object through a warped video image. The warped video image can result from a video camera attached to a warping lens such as a wide-angle lens or panoramic lens. Some embodiments allow a user to select the portion of the image that interests the user for tracking. Other embodiments automatically select and track movement through the warped video image without input from a user. Still other embodiments track when movement comes into proximity with an area-of-interest and raise an alarm (a signal) that can be used to start recording the warped video image or to trigger other alarm or signal responses (for example, to trigger typical surveillance systems, to monitor traffic or the interest generated by a display or attraction, or to perform other operations responsive to the alarm). Yet other embodiments change the bandwidth allocated to portions of the warped video image sent over a network, responsive to the tracked movement, so that the moving object remains in a high-quality view.
One preferred embodiment is a computer-controlled method that includes a step of selecting a first view into a warped video image. The warped video image includes a plurality of frames. The method also includes the step of identifying an object-of-interest in a first set of frames of the plurality of frames and the step of tracking the object-of-interest through a subsequent set of frames of the plurality of frames.
Another preferred embodiment is a computing device that performs the steps of the computer-controlled method above.
Yet another preferred embodiment is a computer program product that provides computer-readable data that can be executed by a computer to perform the steps of the computer-controlled method above.
Versions of these embodiments respond to where the object-of-interest is in the view and can respond by triggering an alarm or signal condition or by changing the bandwidth allocated to portions of the warped video image.
The foregoing and many other aspects of the present invention will no doubt become obvious to those of ordinary skill in the art after having read the following detailed description of a preferred embodiment that is illustrated in the various drawing figures.
Notations and Nomenclature
The following ‘notations and nomenclature’ are provided to assist in the understanding of the present invention and the preferred embodiments thereof.
Procedure—A procedure is a self-consistent sequence of computerized steps that lead to a desired result. These steps are defined by one or more computer instructions. These steps can be performed by a computer executing the instructions that define the steps. Thus, the term “procedure” can refer (for example, but without limitation) to a sequence of instructions, a sequence of instructions organized within a programmed-procedure or programmed-function, or a sequence of instructions organized within programmed-processes executing in one or more computers. Such a procedure can also be implemented directly in circuitry that performs the steps.
Overview
U.S. Pat. No. 6,043,837 is incorporated by reference in its entirety. This patent discloses, among other things, at least one embodiment of a panoramic lens, the annular image captured by the panoramic lens, and at least one embodiment for sending a warped video image from a data source (such as a server computer) to a client (such as a client computer).
Detailed Description
One skilled in the art will understand that not all of the displayed features of the computer system 100 are required in some of the embodiments. Such a one will also understand that computers are used in many devices to cause the device to operate. These extend from toasters to navigational devices to televisions and beyond. The inventors' use of the term “computer system” includes any system that has circuitry that performs the steps of the invention as well as any general-purpose programmed computer. In addition, aspects of the network 123 can include a broadcast network, a wireless network, the Internet, a LAN, a WAN, a telephone network or other network.
One skilled in the art will understand that the program and data 119 can also reside on the DVD medium 127.
One preferred embodiment is used to track images through a panoramic video image. One skilled in the art will understand that the techniques described within can also be applied to other warped video images.
The tracking process 200 can be used by a viewer of the warped video image and/or it can be used by a human or machine user. A user may or may not also be a viewer. However, a user can select an object for subsequent tracking for viewing by viewers who do not have access to the tracking process 200.
The tracking process 200 initiates at a ‘start’ terminal 201 and continues to an ‘initialization’ procedure 203 that initializes the tracking process 200. After initialization, the tracking process 200 continues to a ‘receive panoramic video frame’ procedure 205. The ‘receive panoramic video frame’ procedure 205 receives a video frame that contains panoramic data. The panoramic data can be in a warped annular form, in an un-warped form, or some other form that contains information from a portion of and/or the entirety of the panorama. Portions of the panoramic data can be of differing quality as discussed in the parent application. The video frame can be received over the network 123, from reading a CD or DVD on the DVD/CD-ROM drive unit 115, from the video camera 133, or from any other device that can provide panoramic video frames. After receiving the video frame, the tracking process 200 continues to a ‘tracking decision’ procedure 207.
The ‘tracking decision’ procedure 207 determines whether automatic tracking is enabled. Automatic tracking is enabled by operations performed by a user/viewer as is subsequently described with respect to FIG. 3A. If automatic tracking is not enabled, the tracking process 200 continues to an ‘apply user/viewer transformations’ procedure 209 that applies one or more user/viewer specified transformations to the warped video frame. Examples of such transformations include coordinate transformations, pan, rotation, zoom or other transformations that generate a view into the warped video frame.
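The following sketch illustrates one way such a view-generating transformation could be implemented, assuming a simple polar unwrapping of an annular panorama; the geometry and all parameter names (center, inner/outer radius, pan angle, field of view) are illustrative assumptions rather than the specific transformations used by a particular embodiment.

```python
import numpy as np

def unwarp_view(annular, center, r_inner, r_outer, pan_deg, fov_deg, out_w, out_h):
    """Sample a rectilinear-looking strip from a warped annular frame.

    A minimal sketch of one possible pan/zoom transformation; the simple
    polar unwrapping used here is an assumption, not the patent's transform.
    """
    cy, cx = center
    # Horizontal axis of the output spans the requested field of view at the pan angle.
    angles = np.deg2rad(pan_deg + np.linspace(-fov_deg / 2, fov_deg / 2, out_w))
    # Vertical axis of the output spans the annulus between the two radii.
    radii = np.linspace(r_inner, r_outer, out_h)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip((cy + rr * np.sin(aa)).astype(int), 0, annular.shape[0] - 1)
    xs = np.clip((cx + rr * np.cos(aa)).astype(int), 0, annular.shape[1] - 1)
    return annular[ys, xs]  # nearest-neighbour sampling of the view
```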
Once the transformations are applied, a ‘store view in video frame’ procedure 211 stores the transformed data into a video frame suitable for presentation. Once stored in a video frame, the view can be (for example, but without limitation) presented to the viewer, transmitted to the client computer for presentation or storage, or stored on some computer-readable or video-readable media.
An ‘accept user/viewer commands’ procedure 213 accepts any commands that the user/viewer may have input. These commands can be input to the server computer or the client computer by manipulation of devices that communicate to the computer or by programs executed by a computer. These commands can include presentation commands that create or modify transformations that are applied to the warped video frame to generate a view. Examples (but without limitation) of such commands include pan, rotation and zoom commands. Once the user command is accepted, it is processed by either a ‘process presentation command’ procedure 215 or a ‘select view; enable tracking’ procedure 217. One skilled in the art will understand that the ‘process presentation command’ procedure 215 and the ‘select view; enable tracking’ procedure 217 are both procedures for processing commands and that they are illustrated separately to simplify the following discussion.
The ‘process presentation command’ procedure 215 parses the user's command and creates and/or modifies transformations that are applied to the warped video frame to generate a view.
The ‘select view; enable tracking’ procedure 217 allows the user/viewer to select a view containing an object-of-interest and to enable tracking as is subsequently described with respect to FIG. 3A and FIG. 3B.
An ‘ancillary operations’ procedure 218 can be used to perform operations responsive to the current view, as is subsequently described with respect to the ‘ancillary operation’ process 500.
The tracking process 200 continues to the ‘receive panoramic video frame’ procedure 205 for processing of the next warped video frame. The tracking process 200 continues until all the warped video frames are processed, the user/viewer disables or terminates the tracking process 200, or some other termination condition occurs.
Looking again at the ‘tracking decision’ procedure 207, if tracking is enabled, the tracking process 200 continues to a ‘generate tracker transformations’ procedure 219 that generates the transformations used to follow the object-of-interest, as is subsequently described with respect to the ‘generate tracker transformation’ process 400.
One skilled in the art will understand that the tracking process 200 can be implemented using (for example, but without limitation) object-oriented programming practices, procedural programming practices, single or multiple threads of execution; and can be implemented using a general purpose computer, specialized circuitry or some combination of each.
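As one illustration of the control flow just described, the following sketch arranges the procedures of the tracking process 200 into a single frame loop. The class and callable names are hypothetical stand-ins supplied by the caller; they are not names defined by this description.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class TrackingLoop:
    """Control-flow skeleton for the tracking process 200 (illustrative only)."""
    generate_tracker_transforms: Callable   # procedure 219
    user_transforms: Callable                # procedure 209
    apply_transforms: Callable
    store_view: Callable                     # procedure 211
    next_command: Callable                   # procedure 213; returns None, the
                                             # string "enable-tracking", or a new
                                             # transformation callable
    ancillary_operations: Callable           # procedure 218
    tracking_enabled: bool = False

    def run(self, frames: Iterable):
        for frame in frames:                             # procedure 205
            if self.tracking_enabled:                    # decision 207
                transforms = self.generate_tracker_transforms(frame)
            else:
                transforms = self.user_transforms()
            view = self.apply_transforms(frame, transforms)
            self.store_view(view)
            command = self.next_command()
            if command == "enable-tracking":             # procedure 217
                self.tracking_enabled = True
            elif command is not None:                    # procedure 215: a presentation
                self.user_transforms = command           # command modifies the transforms
            self.ancillary_operations(view)
```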
In another preferred embodiment, the ‘select view and enable tracking’ process 300 can be configured to select moving pixels without user/viewer selection. In this embodiment, each frame is scanned to determine which pixels moved over time and one or more groups of pixels are automatically selected for tracking.
Once the view is selected (with or without the object box) the ‘select view and enable tracking’ process 300 continues to a ‘capture view model’ procedure 305 that processes the view and initializes one or more models that are used to track the object-of-interest as it moves through the panorama. The ‘capture view model’ procedure 305 is subsequently described with respect to FIG. 3B.
An ‘enable tracking’ procedure 307 enables tracking in the tracking process 200. The ‘select view and enable tracking’ process 300 completes through an ‘end’ terminal 309.
Once an object box is defined the ‘capture model’ process 320 continues to an ‘iterate models’ procedure 329 that passes the view and object box to at least one of the available models for initialization. Once the models are initialized the ‘select view and enable tracking’ process 300 continues to an ‘end’ terminal 331.
Looking again at the ‘iterate models’ procedure 329: as each model is iterated, a ‘determine model parameters and weight’ procedure 333 initializes the model with respect to the object box and the view. The ‘determine model parameters and weight’ procedure 333 can operate by evaluating the pixels in the current and a subset of the previous frames to determine which models are better suited for tracking the object-of-interest represented by a group of pixels (a blob). Thus, a set of models is enabled based on the characteristics of a blob in the object box and, in some cases, how those characteristics have changed over a subset of the past frames. Each of the enabled models has an associated “weight value” that represents how well the model expects to be able to track the particular characteristics of the blob in the object box. For example, a pixel motion model (such as the ‘pixel motion tracker’ process 430 described subsequently) can be assigned a higher weight when the blob exhibits motion between frames.
A blob is a data representation of a portion of the light reflected off the object-of-interest and captured by a panoramic video camera. In one preferred embodiment the blob can be defined by specifying some combination of its center (with respect to the warped video image), its size and/or shape, a pixel motion map, a color map, a texture map and/or an edge map, a set of eigenaxes with corresponding eigenweights that represent the pixel arrangement that makes up the blob, or other image identification mechanisms.
As the object-of-interest is captured, the ‘determine model parameters and weight’ procedure 333 also computes the quality of the model as applied to the blob. For example, if the color map of the object-of-interest is similar to the color map of the surroundings, if the blob is too small, or if the blob is motionless for a sufficient period of time, the corresponding color model, motion model, or texture model will be assigned a lower weight. If the blob specified by the user's object box yields models with insufficient weight, the model can be disabled or the model's result rejected. In addition, if all the models generate a low weight (meaning that the specified object cannot be tracked), the object is rejected and tracking is not enabled.
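A minimal sketch of these ideas follows, assuming a blob described by a normalised colour histogram and a per-pixel motion mask. The field names, thresholds, and scoring rules are illustrative assumptions; the point is only that each model receives a weight reflecting how well it should track the blob, and that models with insufficient weight are not enabled.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Blob:
    """One possible data representation of a blob (illustrative field names)."""
    center: tuple           # (row, col) in warped-image coordinates
    size: tuple             # (height, width) of the object box
    color_map: np.ndarray   # e.g. a normalised colour histogram of the box
    motion_map: np.ndarray  # per-pixel motion mask from recent frames

def determine_model_weights(blob, surroundings_color_map,
                            min_moving_pixels=64, min_color_distance=0.2):
    """Assign each candidate model a weight for this blob and enable the
    models whose weight is sufficient (cf. procedure 333)."""
    weights = {}
    # A motion model is trustworthy only if enough pixels in the box moved.
    moving = float(blob.motion_map.sum())
    weights["motion"] = min(1.0, moving / min_moving_pixels)
    # A colour model is trustworthy only if the blob's colours are
    # distinguishable from its surroundings.
    distance = float(np.abs(blob.color_map - surroundings_color_map).sum()) / 2.0
    weights["color"] = min(1.0, distance / min_color_distance)
    # Models below the cut-off are disabled; an empty result means the
    # specified object cannot be tracked and tracking is not enabled.
    return {name: w for name, w in weights.items() if w >= 0.5}
```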
Another preferred embodiment automatically selects an interesting area of the warped video image as the object-of-interest. It does so by detecting changes between adjacent video frames. These changes can include motion, changes in velocity, changes in texture, changes in color or some combination of these. In this embodiment, the user/viewer need not select a view containing the object-of-interest. Instead, the system automatically selects some interesting change. User/viewer preferences can be provided to prioritize the relative importance of the types of detected changes.
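A sketch of such automatic selection follows, using simple absolute frame differencing on grayscale frames and connected-component labelling to pick the largest changed region; the threshold values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def auto_select_object_box(prev_frame, curr_frame, diff_threshold=25, min_pixels=50):
    """Automatically propose an object box around the largest changed region.

    `prev_frame` and `curr_frame` are assumed to be grayscale uint8 arrays.
    Returns (r0, c0, r1, c1) or None if nothing interesting changed.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > diff_threshold
    # Group changed pixels into candidate blobs and pick the largest one.
    labels, count = ndimage.label(changed)
    if count == 0:
        return None
    sizes = ndimage.sum(changed, labels, index=range(1, count + 1))
    best = int(np.argmax(sizes)) + 1
    if sizes[best - 1] < min_pixels:
        return None
    rows, cols = np.where(labels == best)
    return rows.min(), cols.min(), rows.max(), cols.max()
```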
Regardless of the results of the ‘model weight check’ decision procedure 402 the ‘generate tracker transformation’ process 400 eventually continues to an ‘iterate enabled models’ procedure 403 that iterates each enabled tracking model. For each iterated model, an ‘apply model’ procedure 405 is applied to the pixels in a region surrounding where the object-of-interest was in the previous frame. Examples (but without limitation) of the available models are subsequently described with respect to FIG. 4B and FIG. 4C.
After all the enabled tracking models have been applied to the current frame, a ‘select model’ procedure 407 can select a model result by comparing the results, weight and confidence of each of the enabled models and selecting a model dependent on the model's weight, confidence, and agreement to the other models. In addition, the ‘select model’ procedure 407 can combine the results from the enabled models (responsive to the model's weight and confidence, and agreement between models) to generate a model result that is different from any of the results returned by the enabled models.
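The sketch below shows one plausible combination rule for the ‘select model’ procedure 407: average the per-model position estimates weighted by each model's weight and confidence. The exact rule, and the way disagreement between models is handled, is an assumption.

```python
def combine_model_results(results):
    """Combine per-model estimates into a single model result.

    `results` maps a model name to (center, weight, confidence), where
    center is (row, col). Returns (combined_center, confidence).
    """
    total = sum(w * c for (_, w, c) in results.values())
    if total == 0:
        return None, 0.0
    row = sum(center[0] * w * c for (center, w, c) in results.values()) / total
    col = sum(center[1] * w * c for (center, w, c) in results.values()) / total
    # Keep the best single-model confidence as the overall confidence;
    # a refinement could discount it when the models disagree strongly.
    confidence = max(c for (_, _, c) in results.values())
    return (row, col), confidence
```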
An ‘update model history’ procedure 411 updates the history for all the enabled modules. The model result from the ‘select model’ procedure 407 is used to update each enabled model. Thus, each enabled model has access to the current position of the object as represented by the model result as well as its own result. The history can include the last location of the blob in the warped video image, the last size of the blob, the velocity vector of the blob over the last ‘n’ seconds (such as captured by a Kalman filter), and the change in the size of the blob in the last ‘n’ seconds (again such as captured by a Kalman filter). One skilled in the art will understand that filters other than a Kalman filter can be used. In addition, the ‘update model history’ procedure 411 can cause the models to reevaluate their weight based on the model result from the ‘select model’ procedure 407.
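As a stand-in for the Kalman filter mentioned above, the following alpha-beta (g-h) filter keeps the same kind of history: the blob's last position and a smoothed velocity estimate. It is one of the "filters other than a Kalman filter" that could be used; the gain values are illustrative.

```python
class PositionVelocityFilter:
    """A small alpha-beta filter tracking one coordinate of the blob."""

    def __init__(self, position, alpha=0.85, beta=0.05):
        self.position = float(position)
        self.velocity = 0.0
        self.alpha, self.beta = alpha, beta

    def update(self, measured_position, dt=1.0):
        # Predict where the blob should be, then correct toward the measurement.
        predicted = self.position + self.velocity * dt
        residual = measured_position - predicted
        self.position = predicted + self.alpha * residual
        self.velocity = self.velocity + self.beta * residual / dt
        return self.position, self.velocity
```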
Once the ‘select model’ procedure 407 has determined the model result, a ‘generate transformations’ procedure 413 generates the transforms that, when applied to the panorama, will position the object-of-interest substantially in the center (or other user/viewer specified location) of the view while optionally maintaining the object-of-interest at substantially the same size. The ‘select model’ procedure 407 also generates a confidence value that indicates a certainty that the object-of-interest (represented by the blob) was successfully tracked. This confidence value is passed to a ‘view confidence sufficient’ decision procedure 415 that determines whether the confidence value is sufficient to continue tracking. If so, the ‘generate tracker transformation’ process 400 completes through an ‘end’ terminal 417 and the transformations are applied at the ‘apply tracker transformations’ procedure 221 of FIG. 2. However, if the confidence level is insufficient the ‘generate tracker transformation’ process 400 continues to a ‘recovery’ procedure 419.
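A sketch of the ‘generate transformations’ idea follows: compute a pan offset that moves the blob to the requested view position and a zoom factor that keeps it at roughly its original size. How these values are turned into transformations of the warped image is lens-specific and is not shown; the parameter names are illustrative.

```python
def centering_transform(blob_center, blob_size, view_center, target_size):
    """Return pan/zoom values that keep the tracked blob centered and sized.

    blob_center and view_center are (row, col); blob_size and target_size
    are (height, width). The result is a plain dict for illustration.
    """
    pan_rows = view_center[0] - blob_center[0]
    pan_cols = view_center[1] - blob_center[1]
    zoom = 1.0
    if blob_size[0] > 0 and blob_size[1] > 0:
        zoom = min(target_size[0] / blob_size[0], target_size[1] / blob_size[1])
    return {"pan": (pan_rows, pan_cols), "zoom": zoom}
```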
The ‘recovery’ procedure 419 can disable tracking (presenting an indication to the user/viewer where the object-of-interest was expected to be and where the object-of-interest was last known to be), and/or it can attempt to recover the object-of-interest in subsequent frames to handle the case where, for example, the object-of-interest has been occluded for a short time, and/or it can accept user/viewer input to help recapture the object-of-interest. After attempting recovery, the ‘generate tracker transformation’ process 400 completes through the ‘end’ terminal 417.
The user/viewer can specify a threshold confidence value below which the user/viewer is notified that the tracking of the object-of-interest may have failed, that the tracker is attempting recovery, or that two similar blobs have been found.
Looking again at the ‘model weight check’ decision procedure 402, if the ‘model weight check’ decision procedure 402 determines that it should reevaluate and re-weight a selection of the available disabled models (the weights of the enabled models can be updated when they execute and/or at the ‘update model history’ procedure 411), the ‘generate tracker transformation’ process 400 continues to an ‘enable weight check’ procedure 421 that enables selected models. The ‘model weight check’ decision procedure 402 periodically (after some number of frames, after a period of time, or in response to an event) invokes the ‘enable weight check’ procedure 421. If a model is enabled for weight check, it will be applied to the current frame with the other enabled models. The weight-check-enabled model will re-evaluate its weight with respect to the current information in the object box, and if the re-evaluated weight is sufficient, the model will enable itself. In a preferred embodiment, there can be a limit on the number of models that can be enabled at any given time. In addition, there can be a limit on the number of models that are weight-check-enabled.
The region includes the position of the object-of-interest in the previous frame (for example, the region can be centered on this position). The size of the region is determined by the past history of the motion and size of the object-of-interest such that the region includes (but is not limited to) both the position of the object-of-interest in the previous frame and the expected position of the object-of-interest in the current frame.
Next, a ‘median filter’ procedure 433 filters out isolated pixel differences within the region without substantially altering the motion characteristics of the pixels within the region that may represent the object-of-interest.
Using the motion map, a ‘motion’ decision procedure 435 determines whether the object-of-interest is in motion. If no motion is detected, the ‘pixel motion tracker’ process 430 continues to a ‘determine view confidence’ procedure 437 that evaluates the confidence value for the view. The ‘determine view confidence’ procedure 437 can (for example, but without limitation) reduce the confidence value when the object box is too small or remains motionless for too long, reduce the confidence value when the boundary of pixels representing the object-of-interest changes shape, color, texture, and/or when the expected position of pixels representing the object-of-interest is sufficiently different from the actual position of pixels representing the object-of-interest.
An ‘update motion history’ procedure 439 maintains any model-specific history required by the operation of the model. This history can include (among others) previous motion maps, motion vectors, the search region, size and position of the object box, the previous image, and environmental information such as three-dimensional information about motionless objects in the warped video image. The ‘update motion history’ procedure 439 can also adjust the weight of the motion tracker model using the same or similar techniques as used in the ‘determine model parameters and weight’ procedure 333.
The ‘pixel motion tracker’ process 430 completes through an ‘end’ terminal 441.
Looking again at the ‘motion’ decision procedure 435, if motion is detected, the ‘pixel motion tracker’ process 430 continues to a ‘find bounding box around motion’ procedure 445 that updates the position and size of the object box around the object-of-interest. In the circumstance where the motion has substantially no transverse component with respect to the warped video image (for example, motion directly toward or away from the camera), the object box changes size but does not change position. Where there is a transverse component, the object box will change position and possibly size. The ‘pixel motion tracker’ process 430 continues through the ‘determine view confidence’ procedure 437 and eventually completes through the ‘end’ terminal 441.
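The following sketch combines the steps of the pixel motion tracker described above: difference the region around the previous position, remove isolated pixel differences with a median filter, and return a bounding box around the remaining motion (or nothing if no motion is detected). Frames are assumed to be grayscale arrays, and the threshold and filter size are illustrative.

```python
import numpy as np
from scipy import ndimage

def pixel_motion_step(prev_frame, curr_frame, region, diff_threshold=20):
    """One pass of a pixel-motion tracker over a search region.

    `region` is (r0, c0, r1, c1) around the previous object position.
    Returns an updated object box or None when no motion is detected.
    """
    r0, c0, r1, c1 = region
    diff = np.abs(curr_frame[r0:r1, c0:c1].astype(np.int16)
                  - prev_frame[r0:r1, c0:c1].astype(np.int16))
    motion_map = (diff > diff_threshold).astype(np.uint8)
    # Remove isolated pixel differences (cf. the 'median filter' procedure 433).
    motion_map = ndimage.median_filter(motion_map, size=3)
    if motion_map.sum() == 0:
        return None                       # no motion detected (decision 435)
    # Bounding box around the remaining motion (cf. procedure 445).
    rows, cols = np.where(motion_map > 0)
    return (r0 + rows.min(), c0 + cols.min(), r0 + rows.max(), c0 + cols.max())
```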
The ‘color tracker’ process 450 initiates at a ‘start’ terminal 451 and continues to a ‘blob prediction’ procedure 453. The ‘blob prediction’ procedure 453 uses the blob's history to predict the current position and size of the blob in the panorama. The blob's history can also be used to predict changing color relationships within the blob by monitoring the color map history. Next, a ‘blob search’ procedure 455 attempts to match the blob with the data in the current frame. The search starts at the blob's predicted position (using the blob's predicted size and color map) and, if the blob is not immediately found at the predicted position, the ‘blob search’ procedure 455 will search around the predicted position or trace back to the previously known position to attempt to locate the blob. A ‘generate confidence’ procedure 457 then generates a value that represents how confident the model is that it was able to find the blob. An ‘update history’ procedure 459 can then adjust the model's weight depending on how well the model was able to track the blob. Finally, the ‘color tracker’ process 450 completes through an ‘end’ terminal 461.
The ‘update history’ procedure 459 maintains any model-specific history required by the operation of the model. This history can include (among others), color maps, motion vectors, the search region, size and position of the object box, and environmental information such as three-dimensional information about motionless objects in the warped video image. The ‘update history’ procedure 459 can also adjust the weight of the color tracker model using the same or similar techniques as used in the ‘determine model parameters and weight’ procedure 333 (for example, by how well the blob is distinguishable from its surroundings).
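A sketch of the colour tracker's search follows, using a coarse RGB histogram as the colour map and histogram intersection as the match score; the search pattern, bin counts, and similarity measure are assumptions, and the returned score can serve as the model's confidence value.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Coarse RGB histogram of an image patch, normalised to sum to 1."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist / max(hist.sum(), 1.0)

def color_search(frame, reference_hist, predicted_center, box,
                 search_radius=16, step=4):
    """Search around the predicted position for the best colour match.

    `frame` is an RGB array, `box` is the (height, width) of the object box,
    and `reference_hist` is the blob's colour map (e.g. from color_histogram).
    Returns (best_center, score); the score acts as a confidence value.
    """
    half_h, half_w = box[0] // 2, box[1] // 2
    best_score, best_center = -1.0, predicted_center
    for dr in range(-search_radius, search_radius + 1, step):
        for dc in range(-search_radius, search_radius + 1, step):
            r, c = predicted_center[0] + dr, predicted_center[1] + dc
            patch = frame[max(r - half_h, 0): r + half_h,
                          max(c - half_w, 0): c + half_w]
            if patch.size == 0:
                continue
            # Histogram intersection: 1.0 is a perfect colour match.
            score = np.minimum(color_histogram(patch), reference_hist).sum()
            if score > best_score:
                best_score, best_center = score, (r, c)
    return best_center, best_score
```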
Similar models can be created following the same or similar structure as shown in FIG. 4B. These models can include a texture model, a shape model, an edge model, an eigen model, etc.
However, if the view has changed, the ‘ancillary operation’ process 500 continues to a ‘change bandwidth allocation for new view’ procedure 507 that determines whether the changed view has been using lower-quality data to generate the view and, if so, allocates more bandwidth to the portion of the data currently used to generate the view.
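The sketch below illustrates one way such a reallocation could work, assuming the panorama is divided into tiles whose quality levels are known; the tile representation and the split of bandwidth between view and non-view tiles are illustrative assumptions.

```python
def reallocate_bandwidth(tile_quality, view_tiles, total_bandwidth, view_share=0.7):
    """Shift bandwidth toward the tiles used to generate the current view.

    `tile_quality` maps tile ids to "low" or "high"; `view_tiles` lists the
    tiles now used to generate the view. Returns a new per-tile allocation,
    or None if the view is already served with high-quality data.
    """
    needs_upgrade = [t for t in view_tiles if tile_quality.get(t) == "low"]
    if not needs_upgrade:
        return None
    per_view_tile = view_share * total_bandwidth / len(view_tiles)
    other_tiles = [t for t in tile_quality if t not in view_tiles]
    per_other_tile = ((1 - view_share) * total_bandwidth / len(other_tiles)
                      if other_tiles else 0.0)
    return {t: (per_view_tile if t in view_tiles else per_other_tile)
            for t in tile_quality}
```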
Regardless of which branch of the ‘view change’ decision procedure 503 and the ‘extended stationary view’ decision procedure 504 is taken, the ‘ancillary operation’ process 500 eventually continues to an ‘area monitoring’ decision procedure 508. The ‘area monitoring’ decision procedure 508 determines whether an area monitoring capability is enabled as is subsequently described.
If area monitoring is not enabled, the ‘ancillary operation’ process 500 continues to an ‘other ancillary adjustments’ procedure 509 that implements other enhancements that can be responsive to the state of change of the object-of-interest in the panorama. Finally, the ‘ancillary operation’ process 500 completes through an ‘end’ terminal 511.
Looking again at the ‘area monitoring’ decision procedure 508: if area monitoring is enabled, the ‘ancillary operation’ process 500 continues to an ‘alarm condition’ decision procedure 513 that determines whether the object-of-interest has come into proximity with an area-of-interest for which intrusions are monitored. If an alarm condition is detected, the ‘ancillary operation’ process 500 continues to an ‘alarm response’ procedure 515 that responds to the alarm. The ‘alarm response’ procedure 515 can respond to the alarm in a variety of ways. Some of these responses include starting a recording of the warped video image or view, presenting the view containing the area-of-interest (or areas) and the triggering object on a computer monitor, television or other presentation or display device, causing audible and/or visual alarms, and invoking a surveillance response from a security company or the police. The recorded views can be saved for later retrieval and playback.
One skilled in the art will understand from reading this description that an area-of-interest and an object-of-interest can be specified in very similar ways. The user/viewer can also specify the alarm type for intrusions into, or proximity to, each specific area-of-interest by any of the objects-of-interest. Furthermore, the area-of-interest can also be tracked (but generally not presented) and need not be stationary. Thus, for example, a moving area-of-interest (for example, a car) can be specified as well as an object-of-interest (for example, a dog). One embodiment would present the object-of-interest (for example, by display on a computer screen) but track both the object-of-interest and the area-of-interest. In this case, the embodiment would raise an alarm if the object-of-interest and the area-of-interest eventually came into proximity with each other.
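A minimal sketch of the proximity test and alarm response follows, representing both the tracked object-of-interest and the area-of-interest as bounding boxes; the margin parameter and the callback-style alarm response are illustrative assumptions.

```python
def boxes_in_proximity(object_box, area_box, margin=0):
    """Do two (r0, c0, r1, c1) boxes overlap or come within `margin` pixels?"""
    r0, c0, r1, c1 = object_box
    a0, b0, a1, b1 = area_box
    return not (r1 + margin < a0 or a1 + margin < r0 or
                c1 + margin < b0 or b1 + margin < c0)

def check_alarm(object_box, area_box, on_alarm):
    """Invoke the caller-supplied alarm response (e.g. start recording)
    when the tracked object comes into proximity with the monitored area."""
    if boxes_in_proximity(object_box, area_box):
        on_alarm()
        return True
    return False
```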
One preferred embodiment of the previously described models allows the user/viewer to specify that the tracker can wander to follow changes in the warped video image. Thus, the tracker would detect and follow interesting motion in the warped video image instead of following only one particular object-of-interest specified by the user/viewer. In this instance, there would be no “lost track” condition. Instead, when the object-of-interest was lost (or after a suitable time), the ‘recovery’ procedure 419 would find other motion and automatically track that motion. This can also be applied to the other models to wander from color change to color change, or texture change to texture change, or some combination thereof. These models track a particular condition (motion, color, texture, etc.) for a limited time (for example, for as long as it remains interesting or until a more interesting condition occurs).
One preferred embodiment includes the capability to select a second view into the warped video image and to identify and track a second object-of-interest while still tracking the first. Thus, multiple objects of interest can be tracked through the warped video image.
Another preferred embodiment allows the object to be selected on a client computer and to have the tracking done by the server computer.
When using a stationary single video camera with a panoramic lens, one skilled in the art will understand that detecting and tracking movement or moving objects is much simplified relative to the prior art. In particular, because the background does not move in the image (because the camera does not move), movement is simpler to detect. This advantage also extends over the use of multiple stationary cameras covering an area, because there is no need to identify and calibrate the overlapping portions of the images from the multiple cameras, nor any need to track movement that exits the view of one camera and enters the view of another.
From the foregoing, it will be appreciated that the invention has (without limitation) the following advantages:
Although the present invention has been described in terms of the presently preferred embodiments, one skilled in the art will understand that various modifications and alterations may be made without departing from the scope of the invention. In particular, the order of the programming steps as described are not intended to be limiting. For example, but without limitation, the inventors contemplate practicing the invention using multiple threads of execution, multiple processors, and object-oriented programming practices. In addition, the scope of the invention includes the addition of or replacement of the previously described models. Accordingly, the scope of the invention is not to be limited to the particular invention embodiments discussed herein.
This application is a continuation-in-part of U.S. patent application Ser. No. 09/589,645, filed Jun. 7, 2000, entitled Method and Apparatus for Electronically Distributing Motion Panoramic Images, which is a continuation-in-part of U.S. patent application Ser. No. 09/131,186, filed Aug. 7, 1998, also entitled Method and Apparatus for Electronically Distributing Motion Panoramic Images. Both of these applications are incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
2146662 | Van Albada | Feb 1939 | A |
2244235 | Ayres | Jun 1941 | A |
2304434 | Ayres | Dec 1942 | A |
2628529 | Braymer | Feb 1953 | A |
2654286 | Cesar | Oct 1953 | A |
3203328 | Brueggeman | Aug 1965 | A |
3205777 | Benner | Sep 1965 | A |
3229576 | Rees | Jan 1966 | A |
3692934 | Herndon | Sep 1972 | A |
3723805 | Scarpino et al. | Mar 1973 | A |
3785715 | Mecklenborg | Jan 1974 | A |
3832046 | Mecklenborg | Aug 1974 | A |
3846809 | Pinzone et al. | Nov 1974 | A |
3872238 | Herndon | Mar 1975 | A |
3934259 | Krider | Jan 1976 | A |
3998532 | Dykes | Dec 1976 | A |
4012126 | Rosendahl et al. | Mar 1977 | A |
4017145 | Jerie | Apr 1977 | A |
4038670 | Seitz | Jul 1977 | A |
4058831 | Smith | Nov 1977 | A |
4078860 | Globus et al. | Mar 1978 | A |
4157218 | Gordon et al. | Jun 1979 | A |
4190866 | Lukner | Feb 1980 | A |
4241985 | Globus et al. | Dec 1980 | A |
D263716 | Globus et al. | Apr 1982 | S |
4326775 | King | Apr 1982 | A |
4395093 | Rosendahl et al. | Jul 1983 | A |
4429957 | King | Feb 1984 | A |
4463380 | Hooks, Jr. | Jul 1984 | A |
4484801 | Cox | Nov 1984 | A |
4518898 | Tarnowski et al. | May 1985 | A |
4549208 | Kamejima et al. | Oct 1985 | A |
4561733 | Kreischer | Dec 1985 | A |
4566763 | Greguss | Jan 1986 | A |
4578682 | Hooper et al. | Mar 1986 | A |
4593982 | Rosset | Jun 1986 | A |
4602857 | Woltz et al. | Jul 1986 | A |
4656506 | Ritchey | Apr 1987 | A |
4661855 | Gulck | Apr 1987 | A |
4670648 | Hall et al. | Jun 1987 | A |
4728839 | Coughlan et al. | Mar 1988 | A |
4736436 | Yasukawa et al. | Apr 1988 | A |
4742390 | Francke et al. | May 1988 | A |
4751660 | Hedley | Jun 1988 | A |
4754269 | Kishi et al. | Jun 1988 | A |
4761641 | Schreiber | Aug 1988 | A |
4772942 | Tuck | Sep 1988 | A |
4797942 | Burt et al. | Jan 1989 | A |
4807158 | Blanton et al. | Feb 1989 | A |
4835532 | Fant | May 1989 | A |
4858002 | Zobel | Aug 1989 | A |
4858149 | Quarendon | Aug 1989 | A |
4864335 | Corrales | Sep 1989 | A |
4868682 | Shimizu et al. | Sep 1989 | A |
4899293 | Dawson et al. | Feb 1990 | A |
4901140 | Lang et al. | Feb 1990 | A |
4907084 | Nagafusa | Mar 1990 | A |
4908874 | Gabriel | Mar 1990 | A |
4918473 | Blackshear | Apr 1990 | A |
4924094 | Moore | May 1990 | A |
4943821 | Gelphman et al. | Jul 1990 | A |
4943851 | Lang et al. | Jul 1990 | A |
4945367 | Blackshear | Jul 1990 | A |
4965844 | Oka et al. | Oct 1990 | A |
D312263 | Charles | Nov 1990 | S |
4974072 | Hasegawa | Nov 1990 | A |
4985762 | Smith | Jan 1991 | A |
4991020 | Zwirn | Feb 1991 | A |
5005083 | Grage et al. | Apr 1991 | A |
5020114 | Fujioka et al. | May 1991 | A |
5021813 | Corrales | Jun 1991 | A |
5023725 | McCutchen | Jun 1991 | A |
5038225 | Maeshima | Aug 1991 | A |
5040055 | Smith | Aug 1991 | A |
5048102 | Tararine | Sep 1991 | A |
5067019 | Juday et al. | Nov 1991 | A |
5068735 | Tuchiya et al. | Nov 1991 | A |
5077609 | Manephe | Dec 1991 | A |
5083389 | Alperin | Jan 1992 | A |
5097325 | Dill | Mar 1992 | A |
5115266 | Troje | May 1992 | A |
5130794 | Ritchey | Jul 1992 | A |
5142354 | Suzuki et al. | Aug 1992 | A |
5153716 | Smith | Oct 1992 | A |
5157491 | Kassatly | Oct 1992 | A |
5166878 | Poelstra | Nov 1992 | A |
5173948 | Blackham et al. | Dec 1992 | A |
5175808 | Sayre | Dec 1992 | A |
5185667 | Zimmermann | Feb 1993 | A |
5187571 | Braun et al. | Feb 1993 | A |
5189528 | Takashima et al. | Feb 1993 | A |
5200818 | Neta et al. | Apr 1993 | A |
5231673 | Elenga | Jul 1993 | A |
5259584 | Wainwright | Nov 1993 | A |
5262852 | Eouzan et al. | Nov 1993 | A |
5262867 | Kojima | Nov 1993 | A |
5280540 | Addeo et al. | Jan 1994 | A |
5289312 | Hashimoto et al. | Feb 1994 | A |
5305035 | Schonherr et al. | Apr 1994 | A |
5311572 | Freides et al. | May 1994 | A |
5313306 | Kuban et al. | May 1994 | A |
5315331 | Ohshita | May 1994 | A |
5341218 | Kaneko et al. | Aug 1994 | A |
5359363 | Kuban et al. | Oct 1994 | A |
5384588 | Martin et al. | Jan 1995 | A |
5396583 | Chen et al. | Mar 1995 | A |
5422987 | Yamada | Jun 1995 | A |
5432871 | Novik | Jul 1995 | A |
5434617 | Bianchi | Jul 1995 | A |
5444476 | Conway | Aug 1995 | A |
5446833 | Miller et al. | Aug 1995 | A |
5452450 | Delory | Sep 1995 | A |
5473474 | Powell | Dec 1995 | A |
5479203 | Kawai et al. | Dec 1995 | A |
5490239 | Myers | Feb 1996 | A |
5495576 | Ritchey | Feb 1996 | A |
5508734 | Baker et al. | Apr 1996 | A |
5530650 | Bifero et al. | Jun 1996 | A |
5537155 | O'Connell et al. | Jul 1996 | A |
5539483 | Nalwa | Jul 1996 | A |
5548659 | Okamoto | Aug 1996 | A |
5550646 | Hassan et al. | Aug 1996 | A |
5563650 | Poelstra | Oct 1996 | A |
5601353 | Naimark et al. | Feb 1997 | A |
5606365 | Maurinus et al. | Feb 1997 | A |
5610391 | Ringlien | Mar 1997 | A |
5612533 | Judd et al. | Mar 1997 | A |
5627675 | Davis et al. | May 1997 | A |
5631778 | Powell | May 1997 | A |
5633924 | Kaish et al. | May 1997 | A |
5649032 | Burt et al. | Jul 1997 | A |
5682511 | Sposato et al. | Oct 1997 | A |
5686957 | Baker et al. | Nov 1997 | A |
5714997 | Anderson et al. | Feb 1998 | A |
5729471 | Jain et al. | Mar 1998 | A |
5748194 | Chen | May 1998 | A |
5760826 | Nayar | Jun 1998 | A |
5761416 | Mandet et al. | Jun 1998 | A |
5764276 | Martin et al. | Jun 1998 | A |
5796426 | Gullichsen et al. | Aug 1998 | A |
5841589 | Davis et al. | Nov 1998 | A |
5844520 | Guppy et al. | Dec 1998 | A |
5850352 | Moezzi et al. | Dec 1998 | A |
5854713 | Kuroda et al. | Dec 1998 | A |
5877801 | Martin et al. | Mar 1999 | A |
5877804 | Otsuki et al. | Mar 1999 | A |
RE36207 | Zimmerman et al. | May 1999 | E |
5903319 | Busko et al. | May 1999 | A |
5920337 | Glassman et al. | Jul 1999 | A |
5990941 | Jackson et al. | Nov 1999 | A |
5991444 | Burt et al. | Nov 1999 | A |
5995095 | Ratakonda | Nov 1999 | A |
6002430 | McCall et al. | Dec 1999 | A |
6034716 | Whiting et al. | Mar 2000 | A |
6037988 | Gu et al. | Mar 2000 | A |
6043837 | Driscoll et al. | Mar 2000 | A |
6072494 | Nguyen | Jun 2000 | A |
6226035 | Korein et al. | May 2001 | B1 |
6252975 | Bozdagi et al. | Jun 2001 | B1 |
Number | Date | Country |
---|---|---|
1234341 | May 1960 | FR |
2 221 118 | Jan 1990 | GB |
2289820 | Nov 1995 | GB |
2-127877 | Nov 1988 | JP |
| Number | Date | Country
---|---|---|---
Parent | 09589645 | Jun 2000 | US
Child | 09659621 | | US
Parent | 09131186 | Aug 1998 | US
Child | 09589645 | | US