1. Field of the Invention
This invention relates generally to printers that have embedded functionality for printing time-based media, and in particular to a user interface for operating a printer that prints time-based media, producing a combination of a printed output and a related electronic data output.
2. Description of the Related Art
Conventional printers receive documents in a variety of formats and print the contents of the documents in accordance with a proper format. For example, a printer enabled to print Portable Document Format (PDF) documents will correctly recreate the original appearance of the document, regardless of the platform being used to view the document.
Today, as more databases and computer networks are interconnected, people often have multiple data systems and destinations in which to store information. For example, a person may receive an email containing information that he wants to retain. The person may want to print some or all of the information. The person may further want to add the information to a database, send the information to other people or destinations, or add the information to a web page. Currently, the person will need to execute several different software programs and will need to type multiple commands into the programs. He may also need to re-enter the information into one or more programs. This is not efficient and is prone to human error, since human beings occasionally forget to perform one or more of the tasks usually associated with a received document and are also prone to making typographical errors.
Some conventional printers incorporate a management function in which the printer monitors its own internal functions and displays an alert if, for example, its toner is low or it is out of paper. This action is based on the printer doing “self-monitoring,” not on any monitoring of the documents to be printed.
While conventional printers can print documents in a wide variety of formats, these printers are fundamentally limited in their ability to reproduce different kinds of media. For example, it is standard technology for a printer to produce images of static text, pictures, or a combination of the two. But because these printers print onto paper or another similar fixed medium, they cannot record the nuances of time-based media very well.
In developing a printer that is equipped to print time-based media without the limitations of conventional printers, a user interface for such a printer is needed. It is further desirable that such a user interface be operable with a printer that performs at least some of the necessary processing itself rather than require an attached computer or other device to perform all of the processing.
A multifunction printer enables the printing of time-based media. In a typical hardware configuration for such a multifunction printer, a printer includes a print engine that produces a paper or other printed output and one or more electronic devices that produce a related electronic output. Together, the printed and electronic outputs provide an improved representation of the time-based media over that of a conventional paper printer.
A user interface provides access to functionality of the printer. In a preferred embodiment, the user interface includes a touch screen for accepting command inputs and providing information to a user. In an alternative embodiment, input is provided by way of a keypad, keyboard, or other input device.
Time-based media data is received by the printer from a media source specified via the user interface. A user specifies one or more multimedia processing functions for the printer to apply to the data. The printer performs the specified functions and previews the output to the user via a display of the user interface. If the user decides to print the previewed output, the user specifies one or more output devices, directing the printer to produce a conventional printed output, a specified electronic output, or both.
The figures depict preferred embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
Various embodiments of a multifunction printer enable the printing of time-based media in a useful and intelligent format. To create a representation of this time-based media, the printer produces a printed output and a related electronic output, which together provide a representation of the received media. Depending on the desired application for the printer, the printer may include any number of devices for receiving the media, printing the printed output, and producing the electronic output. The described user interface is meant to be generally applicable not only to the specific combinations of input and output devices described herein, but also to additional combinations not specifically disclosed, as would be understood by those of skill in the art from the present disclosure.
Printer Architecture
In one embodiment, the media processing system 125 includes a memory 130, a processor 135, and one or more embedded functionality modules 140. The embedded functionality modules 140 may include software, hardware, or a combination thereof for implementing at least a portion of the functionality of the multifunction printer 100. The media processing system 125 is coupled to the media source interface 105 and the user interface 110, allowing it to communicate with each of those devices. The media processing system 125 is also coupled to the printed output system 115 and to the electronic output system 120 for providing the appropriate commands and data to those systems.
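As a rough illustration only, the coupling just described might be sketched in software as follows; the class and method names are hypothetical, and nothing here is prescribed by the embodiment:

```python
# Minimal sketch of the media processing system's composition, assuming
# hypothetical module names; the embodiment does not prescribe an implementation.

class MediaProcessingSystem:
    """Coordinates the media source, user interface, and output systems."""

    def __init__(self, media_source, user_interface, printed_output, electronic_output):
        self.media_source = media_source            # media source interface 105
        self.user_interface = user_interface        # user interface 110
        self.printed_output = printed_output        # printed output system 115
        self.electronic_output = electronic_output  # electronic output system 120
        self.modules = {}                           # embedded functionality modules 140

    def register_module(self, name, function):
        """Register an embedded functionality module (software or a hardware driver)."""
        self.modules[name] = function

    def process(self, module_name, media_data):
        """Apply a registered multimedia processing function to received media."""
        return self.modules[module_name](media_data)
```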
Printer 100 is further described in related application, “Printer Having Embedded Functionality for Printing Time-Based Media,” filed Mar. 30, 2004, Ser. No. 10/814,931, which application is incorporated by reference in its entirety.
User interface (UI) 110 of printer 100 preferably includes a screen on which information can be conveyed to the user. UI 110 preferably also includes a mechanism for allowing a user to input responses and make selections. In a preferred embodiment, printer UI 110 includes a touch screen, and the user makes selections and inputs responses by touching the appropriate part of the screen. In one embodiment, a keypad is also provided for entry of alphanumeric data. In an alternative embodiment, a joystick, trackball or other input device can be used to provide input and make selections.
Once a data source has been specified so that data can be received 202, a multimedia processing function is selected. In a preferred embodiment, processing functions include audio range selection and video range selection.
Audio Range Selection
In a preferred embodiment, printer 100 is capable of performing many kinds of multimedia processing functions on the audio selection. Available functions preferably include event detection, speaker segmentation, speaker recognition, sound source location, speech recognition and profile analysis. As can be seen in
Selecting one of the multimedia processing functions displays a submenu for the user specific to that processing function.
In one embodiment, if a user selects the Print Segment button 403 on display 302, a dialog appears on display 302 showing an audio waveform and additionally providing slider controls with which the user can select portions of the current input to print. The manner in which output is created by the printer is discussed below.
Selecting the Event Detection button 404 displays a dialog to the user showing the results of applying audio event detection, such as clapping, yelling, or laughing, along a timeline. Each detected event is preferably accompanied by a confidence that it was detected correctly. The display 302 also includes a slider that lets the user adjust a threshold on the confidence values. As the slider is moved, events that are above threshold are displayed. The audio file is then preferably segmented into clips that bound the above-threshold events. In one embodiment, the display 302 also includes a series of check boxes that let the user choose which events to display.
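By way of illustration, the slider-and-checkbox behavior described above can be sketched as follows, assuming each detected event is a (start, end, label, confidence) tuple; the tuple layout and padding value are assumptions, not part of the embodiment:

```python
# A sketch of the confidence-threshold behavior described above; the data
# shapes and padding are illustrative choices.

def events_above_threshold(events, threshold, enabled_labels=None):
    """Return events whose confidence meets the slider threshold,
    optionally restricted to labels the user checked (clapping, yelling, ...)."""
    return [
        e for e in events
        if e[3] >= threshold and (enabled_labels is None or e[2] in enabled_labels)
    ]

def clips_bounding_events(events, padding_sec=1.0):
    """Segment the audio into clips that bound each above-threshold event."""
    return [(max(0.0, start - padding_sec), end + padding_sec)
            for (start, end, _label, _conf) in events]

detected = [(12.0, 13.5, "clapping", 0.91), (40.2, 41.0, "laughing", 0.55)]
kept = events_above_threshold(detected, threshold=0.8)
print(clips_bounding_events(kept))   # [(11.0, 14.5)]
```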
Selecting the Speaker Segmentation button 414 presents a dialog showing the results of applying speaker segmentation along a timeline. Each segment is preferably shown in a different color and segments that were produced by the same speaker are shown in the same color.
Selecting the Speaker Recognition button 408 displays a dialog showing the results of applying speaker recognition along a timeline. In a preferred embodiment, the name of each speaker is accompanied by a confidence that it was detected correctly. The dialog preferably includes a series of check boxes that let the user choose which speakers to display, as well as a slider that lets the user adjust a threshold on the confidence values. As the slider is moved, speaker names that are above threshold are displayed. The audio file is segmented into clips that bound the above-threshold speaker identities.
In another embodiment, Speaker Segmentation and Speaker Recognition functions can be combined. In such an embodiment, a dialog shows the results of applying speaker segmentation and speaker recognition along a timeline. Each segment is shown in a different color and segments that were produced by the same speaker are shown in the same color. Each segment is accompanied by a confidence that the segmentation is correct. The speaker recognition results include text and optionally confidence values for each speaker name. Multiple speaker names could be associated with each segment. The user interface includes sliders that let the user adjust thresholds on confidence values. As the sliders are moved, speaker segments or speaker recognition results that are “above threshold” are displayed.
Selecting the Sound Source Locator button 406 displays a dialog showing the results of applying sound source localization along a timeline. The direction from which sound was detected is displayed as a sector of a circle. Each sector is accompanied by a confidence that it was detected correctly. The dialog additionally preferably includes a series of check boxes arranged around the circumference of a prototype circle that let the user choose which directions to display.
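A minimal sketch of the sector display logic follows, assuming eight equal sectors and a simple detection record; both are illustrative choices not specified above:

```python
# Illustrative sketch of filtering sound-source directions by the checked
# sectors; the sector count and record fields are assumptions.

def sector_of(angle_deg, num_sectors=8):
    """Map a detected sound direction (degrees) to one sector of the circle."""
    return int(angle_deg % 360 // (360 / num_sectors))

def visible_detections(detections, enabled_sectors, min_confidence):
    """Keep detections whose sector checkbox is enabled and whose
    confidence meets the threshold."""
    return [d for d in detections
            if sector_of(d["angle"]) in enabled_sectors
            and d["confidence"] >= min_confidence]

hits = [{"time": 3.2, "angle": 95.0, "confidence": 0.7},
        {"time": 8.9, "angle": 270.0, "confidence": 0.4}]
print(visible_detections(hits, enabled_sectors={2}, min_confidence=0.5))
```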
In another embodiment, Event Detection and Sound Source Locator functions can be combined. In such an embodiment, a dialog shows the results of applying sound source localization and audio event detection along a timeline. The direction from which sound was detected is displayed as a sector of a circle. Each sector is accompanied by a confidence that it was detected correctly. The user interface includes a series of check boxes arranged around the circumference of a prototype circle that let the user choose which directions to display. Each detected event is accompanied by a confidence that it was detected correctly. The user interface includes a series of check boxes that let the user choose which events (e.g., clapping, yelling, laughing) to display.
If a user selects the Speech Recognition button 412, a dialog shows the results of applying speech recognition along a timeline. This includes text and optionally confidence values for each word or sentence. The dialog preferably includes a slider that lets the user adjust a threshold on the confidence values. As the slider is moved, words that are above threshold are displayed. The audio file is segmented into clips that bound the above-threshold words.
In another embodiment, the speech recognition results are matched against results from a Profile Analysis 410, in which the recognized text is compared against a pre-existing text-based profile that represents the user's interests. The dialog includes a slider that lets the user adjust a threshold on the confidence values. Another slider adjusts a threshold on the degree of match between the profile and the speech recognition results. As the sliders are moved, words that are above threshold are displayed.
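As an illustrative sketch only, the two-slider behavior might be implemented as below; the overlap measure used for the degree of match is an assumption, since no particular matching metric is specified:

```python
# A sketch of matching speech-recognition output against a text profile of
# the user's interests, with two independent thresholds; the overlap measure
# is an assumption.

def profile_match(recognized_words, profile_terms):
    """Degree of match: fraction of recognized words present in the profile."""
    if not recognized_words:
        return 0.0
    hits = sum(1 for w, _conf in recognized_words if w.lower() in profile_terms)
    return hits / len(recognized_words)

def words_to_display(recognized_words, profile_terms, conf_threshold, match_threshold):
    """Show words only when both sliders are satisfied."""
    if profile_match(recognized_words, profile_terms) < match_threshold:
        return []
    return [w for w, conf in recognized_words if conf >= conf_threshold]

words = [("budget", 0.9), ("meeting", 0.8), ("umbrella", 0.3)]
profile = {"budget", "meeting", "deadline"}
print(words_to_display(words, profile, conf_threshold=0.5, match_threshold=0.5))
```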
In another embodiment, the user interface also shows the results of applying audio event detection, such as clapping, yelling, or laughing, along a timeline in combination with speech recognition, speaker segmentation, or speaker recognition. Each detected event is accompanied by a confidence that it was detected correctly. The user interface includes sliders that let the user adjust thresholds on the confidence values.
In another embodiment, the functions of Event Detection, Speech Recognition and Speaker Recognition can be combined. In such an embodiment, a dialog shows the results of applying speech recognition along a timeline, and additionally shows the results of applying audio event detection, such as clapping, yelling, or laughing, along a timeline; and the results of applying speaker recognition along the timeline. The dialog includes a series of check boxes that let the user choose which speakers to display, and additionally includes sliders that let the user adjust thresholds on the confidence values.
In another embodiment, Event Detection, Speech Recognition and Speaker Segmentation can be combined. In such an embodiment, a dialog shows the results of applying speech recognition along a timeline, and additionally shows the results of applying audio event detection, such as clapping, yelling, or laughing, along a timeline; and the results of applying speaker segmentation along the timeline. Each segment is shown in a different color and segments that were produced by the same speaker are shown in the same color. The dialog includes sliders that let the user adjust thresholds on the confidence values.
In another embodiment, Speech Recognition, Event Detection and Sound Source Locator functions can be combined. In such an embodiment, a dialog shows the results of applying speech recognition along a timeline, and additionally shows the results of applying audio event detection, such as clapping, yelling, or laughing, along a timeline, and also displays the direction from which sound was detected as a sector of a circle. The dialog includes a series of check boxes arranged around the circumference of a prototype circle that let the user choose which directions to display. The dialog includes sliders that let the user adjust thresholds on the confidence values.
Video Range Selection
In a preferred embodiment, printer 100 is capable of performing many multimedia processing functions on a video selection. Available functions preferably include event detection, color histogram analysis, face detection, face recognition, optical character recognition (OCR), motion analysis, distance estimation, foreground/background segmentation, scene segmentation, automobile recognition and license plate recognition. As can be seen in
Selecting one of the multimedia processing functions displays a submenu for the user specific to that processing function.
In one embodiment, if a user selects the Print Segment button 403, dialog box 302 shows key frames along a timeline and has slider controls that let the user select portions of a given video file to print. The manner in which output is created by the printer is discussed below.
Selecting the Event Detection button 524 displays a dialog showing the results of applying a video event detection algorithm along a timeline. Examples of video events include cases when people stand up during a meeting or when they enter a room. A slider control is preferably provided to let the user select portions of a given video file to print, based on a confidence value.
Selecting the Color Histogram button 526 presents a dialog showing the results of applying a color histogram analysis algorithm along a timeline. For example, a hue diagram is shown at every 30-second interval. This allows a user to quickly locate the portions of a video that contain sunsets, for example. A slider control is preferably provided that lets the user select portions of a given video file to print, based on the histogram computation.
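For illustration, a per-interval hue histogram might be computed as follows, assuming frames arrive as lists of (r, g, b) pixels at a fixed frame rate; these representations are assumptions, not part of the embodiment:

```python
# A minimal sketch of per-interval hue histograms; the 30-second interval
# comes from the example above, everything else is illustrative.

import colorsys

def hue_histogram(frame_pixels, bins=12):
    """Histogram of pixel hues (0-1 mapped onto `bins` buckets)."""
    hist = [0] * bins
    for r, g, b in frame_pixels:
        h, _s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hist[min(int(h * bins), bins - 1)] += 1
    return hist

def histograms_per_interval(frames, fps=30, interval_sec=30):
    """One representative hue histogram per 30-second interval of video."""
    step = fps * interval_sec
    return [hue_histogram(frames[i]) for i in range(0, len(frames), step)]
```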
Selecting the Face Detection button 504 displays a dialog showing segments along a timeline that contain face images. Each segment is preferably accompanied by an integer that expresses the number of faces detected in the clip as well as a confidence value. Slider controls are preferably provided that let the user select portions of a given video file to print, based on the confidence values. In another embodiment, the face images are clustered so that multiple instances of the same face are merged into one representative face image.
Selecting the Face Recognition button 522 presents a dialog showing names along a timeline that were derived by application of face recognition to video frames at corresponding points along the timeline. Slider controls are preferably provided that let the user select portions of a given video file to print. Also, a series of checkboxes are provided that let the user select clips by choosing names.
Selecting the OCR button 512 causes each frame in the video to be OCR'd and subsampled, for example once every 30 frames, and the results displayed along a timeline. Slider controls are preferably provided to let the user select portions of a given video file to print based on the confidence values that accompany the OCR results. A text entry dialog is also preferably provided to let the user enter words that are searched within the OCR results. Clips that contain the entered text are indicated along the timeline. In one embodiment, results of the OCR function are clustered so that similar OCR results are merged.
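An illustrative sketch of this subsample-and-search flow appears below; the `ocr` callable is a stand-in for whatever recognition engine the printer embeds, and the data shapes are assumptions:

```python
# A sketch of the OCR-and-search flow described above; subsampling every
# 30 frames follows the example in the text.

def ocr_along_timeline(frames, ocr, every=30):
    """Run OCR on every Nth frame; return (frame_index, text, confidence) results."""
    results = []
    for i in range(0, len(frames), every):
        text, confidence = ocr(frames[i])
        results.append((i, text, confidence))
    return results

def clips_containing(results, query, fps=30):
    """Timeline positions (seconds) of OCR results containing the entered words."""
    return [i / fps for i, text, _conf in results if query.lower() in text.lower()]
```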
Selecting the Motion Analysis button 506 displays a dialog showing the results of applying a motion analysis algorithm along a timeline. One way to present the results is as a waveform whose magnitude indicates the amount of detected motion. This allows a user to quickly locate the portions of a video that contain, for example, a person running across the camera's view. A slider control preferably lets the user select portions of a given video file to print, based on the amount of motion that was detected.
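One simple realization of such a waveform, offered only as a sketch, is mean frame differencing over grayscale frames; the embodiment does not prescribe a particular motion-analysis algorithm:

```python
# An illustrative motion-magnitude waveform computed by frame differencing,
# assuming frames are equal-length lists of grayscale pixel values.

def motion_waveform(frames):
    """Mean absolute pixel difference between consecutive frames."""
    wave = [0.0]
    for prev, cur in zip(frames, frames[1:]):
        wave.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return wave

def segments_above(wave, threshold, fps=30):
    """Timeline seconds where detected motion exceeds the slider threshold."""
    return [i / fps for i, m in enumerate(wave) if m >= threshold]
```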
Selecting the Distance Estimation button 510 presents a dialog showing the results of applying a distance estimation algorithm along a timeline. For example, in a surveillance camera application using two cameras separated by a known distance, the distance of each point from the camera can be estimated. A slider control preferably lets the user select portions of a given video file to print, based on their distance from the camera. For example, the user may wish to see only objects that are more than 50 yards away from the camera.
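For illustration, the standard two-camera model gives distance = focal length × baseline / disparity; the sketch below assumes that model, and the parameter values are illustrative since the text states only that the camera separation is known:

```python
# A sketch of two-camera distance estimation under the standard stereo model.

def estimate_distance(disparity_px, focal_length_px, baseline_m):
    """Distance (meters) of a point seen by both cameras."""
    if disparity_px <= 0:
        return float("inf")          # no disparity: effectively at infinity
    return focal_length_px * baseline_m / disparity_px

def farther_than(points, min_distance_m, focal_length_px=800.0, baseline_m=0.5):
    """Keep points at least `min_distance_m` from the camera, e.g. 50 yards ~ 45.7 m."""
    return [p for p in points
            if estimate_distance(p["disparity"], focal_length_px, baseline_m) >= min_distance_m]
```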
In one embodiment, the motion analysis algorithm and distance estimation algorithm are applied together.
Selecting the Foreground/Background Segmentation button 514 displays a dialog showing the results of applying a foreground/background segmentation algorithm along a timeline. At each point, the foreground objects are displayed. A clustering and merging algorithm can be applied across groups of adjacent frames to reduce the number of individual objects that are displayed. Slider controls are preferably provided to let the user select portions of a given video file to print based on the confidence value of the foreground/background segmentation as well as the merging algorithm.
Selecting the Scene Segmentation button 518 presents a dialog showing the results of applying a shot segmentation algorithm along a timeline. Each segment can be accompanied by a confidence value that the segmentation is correct. A slider control is preferably provided to let the user select portions of a given video file to print, based on the confidence value.
Selecting the Visual Inspection button 528 presents a dialog showing an image from an attached video camera. The user can outline areas of the scene and define parameters for the objects that could appear in those areas. For example, for circular objects, a slider control preferably lets a user choose the diameter and allowable variations. This can be applied to automatic inspection of objects on an assembly line, such as ball bearings that should be perfectly circular with a diameter of 2.54 centimeters. The user can also choose actions that are executed when the selected parameters are exceeded. For example, a frame image can be grabbed and printed if a ball bearing is detected whose diameter differs from the defined value by more than 0.01 centimeters; an email can also be sent to a specified user.
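This inspection rule can be sketched as follows, using the 2.54 cm diameter and 0.01 cm tolerance from the example; the detection input and action hooks are hypothetical placeholders:

```python
# A sketch of the ball-bearing inspection rule above: grab and print a frame
# when a detected diameter deviates from 2.54 cm by more than 0.01 cm.

NOMINAL_CM = 2.54
TOLERANCE_CM = 0.01

def inspect(detected_diameters_cm, grab_and_print, send_email=None):
    """Apply the user-defined parameter check to each detected object."""
    for d in detected_diameters_cm:
        if abs(d - NOMINAL_CM) > TOLERANCE_CM:
            grab_and_print()
            if send_email:
                send_email(f"Out-of-tolerance bearing: {d:.3f} cm")

inspect([2.54, 2.56], grab_and_print=lambda: print("frame grabbed"),
        send_email=print)
```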
In one embodiment, scene segmentation can be combined with face recognition. In such an embodiment, each segment can be accompanied by a confidence value that the segmentation is correct. The results of face recognition are shown as names along the timeline. Slider controls are preferably provided to let the user select portions of a given video file to print, based on the confidence values of shot segmentation and face recognition. Additionally, a series of checkboxes are provided to let the user select clips by choosing names.
In another embodiment, scene segmentation can be combined with face detection such that color or a special icon indicates segments on the timeline that contain face images. Each segment can be accompanied by an integer that expresses the number of faces detected as well as a confidence value. Slider controls are preferably provided to let the user select portions of a given video file to print, based on the confidence values of shot segmentation and face detection.
In another embodiment, scene segmentation and OCR can be combined. In such an embodiment, a dialog shows the results of applying a shot segmentation algorithm along a timeline. Each segment can be accompanied by a confidence value that the segmentation is correct. Each frame in the video is OCR'd and subsampled, for example once every 30 frames, and the results are displayed along the timeline. A text entry dialog is also provided that lets the user enter words to be searched within the OCR results. Clips that contain the entered text are indicated along the timeline. Slider controls are preferably provided to let the user select portions of a given video file to print based on the confidence values that accompany the shot segmentation and OCR results. In another embodiment, names are also shown on the timeline that were derived by application of face recognition to video frames. Additionally, a series of checkboxes are provided that let the user select clips by choosing names. Slider controls are provided that let the user select portions of a given video file to print based on the confidence values that accompany the shot segmentation, OCR, and face recognition results.
In another embodiment, scene segmentation, OCR and face detection are combined. In that embodiment, segments on the timeline that contain face images are shown. Each segment can be accompanied by an integer that expresses the number of faces detected in the clip as well as a confidence value. Slider controls are preferably provided to let the user select portions of a given video file to print based on the confidence values that accompany the shot segmentation, OCR, and face detection results.
Selecting the Automobile Recognition button 516 displays a dialog for configuring and using the automobile recognition function. In one embodiment, a user operates a surveillance camera that creates many hours of video, most of which is not of interest to the user. The user needs to find and print only those sections that contain a specific object, for example a red Cadillac. For this purpose, each frame in the video is input to an automobile recognition technique and the results are displayed along a timeline. Slider controls are preferably provided that let the user select portions of a given video file to print based on the confidence values that accompany the automobile recognition results. A text entry dialog is also preferably provided that lets the user enter identifiers for the make, model, color, and year for an automobile that are searched within the automobile recognition results. Clips that contain the entered information are indicated along the timeline.
In another embodiment, a user often needs to find and print only those sections that contain a specific license plate number. For this purpose, a user can select the License Plate Recognition button 520. Each frame in the video is input to a license plate recognition technique and the results, typically a plate number, state, plate color, name and address of plate holder, outstanding arrest warrants, criminal history of the plate holder, etc., are displayed along a timeline. Slider controls are preferably provided to let the user select portions of a given video file to print based on the confidence values that accompany the license plate recognition results. A text entry dialog is also provided that lets the user enter identifiers for the plate number, state, and year, etc. for a license plate that are searched within the license plate recognition results. Clips that contain the entered information are indicated along the timeline.
In another embodiment, a user needs to find and print only those sections that contain a specific object, such as a speeding red Cadillac. The automobile recognition function can be combined with motion analysis for this purpose. Each frame in the video is preferably input to an automobile recognition technique and the results are displayed along a timeline. Additionally, a motion analysis technique is applied to the video to estimate the automobile's speed from one frame to the next. Slider controls are preferably provided that let the user select portions of a given video file to print based on the confidence values that accompany the automobile recognition and motion analysis results. A text entry dialog is also provided that lets the user enter identifiers for the make, model, color, and year for an automobile and its speed that are searched within the automobile recognition and motion analysis results. Clips that contain the entered information are indicated along the timeline.
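As an illustrative sketch, the speed estimate might be derived from the displacement of a tracked vehicle between consecutive frames; the tracking output format and meters-per-pixel scale below are assumptions:

```python
# An illustrative speed estimate for the "speeding red Cadillac" case:
# displacement of a tracked vehicle between consecutive frames, scaled by
# an assumed meters-per-pixel factor.

def estimate_speed_mps(positions_px, fps=30, meters_per_pixel=0.05):
    """Per-frame speed (m/s) from tracked (x, y) centroids of one vehicle."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions_px, positions_px[1:]):
        dist_m = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * meters_per_pixel
        speeds.append(dist_m * fps)
    return speeds

track = [(100, 40), (130, 40), (161, 41)]
print(estimate_speed_mps(track))   # roughly [45.0, 46.5] m/s in this toy example
```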
After the user has selected one or more multimedia processing functions to apply, printer 100 performs the selected functions and produces a preview of the result.
Once the user is satisfied with the previewed output 602 and selects the Continue button, the next task in a preferred embodiment is to select the output path. In a preferred embodiment, output can be directed along a printed output path 160, an electronic output path 170, or both.
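A minimal sketch of this routing step, with hypothetical device interfaces and result fields:

```python
# A sketch of directing the previewed result along either or both output
# paths; the device and attribute names are illustrative, not from the text.

def route_output(result, print_engine=None, electronic_devices=()):
    """Send the result to the printed path (160), electronic path (170), or both."""
    if print_engine:
        print_engine.print_document(result.printable)
    for device in electronic_devices:
        device.write(result.electronic)
```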
Referring now to
The present invention has been described in particular detail with respect to a limited number of embodiments. Those of skill in the art will appreciate that the invention may additionally be practiced in other embodiments. First, the particular naming of the components, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Further, the system may be implemented via a combination of hardware and software, as described, or entirely in hardware elements. Also, the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component. For example, the particular functions of the user interface 110 and so forth may be provided in many or one module.
Some portions of the above description present the features of the present invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the user interface and printing arts to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by computer programs. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules or code devices, without loss of generality.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the present discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description above. In addition, the present invention is not described with reference to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references to specific languages are provided for disclosure of enablement and best mode of the present invention.
Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention.
This application claims the benefit of the following provisional patent applications, each of which is incorporated by reference in its entirety: U.S. Provisional Application No. 60/506,206, filed Sep. 25, 2003; U.S. Provisional Application No. 60/506,263, filed Sep. 25, 2003; U.S. Provisional Application No. 60/506,302, filed Sep. 25, 2003; U.S. Provisional Application No. 60/506,303, filed Sep. 25, 2003; and U.S. Provisional Application No. 60/506,411, filed Sep. 25, 2003. This application is also related to the following applications, each of which was filed on Mar. 30, 2004, and is incorporated by reference herein in its entirety: application Ser. No. 10/814,931, entitled “Printer Having Embedded Functionality For Printing Time-Based Media”; application Ser. No. 10/814,500, entitled “User Interface for Networked Printer”; application Ser. No. 10/814,845, entitled “Stand Alone Multimedia Printer With User Interface for Allocating Processing”; application Ser. No. 10/814,842, entitled “Printer with Multimedia Server”; and application Ser. No. 10/814,944, entitled “Multimedia Print Driver Dialog Interfaces.”