1. Field of the Invention
The present invention relates to systems and methods for providing a multimedia printing interface. In particular, the present invention relates to systems and methods for providing a print driver dialog interface that allows users to format multimedia data and generate a representation of that data.
2. Description of the Background Art
Printers in modern systems are not designed to generate multimedia documents, and there is currently no effective method for generating an easily readable representation of multimedia content in either paper or digital format. Several different techniques and tools are available for accessing and navigating multimedia information (e.g., existing multimedia players). However, none of these provides the user with the option of creating a multimedia document that the user can easily review and through which the user can gain access to the multimedia content.
Printers in modern systems are also not designed to facilitate interaction with multimedia content, or with print content in general. Standard printer dialog boxes provide users with some general formatting options for a print job, such as the number of pages to print, the number of copies to be made, and the like. However, printer drivers in modern operating systems are not designed to facilitate interactive information gathering: because a print job can be redirected to another printer, and because printing protocols do not allow such interactive sessions, the operating system does not support interaction with the user.
Due to these limitations in printer interaction, the user cannot define more detailed printing preferences in standard printing. Additionally, the user cannot define any printing preferences at all regarding multimedia content, since such printing capabilities are not currently available. Thus, a user cannot use current print dialog boxes to select segments of multimedia content that are of interest for printing. Current print dialog boxes also do not permit a user to preview any multimedia content. Additionally, there is no way for a user to search through a lengthy multimedia segment for particular features of interest. For example, a user cannot currently search through a news segment for content covering a particular topic, nor can a user search for specific faces or events in a news segment. Moreover, there is no way to define a printing format for selected segments of multimedia content, and there is no way to preview or modify printing formats directly through a print dialog box.
Therefore, what is needed is a system and method for permitting user interaction with, and control over, the generation of a multimedia representation that overcomes the limitations found in the prior art.
The present invention overcomes the deficiencies and limitations of the prior art with a system and method providing a user interface that permits users to interact with media content analysis processes and media representation generation processes. The system of the present invention includes a user interface for allowing a user to control the media content analysis and media representation generation. A media analysis software module analyzes and recognizes features of the media content, such as faces, speech, text, and the like. In addition, the system can include an output device driver module that receives instructions from the user and drives the media content analysis and the media representation generation. The system can also include an augmented output device for generating a media representation. Processing logic manages the display of a user interface that allows the user to control generation of a multimedia representation, and also controls the generation of a printable multimedia representation. The representation can be generated in a paper-based format, in digital format, or in any other representation format. The user interface includes a number of fields through which the user can view media content and modify the media representation being generated.
The methods of the present invention include interacting with a user interface to control the media data analysis and media representation generation. The methods further include analyzing features of media data for media representation generation, and driving the media data analysis and the media representation generation by receiving and sending instructions regarding media representation parameters. Additionally, the methods can include generating a media representation.
The invention is illustrated by way of example, and not by way of limitation in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
A system and method for providing a graphical user interface or print driver dialog interface that allows users to interact with a process of multimedia representation generation is described. According to an embodiment of the present invention, a graphical user interface is provided that displays multimedia information that may be stored in a multimedia document. According to the teachings of the present invention, the interface enables a user to navigate through multimedia information stored in a multimedia document.
For the purposes of this invention, the terms “media,” “multimedia,” “multimedia content,” “multimedia data,” or “multimedia information” refer to any one of or a combination of text information, graphics information, animation information, sound (audio) information, video information, slides information, whiteboard images information, and other types of information. For example, a video recording of a television broadcast may comprise video information and audio information. In certain instances the video recording may also comprise closed-caption (CC) text information, which comprises material related to the video information, and in many cases, is an exact representation of the speech contained in the audio portions of the video recording. Multimedia information is also used to refer to information comprising one or more objects wherein the objects include information of different types. For example, multimedia objects included in multimedia information may comprise text information, graphics information, animation information, sound (audio) information, video information, slides information, whiteboard images information, and other types of information.
For the purposes of this invention, the terms “print” or “printing,” when referring to printing onto some type of medium, are intended to include printing, writing, drawing, imprinting, embossing, generating in digital format, and other types of generation of a data representation. Also for purposes of this invention, the output generated by the system will be referred to as a “media representation,” a “multimedia document,” a “multimedia representation,” a “document,” a “paper document,” or either “video paper” or “audio paper.” While the words “document” and “paper” appear in these terms, the output of the system of the present invention is not limited to such a physical medium, like a paper medium. Instead, the above terms can refer to any output that is fixed in a tangible medium. In some embodiments, the output of the system of the present invention can be a representation of multimedia content printed on a physical paper document. In paper format, the multimedia document takes advantage of the high resolution and portability of paper and provides a readable representation of the multimedia information. According to the teachings of the present invention, a multimedia document may also be used to select, retrieve, and access the multimedia information. In other embodiments, the output of the system can exist in digital format or some other tangible medium. In addition, the output of the present invention can refer to any storage unit (e.g., a file) that stores multimedia information in digital format. Various different formats may be used to store the multimedia information. These formats include various MPEG formats (e.g., MPEG 1, MPEG 2, MPEG 4, MPEG 7, etc.), MP3 format, SMIL format, HTML+TIME format, WMF (Windows Media Format), RM (Real Media) format, QuickTime format, Shockwave format, various streaming media formats, formats being developed by the engineering community, proprietary and customized formats, and others.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the invention. For example, certain features of the present invention are described primarily with reference to video content. However, the features of the present invention apply to any type of media content, including audio content, even if the description discusses the features only in reference to video information.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Referring now to
In other embodiments, instead of accessing a multimedia document, the system 100 may receive a stream of multimedia information (e.g., a streaming media signal, a cable signal, etc.) from a multimedia information source. According to an embodiment of the present invention, system 100 stores the multimedia information signals in a multimedia document and then generates the interface 122 that displays the multimedia information. Examples of sources that can provide multimedia information to system 100 include a television, a television broadcast receiver, a cable receiver, a video recorder, a digital video recorder, a personal digital assistant (PDA), or the like. For example, the source of multimedia information may be embodied as a television that is configured to receive multimedia broadcast signals and to transmit the signals to system 100. In this example, the information source may be a television receiver/antenna providing live television feed information to system 100. The information source may also be a device such as a video recorder/player, a DVD player, a CD player, etc. providing recorded video and/or audio stream to system 100. In alternative embodiments, the source of information may be a presentation or meeting recorder device that is capable of providing a stream of the captured presentation or meeting information to system 100. Additionally, the source of multimedia information may be a receiver (e.g., a satellite dish or a cable receiver) that is configured to capture or receive (e.g., via a wireless link) multimedia information from an external source and then provide the captured multimedia information to system 100 for further processing. Multimedia content can originate from a proprietary or customized multimedia player, such as RealPlayer™, Microsoft Windows Media Player, and the like.
In alternative embodiments, system 100 may be configured to intercept multimedia information signals received by a multimedia information source. System 100 may receive the multimedia information directly from a multimedia information source or may alternatively receive the information via a communication network.
The augmented output device or printer 102 comprises a number of components, including a conventional printer 103, a media analysis software module 104, processing logic 106, and digital media output 108. The conventional printer 103 component of the printer 102 can include all or some of the capabilities of a standard or conventional printing device, such as an inkjet printer, a laser printer, or other printing device. Thus, conventional printer 103 has the functionality to print paper documents, and may also have the capabilities of a fax machine, a copy machine, and other devices for generating physical documents. More information about printing systems is provided in the U.S. Patent Application entitled “Networked Printing System Having Embedded Functionality for Printing Time-Based Media,” to Hart et al., filed Mar. 30, 2004, which was incorporated by reference previously.
The media analysis software module 104 includes audio and video content recognition and processing software. The media analysis software module 104 can be located on the printer 102 or can be located remotely, such as on a personal computer (PC). Some examples of such multimedia analysis software include, but are not limited to, video event detection, video foreground/background segmentation, face detection, face image matching, face recognition, face cataloging, video text localization, video optical character recognition (OCR), language translation, frame classification, clip classification, image stitching, audio reformatter, speech recognition, audio event detection, audio waveform matching, audio-caption alignment, video OCR and caption alignment. Once a user selects “print” within system 100, the system 100 can analyze multimedia content using one or more of these techniques, and can provide the user with analysis results from which the user can generate a document.
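Purely by way of illustration, and not as a definition of module 104, one way to dispatch a print-time analysis request to a set of such techniques can be sketched in Python as follows; the function and table names here are hypothetical:

    from typing import Callable, Dict, List

    def detect_faces(media: bytes) -> List[dict]:
        # Stand-in for a real face detection technique; each detected
        # event carries a confidence value, as described for module 104.
        return [{"type": "face", "start": 12.0, "end": 14.5, "confidence": 0.91}]

    def detect_audio_events(media: bytes) -> List[dict]:
        # Stand-in for audio event detection (e.g., applause detection).
        return [{"type": "applause", "start": 60.0, "end": 63.0, "confidence": 0.82}]

    # Dispatch table from user-visible technique names to implementations.
    TECHNIQUES: Dict[str, Callable[[bytes], List[dict]]] = {
        "face detection": detect_faces,
        "audio event detection": detect_audio_events,
    }

    def analyze(media: bytes, selected: List[str]) -> List[dict]:
        """Run each technique the user selected and pool the results."""
        results: List[dict] = []
        for name in selected:
            results.extend(TECHNIQUES[name](media))
        return results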
In the embodiment shown in
In the example shown in
Additionally, the PDDI 122 can allow the user to set formatting preferences with regard to the multimedia document 120 produced. In some embodiments, the user can set preferences as to document format and layout, font type and size, information displayed in each line, information displayed in a header, size and location of schedule columns, font colors, line spacing, number of words per line, bolding and capitalization techniques, language in which the document is printed, paper size, paper type, and the like. For example, the user might choose to have a multimedia document that includes a header in large, bold font showing the name of the multimedia content being displayed (e.g., CNN News segment), and the user can choose the arrangement of video frames to be displayed per page.
As shown in the embodiment of
The DFS 112 can include meta data information about a multimedia file, such as information about the title of the multimedia content, the producer/publisher of the multimedia content, and the like. The DFS 112 can also include other information, such as beginning and ending times of a multimedia segment (e.g., beginning and ending times of an audio recording), and a specification for a graphical representation of the multimedia data that can be displayed along a time line (e.g., a waveform showing the amplitude of an audio signal over time). The DFS 112 can further include a specification for time stamp markers and meta-data for each time stamp (e.g., textual tags or bar codes) that could be displayed along the timeline, and layout parameters that determine the appearance of the physical multimedia document 120. More information about the DFS 112 and examples are provided in the U.S. Utility Application entitled “Printable Representations for Time-Based Media,” to Hull et al., filed on Mar. 30, 2004, which is incorporated by reference herein in its entirety.
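For illustration only, the kinds of information described above for the DFS 112 might be sketched as a simple data structure; the field names below are hypothetical and are not a definition of the actual DFS:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TimeStampMarker:
        time: float                    # seconds from the start of the segment
        label: str                     # textual tag shown along the timeline
        barcode: Optional[str] = None  # meta-data for the marker, if any

    @dataclass
    class DocumentFormatSpecification:
        # Meta data about the multimedia file.
        title: str
        publisher: str
        # Beginning and ending times of the multimedia segment.
        begin_time: float
        end_time: float
        # Graphical representation displayed along the time line
        # (e.g., "waveform" for audio amplitude over time).
        timeline_graphic: str = "waveform"
        # Time stamp markers displayed along the timeline.
        markers: List[TimeStampMarker] = field(default_factory=list)
        # Layout parameters governing the appearance of document 120.
        layout: dict = field(default_factory=dict)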
The multimedia document 120 generated by the printer 102 can comprise various formats. For example, the multimedia document 120 can comprise a paper document, such as video paper of the form shown in
The multimedia document 120 can have a number of different types of layouts and can display various types of information.
In another embodiment of the present invention, user-selectable identifiers 134 (e.g., a barcode) are associated with each frame 132. In the
The printer 102 is capable of retrieving multimedia information corresponding to the user-selectable identifiers 134. The signal communicated to the printer 102 from the selection device (i.e., a device with a barcode scanner or a keypad for entering numerical identifiers) may identify the multimedia content frame 132 selected by the user, the location of the multimedia content to be displayed, the multimedia paper documents from which the segments are to be selected, information related to preferences and/or one or more multimedia display devices (e.g., a television set) selected by the user, and other like information to facilitate retrieval of the requested multimedia information. For example, the system 100 can access a video file stored on a PC, and the system can play this video content on the user's command.
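As a minimal sketch of how a scanned identifier might be resolved to the content it denotes, the table, paths, and names below being hypothetical:

    from dataclasses import dataclass

    @dataclass
    class MediaReference:
        path: str       # location of the multimedia file (e.g., on a PC)
        offset: float   # playback start time in seconds
        device: str     # display device chosen by the user

    # Table built when the multimedia document was printed.
    IDENTIFIERS = {
        "BC-0001": MediaReference("//pc/videos/cnn_news.mpg", 95.0, "television"),
    }

    def handle_scan(identifier: str) -> None:
        ref = IDENTIFIERS[identifier]
        # In the described system this would signal the selected display
        # device to begin playback at the recorded offset.
        print(f"play {ref.path} at {ref.offset}s on {ref.device}")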
The example of
The user might also choose to have included in the multimedia document 120 some of the audio information for a frame 132, which is displayed as text. For example, the user may choose to have a portion of the transcript of a multimedia segment (e.g., a transcript of a news program segment) displayed next to the multimedia frame 132. As another example, the user might opt to include in the printed document a text description or summary of the content of each frame 132, such as a brief summary of a particular television segment or program. The user can use the print driver dialog interface 122 to identify techniques to be used for converting the audio information to text information (i.e., techniques for generating a text transcript for the audio information), the format and styles for printing the audio transcript (which may be the same as for printing text information), formats and styles for printing summary text about multimedia content, and the like. Additionally, information about retrieving multimedia information and annotating multimedia information is provided in the Video Paper Applications, referenced previously.
Referring now to
In the example of
The processor 214 processes data signals and may comprise various computing architectures including a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although only a single processor is shown in
As described previously, the printer 102 accesses or receives multimedia information, such as an audio or video file, from some source. In one embodiment, the multimedia file is stored on a data processing system, such as PC 230, which is coupled to the printer 102 by signal line 248. In the embodiment of
A user can view multimedia content on a display device (not shown) to select particular content for printing with printer 102, as described above. The display device (not shown) can include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, and the like. In other embodiments, the printer 102 includes an LCD display panel or other type of display panel, and the user can display multimedia content on the printer itself.
In the
In the embodiment of
When printer 102 receives a print request, the request and the associated multimedia data are transferred to processor 214. The processor 214 interprets the input and activates the appropriate module. The processor 214 is coupled to and controls the multimedia transformation software module (MTS) (not shown) for transforming multimedia content. If the processor 214 has received a print request, the processor 214 may then activate the MTS (not shown) depending on whether or not the user has requested transformation of the multimedia data. The transformations to the multimedia content can be applied on the printer 102, on a PC 230 (i.e., by software installed with the print driver 208), or at some other location. The MTS (not shown) applies specified transformation functions to a given audio or video file. The MTS (not shown) generates the appropriate document-based representation and interacts with the user through the print driver dialog interface to modify the parameters of the transformation and to preview the results. The results and parameters of the multimedia transformation are represented in the Document Format Specification (DFS) that was described previously.
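A compressed sketch of this transformation flow, with hypothetical function names and a plain dictionary standing in for the DFS, might look like the following:

    from typing import Callable, Dict, List

    def transform_media(media: bytes,
                        functions: List[Callable[[bytes, dict], dict]],
                        params: dict) -> Dict:
        """Apply each requested transformation function in order and
        record the results and parameters in a DFS-like structure that
        the dialog interface can use for preview."""
        dfs: Dict = {"parameters": dict(params), "results": []}
        for fn in functions:
            dfs["results"].append(fn(media, params))
        return dfs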
As described above, printer 102 can include multimedia storage 202, for storing multimedia data, such as video or audio files. The processor 214 is coupled to multimedia storage 202 and can transfer multimedia data, through bus 251, to the multimedia storage 202. This data can be stored while a print job is progressing. Storage 202 may include a number of memory types including a main random access memory (RAM) for storage of instructions and data during program execution and a read only memory (ROM) in which fixed instructions are stored. Storage 202 may also include persistent (non-volatile) storage for program and data files, such as a hard disk drive, a floppy disk drive, a CD-ROM device, a DVD-ROM device, a DVD-RAM device, a DVD-RW device, or other like storage device known in the art. One or more of the drives or devices may be located at remote locations on other connected computers.
The processor 214 also controls a digital media input/output 108. The processor 214 transfers information to and receives information from digital media input/output 108, through bus 250. Multimedia documents that are created can be converted into some type of digital format, as described previously. The digital media writing hardware can include, for example, a network interface card, a digital video disc (DVD) writer, a secure digital (SD) writer, a compact disc (CD) writer, and the like. The digital output 260 documents can be stored on digital media, including a CD, a DVD, flash media, and the like. Thus, the user can create a digital output 260 version of an input audio or video file, and this can be viewed on a specified target device, such as a PC, a cell phone, or a PDA.
The processor 214 also manages generation of a multimedia document 120, such as a video or audio paper document. Multimedia information can also be displayed in a paper document or multimedia document 120, as shown in
The processor 214 also controls external communication hardware, such as through a network interface. The processor 214 can transmit information to and receive information from an application server 212 through bus 254. The printer 102 can also communicate with and obtain information from an application server 212 (e.g., “Web services” or “grid computing” systems).
In one embodiment, the system 200 includes a communication monitoring module or a user interface listener module 210 (UI Listener). In the embodiment of
Referring now to
In order to allow this interaction without modifying the printer driver architecture of the underlying operating system, an extra mechanism, such as the one shown in
Because network transactions of this type are prone to many complex error conditions, a system of timeouts allows efficient operation. Each message sent across a network generally either expects a reply or is a one-way message. Messages that expect replies can have a timeout, or a limited period of time during which it is acceptable for the reply to arrive. In this invention, the embedded metadata would include metadata about a UI listener 210 that will accept requests for further information. Such metadata consists of at least a network address, a port number, and a timeout period. It might also include authentication information, designed to prevent malicious attempts to elicit information from the user 302, since the user 302 cannot tell whether the request is coming from a printer 102, a delegated server 212, or a malicious agent. If the printer 102 or a delegated application server 212 needs more information, it can use the above-noted information to request that the UI Listener 210 ask a user 302 for the needed information. The UI Listener 210 program can be located on a user's 302 interaction device (e.g., a PC, a cell phone, or a PDA), on the printer 102 (i.e., for user interaction on an LCD panel located on the printer), or at another remote location.
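Purely as an illustration of such a metadata-driven request, and not of any particular protocol, a requester might contact the UI Listener 210 as sketched below, treating a reply that does not arrive within the timeout period as unanswered; the JSON message format is an assumption:

    import json
    import socket
    from typing import Optional

    def ask_ui_listener(address: str, port: int, timeout_s: float,
                        question: dict) -> Optional[dict]:
        """Send one request to the UI Listener and wait for its reply."""
        try:
            with socket.create_connection((address, port), timeout=timeout_s) as s:
                s.settimeout(timeout_s)  # the reply must arrive within the timeout
                s.sendall(json.dumps(question).encode())
                reply = s.recv(4096)
                return json.loads(reply) if reply else None
        except (socket.timeout, OSError):
            # No acceptable reply arrived; treat the request as unanswered
            # rather than blocking the print job indefinitely.
            return None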
In the example of
Additionally, in the example of
Referring now to
In operation, the system 200 provides methods for printing multimedia content. The user selects a print option in an MRA, and an initial print driver dialog interface (PDDI) 122 appears to the user. The initial PDDI 122 is populated with information about the abilities of the printer 102 to transform multimedia data. The initial PDDI 122 can display options available to the user for transforming the data, or it can show the result of performing a default transformation with a default set of parameters. The user can choose which of these two options the user prefers, and the user's preference can also be set in the printer 102 properties. The flow of operations for each of these options is depicted in
Referring now to
The system 200 then waits 508 for the user to press the Update button or the OK button on the PDDI 122. If the user selects the Cancel button, then the system 200 exits and the PDDI 122 disappears from view. Once the user has selected the Update button or the OK button, the system 200 sends 510 parameters and other user-selection information to the printer 102. The system 200 determines if the multimedia data has already been transferred to the printer 102. As described previously, this multimedia data may be located on a PC, a cell phone, a PDA, or other device that can contain multimedia content. If the multimedia data has not yet been transferred to the printer 102, then the system 200 transfers 512 multimedia data to the printer 102, and then continues with the operation flow. If the multimedia data has already been transferred to the printer 102, then the system 200 determines whether or not the multimedia transformation with the user-defined parameters has already been performed. If not, the printer performs 514 the transformation on the multimedia data. If so, the system 200 then determines whether or not the user pressed the Update button after entering the parameters, or if the user alternatively pressed the OK button. If the user did not press the Update button, and instead pressed the OK button, the printer 102 generates 516 a document, multimedia data, and control data that links the paper document with the multimedia data. Additionally, the system 200 assigns identifiers (e.g., a barcode) to the multimedia data, providing the user with an interface by which to access the multimedia content. If necessary, before generating the document, the printer 102 may first prompt the user for further information regarding the print job. Metadata about the multimedia data and the commands entered into the PDDI 122 are represented in the DFS 112.
If the user pressed the Update button, rather than the OK button, the user is not yet requesting that the printer 102 create a multimedia document. Instead, the user presses the Update button when the user has modified the user selection parameters in the PDDI 122, and the user wants the preview field of the PDDI 122 to be updated. If the user pressed the Update button, the system 200 will interactively return 518 results for display in an interactive PDDI 122. This allows the user to preview how the multimedia document will appear with the newly added parameter modifications. The flow of operation then returns to the point at which the user has the opportunity to select 506 parameters, and the system 200 can cycle through the flow again, continuing to modify parameters in the interactive PDDI 122 until a final document is generated.
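This flow of operations can be summarized in the following sketch, in which the callbacks are hypothetical stand-ins for the PDDI 122 and printer 102 behavior described above (the data transfer and already-transformed checks are elided for brevity):

    from typing import Callable, Tuple

    def run_print_dialog(
        wait_for_user: Callable[[], Tuple[str, dict]],   # ("update" | "ok" | "cancel", parameters)
        transform: Callable[[dict], object],             # printer-side multimedia transformation
        refresh_preview: Callable[[object], None],       # redraw the PDDI preview field
        generate_document: Callable[[object], None],     # final document, data, and control links
    ) -> None:
        while True:
            action, params = wait_for_user()
            if action == "cancel":
                return                      # the PDDI disappears and nothing is printed
            result = transform(params)      # re-run with the current parameters
            if action == "update":
                refresh_preview(result)     # preview the modified parameters, then loop
            else:
                generate_document(result)   # OK: produce the multimedia document
                return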
Referring now to
AUDIO
As is found in standard print dialog boxes, the top of the PDDI 122 includes a file name field 702 that displays the name (e.g., “locomotion.mp3”) of the multimedia file being printed. In the Printer field 704, the user can select which printer will carry out the print job, and other options with regard to properties of the print job, printing as an image or file, printing order, and the like. Additionally, the Printer field 704 displays the status of the selected printer, the type of printer, where the printer is located, and the like. The Print Range field 706 allows the user to make selections about what portions of a document will be printed and the like. The Copies and Adjustments field 708 permits a user to designate the number of copies to be generated in a print job, the size of the print job pages relative to the paper, the positioning of the print job pages on the paper, and the like. Although not shown, this dialog box could also include any of the various combinations of other conventional print parameters associated with outputting representations of video, audio, or text documents.
In the embodiment of
Each segmentation type can have a confidence level associated with each of the events detected in that segmentation. For example, if the user has applied audio event detection that segments the audio data according to applause events that occur within the audio data, each applause event will have an associated confidence level defining the confidence that an applause event was correctly detected. Within the Advanced Options field 710, the user can define or adjust a threshold on the confidence values associated with a particular segmentation. The user sets the threshold by typing the threshold value into the threshold field 718. For example, the user can set a threshold of 75%, and only events that are above this threshold (i.e., more than 75% chance that the event was correctly detected to be an applause event) will be displayed. In other embodiments, a threshold slider (not shown) is included in the PDDI 122, and the user can move the slider along a threshold bar that runs from 0% to 100% to select a specific threshold within that range.
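The effect of the threshold can be expressed as a simple filter; the event representation below is assumed for illustration:

    def filter_events(events: list, threshold: float) -> list:
        """Keep only events detected with confidence above the threshold.

        Events are assumed to look like {"type": "applause", "confidence": 0.83};
        a threshold of 0.75 corresponds to the 75% example above.
        """
        return [e for e in events if e["confidence"] > threshold]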
In one embodiment, the user can also make layout selections with regard to the multimedia representation generated. The user sets, within the “Fit on” field 720, the number of pages on which the audio waveform timeline 734 will be displayed. The user also selects, within the timeline number selection field 722, the number of timelines to be displayed on each page. Additionally, the user selects, within the orientation field 724, the orientation (e.g., vertical or horizontal) of display of the timelines on the multimedia representation. For example, as shown in
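Assuming, for illustration only, that the displayed time range is divided evenly across the requested layout, the time span covered by each timeline follows directly from the page and timeline-count selections:

    def timeline_spans(duration_s: float, pages: int, per_page: int):
        """Yield the (start, end) seconds covered by each timeline in order."""
        total = pages * per_page
        step = duration_s / total
        for i in range(total):
            yield (i * step, (i + 1) * step)

    # Example: a 20-minute recording fit on two pages with four timelines
    # per page yields eight timelines of 150 seconds each.
    spans = list(timeline_spans(1200.0, pages=2, per_page=4))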
In the embodiment of
In the embodiment of
In the embodiment shown in
The user can play the audio content in a number of ways. For example, the user can click on the play selectors or play arrows 750 on the audio waveform timeline 734 to cause the segment to begin to play. Additionally, the system can be configured so that selecting a play arrow 750 will cause the full audio content on the audio waveform timeline 734 to begin to play. The user can also right click on any one of the selected segments to delete the corresponding marker on the multimedia document. A paper multimedia representation can also provide an interface for playing the audio content. A user can select any of the markers (i.e., scan the barcodes) for any of the selected segments on the paper documents, and this will cause the selected audio segment to play. For example, the user can scan a barcode with a cell phone or PDA device with a barcode scanner. The user can listen to the selected clips on the cell phone or PDA, or the user can hear the content via the sound card on his/her PC. Additionally, the user can select the play marker 760, which acts as a pause button, so that if the user has selected any of the markers on the page and the corresponding audio content is playing, the user can pause this by selecting the play marker 760. The user can resume the playing of the audio content by selecting the play marker 760 again, or the user can select another marker on the page to play the corresponding audio content.
Referring now to
Referring now to
The multimedia document shown in the Preview field 712 of
The document in the Preview field 712 of
Referring now to
In the example of
The event segments 1104 are shown as staggered boxes in
VIDEO
In the embodiment of
Within the Advanced Options field 710, the user can define or adjust a threshold on the confidence values associated with a particular segmentation, as discussed previously. The user sets the threshold by typing the threshold value into the threshold field 1204. For example, the user can set a threshold of 75%, and only frames that are above this threshold (i.e., more than 75% chance that the frame includes a face in a face detection analysis) will be displayed. In other embodiments, a threshold slider is included in the PDDI 122, and the user can move the slider along a threshold bar that runs from 0% to 100% to select a specific threshold within that range. In addition, the buttons shown in the embodiment of
In the embodiment of
Additionally, there are three columns 1250, 1252, and 1254 displayed in Content Selection field 714. One column 1250 displays text information, and the other two columns 1252 and 1254 display video frames. The video frames displayed in
The user can slide the selector 1222 along the video timeline to select certain segments of the video content, which will be displayed on the multimedia document generated. In one embodiment, once the selector 1222 is located at the segment of video content that the user would like to select, the user can click on the selector 1222 to select segment 1226. The video timeline could also be displayed in a number of alternative manners, such as showing a horizontal timeline, showing more than one timeline side-by-side, showing a different video frame appearance, and the like. As discussed above, while the video timeline in the embodiment of
In the example shown in
Additionally, the location of each displayed video frame within the video timeline is displayed above each video frame as a time marker 1240. In
The user can also play the video content in a number of ways. For example, the user can click on the play arrows 1224 next to each selected segment on the video timeline to cause the segment to begin to play. In the embodiment of
When a user selects an identifier 1208, the associated video content will begin to play starting at the time displayed on the corresponding time marker 1240. In the
The multimedia document shown in the embodiment of
In the
The Preview field 712 shown in the
In the
Referring now to
When the user selects a particular video segment for preview, a media player that is embedded in the PDDI 122 starts to play the video segment in the Preview field 712 from the start of the video segment. For example, in
The media player in the Preview field 712 also includes the features of many standard multimedia players (e.g., Microsoft Windows Media Player), such as a pause button 1310 for stopping/pausing the display of the video clip, a rewind button 1312 for rewinding within the video content, a fast forward button 1314 for fast forwarding within the video content, and a volume adjuster 1306 for setting the volume for display. A slider 1308 is also included, which can allow the user to move around within the video content. The slider bar 1316, along which the slider 1308 moves, can correspond to the length of the full video content displayed along the time line, or it can correspond only to the length of the clip. The user can click on and drag the slider 1308 along the slider bar 1316 to move around within the video content. The fast forward button 1314 and the rewind button 1312 can be configured to allow the user to move only within the selected segment, or can alternatively allow the user to move within the full video content associated with the video timeline. The media player can be missing any one of the control buttons shown in
Referring now to
Referring now to
In the example of
The event segments 1704 are shown as staggered boxes in
Besides the face detection example of
Face recognition is another segmentation type, in which the PDDI 122 shows names along a timeline that were derived by application of face recognition to video frames at corresponding points along the time line. Also, a series of checkboxes are provided that let the user select clips by choosing names. Optical character recognition (OCR) is a segmentation type, in which OCR is performed on each frame in the video content, and each frame is subsampled (e.g., once every 30 frames). The results are displayed along a timeline. A text entry dialog box is also provided that lets the user enter words that are searched within the OCR results. Clips that contain the entered text are indicated along the timeline. In addition, clustering can be applied so that similar results from performing OCR on each frame are merged. Clusters that contain the entered text are indicated along the timeline.
In addition to the above segmentation types, there are other examples of segmentation types that could be applied. Motion analysis is another segmentation type, in which the PDDI 122 shows the results of applying a motion analysis algorithm along a timeline. The results can be shown as a waveform, for example, with a magnitude that indicates the amount of detected motion. This would allow an experienced user to quickly locate the portions of a video that contain a person running across the camera's view, for example. Distance estimation is another segmentation type, in which the PDDI 122 shows the results of applying a distance estimation algorithm along a timeline. For example, in a surveillance camera application using two cameras a known distance apart, the distance of each point from the camera can be estimated. The user can set the threshold value to select portions of a given video file to print, based on their distance from the camera. For example, the user may wish to see only objects that are more than 50 yards away from the camera. Foreground and background segmentation can also be applied, in which the PDDI 122 shows the results of applying a foreground/background segmentation algorithm along a timeline. At each point, the foreground objects are displayed. A clustering and merging algorithm can be applied across groups of adjacent frames to reduce the number of individual objects that are displayed. A user can set the threshold value to select portions of a given video file to print based on the confidence value of the foreground/background segmentation, as well as the merging algorithm. Scene segmentation is another type that the user can apply, in which the PDDI 122 shows the results of applying a shot segmentation algorithm along a timeline. Each segment can be accompanied by a confidence value that the segmentation is correct.
Segmentation types for recognizing automobiles or license plates can also be applied. Automobile recognition might be useful, for example, to a user who operates a surveillance camera that creates many hours of very boring video. Such a user often needs to find and print only those sections that contain a specific object, such as a red Cadillac. For this purpose, each frame in the video is input to an automobile recognition technique and the results are displayed along a timeline. License plate recognition might also be useful to a user who operates a surveillance camera and may need to search the surveillance video for sections containing a specific license plate number. For this purpose, each frame in the video is input to a license plate recognition technique and the results (plate number, state, plate color, name and address of plate holder, outstanding arrest warrants, criminal history of the plate holder, etc.) are displayed along a timeline. With either automobile or license plate recognition, the user can set a threshold value to select portions of a given video file to print based on the confidence values that accompany the automobile or license plate recognition results. A text entry dialog box is also provided that allows the user to enter identifiers for the make, model, color, and year for an automobile, or the plate number, state, and year, etc., for a license plate. These text entries are searched for within the recognition results. Clips that contain the entered information are indicated along the timeline.
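As a rough sketch, with an assumed clip representation, the described text search reduces to filtering recognition results on both the entered terms and the confidence threshold:

    def matching_clips(clips: list, query: str, threshold: float) -> list:
        """Return clips whose recognition text mentions every query term.

        Each clip is assumed (for illustration) to look like:
        {"start": 10.0, "end": 16.0, "confidence": 0.88,
         "text": "red Cadillac plate 4ABC123 CA"}
        """
        terms = query.lower().split()
        return [
            c for c in clips
            if c["confidence"] >= threshold
            and all(t in c["text"].lower() for t in terms)
        ]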
Referring now to
The user can apply a number of different segmentation types to video content using the PDDI. The user may choose to apply both audio detection and speaker recognition to one twelve-minute-long CNN News show, for example.
In other embodiments, the menu might also include a number of different combination options, allowing the user to select one item in the menu that includes more than one segmentation type. For example, audio detection plus speaker recognition may be one combination item on the menu. By selecting this option in the menu, the user causes audio detection and speaker recognition to be performed on the multimedia content. These combination menu items may be preset in the printer 102 properties as a default list of segmentation types and segmentation combination types. In addition, the user can define his or her own combination types. When the user creates a user-defined segmentation type, the user can give the segmentation type a name, and this option will appear in the drop-down menu of segmentation types. The segmentation type in
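One illustrative way to represent such default and user-defined combination types, with hypothetical names, is a registry that expands a single menu choice into its constituent segmentation types:

    COMBINATIONS = {
        "audio detection + speaker recognition":
            ["audio event detection", "speaker recognition"],
    }

    def define_combination(name: str, types: list) -> None:
        """Register a user-defined combination; it then appears in the
        drop-down menu of segmentation types alongside the defaults."""
        COMBINATIONS[name] = list(types)

    def expand(menu_choice: str) -> list:
        # A plain segmentation type expands to itself.
        return COMBINATIONS.get(menu_choice, [menu_choice])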
As shown in
Besides the example of
When applying combinations of segmentation types to multimedia content, the user is not limited to applying just two types in a combination. The user can apply three or more segmentation types, and such combinations can be shown in the segmentation type menu by default or they can be created by the user. Scene segmentation, OCR, and face recognition can be applied in combination, in which the PDDI 122 shows the results of applying a shot segmentation algorithm along a timeline. Each frame in the video has OCR performed on it and is subsampled, and the results are displayed along the same or different timeline. Names that were derived by application of face recognition to video frames are also shown on the same or different timeline. Also, a series of checkboxes are provided that let the user select clips by choosing names. The user can set threshold values for the results, allowing the user to select portions of a given video file to print based on the confidence values that accompany the shot segmentation, OCR, and face recognition results. Alternatively, the user could apply face detection with OCR and scene segmentation. The PDDI 122 would display the OCR and scene segmentation results as described above. The same or different timeline could also include segments that contain face images. Each segment can be accompanied by an integer that expresses the number of faces detected in the clip as well as a confidence value.
Automobile recognition plus motion analysis could be another alternative segmentation type combination, in which each frame in the video is input to an automobile recognition technique and the results are displayed along a timeline. Also, a motion analysis technique is applied to the video to estimate the automobile's speed from one frame to the next. A text entry dialog box is also provided that allows the user to enter identifiers for the make, model, color, and year for an automobile, and the automobile speed. These items are searched within the automobile recognition and motion analysis results, and clips that contain the entered information are indicated along the timeline.
While
When applying combinations of segmentation types to multimedia content, the user is not limited to applying just two types in a combination. The user can apply three or more segmentation types, and such combinations can be shown in the segmentation type menu by default or they can be created by the user. Speech recognition, audio event detection, and speaker recognition can be applied in combination. The speech recognition results include text and optionally confidence values for each word or sentence. Audio events detected are shown on the same or different timeline. The PDDI 122 also displays the name of each speaker detected, accompanied by a confidence that it was detected correctly. The user interface includes a series of check boxes that let the user choose which speakers to display. Speech recognition, audio event detection, and speaker segmentation could alternatively be applied. The application is the same as above, except speaker segmentation events are shown instead of speaker recognition events. Each speaker segment is shown in a different color or with a different icon, and segments that were produced by the same speaker are shown in the same color or with the same icon. As another example, speech recognition, audio event detection, and sound localization could be applied in combination. The timeline(s) will show text and optionally confidence values for each word or sentence, along with audio events detected. The timeline(s) also display the direction from which sound was detected as a sector of a circle. Each sector is accompanied by a confidence that it was detected correctly. The user interface includes a series of check boxes arranged around the circumference of a prototype circle that let the user choose which directions to display.
Referring now to
Identifiers 1208 are shown under each video frame 1206, and the user can select any one of these identifiers 1208 to cause the video content associated with the video frame 1206 to begin to play. The video frame 1206 can begin to play at a point at which the speaker is starting to recite the associated text 1216 transcript. The video frames 1206 for which no text is shown or for which the phrase “no text” is displayed could include video content in which the person in the clip is not speaking, or may represent examples in which the user selected not to show text.
The multimedia document shown in the embodiment of
While the present invention has been described with reference to certain preferred embodiments, those skilled in the art will recognize that various modifications may be provided. Variations upon and modifications to the preferred embodiments are provided for by the present invention, which is limited only by the following claims.
This application claims the benefit of the following provisional patent applications, each of which is incorporated by reference in its entirety: U.S. Provisional Patent Application entitled “Printer Including One or More Specialized Hardware Devices,” filed on Sep. 25, 2003, having Ser. No. 60/506,303, and U.S. Provisional Patent Application entitled “Printer Driver, Interface and Method for Selecting and Printing Representations of Audio, Video or Processed Information,” filed on Sep. 25, 2003, having Ser. No. 60/506,206. This application is related to the following co-pending U.S. Patent Applications (hereinafter referred to as the “Video Paper Applications”), each of which is hereby incorporated by reference in its entirety: U.S. patent application Ser. No. 10/001,895, “Paper-based Interface for Multimedia Information,” filed Nov. 19, 2001; U.S. patent application Ser. No. 10/001,849, “Techniques for Annotating Multimedia Information,” filed Nov. 19, 2001; U.S. patent application Ser. No. 10/001,893, “Techniques for Generating a Coversheet for a Paper-based Interface for Multimedia Information,” filed Nov. 19, 2001; U.S. patent application Ser. No. 10/001,894, “Techniques for Retrieving Multimedia Information Using a Paper-Based Interface,” filed Nov. 19, 2001; U.S. patent application Ser. No. 10/001,891, “Paper-based Interface for Multimedia Information Stored by Multiple Multimedia Documents,” filed Nov. 19, 2001; U.S. patent application Ser. No. 10/175,540, “Device for Generating a Multimedia Paper Document,” filed Jun. 18, 2002; and U.S. patent application Ser. No. 10/645,821, “Paper-Based Interface for Specifying Ranges,” filed Aug. 20, 2003. This application is related to the following co-pending U.S. Patent Applications, each of which is hereby incorporated by reference in its entirety: U.S. patent application Ser. No. 10/081,129, to Graham, entitled “Multimedia Visualization and Integration Environment,” filed on Feb. 21, 2002; U.S. patent application Ser. No. 10/701,966, to Graham, entitled “Multimedia Visualization and Integration Environment,” filed on Nov. 4, 2003; U.S. patent application Ser. No. 10/465,027, to Graham et al., entitled “Interface For Printing Multimedia Information,” filed on Jun. 18, 2003; U.S. patent application Ser. No. 10/465,022, entitled “Techniques For Displaying Information Stored In Multiple Multimedia Documents,” to Graham et al., filed on Jun. 18, 2003; U.S. patent application Ser. No. 10/174,522, to Graham, entitled “Television-Based Visualization and Navigation Interface,” filed on Jun. 17, 2002; and U.S. patent application Ser. No. 10/795,031, to Graham, entitled “Multimedia Visualization and Integration Environment,” filed Mar. 3, 2004. This application is also related to the following co-pending patent applications, each of which is hereby incorporated by reference in its entirety: U.S. Patent Application entitled “Printer Having Embedded Functionality for Printing Time-Based Media,” to Hart et al., filed Mar. 30, 2004; U.S. Patent Application entitled “Printer With Hardware and Software Interfaces for Peripheral Devices,” to Hart et al., filed Mar. 30, 2004; U.S. Patent Application entitled “Printer User Interface,” to Hart et al., filed Mar. 30, 2004; U.S. Patent Application entitled “User Interface for Networked Printer,” to Hart et al., filed Mar. 30, 2004; U.S. Patent Application entitled “Stand Alone Multimedia Printer With User Interface for Allocating Processing,” to Hart et al., filed Mar. 30, 2004; U.S. Patent Application entitled “Networked Printing System Having Embedded Functionality for Printing Time-Based Media,” to Hart et al., filed Mar. 30, 2004; U.S. Patent Application entitled “Printable Representations for Time-Based Media,” to Hull et al., filed on Mar. 30, 2004; and U.S. Patent Application entitled “Printing System with Embedded Audio/Video Content Recognition and Processing,” to Hull et al., filed on Mar. 30, 2004.
6502114 | Forcier | Dec 2002 | B1 |
D468277 | Sugiyama | Jan 2003 | S |
6502756 | Fåhraeus | Jan 2003 | B1 |
6504620 | Kinjo | Jan 2003 | B1 |
6505153 | Van Thong et al. | Jan 2003 | B1 |
6515756 | Mastie et al. | Feb 2003 | B1 |
6518986 | Mugura | Feb 2003 | B1 |
6519360 | Tanaka | Feb 2003 | B1 |
6529920 | Arons et al. | Mar 2003 | B1 |
6535639 | Uchihachi et al. | Mar 2003 | B1 |
6542933 | Durst, Jr. et al. | Apr 2003 | B1 |
6544294 | Greenfield et al. | Apr 2003 | B1 |
6546385 | Mao et al. | Apr 2003 | B1 |
6552743 | Rissman | Apr 2003 | B1 |
6556241 | Yoshimura et al. | Apr 2003 | B1 |
6567980 | Jain et al. | May 2003 | B1 |
6568595 | Russell et al. | May 2003 | B1 |
6581070 | Gibbon et al. | Jun 2003 | B1 |
6587859 | Dougherty et al. | Jul 2003 | B2 |
6593860 | Lai et al. | Jul 2003 | B2 |
6594377 | Kim et al. | Jul 2003 | B1 |
6596031 | Parks | Jul 2003 | B1 |
6608563 | Weston et al. | Aug 2003 | B2 |
6611276 | Muratori et al. | Aug 2003 | B1 |
6611622 | Krumm | Aug 2003 | B1 |
6611628 | Sekiguchi et al. | Aug 2003 | B1 |
6623528 | Squilla et al. | Sep 2003 | B1 |
6625334 | Shiota et al. | Sep 2003 | B1 |
6636869 | Reber et al. | Oct 2003 | B1 |
6647534 | Graham | Nov 2003 | B1 |
6647535 | Bozdagi et al. | Nov 2003 | B1 |
6651053 | Rothschild | Nov 2003 | B1 |
6654887 | Rhoads | Nov 2003 | B2 |
6665092 | Reed | Dec 2003 | B2 |
6674538 | Takahashi | Jan 2004 | B2 |
6675165 | Rothschild | Jan 2004 | B1 |
6678389 | Sun et al. | Jan 2004 | B1 |
6687383 | Kanevsky et al. | Feb 2004 | B1 |
6684368 | Philyaw et al. | Jun 2004 | B1 |
6700566 | Shimoosawa et al. | Mar 2004 | B2 |
6701369 | Philyaw | Mar 2004 | B1 |
6724494 | Danknick | Apr 2004 | B1 |
6728466 | Tanaka | Apr 2004 | B1 |
6745234 | Philyaw et al. | Jun 2004 | B1 |
6750978 | Marggraff et al. | Jun 2004 | B1 |
6752317 | Dymetman et al. | Jun 2004 | B2 |
6753883 | Schena et al. | Jun 2004 | B2 |
6760541 | Ohba | Jul 2004 | B1 |
6766363 | Rothschild | Jul 2004 | B1 |
6771283 | Carro | Aug 2004 | B2 |
6772947 | Shaw | Aug 2004 | B2 |
6774951 | Narushima | Aug 2004 | B2 |
6775651 | Lewis et al. | Aug 2004 | B1 |
6781609 | Barker et al. | Aug 2004 | B1 |
6807303 | Kim et al. | Oct 2004 | B1 |
6824044 | Lapstun et al. | Nov 2004 | B1 |
6845913 | Madding et al. | Jan 2005 | B2 |
6853980 | Ying et al. | Feb 2005 | B1 |
6856415 | Simchik et al. | Feb 2005 | B1 |
6865608 | Hunter | Mar 2005 | B2 |
6865714 | Liu et al. | Mar 2005 | B1 |
6871780 | Nygren et al. | Mar 2005 | B2 |
6877134 | Fuller et al. | Apr 2005 | B1 |
6883162 | Jackson et al. | Apr 2005 | B2 |
6886750 | Rathus et al. | May 2005 | B2 |
6892193 | Bolle et al. | May 2005 | B2 |
6898709 | Teppler | May 2005 | B1 |
6904168 | Steinberg et al. | Jun 2005 | B1 |
6904451 | Orfitelli et al. | Jun 2005 | B1 |
6923721 | Luciano et al. | Aug 2005 | B2 |
6931594 | Jun | Aug 2005 | B1 |
6938202 | Matsubayashi et al. | Aug 2005 | B1 |
6946672 | Lapstun et al. | Sep 2005 | B1 |
6950623 | Brown et al. | Sep 2005 | B2 |
6964374 | Djuknic et al. | Nov 2005 | B1 |
6966495 | Lynggaard et al. | Nov 2005 | B2 |
6983482 | Morita et al. | Jan 2006 | B2 |
6993573 | Hunter | Jan 2006 | B2 |
7000193 | Impink, Jr. et al. | Feb 2006 | B1 |
7023459 | Arndt et al. | Apr 2006 | B2 |
7031965 | Moriya et al. | Apr 2006 | B1 |
7073119 | Matsubayashi et al. | Jul 2006 | B2 |
7075676 | Owen | Jul 2006 | B2 |
7079278 | Sato | Jul 2006 | B2 |
7089420 | Durst et al. | Aug 2006 | B1 |
7092568 | Eaton | Aug 2006 | B2 |
7131058 | Lapstun | Oct 2006 | B1 |
7134016 | Harris | Nov 2006 | B1 |
7149957 | Hull et al. | Dec 2006 | B2 |
7151613 | Ito | Dec 2006 | B1 |
7152206 | Tsuruta | Dec 2006 | B1 |
7162690 | Gupta et al. | Jan 2007 | B2 |
7174151 | Lynch et al. | Feb 2007 | B2 |
7181502 | Incertis | Feb 2007 | B2 |
7196808 | Kofman et al. | Mar 2007 | B2 |
7215436 | Hull et al. | May 2007 | B2 |
7225158 | Toshikage et al. | May 2007 | B2 |
7228492 | Graham | Jun 2007 | B1 |
7260828 | Aratani et al. | Aug 2007 | B2 |
7263659 | Hull et al. | Aug 2007 | B2 |
7263671 | Hull et al. | Aug 2007 | B2 |
7266782 | Hull et al. | Sep 2007 | B2 |
7280738 | Kauffman et al. | Oct 2007 | B2 |
7298512 | Reese et al. | Nov 2007 | B2 |
7305620 | Nakajima et al. | Dec 2007 | B1 |
7313808 | Gupta et al. | Dec 2007 | B1 |
7363580 | Tabata et al. | Apr 2008 | B2 |
7647555 | Wilcox et al. | Jan 2010 | B1 |
20010003846 | Rowe et al. | Jun 2001 | A1 |
20010005203 | Wiernik | Jun 2001 | A1 |
20010013041 | Macleod et al. | Aug 2001 | A1 |
20010017714 | Komatsu et al. | Aug 2001 | A1 |
20010037408 | Thrift et al. | Nov 2001 | A1 |
20010043789 | Nishimura et al. | Nov 2001 | A1 |
20010044810 | Timmons | Nov 2001 | A1 |
20010052942 | MacCollum et al. | Dec 2001 | A1 |
20020001101 | Hamura et al. | Jan 2002 | A1 |
20020004807 | Graham et al. | Jan 2002 | A1 |
20020006100 | Cundiff, Sr. | Jan 2002 | A1 |
20020010641 | Stevens et al. | Jan 2002 | A1 |
20020011518 | Goetz et al. | Jan 2002 | A1 |
20020015066 | Siwinski et al. | Feb 2002 | A1 |
20020019982 | Aratani et al. | Feb 2002 | A1 |
20020023957 | Michaelis et al. | Feb 2002 | A1 |
20020036800 | Nozaki et al. | Mar 2002 | A1 |
20020047870 | Carro | Apr 2002 | A1 |
20020048224 | Dygert et al. | Apr 2002 | A1 |
20020051010 | Jun et al. | May 2002 | A1 |
20020059342 | Gupta et al. | May 2002 | A1 |
20020060748 | Aratani et al. | May 2002 | A1 |
20020066782 | Swaminathan et al. | Jun 2002 | A1 |
20020067503 | Hiatt | Jun 2002 | A1 |
20020070982 | Hill et al. | Jun 2002 | A1 |
20020078149 | Chang et al. | Jun 2002 | A1 |
20020085759 | Davies et al. | Jul 2002 | A1 |
20020087530 | Smith et al. | Jul 2002 | A1 |
20020087598 | Carro | Jul 2002 | A1 |
20020095460 | Benson | Jul 2002 | A1 |
20020095501 | Chiloyan et al. | Jul 2002 | A1 |
20020097426 | Gusmano et al. | Jul 2002 | A1 |
20020099452 | Kawai | Jul 2002 | A1 |
20020099534 | Hegarty | Jul 2002 | A1 |
20020101343 | Patton | Aug 2002 | A1 |
20020101513 | Halverson | Aug 2002 | A1 |
20020116575 | Toyomura et al. | Aug 2002 | A1 |
20020131071 | Parry | Sep 2002 | A1 |
20020131078 | Tsukinokizawa | Sep 2002 | A1 |
20020134699 | Bradfield et al. | Sep 2002 | A1 |
20020135800 | Dutta | Sep 2002 | A1 |
20020135808 | Parry | Sep 2002 | A1 |
20020137544 | Myojo | Sep 2002 | A1 |
20020140993 | Silverbrook | Oct 2002 | A1 |
20020159637 | Echigo et al. | Oct 2002 | A1 |
20020163653 | Struble et al. | Nov 2002 | A1 |
20020165769 | Ogaki et al. | Nov 2002 | A1 |
20020169849 | Schroath | Nov 2002 | A1 |
20020171857 | Hisatomi et al. | Nov 2002 | A1 |
20020185533 | Shieh et al. | Dec 2002 | A1 |
20020199149 | Nagasaki et al. | Dec 2002 | A1 |
20030002068 | Constantin et al. | Jan 2003 | A1 |
20030007776 | Kameyama et al. | Jan 2003 | A1 |
20030014615 | Lynggaard | Jan 2003 | A1 |
20030024975 | Rajasekharan | Feb 2003 | A1 |
20030025951 | Pollard et al. | Feb 2003 | A1 |
20030038971 | Renda | Feb 2003 | A1 |
20030046241 | Toshikage et al. | Mar 2003 | A1 |
20030051214 | Graham et al. | Mar 2003 | A1 |
20030052897 | Lin | Mar 2003 | A1 |
20030065665 | Kinjo | Apr 2003 | A1 |
20030065925 | Shindo et al. | Apr 2003 | A1 |
20030076521 | Li et al. | Apr 2003 | A1 |
20030084462 | Kubota et al. | May 2003 | A1 |
20030088582 | Pflug | May 2003 | A1 |
20030093384 | Durst et al. | May 2003 | A1 |
20030110926 | Sitrick et al. | Jun 2003 | A1 |
20030117652 | Lapstun | Jun 2003 | A1 |
20030121006 | Tabata et al. | Jun 2003 | A1 |
20030128877 | Nicponski | Jul 2003 | A1 |
20030130952 | Bell et al. | Jul 2003 | A1 |
20030146927 | Crow et al. | Aug 2003 | A1 |
20030156589 | Suetsugu | Aug 2003 | A1 |
20030160898 | Baek et al. | Aug 2003 | A1 |
20030163552 | Savitzky et al. | Aug 2003 | A1 |
20030164898 | Imai | Sep 2003 | A1 |
20030177240 | Gulko et al. | Sep 2003 | A1 |
20030184598 | Graham | Oct 2003 | A1 |
20030187642 | Ponceleon et al. | Oct 2003 | A1 |
20030218597 | Hodzic | Nov 2003 | A1 |
20030220988 | Hymel | Nov 2003 | A1 |
20030231198 | Janevski | Dec 2003 | A1 |
20040006577 | Rix | Jan 2004 | A1 |
20040008209 | Adams et al. | Jan 2004 | A1 |
20040015524 | Chalstrom et al. | Jan 2004 | A1 |
20040024643 | Pollock et al. | Feb 2004 | A1 |
20040036842 | Tsai et al. | Feb 2004 | A1 |
20040037540 | Frohlich et al. | Feb 2004 | A1 |
20040039723 | Lee et al. | Feb 2004 | A1 |
20040044894 | Lofgren et al. | Mar 2004 | A1 |
20040049681 | Diehl et al. | Mar 2004 | A1 |
20040064207 | Zacks et al. | Apr 2004 | A1 |
20040064338 | Shiota et al. | Apr 2004 | A1 |
20040064339 | Shiota et al. | Apr 2004 | A1 |
20040071441 | Foreman et al. | Apr 2004 | A1 |
20040090462 | Graham | May 2004 | A1 |
20040100506 | Shiota et al. | May 2004 | A1 |
20040118908 | Ando et al. | Jun 2004 | A1 |
20040125402 | Kanai et al. | Jul 2004 | A1 |
20040128514 | Rhoads | Jul 2004 | A1 |
20040128613 | Sinisi | Jul 2004 | A1 |
20040143459 | Engleson et al. | Jul 2004 | A1 |
20040143602 | Ruiz et al. | Jul 2004 | A1 |
20040150627 | Luman et al. | Aug 2004 | A1 |
20040156616 | Strub et al. | Aug 2004 | A1 |
20040167895 | Carro | Aug 2004 | A1 |
20040181747 | Hull et al. | Sep 2004 | A1 |
20040181815 | Hull et al. | Sep 2004 | A1 |
20040184064 | Takeda et al. | Sep 2004 | A1 |
20040207876 | Aschenbrenner et al. | Oct 2004 | A1 |
20040215470 | Bodin | Oct 2004 | A1 |
20040229195 | Marggraff et al. | Nov 2004 | A1 |
20040240541 | Chadwick et al. | Dec 2004 | A1 |
20040240562 | Bargeron et al. | Dec 2004 | A1 |
20040247298 | Ohba | Dec 2004 | A1 |
20040249650 | Freedman et al. | Dec 2004 | A1 |
20050038794 | Piersol | Feb 2005 | A1 |
20050064935 | Blanco | Mar 2005 | A1 |
20050068569 | Hull et al. | Mar 2005 | A1 |
20050068581 | Hull et al. | Mar 2005 | A1 |
20050083413 | Reed et al. | Apr 2005 | A1 |
20050125717 | Segal et al. | Jun 2005 | A1 |
20050149849 | Graham et al. | Jul 2005 | A1 |
20050213153 | Hull et al. | Sep 2005 | A1 |
20050216838 | Graham | Sep 2005 | A1 |
20050216852 | Hull et al. | Sep 2005 | A1 |
20050223322 | Graham et al. | Oct 2005 | A1 |
20050225781 | Koizumi | Oct 2005 | A1 |
20050262437 | Patterson et al. | Nov 2005 | A1 |
20060013478 | Ito et al. | Jan 2006 | A1 |
20060043193 | Brock | Mar 2006 | A1 |
20060092450 | Kanazawa et al. | May 2006 | A1 |
20060136343 | Coley et al. | Jun 2006 | A1 |
20060171559 | Rhoads | Aug 2006 | A1 |
20060250585 | Anderson et al. | Nov 2006 | A1 |
20070033419 | Kocher et al. | Feb 2007 | A1 |
20070065094 | Chien et al. | Mar 2007 | A1 |
20070109397 | Yuan et al. | May 2007 | A1 |
20070162858 | Hurley et al. | Jul 2007 | A1 |
20070168426 | Ludwig et al. | Jul 2007 | A1 |
20070234196 | Nicol et al. | Oct 2007 | A1 |
20070268164 | Lai et al. | Nov 2007 | A1 |
20080037043 | Hull et al. | Feb 2008 | A1 |
20080246757 | Ito | Oct 2008 | A1 |
Foreign Patent Documents
Number | Date | Country
---|---|---
2386829 | Nov 2002 | CA |
1352765 | Jun 2002 | CN |
1097394 | Dec 2002 | CN |
248 403 | Dec 1987 | EP |
378 848 | Jul 1990 | EP |
459 174 | Dec 1991 | EP |
0 651 556 | May 1995 | EP
737 927 | Oct 1996 | EP |
459 174 | Nov 1996 | EP |
0 743 613 | Nov 1996 | EP
762 297 | Mar 1997 | EP |
802 492 | Oct 1997 | EP |
1 001 605 | May 2000 | EP |
1 079 313 | Feb 2001 | EP
1 133 170 | Sep 2001 | EP
788 064 | Jan 2003 | EP |
788 063 | Oct 2005 | EP |
2 137 788 | Oct 1984 | GB |
2 156 118 | Oct 1985 | GB |
2 234 609 | Jun 1991 | GB |
2 290 898 | Jan 1996 | GB |
2 331 378 | May 1999 | GB |
60-046653 | Mar 1985 | JP |
04-225670 | Aug 1992 | JP |
05-101484 | Apr 1993 | JP |
06-124502 | May 1994 | JP |
H07-284033 | Oct 1995 | JP |
08-069419 | Mar 1996 | JP
08-297677 | Nov 1996 | JP
H09-037180 | Feb 1997 | JP |
H10-049761 | Feb 1998 | JP |
10-126723 | May 1998 | JP |
H11-341423 | Dec 1999 | JP |
2000-516006 | Nov 2000 | JP |
2001-176246 | Jun 2001 | JP |
2003-87458 | Mar 2003 | JP |
2003-513564 | Apr 2003 | JP |
2003-514318 | Apr 2003 | JP |
WO 98/06098 | Feb 1998 | WO
WO 99/18523 | Apr 1999 | WO
WO 00/73875 | Dec 2000 | WO
WO 02/082316 | Oct 2002 | WO
Related Publications
Number | Date | Country
---|---|---
20040181747 A1 | Sep 2004 | US |
Provisional Applications
Number | Date | Country
---|---|---
60506303 | Sep 2003 | US
60506206 | Sep 2003 | US