A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present invention relates generally to data processing. More particularly, this invention relates to processing metadata.
Modern data processing systems, such as general purpose computer systems, allow the users of such systems to create a variety of different types of data files. For example, a typical user of a data processing system may create text files with a word processing program such as Microsoft Word or may create an image file with an image processing program such as Adobe's PhotoShop. Numerous other types of files can be created, modified, edited, and otherwise used by one or more users of a typical data processing system. The large number of different types of files that can be created or modified can present a challenge to a typical user who is seeking to find a particular file which has been created.
Modern data processing systems often include a file management system which allows a user to place files in various directories or subdirectories (e.g. folders) and allows a user to give the file a name. Further, these file management systems often allow a user to find a file by searching for the file's name, or the date of creation, or the date of modification, or the type of file. An example of such a file management system is the Finder program which operates on Macintosh computers from Apple Computer, Inc. of Cupertino, Calif. Another example of a file management system program is the Windows Explorer program which operates on the Windows operating system from Microsoft Corporation of Redmond, Wash. Both the Finder program and the Windows Explorer program include a find command which allows a user to search for files by various criteria including a file name or a date of creation or a date of modification or the type of file. However, this search capability searches through information which is the same for each file, regardless of the type of file. Thus, for example, the searchable data for a Microsoft Word file is the same as the searchable data for an Adobe PhotoShop file, and this data typically includes the file name, the type of file, the date of creation, the date of last modification, the size of the file and certain other parameters which may be maintained for the file by the file management system.
Certain presently existing application programs allow a user to maintain data about a particular file. This data about a particular file may be considered metadata because it is data about other data. This metadata for a particular file may include information about the author of a file, a summary of the document, and various other types of information. A program such as Microsoft Word may automatically create some of this data when a user creates a file, and the user may add additional data or edit the data by selecting the “property sheet” from a menu selection in Microsoft Word. The property sheets in Microsoft Word allow a user to create metadata for a particular file or document. However, in existing systems, a user is not able to search for metadata across a variety of different applications using one search request from the user. Furthermore, existing systems can perform one search for data files, but this search does not also include searching through metadata for those files. Further, the metadata associated with a file is typically limited to the standardized metadata or the content of the file.
Methods and apparatuses for processing metadata are described herein. In one embodiment, when a file (e.g., a text, audio, and/or image file) having metadata is received, the metadata and optionally at least a portion of the content of the file are extracted from the file to generate a first set of metadata. An analysis is performed on the extracted metadata and the content to generate a second set of metadata, which may include metadata in addition to the first set of metadata. The second set of metadata may be stored in a database suitable to be searched to identify or locate the file.
According to certain embodiments of the invention, the metadata that can be searched, for example, to locate or identify a file, may include additional metadata generated based on the original metadata associated with the file and/or at least a portion of content of the file, which may not exist in the original metadata and/or content of the file. In one embodiment, the additional metadata may be generated via an analysis performed on the original metadata and/or at least a portion of the content of the file. The additional metadata may capture a higher level concept or broader scope information regarding the content of the file.
Other features of the present invention will be apparent from the accompanying drawings and from the detailed description which follows.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
Methods and apparatuses for processing metadata are described herein. In the following description, numerous details are set forth to provide a more thorough explanation of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment.
As shown in
It will be apparent from this description that aspects of the present invention may be embodied, at least in part, in software. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM 107, RAM 105, mass storage 106 or a remote storage device. In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the present invention. Thus, the techniques are not limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system. In addition, throughout this description, various functions and operations are described as being performed by or caused by software code to simplify description. However, those skilled in the art will recognize that what is meant by such expressions is that the functions result from execution of the code by a processor, such as the microprocessor 103.
Capturing and Use of Metadata Across a Variety of Application Programs
The method of
The method of
One particular field which may be useful in the various metadata formats would be a field which includes an identifier of a plug in or other software element which may be used to capture metadata from a data file and/or export metadata back to the creator application.
Various different software architectures may be used to implement the functions and operations described herein. The following discussion provides one example of such an architecture, but it will be understood that alternative architectures may also be employed to achieve the same or similar results. The software architecture shown in
The software architecture 400 also includes a file system directory 417 for the metadata. This file system directory keeps track of the relationship between the data files and their metadata and keeps track of the location of the metadata object (e.g. a metadata file which corresponds to the data file from which it was extracted) created by each importer. In one exemplary embodiment, the metadata database is maintained as a flat file format as described below, and the file system directory 417 maintains this flat file format. One advantage of a flat file format is that the data is laid out on a storage device as a string of data without references between fields from one metadata file (corresponding to a particular data file) to another metadata file (corresponding to another data file). This arrangement of data will often result in faster retrieval of information from the metadata database 415.
The software architecture 400 of
The method of
It will be appreciated that the notification, if done through the OS kernel, is a global, system wide notification process such that changes to any file will cause a notification to be sent to the metadata processing software. It will also be appreciated that in alternative embodiments, each application program may itself generate the necessary metadata and provide the metadata directly to a metadata database without the requirement of a notification from an operating system kernel or from the intervention of importers, such as the importers 413. Alternatively, rather than using OS kernel notifications, an embodiment may use software calls from each application to a metadata processing software which receives these calls and then imports the metadata from each file in response to the call.
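The notification-driven flow described above can be sketched in simplified form. This is an illustrative sketch only; the class and function names are hypothetical stand-ins for the OS kernel notification, the metadata processing software, the importers, and the metadata database.

```python
# Hypothetical sketch: a file-change notification causes the metadata
# processing software to (re-)import metadata for the changed file.

class MetadataStore:
    """Stands in for the metadata database; maps file paths to metadata."""
    def __init__(self):
        self.records = {}

    def update(self, path, metadata):
        self.records[path] = metadata

class MetadataProcessor:
    """Receives change notifications and invokes an importer for the file."""
    def __init__(self, store, importer):
        self.store = store
        self.importer = importer

    def on_file_changed(self, path):
        # Global, system-wide notifications would arrive here (e.g., from
        # an OS kernel, or from per-application software calls).
        self.store.update(path, self.importer(path))

def toy_importer(path):
    # Hypothetical importer: derives trivial metadata from the path alone.
    return {"name": path.rsplit("/", 1)[-1], "kind": path.rsplit(".", 1)[-1]}

store = MetadataStore()
processor = MetadataProcessor(store, toy_importer)
processor.on_file_changed("/docs/report.pdf")
```

In this sketch the processor is decoupled from any particular importer, mirroring the alternative embodiments above in which notifications, direct calls, or application-generated metadata can all feed the same database.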
As noted above, the metadata database 415 may be stored in a flat file format in order to improve the speed of retrieval of information in most circumstances. The flat file format may be considered a non-B-tree, non-hash-tree format in which no attempt is made to organize the data; rather, the data is stored as a stream of data. Each metadata object or metadata file will itself contain fields, such as the fields shown in the examples of
A flexible query language may be used to search the metadata database in the same way that such query languages are used to search other databases. The data within each metadata file may be packed or even compressed if desirable. As noted above, each metadata file, in certain embodiments, will include a persistent identifier which uniquely identifies its corresponding data file. This identifier remains the same even if the name of the file is changed or the file is modified. This allows for the persistent association between the particular data file and its metadata.
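The flat-file layout and query behavior described above can be illustrated with a simplified sketch. The record fields, separators, and query function are hypothetical, showing only the idea of records laid out as one stream of data without cross-references, each carrying a persistent identifier, and scanned by a simple query.

```python
# Hypothetical flat-file metadata store: records serialized back-to-back
# as a string of data, with no trees, hashes, or inter-record references.

records = [
    {"id": "A1B2", "name": "budget.xls", "kind": "spreadsheet", "author": "Pat"},
    {"id": "C3D4", "name": "photo.jpg", "kind": "image", "author": "Lee"},
]

def serialize(recs):
    # Flat layout: one record per line, fields packed as key=value pairs.
    return "\n".join(";".join(f"{k}={v}" for k, v in r.items()) for r in recs)

def query(flat, field, value):
    # A simple scan of the stream; a real system would expose a richer
    # query language on top of this storage.
    hits = []
    for line in flat.splitlines():
        rec = dict(kv.split("=", 1) for kv in line.split(";"))
        if rec.get(field) == value:
            hits.append(rec)
    return hits

flat = serialize(records)
hits = query(flat, "author", "Lee")
```

The "id" field plays the role of the persistent identifier: renaming "photo.jpg" would change the "name" field but leave "C3D4" intact, preserving the association between the data file and its metadata.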
User Interface Aspects
Various different examples of user interfaces for inputting search parameters and for displaying search results are provided herein. It will be understood that some features from certain embodiments may be mixed with other embodiments such that hybrid embodiments may result from these combinations. It will be appreciated that certain features may be removed from each of these embodiments and still provide adequate functionality in many instances.
The combination of text entry region 709 and the search parameter menu bar allows a user to specify a search query or search parameters. Each of the configurable pull down menus presents a user with a list of options to select from when the user activates the pull down menu. As shown in
It will also be appreciated that the various options in the pull down menus may depend upon the fields within a particular type of metadata file. For example, the selection of “images” to be searched may cause the various fields present in the metadata for an image type file to appear in one or more pull down menus, allowing the user to search within one or more of those fields for that particular type of file. Other fields which do not apply to “images” types of files may not appear in these menus in order to reduce the complexity of the menus and to prevent user confusion.
Another feature of the present invention is shown in
The window 1001 includes an additional feature which may be very useful while analyzing a search result. A user may select individual files from within the display region 1005 and associate them together as one collection. Each file may be individually marked using a specific command (e.g. pressing the right button on a mouse and selecting a command from a menu which appears on the screen, which command may be “add selection to current group”) or similar such commands. By individually selecting such files or by selecting a group of files at once, the user may associate this group of files into a selected group or a “marked” group and this association may be used to perform a common action on all of the files in the group (e.g. print each file or view each file in a viewer window or move each file to a new or existing folder, etc.). A representation of this marked group appears as a folder in the user-configurable portion 1003A. An example of such a folder is the folder 1020 shown in the user-configurable portion 1003A. By selecting this folder (e.g. by positioning a cursor over the folder 1020 and pressing and releasing a mouse button or by pressing another button) the user, as a result of this selection, will cause the display within the display region 1005 of the files which have been grouped together or marked. Alternatively, a separate window may appear showing only the items which have been marked or grouped. This association or grouping may be merely temporary or it may be made permanent by retaining a list of all the files which have been grouped and by keeping a folder 1020 or other representations of the grouping within the user-configurable side bar, such as the side bar 1003A. Certain embodiments may allow multiple, different groupings to exist at the same time, and each of these groupings or associations may be merely temporary (e.g. they exist only while the search results window is displayed), or they may be made permanent by retaining a list of all the files which have been grouped within each separate group. It will be appreciated that the files within each group may have been created from different applications. As noted above, one of the groupings may be selected and then a user may select a command which performs a common action (e.g. print or view or move or delete) on all of the files within the selected group.
The window 1201 shown in
A column 1211 of window 1201 allows a user to select various search parameters by selecting one of the options which in turn causes the display of a submenu that corresponds to the selected option. In the case of
The window 1301 shown in
The search results user interface shown in
It will be appreciated that this method may employ various alternatives. For example, a window may appear after the command option 2232 or 2233 has been selected, and this window asks for a name for the new folder. This window may display a default name (e.g. “new folder”) in case the user does not enter a new name. Alternatively, the system may merely give the new folder or new storage facility a default path name. Also, the system may merely create the new folder and move or copy the items into the new folder without showing the new window as shown in
Exemplary Processes for Generating Metadata
According to certain embodiments of the invention, the metadata that can be searched, for example, to locate or identify a file, may include additional metadata generated based on the original metadata associated with the file and/or at least a portion of content of the file, which may not exist in the original metadata and/or content of the file. In one embodiment, the additional metadata may be generated via an analysis performed on the original metadata and/or at least a portion of the content of the file. The additional metadata may capture a higher level concept or broader scope information regarding the content of the file.
For example, according to one embodiment, if a text file or a word processing document contains first metadata or content of “Nike” (e.g., a shoe designer), “Callaway” (e.g., a golf club designer), and “Tiger Woods” (e.g., a professional golf player), based on an analysis of these terms, the additional metadata (e.g., second metadata) generated may include “Golf” and/or “PGA Tournament”, etc., although the additional metadata may not exist in the file's content or metadata. Subsequently, when a search is conducted on a term such as “golf”, the file may be identified since the file contains the first metadata (e.g., “Nike”, “Callaway”, and “Tiger Woods”) likely related to the term being searched (e.g., golf). As a result, although a user searches for a term that is not contained in the file, the file may still be identified as a part of a search result because the file contains certain terms that are considered related to the term being searched based on an analysis.
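The example above can be sketched as a simple related-term expansion. The table of related terms is hypothetical; a real implementation would derive such associations via the semantic analysis techniques described later rather than from a hand-written table.

```python
# Hypothetical sketch: first metadata extracted from a file is expanded into
# second metadata via a related-term table, so a search for "golf" can match
# a file that never mentions the word.

related_terms = {
    "nike": {"golf", "sportswear"},
    "callaway": {"golf"},
    "tiger woods": {"golf", "pga tournament"},
}

def derive_second_metadata(first_metadata):
    derived = set()
    for term in first_metadata:
        derived |= related_terms.get(term.lower(), set())
    return derived

first = ["Nike", "Callaway", "Tiger Woods"]
second = derive_second_metadata(first)

# A search on "golf" now identifies the file through its derived metadata.
match = "golf" in second
```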
In at least certain embodiments, a file is analyzed algorithmically in order to derive or generate metadata for the file and this metadata is added to a metadata database, such as metadata database 415 of
In one embodiment, exemplary system 2300 includes, but is not limited to, a metadata importer to extract at least a portion of content and metadata from a file to generate (e.g., import) a first set of metadata, and a metadata analyzer coupled to the metadata importer to perform a metadata analysis on the first set of metadata to generate a second set of metadata. In certain embodiments, a content analyzer may analyze the content (e.g. text) of a file to generate metadata which is added to a metadata database, such as metadata database 2303. The second set of metadata may include at least one metadata that is not included in the first set of metadata, where the second set of metadata is suitable to be searched to identify or locate the file.
Referring to
In one embodiment, the metadata importer 2302 receives a file containing metadata associated with the file and extracts at least a portion of the metadata and content of the file to generate a first metadata set 2305. File 2304 may be one of various types of files including, but not limited to, the following types of files:
Plain text, rich text format (RTF) files
JPEG, PNG, TIFF, EXIF, and/or GIF images
MP3 and/or AAC audio files
QuickTime movies
Portable document format (PDF) files
Word and/or spreadsheet documents (e.g., Microsoft Word or Excel documents)
Chat documents (e.g., iChat transcripts)
Email messages (e.g., Apple Xmail)
Address Book contacts
Calendar files (e.g., iCal calendar files)
Video clips (e.g., QuickTime movies)
In one embodiment, metadata importer 2302 may be an importer dedicated to importing certain types of documents. Metadata importer 2302 may be a third party application or driver that is dedicated to importing the metadata from a particular file format produced by the third party application or driver. For example, metadata importer 2302 may be a PDF metadata importer that is dedicated to importing metadata from a PDF file, and the metadata importer 2302 may be designed and/or provided by a PDF file designer (e.g., Adobe Systems) or its partners. The metadata importer 2302 may be communicatively coupled to the metadata analysis module via an API (application programming interface), for example, as a plug-in application or driver.
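The idea of dedicated importers can be sketched as a registry that dispatches on file type. The registry and importer functions below are hypothetical stand-ins for plug-in importers that would couple to the metadata analysis module through an API.

```python
# Hypothetical sketch of dedicated per-type metadata importers. Each
# importer function stands in for a plug-in (possibly third-party) that
# understands one file format.

def pdf_importer(path):
    # A dedicated PDF importer would parse the PDF's document info here.
    return {"kind": "pdf", "source": path}

def jpeg_importer(path):
    # A dedicated image importer would read EXIF fields here.
    return {"kind": "jpeg", "source": path}

IMPORTERS = {".pdf": pdf_importer, ".jpg": jpeg_importer, ".jpeg": jpeg_importer}

def import_metadata(path):
    # Dispatch to the importer registered for the file's type.
    for suffix, importer in IMPORTERS.items():
        if path.lower().endswith(suffix):
            return importer(path)
    raise ValueError(f"no importer registered for {path}")

meta = import_metadata("report.PDF")
```

Registering importers in a table like this mirrors the plug-in arrangement described above: support for a new file format is added by registering one more entry rather than by changing the metadata processing software itself.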
In response to the first metadata set 2305, according to one embodiment, the metadata analysis module 2301 performs an analysis (e.g., a semantic analysis) on the first metadata set 2305 and generates additional metadata, a second metadata set 2306. At least a portion of the first and/or second metadata sets may then be stored in the metadata database 2303 in a manner suitable to be searched to identify or locate the file subsequently, using one of the techniques described above. In addition to generating a second metadata set, or as an alternative to generating the second metadata set, a content analyzer may analyze the content of a file and generate metadata which is added to metadata database 2303.
In one embodiment, the metadata analysis module 2301 may perform the analysis using a variety of analytical techniques including, but not limited to, the following techniques:
Latent semantic analysis (LSA)
Tokenization
Stemming
Concept extraction
Spectrum analysis and/or filtering
Optical character recognition (OCR)
Voice recognition (also referred to as speech-to-text operations)
Latent semantic analysis (LSA) is a statistical model of word usage that permits comparisons of the semantic similarity between pieces of textual information. LSA was originally designed to improve the effectiveness of information retrieval (IR) methods by performing retrieval based on the derived “semantic” content of words in a query as opposed to performing direct word matching. This approach avoids some of the problems of synonymy, in which different words can be used to describe the same semantic concept.
The primary assumption of LSA is that there is some underlying or “latent” structure in the pattern of word usage across documents, and that statistical techniques can be used to estimate this latent structure. The term “document” in this case can be thought of as a context in which words occur; a document could also be a smaller text segment such as an individual paragraph or sentence. Through an analysis of the associations among words and documents, the method produces a representation in which words that are used in similar contexts will be more semantically associated.
Typically, in order to analyze a text, LSA first generates a matrix of occurrences of each word in each document (e.g., sentences or paragraphs). LSA then uses singular-value decomposition (SVD), a technique closely related to eigenvector decomposition and factor analysis. The SVD scaling decomposes the word-by-document matrix into a set of k, typically ranging from 100 to 300, orthogonal factors from which the original matrix can be approximated by linear combination. Instead of representing documents and terms directly as vectors of independent words, LSA represents them as continuous values on each of the k orthogonal indexing dimensions derived from the SVD analysis. Since the number of factors or dimensions is much smaller than the number of unique terms, words will not be independent. For example, if two terms are used in similar contexts (documents), they will have similar vectors in the reduced-dimensional LSA representation. One advantage of this approach is that matching can be done between two pieces of textual information, even if they have no words in common.
For example, to illustrate this, suppose LSA was trained on a large number of documents, including the following two:
1) The U.S.S. Nashville arrived in Colon harbor with 42 marines
2) With the warship in Colon harbor, the Colombian troops withdrew.
The vector for the word “warship” would be similar to that of the word “Nashville” because both words occur in the same context of other words such as “Colon” and “harbor”. Thus, the LSA technique automatically captures deeper associative structure than simple term-term correlations and clusters. One can interpret the analysis performed by SVD geometrically. The result of the SVD is a k-dimensional vector space containing a vector for each term and each document. The location of term vectors reflects the correlations in their usage across documents. Similarly, the location of document vectors reflects correlations in the terms used in the documents. In this space the cosine or dot product between vectors corresponds to their estimated semantic similarity. Thus, by determining the vectors of two pieces of textual information, the semantic similarity between them can be determined.
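The LSA procedure can be sketched with a toy version of the same two-document example. The corpus here is far too small for real use (k is reduced to 1, rather than the typical 100 to 300), but the sketch shows the mechanics: build the word-by-document occurrence matrix, apply SVD, keep k factors, and compare term vectors by cosine, so that "warship" and "Nashville" come out similar even though they never co-occur.

```python
# Toy LSA sketch over the two example sentences (numbers and punctuation
# dropped); illustrative only, not a production implementation.
import numpy as np

docs = [
    "the uss nashville arrived in colon harbor with marines",
    "with the warship in colon harbor the colombian troops withdrew",
]

# Word-by-document matrix of raw occurrence counts.
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for d in docs] for w in vocab],
                  dtype=float)

# Singular-value decomposition; keep k orthogonal factors (k=1 here,
# typically 100-300 on a real corpus).
u, s, vt = np.linalg.svd(counts, full_matrices=False)
k = 1
term_vectors = u[:, :k] * s[:k]  # each row: one term in the reduced space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "warship" and "nashville" share no document, yet their reduced vectors
# align because they occur amid the same context words (colon, harbor, ...).
sim = cosine(term_vectors[vocab.index("warship")],
             term_vectors[vocab.index("nashville")])
```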
Tokenization is a process of converting a string of characters into a list of words and other significant elements. In most cases, morphological variants of words have similar semantic interpretations and can be considered equivalent for the purpose of IR applications. For this reason, a number of so-called stemming algorithms, or stemmers, have been developed, which attempt to reduce a word to its stem or root form. Thus, the key terms of a query or document are represented by stems rather than by the original words. This not only means that different variants of a term can be conflated to a single representative form, but it also reduces the dictionary size (e.g., the number of distinct terms needed for representing a set of documents). A smaller dictionary size results in a saving of storage space and processing time.
At least for certain embodiments, it does not usually matter whether the stems generated are genuine words or not. Thus, “computation” might be stemmed to “comput”, provided that (a) different words with the same base meaning are conflated to the same form, and (b) words with distinct meanings are kept separately. An algorithm, which attempts to convert a word to its linguistically correct root (“compute” in this case), is sometimes called a lemmatiser. Examples of stemming algorithms may include, but are not limited to, Paice/Husk, Porter, Lovins, Dawson, and Krovetz stemming algorithms. Typically, once the sentences of a document have been processed into a list of words, LSA may be applied to the words.
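Tokenization followed by stemming can be sketched as below. The suffix list is illustrative and is not one of the named stemming algorithms; it simply shows morphological variants conflating to a single stem such as "comput", which, as noted, need not be a genuine word.

```python
# Illustrative tokenizer plus crude suffix-stripping stemmer (not a
# Porter/Lovins/etc. implementation).
import re

def tokenize(text):
    # Convert a string of characters into a list of lowercase word tokens.
    return re.findall(r"[a-z]+", text.lower())

SUFFIXES = ["ations", "ation", "ing", "ed", "es", "s"]

def stem(word):
    # Strip the longest matching suffix, keeping a minimum root length;
    # the resulting stems are not guaranteed to be genuine words.
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 3:
            return word[: -len(suffix)]
    return word

tokens = [stem(t) for t in tokenize("Computation and computing are computed")]
```

Here "computation", "computing", and "computed" all conflate to "comput", so a query on any one variant matches documents containing the others; LSA can then be applied to the stemmed word list.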
Concept extraction is a technique used for mining textual information based on finding common “themes” or “concepts” in a given document. The concept extraction technique may also be used to extract some semantic features from raw text, which are then linked together in a structure which represents the text's thematic content. The concept extraction technique may be combined with LSA to analyze the first metadata set and generate the second metadata set, according to one embodiment.
A spectrum normally refers to a range of color or a function of frequency or wavelength, which may represent electromagnetic energy. The word spectrum also takes on the obvious analogous meaning in reference to other sorts of waves, such as sound waves, or other sorts of decomposition into frequency components. Thus, a spectrum is usually a two-dimensional plot of a compound signal, depicting its components against another measure. For example, a frequency spectrum is the result of a Fourier-related transform of a mathematical function into the frequency domain.
According to one embodiment, spectrum analysis/filtering may be used to analyze an image and/or an audio signal in the frequency domain, in order to separate certain components from the image and/or audio sound. For example, spectrum analysis/filtering may be used to separate different colors from the image or different tones of the audio, etc. The spectrum analysis may be used to determine the type of music or sound for an audio file or the type of picture for an image file.
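Spectrum analysis of an audio signal can be sketched with a Fourier transform: a signal containing two pure tones is moved into the frequency domain, where the two components separate cleanly. The sampling rate and tone frequencies below are arbitrary illustrative choices.

```python
# Illustrative spectrum analysis: synthesize two tones, transform to the
# frequency domain, and recover the component frequencies.
import numpy as np

rate = 1000                      # samples per second (illustrative)
t = np.arange(rate) / rate       # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Real-input FFT gives the magnitude of each frequency component.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)

# The two strongest components recover the tones present in the signal.
top = sorted(freqs[np.argsort(spectrum)[-2:]])
```

In a metadata pipeline, the dominant components found this way could feed further classification (e.g., characterizing the type of music or sound), rather than being reported directly.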
In addition to spectrum analysis/filtering performed on an image, according to one embodiment, OCR (optical character recognition) may be used to recognize any text within the image. OCR is the recognition of printed or written text or characters by a computer. This involves photo scanning of the text character-by-character, analysis of the scanned-in image, and then translation of the character image into character codes, such as ASCII (American Standard Code for Information Interchange), commonly used in data processing. During OCR processing, the scanned-in image or bitmap is analyzed for light and dark areas in order to identify each alphabetic letter or numeric digit. When a character is recognized, it is converted into an ASCII code. Special circuit boards and computer chips (e.g., digital signal processing or DSP chip) designed expressly for OCR may be used to speed up the recognition process.
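The OCR idea of analyzing light and dark areas to identify characters can be sketched with a toy template matcher. The 3x3 bitmaps and two-character alphabet are drastic simplifications of real OCR, which uses statistical classifiers over scanned bitmaps.

```python
# Toy OCR sketch: each character cell of a scanned-in bitmap (1 = dark,
# 0 = light) is matched against stored templates, yielding characters.

TEMPLATES = {
    "I": ((0, 1, 0),
          (0, 1, 0),
          (0, 1, 0)),
    "L": ((1, 0, 0),
          (1, 0, 0),
          (1, 1, 1)),
}

def recognize(cell):
    # Exact template match; real systems classify noisy, variable glyphs.
    for char, template in TEMPLATES.items():
        if cell == template:
            return char
    return "?"

scanned = [
    ((0, 1, 0), (0, 1, 0), (0, 1, 0)),   # an "I"-shaped cell
    ((1, 0, 0), (1, 0, 0), (1, 1, 1)),   # an "L"-shaped cell
]
text = "".join(recognize(c) for c in scanned)
```

Once characters are recovered (as character codes such as ASCII), the text-oriented analyses above, such as tokenization, stemming, and LSA, can be applied to the result.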
Similarly, in addition to the spectrum analysis/filtering performed on an audio, voice recognition (also referred to as speech-to-text) may be performed to recognize any text within the audio (e.g., words used in a song). Voice recognition is the field of computer science that deals with designing computer systems that can recognize spoken words and translate them into text. Once the text within an image and an audio has been extracted (e.g., via OCR and/or voice recognition), other text related analysis techniques such as LSA, etc. may be applied. Note that, throughout this application, the above techniques are described by way of examples. They are not shown by way of limitations. Other techniques apparent to one with ordinary skill in the art may also be applied.
Referring back to
In addition, certain external resources 2307 may be invoked to assist the analysis to obtain additional metadata. Some examples of the external resources 2307 may include GPS (global positioning system) services, Web services, and/or database services, etc. For example, in response to the first metadata of the example described above having the terms “Nike”, “Callaway”, and “Tiger Woods”, the metadata analysis module 2301 may access certain external resources, such as databases or Web sites over a network, to obtain additional information about the companies Nike and/or Callaway (e.g., a company's press releases or product announcements, etc.), as well as information regarding Tiger Woods (e.g., biography or world PGA ranking, etc.). At least a portion of the obtained information may be used as a part of the second metadata set 2306, which may in turn be stored in the metadata database 2303 in a manner suitable to be searched subsequently to identify or locate the file. Note that the external resources 2307 may also be used by other components of the system 2300, such as, for example, metadata importer 2302.
Furthermore, the first metadata set 2305 may be analyzed against a previously trained metadata set 2309 in order to generate the second metadata set 2306, according to one or more rules 2308. The trained metadata set 2309 may be trained against a relatively large amount of information via a training interface (not shown) by one or more users. Alternatively, the exemplary system 2300 may also include a dynamic training interface (not shown) to allow a user to categorize particular metadata, especially when the metadata analysis module 2301 cannot make a determination. The result of this user interaction may further be integrated into the trained metadata set 2309. Examples of rules 2308 may be implemented similarly to those shown in
Referring to
According to one embodiment, the metadata importer 2302 may invoke one or more of the modules 2310-2316 to generate the first metadata set 2305. According to a further embodiment, the external resources 2307 may be invoked by the metadata importer 2302.
At block 2403, a metadata analysis is performed on the first metadata set and/or at least a portion of the file content to generate a second metadata set (e.g., second metadata set 2306) in addition to the first metadata set. The analysis may be performed by metadata analysis module 2301 of
A typical file may include a set of standard metadata, such as the dates when the file was created, accessed, or modified, as well as security attributes such as whether the file is read-only, etc. In addition, each type of file may further include additional metadata specifically designed for the respective type of file. Among the types of files described above, text, image, audio, and a combination of these (e.g., a video clip) may be more popular than others. The following descriptions illustrate detailed processes for these files. However, these processes are illustrated by way of example only; other types of files may also be handled using one or more of the techniques described above.
Referring to
At block 2703, a metadata analysis is performed on the first metadata set and/or at least a portion of the file content to generate a second metadata set (e.g., second metadata set 2306) in addition to the first metadata set. The analysis may be performed by metadata analysis module 2301 of
Furthermore, one or more external resources may be invoked to determine additional information regarding the article described in the text file. For example, external GPS services may be invoked to determine a location and date/time of the event. Further, external Web services or database services may be invoked to obtain additional information regarding the companies or persons sponsoring the event, etc.
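The call-and-merge pattern for external resources can be sketched as below. The service functions are stand-ins only (no real GPS or Web service is called, and their names and return shapes are assumptions); what the example shows is enriching a metadata set with whatever the external lookups return.

```python
# Sketch: enrich a metadata set via (stubbed) external resource lookups.
def gps_service(place: str) -> dict:
    # stand-in for an external GPS/location lookup
    return {"location": (37.33, -122.03), "date": "2005-01-07"}

def web_service(sponsor: str) -> dict:
    # stand-in for an external Web or database service lookup
    return {"sponsor_info": f"profile of {sponsor}"}

def enrich(metadata: dict) -> dict:
    enriched = dict(metadata)
    if "place" in enriched:
        enriched.update(gps_service(enriched["place"]))
    if "sponsor" in enriched:
        enriched.update(web_service(enriched["sponsor"]))
    return enriched

result = enrich({"place": "Cupertino", "sponsor": "Acme"})
```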
At block 2704, some or all of the first and second metadata sets may be stored in one or more databases (e.g., metadata database 2303) in a manner (e.g., category configuration examples shown in
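Storing the first and second metadata sets in a database can be illustrated with an in-memory SQLite table. The schema below is an assumption made for the example (it is not the category configuration referenced in the figures); it simply records each metadata key/value under its originating set so both can be queried per file.

```python
# Sketch: persist first/second metadata sets per file in a simple table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metadata (file TEXT, category TEXT, key TEXT, value TEXT)")

def store(file: str, category: str, metadata: dict) -> None:
    for k, v in metadata.items():
        db.execute("INSERT INTO metadata VALUES (?, ?, ?, ?)",
                   (file, category, k, str(v)))
    db.commit()

store("notes.txt", "first", {"author": "jane"})
store("notes.txt", "second", {"topic": "concert"})
rows = db.execute(
    "SELECT category, key, value FROM metadata WHERE file = ?",
    ("notes.txt",)).fetchall()
```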
Referring to
For example, at block 2902, based on at least a portion of the metadata 2800 of
According to one embodiment, a metadata analysis may then be performed on the first metadata set and/or at least a portion of the file content to generate a second metadata set (e.g., second metadata set 2306) in addition to the first metadata set. The analysis may be performed by metadata analysis module 2301 of
Furthermore, according to one embodiment, at block 2904, one or more external resources may be invoked to determine additional information regarding the image. For example, external GPS services may be invoked to determine a location and date/time when the image was generated.
At block 2905, any text existing in the image (if any) may be recognized, for example, using OCR techniques. Thereafter, any of the text metadata processing techniques described above, such as those similar to the operations of
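The flow of block 2905 can be sketched as follows. `recognize_text` is a stub standing in for a real OCR engine (a real implementation would invoke an OCR library on the decoded image), and `text_metadata` stands in for the text processing described earlier; only the recognize-then-reuse flow is from the description above.

```python
# Sketch: recognize text in an image (stubbed OCR), then reuse the text
# metadata pipeline on the recognized text.
def recognize_text(image_bytes: bytes) -> str:
    # stand-in for OCR; a real implementation would decode the image first
    return "Grand Opening June 25"

def text_metadata(text: str) -> dict:
    # stand-in for the text processing described earlier: extract keywords
    return {"keywords": [w.lower() for w in text.split() if len(w) > 3]}

image_second_metadata = text_metadata(recognize_text(b"\x89PNG..."))
```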
At block 2906, some or all of the first and second metadata sets may be stored in one or more databases (e.g., metadata database 2303) in a manner (e.g., category configuration examples shown in
Referring to
According to one embodiment, a metadata analysis may then be performed on the first metadata set and/or at least a portion of the file content to generate a second metadata set (e.g., second metadata set 2306) in addition to the first metadata set. The analysis may be performed by metadata analysis module 2301 of
Furthermore, according to one embodiment, at block 3103, one or more external resources may be invoked to determine additional information regarding the audio. For example, external Web or database services may be invoked to determine biography information of the artist, and GPS services may be invoked to determine location and date when the audio was recorded (e.g., the location and date of the concert).
At block 3104, any text existing in the audio (if any), such as, for example, the words of a song, may be recognized, for example, using speech recognition techniques. Thereafter, any of the text metadata processing techniques described above, such as those similar to the operations of
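Block 3104 follows the same recognize-then-reuse shape as the image case. In the sketch below, `transcribe` is a stub (a real system would call a speech-recognition engine on the audio data) and the names are illustrative; the point is that once spoken or sung words are recovered as text, the existing text metadata processing applies unchanged.

```python
# Sketch: transcribe speech/lyrics in audio (stubbed), then reuse the text
# metadata pipeline on the transcript.
def transcribe(audio_bytes: bytes) -> str:
    # stand-in for speech-to-text recognition of sung or spoken words
    return "live at the summer concert"

def text_metadata(text: str) -> dict:
    # stand-in for the text processing described earlier: extract keywords
    return {"keywords": [w for w in text.split() if len(w) > 3]}

audio_second_metadata = text_metadata(transcribe(b"RIFF..."))
```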
At block 3105, some or all of the first and second metadata sets may be stored in one or more databases (e.g., metadata database 2303) in a manner (e.g., category configuration examples shown in
Note that although a text file, an image file, and an audio file have been described above, they are illustrated by way of example rather than by way of limitation. In fact, any of the above examples may be performed individually or in combination. For example, a word document may include both text and an image. Some or all of the operations involved in
Thus, methods and apparatuses for processing metadata have been described herein. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Embodiments of the present invention also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method operations. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read only memory (“ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the invention as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
This application is a continuation-in-part of U.S. patent application Ser. No. 10/877,584, filed on Jun. 25, 2004, now U.S. Pat. No. 7,730,012. This application also claims priority to co-pending U.S. Provisional Patent Application No. 60/643,087, filed on Jan. 7, 2005, which provisional application is incorporated herein by reference in its entirety; this application claims the benefit of the provisional's filing date under 35 U.S.C. §119(e). The present application hereby claims the benefit of these earlier filing dates under 35 U.S.C. §120.
Number | Name | Date | Kind |
---|---|---|---|
4270182 | Asija | May 1981 | A |
4704703 | Fenwick | Nov 1987 | A |
4736308 | Heckel | Apr 1988 | A |
4939507 | Beard et al. | Jul 1990 | A |
4985863 | Fujisawa et al. | Jan 1991 | A |
5008853 | Bly et al. | Apr 1991 | A |
5072412 | Henderson, Jr. et al. | Dec 1991 | A |
5228123 | Heckel | Jul 1993 | A |
5241671 | Reed et al. | Aug 1993 | A |
5319745 | Vinsonneau et al. | Jun 1994 | A |
5355497 | Cohen-Levy | Oct 1994 | A |
5392428 | Robins | Feb 1995 | A |
5504852 | Thompson-Rohrlich | Apr 1996 | A |
5544360 | Lewak et al. | Aug 1996 | A |
5557793 | Koerber | Sep 1996 | A |
5592608 | Weber et al. | Jan 1997 | A |
5623681 | Rivette et al. | Apr 1997 | A |
5644657 | Capps et al. | Jul 1997 | A |
5659735 | Parrish et al. | Aug 1997 | A |
5710844 | Capps et al. | Jan 1998 | A |
5761678 | Bendert et al. | Jun 1998 | A |
5828376 | Solimene et al. | Oct 1998 | A |
5832500 | Burrows | Nov 1998 | A |
5845301 | Rivette et al. | Dec 1998 | A |
5890147 | Peltonen et al. | Mar 1999 | A |
5966710 | Burrows | Oct 1999 | A |
6012053 | Pant et al. | Jan 2000 | A |
6055543 | Christensen et al. | Apr 2000 | A |
6067541 | Raju et al. | May 2000 | A |
6115717 | Mehrota et al. | Sep 2000 | A |
6119118 | Kain, III et al. | Sep 2000 | A |
6185574 | Howard et al. | Feb 2001 | B1 |
6353823 | Kumar | Mar 2002 | B1 |
6363386 | Soderberg et al. | Mar 2002 | B1 |
6370562 | Page et al. | Apr 2002 | B2 |
6374260 | Hoffert et al. | Apr 2002 | B1 |
6389412 | Light | May 2002 | B1 |
6401097 | McCotter et al. | Jun 2002 | B1 |
6408301 | Patton et al. | Jun 2002 | B1 |
6434548 | Emens et al. | Aug 2002 | B1 |
6466237 | Miyao et al. | Oct 2002 | B1 |
6473794 | Guheen et al. | Oct 2002 | B1 |
6480835 | Light | Nov 2002 | B1 |
6564225 | Brogliatti et al. | May 2003 | B1 |
6567805 | Johnson et al. | May 2003 | B1 |
6613101 | Mander et al. | Sep 2003 | B2 |
6665657 | Dibachi | Dec 2003 | B1 |
6681227 | Kojima et al. | Jan 2004 | B1 |
6704739 | Craft et al. | Mar 2004 | B2 |
6804684 | Stubler et al. | Oct 2004 | B2 |
6833865 | Fuller et al. | Dec 2004 | B1 |
6847959 | Arrouye et al. | Jan 2005 | B1 |
7069542 | Daly | Jun 2006 | B2 |
7076509 | Chen et al. | Jul 2006 | B1 |
7111021 | Lewis et al. | Sep 2006 | B1 |
7162473 | Dumais et al. | Jan 2007 | B2 |
7280956 | Cross et al. | Oct 2007 | B2 |
7408660 | Barbeau | Aug 2008 | B1 |
7437358 | Arrouye et al. | Oct 2008 | B2 |
7506111 | Hamilton | Mar 2009 | B1 |
7526812 | DeYoung | Apr 2009 | B2 |
7577692 | Corbett et al. | Aug 2009 | B1 |
7613689 | Arrouye et al. | Nov 2009 | B2 |
7617225 | Arrouye et al. | Nov 2009 | B2 |
7630971 | Arrouye et al. | Dec 2009 | B2 |
7672962 | Arrouye et al. | Mar 2010 | B2 |
7693856 | Arrouye et al. | Apr 2010 | B2 |
7730012 | Arrouye et al. | Jun 2010 | B2 |
7743035 | Chen et al. | Jun 2010 | B2 |
7774326 | Arrouye et al. | Aug 2010 | B2 |
7826709 | Moriya et al. | Nov 2010 | B2 |
7873630 | Arrouye et al. | Jan 2011 | B2 |
7908656 | Mu | Mar 2011 | B1 |
20010054042 | Watkins et al. | Dec 2001 | A1 |
20020040442 | Ishidera | Apr 2002 | A1 |
20020049738 | Epstein | Apr 2002 | A1 |
20020138820 | Daly | Sep 2002 | A1 |
20020169771 | Melmon et al. | Nov 2002 | A1 |
20020184195 | Qian | Dec 2002 | A1 |
20020184496 | Lehmeier et al. | Dec 2002 | A1 |
20030004942 | Bird | Jan 2003 | A1 |
20030018622 | Chau | Jan 2003 | A1 |
20030084087 | Berry | May 2003 | A1 |
20030088567 | Rosenfelt et al. | May 2003 | A1 |
20030088573 | Stickler | May 2003 | A1 |
20030093810 | Taniguchi | May 2003 | A1 |
20030100999 | Markowitz | May 2003 | A1 |
20030108237 | Hirata | Jun 2003 | A1 |
20030117907 | Kang | Jun 2003 | A1 |
20030122873 | Dieberger et al. | Jul 2003 | A1 |
20030122874 | Dieberger et al. | Jul 2003 | A1 |
20030135828 | Dockter et al. | Jul 2003 | A1 |
20030135840 | Szabo et al. | Jul 2003 | A1 |
20030140035 | Burrows | Jul 2003 | A1 |
20030144990 | Benelisha et al. | Jul 2003 | A1 |
20030158855 | Farnham et al. | Aug 2003 | A1 |
20030196094 | Hillis et al. | Oct 2003 | A1 |
20030200218 | Tijare et al. | Oct 2003 | A1 |
20030200234 | Koppich et al. | Oct 2003 | A1 |
20040172376 | Kobori et al. | Sep 2004 | A1 |
20040199491 | Bhatt | Oct 2004 | A1 |
20040199494 | Bhatt | Oct 2004 | A1 |
20050091487 | Cross et al. | Apr 2005 | A1 |
20050187965 | Abajian | Aug 2005 | A1 |
20050228833 | Choi et al. | Oct 2005 | A1 |
20050289109 | Arrouye et al. | Dec 2005 | A1 |
20050289111 | Tribble et al. | Dec 2005 | A1 |
20050289127 | Giampaolo et al. | Dec 2005 | A1 |
20060149781 | Blankinship | Jul 2006 | A1 |
20060190506 | Rao et al. | Aug 2006 | A1 |
20060196337 | Breebart et al. | Sep 2006 | A1 |
20070112844 | Tribble et al. | May 2007 | A1 |
20070261537 | Eronen et al. | Nov 2007 | A1 |
Number | Date | Country |
---|---|---|
WO 0146870 | Jun 2001 | WO |
WO 03060774 | Jul 2003 | WO |
WO 03090056 | Oct 2003 | WO |
Number | Date | Country | |
---|---|---|---|
20050289111 A1 | Dec 2005 | US |
Number | Date | Country | |
---|---|---|---|
60643087 | Jan 2005 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10877584 | Jun 2004 | US |
Child | 11112955 | US |