Embodiments of the subject matter described herein relate generally to a computer implemented methodology for determining whether the source of online content is a content aggregator. More particularly, embodiments of the subject matter relate to a webcrawling system that detects content aggregator sources.
The Internet is a source of much useful information. However, the Internet is also polluted with spam data and duplicated content. Many useful websites represent the legitimate source of content such as news items, articles, comments, user posts, and the like. Social network and blog sites are also a rich source of online content. A blog is a discussion or collection of information published on the Internet, typically formatted as a series of discrete entries, called posts, usually displayed in reverse chronological order so that the most recent post appears first. Webcrawlers can obtain updated information from a blog through its Rich Site Summary (RSS) feed. An RSS feed normally includes summarized text, the publication date of a post, and the name of the author. Thus, webcrawlers can analyze RSS data to characterize, index, and otherwise process blog site content (and other website content).
Marketing campaigns use information mined from the web (using, for example, a webcrawling system) to assist in meeting the needs of their customers. However, more than one-third of the content on the web is duplicated or copied content. Duplicate or near-duplicate posts are known as aggregated content, and such duplicate or near-duplicate content is often found on aggregator websites (or, simply, aggregators). Most aggregated content is generated automatically by stealing original content from legitimate sources (original sources or legitimate “republication” sources). In order to provide high quality content to end users, it is important to identify and eliminate aggregators and/or aggregated content when crawling the web.
Accordingly, it is desirable to have a computer implemented methodology for detecting the presence of aggregated online content. In addition, it is desirable to provide and maintain a system that is capable of dynamically responding to aggregated content in an efficient and effective manner. Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
A more complete understanding of the subject matter may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.
The subject matter presented here generally relates to webcrawling technology that analyzes websites, webpages, website content, blogs, and the like. The following description may refer to online content, which may be found on or in association with webpages, websites, blogs, posts, blogposts, comments, forums, or the like. These and other forms of online content (which may be targeted by content aggregators) are contemplated by this description.
A webcrawling system as described herein analyzes web content to determine whether or not the content is aggregated content. Given a very large set of posts, one challenge for an effective solution is how best to find any two posts having similar content. Moreover, given a set of posts having similar content, the system must differentiate between aggregated content and original or otherwise legitimate content.
In certain embodiments, the system analyzes the RSS information of blog posts and flags or otherwise identifies aggregated content in an appropriate manner. The reviewed RSS information may include, without limitation: summarized text, the publication date of a post, and the name of the author. If for some reason the publication date of a post is missing, the system sets the publication date of the post to the date the post was crawled.
The embodiments described herein may use a fast and efficient method for detecting whether any two posts contain similar or identical content. The systems and methods generate a content key for each individual post by combining a predefined number of short phrases, words, text, or letters from the post content. Using short phrases allows the systems and methods to catch tricky aggregators, such as those that copy only a few paragraphs or those that change common words in the original content. The systems and methods then hash the content key using, for example, the SHA-1 algorithm, and store the hashed key in a cache for fast lookup. The system assumes that two posts have similar content if their hashed keys are the same. A memory sharing model is used in certain implementations to improve the performance of the lookup service.
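This duplicate-detection scheme may be sketched in Python as follows. The sketch is illustrative only: the function names, the word-based content key, and the in-memory dictionary (standing in for the shared cache) are assumptions of this sketch, not requirements of the described embodiments.

```python
import hashlib

def content_key(text: str, num_words: int = 20) -> str:
    # Combine a predefined number of leading words from the post content.
    words = text.lower().split()
    return " ".join(words[:num_words])

def hashed_key(text: str) -> str:
    # Hash the content key (SHA-1 is one possible choice) for fast lookup.
    return hashlib.sha1(content_key(text).encode("utf-8")).hexdigest()

# Stand-in for the shared cache: hashed key -> source URL of the first sighting.
seen: dict[str, str] = {}

def is_duplicate(text: str, url: str) -> bool:
    # Two posts are assumed to have similar content if their hashed keys match.
    key = hashed_key(text)
    if key in seen:
        return True
    seen[key] = url
    return False
```

In practice, the dictionary would be replaced by the shared-memory lookup service mentioned above.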
The embodiments described herein may use a heuristic method to differentiate between original content and aggregated content. The systems and methods presented here use information associated with the post itself (such as the author, crawled date, and/or outbound links), and information from its RSS feed (such as volume and frequency of feed updates) to identify aggregated content. In practice, other detectable factors may also be considered. Note that the first-crawled post need not be the original content; there are situations where aggregated content is crawled long before the corresponding original content. Moreover, the published date of a post is not always extracted correctly for various reasons.
Turning now to the drawings,
The system 100 includes or cooperates with one or more databases 106 and one or more indices 108 that are utilized to store and index information obtained and processed by the data acquisition module 102. Although not shown in
The data acquisition module 102 may be implemented as a suitably configured module of a computing system. In this regard, the data acquisition module 102 can be realized as a software-based processing module or logical function of a host computer system. The data acquisition module 102 performs a number of conventional data acquisition and processing functions that need not be described in detail here. In addition to such conventional functionality, the data acquisition module 102 also performs certain noise filtering techniques, which are schematically depicted as a noise filtering module 110 in
The computing system 200 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the inventive subject matter presented here. Other well-known computing systems, environments, and/or configurations that may be suitable for use with the embodiments described here include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The computing system 200 and certain aspects of the exemplary aggregator detection module 112 may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, and/or other elements that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
The computing system 200 typically includes at least some form of computer readable media. Computer readable media can be any available media that can be accessed by the computing system 200 and/or by applications executed by the computing system 200. By way of example, and not limitation, computer readable media may comprise tangible and non-transitory computer storage media. Computer storage media includes volatile, nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing system 200. Combinations of any of the above should also be included within the scope of computer readable media.
Referring again to
The computing system 200 may also contain communications connection(s) 212 that allow the computing system 200 to communicate with other devices. For example, the communications connection(s) could be used to establish data communication between the computing system 200 and devices or terminals operated by developers or end users, and to establish data communication between the computing system 200 and the Internet. The communications connection(s) 212 may also be associated with the handling of communication media as defined above.
The computing system 200 may also include or communicate with various input device(s) 214 such as a keyboard, mouse or other pointing device, pen, voice input device, touch input device, etc. Although the exemplary embodiment described herein utilizes a mouse device, certain embodiments can be equivalently configured to support a trackball device, a joystick device, a touchpad device, or any type of pointing device. The computing system 200 may also include or communicate with various output device(s) 216 such as a display, speakers, printer, or the like. All of these devices are well known and need not be discussed at length here.
As mentioned above, an exemplary embodiment of the system 100 includes or cooperates with at least one processor and a suitable amount of memory that stores executable instructions that, when executed by the processor, support various data acquisition and aggregator detection functions. In this regard,
The process 300 obtains an online content item from any suitable online or web-based source (task 302) using any appropriate technique or technology. In this regard, the process 300 may utilize conventional webcrawling methodologies to acquire the content item. As used here, an online content item may be any of the following, without limitation: a blog site; a blog post; a website; a webpage; a video; a news item; a social media profile page; a user post (e.g., a post on a social media site such as FACEBOOK); a user comment; a user message (e.g., a short message such as the type generated by the TWITTER service); etc. It should be appreciated that the foregoing list is merely exemplary, and that the list is not intended to be exhaustive, restrictive, or limiting in any way. Moreover, a content item may include or be associated with corresponding RSS data, HTML code, and/or other characterizing information or data.
For purposes of this example, it is assumed that the content item is a post that includes a plurality of words (e.g., a blog post, a posted article, a user comment, or other text-based post). In certain embodiments, the process 300 is designed to disregard relatively short content items and, conversely, to only consider content items having at least a minimum number of words. Accordingly, the process 300 checks whether the content item under analysis is at least a minimum size (query task 304). For this example, the size represents the word count of the content item, and the threshold word count is within the range of about 100 to 500 words. In alternative implementations, the size could represent the sentence count, the paragraph count, the file size, or any suitable metric. If the content item does not satisfy the minimum size requirement (the “No” branch of query task 304), then the process 300 skips the aggregated content detection routine and exits without flagging or marking the content item.
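Under the word-count metric, the minimum-size gate of query task 304 reduces to a simple check. The threshold value of 100 words used below is one point in the stated range and is merely an assumption of this sketch:

```python
MIN_WORDS = 100  # assumed threshold, within the stated range of about 100 to 500

def meets_minimum_size(text: str, threshold: int = MIN_WORDS) -> bool:
    # Query task 304: content items below the word-count threshold are skipped.
    return len(text.split()) >= threshold
```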
If the content item under analysis satisfies the minimum size threshold (the “Yes” branch of query task 304), then the process 300 continues by generating a characterizing signature for the content item (task 306). As explained in more detail below with reference to
After obtaining the characterizing signature of the content item of interest, the process 300 searches for a previously-saved instance of the same characterizing signature (task 308). In practice, task 308 may search a cache memory architecture of the host system and/or any appropriate memory element to find the characterizing signature. If the characterizing signature is not found (the “No” branch of query task 310), then the process 300 assumes that the content item has not been previously analyzed or reviewed. Accordingly, the newly-generated characterizing signature is saved for future reference (task 312). The characterizing signature is preferably saved in association with the uniform resource locator (URL) of the content item and/or the data that defines and specifies the content item itself, such that the system can review and analyze the source of the content item as needed. Additional information or data related to the content item may also be saved in a manner that is linked to or otherwise associated with the saved characterizing signature (e.g., some or all of the written content itself, metadata, HTML code, HTML tags, RSS information, or the like). For the sake of processing speed and efficiency, the characterizing signature and other data associated with the content item is saved in the cache memory architecture of the host system.
The process 300 may also initialize a timeout counter and/or set a time stamp for the recently-saved characterizing signature (task 314). This timer feature can be used to designate a limited active time period for each saved signature. A limited lifespan is desirable because research has found that aggregators tend to copy and republish relatively new content, and that old content rarely appears on aggregator websites. Moreover, old content is usually removed from aggregator websites after a period of time. Thus, the use of a timeout ensures that the system does not search for old characterizing signatures that are not likely to represent aggregated content, and increases the efficiency of the cache memory architecture. After setting the timeout counter or time stamp for the saved signature, the process 300 exits.
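The save-and-timeout behavior of tasks 308, 312, and 314 may be sketched as follows. The class name, the dictionary-backed storage, and the thirty-day lifespan are illustrative assumptions; the description does not fix a particular timeout value.

```python
import time
from typing import Optional

SIGNATURE_TTL_SECONDS = 30 * 24 * 3600  # assumed 30-day lifespan

class SignatureCache:
    """In-memory stand-in for the host system's cache (tasks 308, 312, 314)."""

    def __init__(self) -> None:
        # signature -> (source URL, time the entry was saved or last reset)
        self._entries: dict[str, tuple[str, float]] = {}

    def save(self, signature: str, url: str, now: Optional[float] = None) -> None:
        # Tasks 312/314 (and 320/322): save the URL and (re)start the timer.
        self._entries[signature] = (url, time.time() if now is None else now)

    def lookup(self, signature: str, now: Optional[float] = None) -> Optional[str]:
        # Task 308: find a previously saved instance, honoring the timeout.
        now = time.time() if now is None else now
        entry = self._entries.get(signature)
        if entry is None:
            return None
        url, saved_at = entry
        if now - saved_at > SIGNATURE_TTL_SECONDS:
            # Expired: old content is unlikely to represent aggregated content.
            del self._entries[signature]
            return None
        return url
```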
If a previously-saved instance of the newly generated characterizing signature is found (the “Yes” branch of query task 310), then the process 300 assumes that the content item under analysis (or a virtually identical copy thereof) has been received and characterized before. This conclusion can be reached because each instance of the same characterizing signature is saved in association with a different URL. Thus, when the newly generated characterizing signature matches a previously saved signature, the process 300 presumes that at least one of the content items is aggregated content. Accordingly, the process 300 retrieves certain information or data for the content item that is associated with or linked to the previously-saved instance of the signature (task 316). For ease of description, the previously-considered content item will be referred to herein as the “saved” content item. Again, the saved content item is also characterized by the signature generated at task 306.
At this point, the process 300 attempts to determine which of the two content items (the newly obtained content item or the saved content item) is aggregated content. To this end, the process 300 assumes that one of the two content items is aggregated content (although in reality this assumption may not always be accurate). Consequently, after the process 300 flags one of the two content items as an aggregated content item, it flags the other content item as “original” content. Note that one content item will be flagged as “original” content relative to the flagged aggregated content item, whether or not the flagged “original” content was actually obtained from the true original source. In practice, the host system may be suitably configured such that the same content item (sourced from the same URL) is not redundantly processed. Alternatively, the process 300 may have certain safeguarding measures to handle the scenario where the content item under analysis is identical to the saved content item.
As explained in more detail below with reference to
The process 300 may update a database (e.g., the cache memory architecture of the host system) in response to the determination made at task 318. More specifically, if the process 300 determines that the new content item under analysis is the “original” content, then the memory is updated to save information related to the new content item, in association with the saved characterizing signature (task 320). In other words, the previously-saved information (corresponding to the saved content item) is replaced with new information that corresponds to the new content item. For this particular embodiment, task 320 saves the source URL of the new content item in association with the characterizing signature. In addition, the process 300 may delete the source URL of the other content item, such that the characterizing signature is no longer saved in association with the other content item. Of course, other information and data related to the new content item may also be saved at this time, including any or all of the information described above with reference to task 312. If task 320 is performed to save a new source URL for a new content item, then the timeout counter for the saved characterizing signature is reset, preferably to its initial value or state (task 322). The timeout counter is reset for the reasons explained above with reference to task 314.
In contrast, if task 318 determines that the new content item is an aggregated content item (relative to the saved content item), then task 320 need not update any records or data saved in association with the previously-saved signature. In other words, the saved content item is simply maintained as the baseline “original” content item for purposes of ongoing comparisons to other content items that have a matching signature. Moreover, the timeout counter for the previously-saved signature will not be reset. This allows the process 300 to be repeated in an ongoing manner to update the designated original content item as needed, or until the timeout counter expires.
As explained above, the process 300 generates and compares characterizing signatures of online content items to detect the presence of aggregated content. Although the particular type, format, and configuration of the characterizing signatures may vary from one implementation to another, certain preferred embodiments employ the signature generation scheme depicted in
The signature generation process 400 may begin by extracting or identifying the relevant online content of interest (task 402), which may be taken from a webpage, a blog post, a forum entry, a user comment, a published article, or the like. For this particular example, the extracted content represents the written text-based content of a post, excluding HTML tags, and excluding any “hidden” codes, metadata, or the like. Identifying the text content of interest enables the process 400 to select only a portion of the words that appear in the text content. More specifically, the selection routine may begin by selecting the text to be processed, in accordance with a defined word selection algorithm (task 404).
Although the particular word selection algorithm may vary from one embodiment to another, this example chooses an initial number of paragraphs (or sentences) from the content item. The routine selects the paragraphs that appear at the beginning of the content because research has shown that most aggregators tend to copy the beginning portion of original content. Thus, the accuracy of the process 400 is not compromised by selecting only some of the initial paragraphs from the beginning of a post. Although the exact number of paragraphs chosen at task 404 may vary to suit the needs of the given system, this example assumes that the first four paragraphs are chosen. This example also assumes that each paragraph contains at least a minimum number of sentences and/or at least a minimum number of words as needed to carry out the remainder of the process 400. If for some reason any of these baseline requirements are not met, then task 404 may follow an alternative scheme as a backup measure.
Next, the process 400 eliminates or disregards any filler words that appear in the content (task 406). More specifically, the process 400 disregards filler words that appear in the subset of sentences/paragraphs chosen at task 404. As used here, filler words are any designated words that are defined by the system such that the process 400 selectively disregards them. Although not always required, most common, ordinary, and short words can be defined as filler words. For example, common filler words may include any of the following, without limitation: the; a; for; we; and; that; is; are; of; be; to; some; from; in; on; do; all; at. After eliminating or disregarding the filler words, a set of “significant” words will remain intact for consideration. In accordance with this example, a “significant” word must satisfy the following rules: (1) the word must appear in the chosen text; and (2) the word is not eliminated as a filler word. Thus, the act of filtering out the filler words inherently results in a set of significant words, at least for this particular example.
The process 400 continues by generating and obtaining a document key from at least some of the remaining significant words (task 408). In certain embodiments, the word selection algorithm selects only a portion of the significant words to generate the document key. For this particular example, task 408 chooses, from each of the sentences/paragraphs under consideration, a leading number of the significant words (alternatively, the number of significant words taken from the paragraphs may vary from one paragraph to another). Although the exact number of significant words chosen at task 408 may vary to suit the needs of the given system, this example assumes that the five leading significant words are chosen. Of course, this example assumes that each sentence/paragraph contains at least a minimum number of significant words as needed to carry out the remainder of the process 400. If for some reason a chosen paragraph has less than five significant words, then task 408 may follow an alternative scheme as a backup measure.
The document key represents an ordered sequence of the selected significant words. The document key need not be intelligible, and it need not convey any meaningful context. As mentioned above, this example considers the leading four paragraphs of the content, removes all filler words, and then selects the leading five significant words from each paragraph. This scheme results in twenty significant words, which may be arranged in any desired order. In accordance with the simple embodiment presented here, the significant words are arranged in order of appearance. Consider the four paragraphs in the following example:
We propose the following fast and efficient method for detecting any two posts containing similar content. We generate a content key for each individual post by combining a predefined number of short phrases from its content.
Using short phrases allows us to be able to catch tricky aggregators such as those taking only a few paragraphs or changing common words in the original content.
This is a test message for a demonstration. We generate a content key for each individual post by combining a predefined number of short phrases from its content.
We developed a heuristic method to differentiate between the original and the copied posts. Note that the first crawled post is not always the original. There are situations in which the aggregators are crawled long before the original post.
From each paragraph in the above excerpt, the five leading significant words are selected; the remaining words represent filler words or significant words that are not selected for purposes of generating the document key. The document key for this example will be as follows: propose following fast efficient method using short phrases allows catch test message demonstration generate content developed heuristic method differentiate between. Note that this document key contains twenty significant words, arranged in the same order in which they appear in the four paragraphs shown above.
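A sketch of tasks 404 through 408, applied to the four example paragraphs above, reproduces this document key. Because the filler-word list given earlier is expressly non-exhaustive, a few additional short words ("us," "able," and "this") are treated as filler here so that the selection matches the excerpt; that extension is an assumption of this sketch.

```python
import re

# Filler words listed in the description, plus a few additional short words
# ("us", "able", "this") that the example excerpt also treats as filler; the
# description's filler list is expressly non-exhaustive.
FILLER = {
    "the", "a", "for", "we", "and", "that", "is", "are", "of", "be", "to",
    "some", "from", "in", "on", "do", "all", "at", "us", "able", "this",
}

def document_key(paragraphs, num_paragraphs=4, words_per_paragraph=5):
    # Tasks 404-408: consider the leading paragraphs, disregard filler words,
    # and keep the leading significant words of each paragraph, in order of
    # appearance.
    selected = []
    for paragraph in paragraphs[:num_paragraphs]:
        significant = [w for w in re.findall(r"[a-z']+", paragraph.lower())
                       if w not in FILLER]
        selected.extend(significant[:words_per_paragraph])
    return " ".join(selected)
```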
Next, the process 400 applies a hash function, a translation algorithm, an encoding algorithm, or any suitable transformation formula to the document key (task 410), which results in a hashed document key. Although any appropriate algorithm or formula may be utilized at task 410, in certain non-limiting embodiments, task 410 applies the well-known SHA-1 hash function to the document key (which results in a 160-bit hash value). Notably, given the same significant words selected from two content items, task 410 will generate the same hashed document key (hash value).
In certain embodiments, the characterizing signature for the content item is created from the hashed document key and a language identifier (task 412). In this regard, the language identifier is a code, a number, or any information that indicates the language used to author the content item of interest. For example, the language identifier may be a two-character code that specifies the language in which the content item is written. Task 412 may generate or derive the characterizing signature as a function of the hashed document key and the language identifier. In some embodiments, the language identifier is appended to the hashed document key, e.g., at the beginning or end of the hashed document key. Moreover, the signature may include an appropriate separator character (such as a colon) between the language identifier and the hashed document key.
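Tasks 410 and 412 may be sketched together as follows, assuming a SHA-1 hash rendered as forty hexadecimal characters, a two-character language code, and a colon separator (the separator and the placement of the language identifier are merely one of the options described above):

```python
import hashlib

def characterizing_signature(document_key: str, language_id: str) -> str:
    # Task 410: hash the document key; SHA-1 yields a 160-bit value,
    # rendered here as forty hexadecimal characters.
    hashed = hashlib.sha1(document_key.encode("utf-8")).hexdigest()
    # Task 412: prepend the language identifier, separated by a colon.
    return f"{language_id}:{hashed}"
```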
As explained above, the process 300 can be used to compare two content items that share the same characterizing signature, for purposes of designating one as original content and the other as aggregated content (relative to each other). Although the specific comparison and analysis methodology may vary from one embodiment to another, certain preferred embodiments employ the content differentiation scheme depicted in
Although not always required in all embodiments, the content differentiation process 500 is designed to perform a series of checks in a prioritized manner. In this regard, if the decision criteria for a higher level check is satisfied, then the process 500 makes its determination based on that check (and it need not continue with any of the other checks). In accordance with some embodiments, the process 500 may compare information related to the two content items against each other, or individually against the predetermined decision criteria. In alternative embodiments, the process 500 may analyze the two content items and the decision criteria in a comprehensive manner to form the basis of the “original” versus “aggregated” content decision.
The illustrated embodiment of the process 500 begins by checking the volume and/or update frequency associated with the feeds or sources of the content items (task 502). In practice, the process 500 may calculate or obtain the update frequency of the source website or webpage of each content item, and compare the update frequency to a threshold value that is chosen as a way to identify whether or not an online source may be a content aggregator. In this regard, a typical content aggregator site will be updated at a relatively high frequency (measured in number of posts or content items per unit of time), while a legitimate originator of content will be updated at a relatively low frequency. Thus, the process 500 may use a frequency threshold as the predetermined update frequency criteria for purposes of distinguishing original content from aggregated content.
If the predetermined update frequency criteria is satisfied for only one of the two content items (the “Yes” branch of query task 504), then the process identifies only one content item as the “original” content item (task 506). Task 506 may also identify the other content item as the “aggregated” content item. If the update frequency of both content items is less than the predetermined threshold frequency, then the “No” branch of query task 504 is followed. If the update frequency of both content items is greater than the predetermined threshold frequency, then the process 500 may follow the “No” branch of query task 504, under the assumption that the update frequency cannot be utilized to make a decision (alternatively, the process 500 may designate the content item associated with the higher update frequency as the “aggregated” content item, and designate the content item associated with the lower update frequency as the “original” content item).
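The three-way outcome of query task 504 may be sketched as follows. The threshold of fifty posts per day is a purely illustrative assumption; the description does not fix a value. This sketch follows the conservative variant in which the process declines to decide when both sources exceed (or both fall below) the threshold.

```python
from enum import Enum

class Verdict(Enum):
    FIRST_ORIGINAL = 1   # first item is "original"; second is "aggregated"
    SECOND_ORIGINAL = 2  # second item is "original"; first is "aggregated"
    UNDECIDED = 3        # fall through to the next, lower-priority check

FREQ_THRESHOLD = 50.0  # assumed threshold, in posts per day

def compare_update_frequency(freq_a: float, freq_b: float,
                             threshold: float = FREQ_THRESHOLD) -> Verdict:
    # Query task 504: decide only when exactly one source exceeds the threshold.
    a_high, b_high = freq_a > threshold, freq_b > threshold
    if a_high and not b_high:
        return Verdict.SECOND_ORIGINAL  # first source looks like an aggregator
    if b_high and not a_high:
        return Verdict.FIRST_ORIGINAL
    return Verdict.UNDECIDED
```

The outbound-link and publication-date checks of tasks 508 and 512 admit the same three-way structure.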
If the update frequency criteria is not satisfied for either content item (the “No” branch of query task 504), then the process 500 continues by checking the outbound links associated with the two content items (task 508). Accordingly, when the content items are not determined to be aggregated content, based on the update frequency criteria, the process 500 performs another check, which is of lower ranking or priority. This example assumes that the webpages or websites that represent the sources of the two content items include outbound links to other webpages, websites, or online content. Thus, task 508 may investigate those outbound links to determine whether or not they lead to noise content, spam sources, buy/sell sites, advertisement sites, pornography, revenue-generating sites, or the like. In practice, the process 500 could simply count the number of suspicious or illegitimate outbound links (corresponding to each content item) and compare the count against a predetermined threshold count value that is chosen as a way to identify whether or not an online source may be a content aggregator. In this regard, the process 500 assumes that a source page having a high number of suspicious or revenue-generating outbound links is likely to be an aggregator site. Thus, the process 500 may use a count threshold as the predetermined outbound link criteria for purposes of distinguishing original content from aggregated content.
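One way to count suspicious outbound links for task 508 is sketched below using the Python standard library. The blocklist of domains is hypothetical; a practical system would consult a curated and regularly updated list of spam, advertisement, and revenue-generating domains.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical blocklist; stands in for a curated list of suspicious domains.
SUSPICIOUS_DOMAINS = {"spam.example", "ads.example", "buy-now.example"}

class OutboundLinkCounter(HTMLParser):
    """Counts outbound links on a page that point at suspicious domains."""

    def __init__(self, page_domain: str) -> None:
        super().__init__()
        self.page_domain = page_domain
        self.suspicious = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        domain = urlparse(href).netloc
        # Outbound (off-site) link pointing at a blocklisted domain.
        if domain and domain != self.page_domain and domain in SUSPICIOUS_DOMAINS:
            self.suspicious += 1

def count_suspicious_links(html: str, page_domain: str) -> int:
    parser = OutboundLinkCounter(page_domain)
    parser.feed(html)
    return parser.suspicious
```

The resulting count would then be compared against the predetermined threshold count value.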
If the outbound link criteria is satisfied for only one of the two content items (the “Yes” branch of query task 510), then the process identifies one of the two content items as the “original” content item (task 506), and the other content item as the “aggregated” content item. If the number of outbound links associated with both content items is less than the count threshold, then the “No” branch of query task 510 is followed. If the outbound link count for both content items is greater than the threshold count value, then the process 500 may follow the “No” branch of query task 510, under the assumption that the outbound link criteria cannot be utilized to make a decision (alternatively, the process 500 may designate the content item associated with the higher count as the “aggregated” content item, and designate the content item associated with the lower count as the “original” content item).
If the outbound link criteria is not satisfied for either content item (the “No” branch of query task 510), then the process 500 continues by checking the stated publication dates of the content items (task 512). In practice, the process 500 may calculate the age of each content item from the respective publication date and the current date. The ages of the content items could be compared to each other, or they could be compared to a threshold age or time value that is chosen as a way to identify whether or not a given content item may be provided by a content aggregator. In this regard, a typical content aggregator site will focus on relatively recent content, while legitimate original content may have relatively old publication dates. Thus, the process 500 may use a threshold corresponding to a time period, an age, or a number of days as the predetermined publication date criteria for purposes of distinguishing original content from aggregated content. As another example, the process 500 may calculate the difference between the two publication dates and compare the difference to a threshold difference value. In accordance with this methodology, if a subsequently published content item was published more than a threshold number of days or months after the previously published content item, then the subsequently published content item can be flagged as the aggregated content item.
If the predetermined publication date criteria are satisfied for only one of the two content items (the “Yes” branch of query task 514), then the process 500 identifies one of the content items as the “original” content item (task 506), and the other content item as the “aggregated” content item. If both content items were published before the threshold date, or if both content items are older than the threshold period of time, then the “No” branch of query task 514 is followed. If, however, both content items are relatively new or fresh, then the process 500 may follow the “No” branch of query task 514, under the assumption that the publication dates cannot be utilized to make a decision (alternatively, the process 500 may designate the newer content item as the “aggregated” content item, and designate the older content item as the “original” content item).
If the publication date criteria are not satisfied for either content item (the “No” branch of query task 514), then the process 500 continues by checking the identified authors (if any) of the two content items (task 516). Accordingly, when the publication date criteria do not establish that either content item is aggregated content, the process 500 performs another check of lower ranking or priority. This example assumes that both of the content items include authorship credit, an author field, or the like. Thus, task 516 may review the names or identities of the authors to determine whether or not the named authors are suggestive of aggregated content. In practice, the host system could maintain a list of names or words that, when used to identify an author, are indicative of aggregated content. For example, aggregated content may be indicated if the author of a content item is: Admin; Administrator; Anonymous; or System. It should be appreciated that this list is merely exemplary in nature, and is not intended to limit or restrict the scope of the described subject matter in any way. Thus, the process 500 may use certain predefined words or phrases as predetermined authorship criteria for purposes of distinguishing original content from aggregated content.
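A minimal sketch of the authorship check of task 516, using the exemplary name list above, might look as follows (treating a missing author field as suspicious is an added assumption of this sketch):

```python
# Exemplary list of author names indicative of aggregated content
SUSPICIOUS_AUTHORS = {"admin", "administrator", "anonymous", "system"}

def author_is_suspicious(author):
    """Return True when a stated author name is indicative of
    aggregated content, per a maintained list of suspicious names.
    """
    if not author:
        return True  # missing author field (assumption of this sketch)
    # Normalize case and surrounding whitespace before the lookup
    return author.strip().lower() in SUSPICIOUS_AUTHORS
```

In a deployed system the list would be maintained by the host system and could be extended with additional words or phrases as new aggregator patterns are observed.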
If the authorship criteria are satisfied for only one of the two content items (the “Yes” branch of query task 518), then the process 500 identifies one of the two content items as the “original” content item (task 506), and the other content item as the “aggregated” content item. If the process 500 determines that both content items appear to have legitimate authors, then the “No” branch of query task 518 is followed and the process 500 exits (for this scenario, the process 500 may simply preserve the status quo and maintain the “original” designation of the content item associated with the previously-saved signature). If the process 500 determines that the stated authorship of both content items is suspicious, then the process 500 may follow the “No” branch of query task 518, under the assumption that the authorship criteria cannot be utilized to make a decision.
It should be appreciated that additional decision criteria could be used if so desired. Moreover, the process 500 need not be performed in a hierarchical or priority-based manner. In other words, an alternative embodiment of process 500 may consider all of the various checks and decision criteria described above before distinguishing the original content from the aggregated content, wherein the decision is influenced by the different criteria. Furthermore, it should be appreciated that alternative thresholding schemes and/or criteria could be used for the decisions made during the process 500. For example, different threshold values could be used to accommodate different operating conditions, days of the week, categories or genres of content under investigation, or the like. As another example, more complicated decision algorithms could be implemented rather than the straightforward examples mentioned above. These and other options are contemplated by this description.
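The non-hierarchical alternative described above, in which all of the checks jointly influence a single decision, might be sketched as a simple weighted score (the particular signals, weights, thresholds, and field names here are illustrative assumptions, not taken from the described embodiments):

```python
def aggregated_score(item, weights=None):
    """Combine several signals into one 'looks aggregated' score.

    `item` is a dict with keys: outbound_links (int), age_days (int),
    and author (str). Each satisfied criterion adds its weight.
    """
    weights = weights or {"links": 0.5, "age": 0.3, "author": 0.2}
    score = 0.0
    if item["outbound_links"] > 10:   # many outbound links
        score += weights["links"]
    if item["age_days"] < 7:          # very fresh content
        score += weights["age"]
    if item["author"].strip().lower() in {"admin", "administrator",
                                          "anonymous", "system"}:
        score += weights["author"]    # suspicious authorship
    return score

def pick_original(item_a, item_b):
    """Label the higher-scoring item aggregated, the other original."""
    if aggregated_score(item_a) > aggregated_score(item_b):
        return ("aggregated", "original")
    return ("original", "aggregated")
```

A more sophisticated embodiment could replace the fixed weights with a trained classifier, but the scoring structure would remain the same.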
Techniques and technologies may be described herein in terms of functional and/or logical block components, and with reference to symbolic representations of operations, processing tasks, and functions that may be performed by various computing components or devices. Such operations, tasks, and functions are sometimes referred to as being computer-executed, computerized, software-implemented, or computer-implemented. It should be appreciated that the various block components shown in the figures may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of a system or a component may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices.
The foregoing detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word “exemplary” means “serving as an example, instance, or illustration.” Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, or detailed description.
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the claimed subject matter in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope defined by the claims, which includes known equivalents and foreseeable equivalents at the time of filing this patent application.
This application is a continuation of U.S. patent application Ser. No. 14/021,977, filed Sep. 9, 2013, which claims the benefit of U.S. provisional patent application No. 61/701,504, filed Sep. 14, 2012.
Entry |
---|
USPTO, Non-final Office Action issued in U.S. Appl. No. 14/021,977, dated Jan. 30, 2015. |
USPTO, Notice of Allowance issued in U.S. Appl. No. 14/021,977, dated Jul. 10, 2015. |
Number | Date | Country | |
---|---|---|---|
20160034581 A1 | Feb 2016 | US |
Number | Date | Country | |
---|---|---|---|
61701504 | Sep 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14021977 | Sep 2013 | US |
Child | 14879676 | US |