This invention relates to a method and system for obtaining script related information for the purpose of website crawling.
The World Wide Web available on the Internet provides a variety of specially formatted documents called web pages. The web pages are traditionally formatted in a language called HTML (HyperText Markup Language). Many web pages include links to other web pages which may reside in the same website or in a different website, and allow users to jump from one page to another simply by clicking on the links. The links use Uniform Resource Locators (URLs) to jump to other web pages. URLs are the global addresses of web pages and other resources on the World Wide Web.
As web technology evolves, websites become more and more complex. The tendency in website development is to move from using purely static HTML to using HTML and script code to provide enhanced functionality. As a result, it is now common to use script code to construct web page links, i.e., to create URLs dynamically. Often the process of dynamically constructing URLs involves many variables and some rather complex script code. This makes it very difficult to resolve, i.e., extract and obtain, such URLs when it comes to website crawling.
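By way of illustration only, a dynamically constructed URL of the kind described might look like the following sketch (the variable names and values are invented for this example):

```javascript
// Illustrative sketch: a URL assembled at run time from variables.
// The complete URL never appears as a literal string in the page
// source, so a static scan of the HTML cannot find it.
var baseDir = "/catalog";   // invented values for illustration
var language = "en";
var productId = 1234;

// The URL exists only after this expression is evaluated.
var url = baseDir + "/" + language + "/item" + productId + ".html";
// url is "/catalog/en/item1234.html"
```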
Website crawling or spidering is a process to automatically scan contents of websites by following links and fetching the web pages. Web crawling agents or “spiders” are software programs for performing the crawling over websites. Typically, existing web crawling agents are used to find specific information of interest on the Web.
Before the introduction of script code into Web pages, crawling agents could parse HTML code for standard URLs. Since all URLs had to be coded to the HTML specification, this task was relatively easy. However, as sites evolved they increasingly relied upon script code to provide more advanced functionality that standard HTML did not allow for. The format of the URLs in the script code varies widely from implementation to implementation. Unlike static HTML, there is no standard that the script code must follow for encoding URLs. Accordingly, script code presents problems for crawling agents that need to parse URLs. There is no longer a common syntax or format for the URLs and thus they are difficult to find consistently.
An existing approach to this problem is to use customizable pattern matching algorithms that statically read through the script code on a page or in a script file, and based on pattern matching try to “guess” what in that script code might be a URL. The pattern matching provides some utility but the use of the pattern matching algorithms has two basic problems: 1) the algorithms invariably miss URLs in the script code and 2) the algorithms do not always extract the entire URL correctly.
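The first shortcoming can be illustrated with a simple sketch (both the pattern and the script code below are hypothetical, not taken from any particular product): a pattern that looks for literal URL strings finds nothing when the URL is assembled from fragments.

```javascript
// Hypothetical pattern matcher: look for quoted strings ending in ".html".
var pattern = /["']([^"']*\.html)["']/g;

// Script code in which the URL is built from fragments at run time.
var scriptCode =
  'var page = "results";\n' +
  'var url = "/search/" + page + ".htm" + "l";\n' +
  'window.location = url;';

// The static scan misses the URL entirely, because no single quoted
// string in the source contains the complete URL.
var matches = scriptCode.match(pattern);   // null: nothing found
```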
Also, existing approaches were directed to resolution of URLs only and did not detect other script related information created by the script code.
It is therefore desirable to provide a new mechanism that can provide more complete script related information during website crawling.
It is an object of the invention to provide a novel system and method for obtaining script related information for website crawling.
The present invention transforms HTML documents in web pages into XML documents to obtain information generated by script code.
In accordance with an aspect of the present invention, there is provided a virtual browser for obtaining script related information for website crawling. The virtual browser comprises an HTML transformer, a DOM builder, a script extractor, a BOM provider and a script execution engine. The HTML transformer is provided for transforming an HTML document included in a web page of the website into an XML document. The DOM builder is provided for building a document object model (DOM) based on the XML document. The script extractor is provided for extracting one or more scripts from the DOM. The BOM provider is provided for providing a browser object model (BOM) containing BOM objects and methods that are usable by the scripts for script execution. The script execution engine is provided for executing the scripts extracted by the script extractor using one or more of the objects and methods of the BOM provided by the BOM provider to capture script related information generated by execution of the scripts.
In accordance with another aspect of the invention, there is provided a web crawler system for crawling websites. The web crawler system comprises a website crawler for automatically crawling websites; and the virtual browser.
In accordance with another aspect of the invention, there is provided a method of obtaining script related information for website crawling. The method comprises the steps of receiving a web page of a website; transforming an HTML document included in the web page into an XML document; building a document object model (DOM) based on the XML document; extracting one or more scripts from the DOM; providing a browser object model (BOM) containing BOM objects and methods that are usable by the scripts for script execution; executing the extracted scripts using one or more of the objects and methods of the BOM; and capturing script related information generated by the execution of the scripts.
In accordance with another aspect of the invention, there is provided a computer readable medium storing instructions or statements for use in the execution in a computer of the method of obtaining script related information for website crawling.
In accordance with another aspect of the invention, there is provided a propagated signal carrier carrying signals containing computer executable instructions that can be read and executed by a computer, the computer executable instructions being used to execute the method of obtaining script related information for website crawling.
Other aspects and features of the present invention will be readily apparent to those skilled in the art from a review of the following detailed description of preferred embodiments in conjunction with the accompanying drawings.
The above and other features of the invention including various novel details of construction and combinations of parts, and other advantages, will now be more particularly described with reference to the accompanying drawings and pointed out in the claims. It will be understood that the particular method and device embodying the invention are shown by way of illustration and not as a limitation of the invention. The principles and features of this invention may be employed in various and numerous embodiments without departing from the scope of the invention.
The invention will be further understood from the following description with reference to the drawings in which:
The present invention is suitably used to check the integrity of links in a website. For example, a website 10 shown in
For example, in
Script code for dynamically creating script URLs may be JavaScript, JScript or VBScript and others.
The examination of script code at step 204 may be carried out by explicitly executing the script code. Alternatively, it may be done by examining the script code to obtain the script URLs without explicitly executing the script code. The script URL resolution component 140 may examine the script code or it may use another component to examine the script code, as described below in relation to other embodiments.
Therefore, the URL resolution system 100 allows automatic resolution of script URLs from embedded script code in websites in the context of website crawling, i.e., by locating script code while crawling a website or websites. Since the script code is examined to dynamically obtain the script URLs, complete URLs can be accurately obtained. Unlike conventional pattern matching, which resolves URLs statically, there is minimal possibility that the URL resolution system 100 will miss script URLs in the website being crawled. Thus, the URL resolution system 100 produces accurate results of website crawling.
The URL resolution system 100 may have a function by which users can set the extent of the crawling, as described below.
Other embodiments of the present invention are described referring to
The website crawler 320 has a script code detector 322 and a crawling controller 324. The crawling controller 324 controls crawling carried out by the website crawler 320. The crawling controller 324 controls the website crawler 320 to crawl individual web pages included in website 310 or other websites to locate web pages that use script code to dynamically create script URLs. The crawling controller 324 receives the output of the script URL resolution component 340 and uses the output to control the website crawler 320, as further described below.
To locate web pages that use script code to dynamically create script URLs, the website crawler 320 uses the script code detector 322 to determine whether the script code contained in a web page should be executed, i.e., whether a specific part of the script code is used to dynamically create at least one script URL. The script code detector 322 issues a notification to the script URL resolution component 340 when a web page having such script code is found. The notification includes an identification of the web page.
The script URL resolution component 340 is activated in response to a notification generated by the script code detector 322 of the website crawler 320. The website crawler 320 crawls all web pages on the original website, but it only passes the web pages containing relevant script code to the script URL resolution component 340.
The script URL resolution component 340 controls a web page examiner 360. The web page examiner 360 is a component capable of loading the contents of web pages and executing the entire or a specific part of script code in the loaded web pages. The web page examiner 360 may be a web browser having these functions, or a combination of a web page parser and a script code examiner. The URL resolution system 300 uses an external web page examiner 360. Alternatively, as shown in
The script URL resolution component 340 has a web page loading controller 342 and a script code execution controller 344. The web page loading controller 342 notifies or instructs the web page examiner 360 to load relevant web pages. The script code execution controller 344 instructs the web page examiner 360 to execute specific parts of the script code that will result in dynamically created script URLs. For example, when the script URL resolution component 340 receives a notification from the script code detector 322, the web page loading controller 342 instructs the web page examiner 360 to load the contents of the web page identified in the notification. Then, the script code execution controller 344 executes the script code by interfacing with the web page examiner 360 and using its interface functions to force the execution of the specific parts of the script code in the loaded web pages. The web page examiner 360 captures the script URL(s) resulting from the script code execution and returns these script URLs to the script code execution controller 344. The script code execution controller 344 may instruct the web page examiner 360 to execute the entire script code, rather than only the specific parts thereof.
The script code execution controller 344 outputs the execution results to the website crawler 320. When the execution of the script code is successful, the execution result includes one or more resolved script URLs. When the execution of the script code is unsuccessful, the execution result includes a failure result.
The URL resolution systems 300, 400 may also have a presentation unit 480 or use an external presentation unit 380 to present to users the execution results. The presentation unit 380, 480 may be a user interface, a result log file, an email or other output unit or form. The execution results presented to users may include only the failure results or only resolved script URLs or both. Thus, an administrator of the website may attend to the failures.
Users may also use an input unit (not shown) to initiate or terminate the crawling, or set parameters of the crawling controller 324. For example, the crawling controller 324 may be set such that it crawls a website regularly at a predetermined interval and/or it may start crawling when the website is modified. Also, users may set the extent of the crawling, i.e., users may set the website crawler 320 to crawl only within the original website from which the crawling is initiated, or allow crawling of web pages residing in external websites when web pages in the external websites are linked. In the latter case, it is desirable to limit the extent or depth of the crawling of the external websites. For example, in
The website crawler 320 crawls a website 310 (500). Crawling of website 310 may start anywhere in the website 310. During the crawling, the script code detector 322 checks script code embedded in each web page in the website 310 to determine if the web page uses script code to dynamically create one or more script URLs. When the script code detector 322 locates a web page with script code that dynamically creates one or more script URLs (502), the script URL resolution component 340 is activated. The script code detector 322 sends a notification to the script URL resolution component 340 to this end.
In the script URL resolution component 340 the web page loading controller 342 instructs the web page examiner 360 to load the web page with the script code (504). The script code execution controller 344 then instructs the web page examiner 360 to execute the specific interface methods or functions that dynamically execute the script code and create one or more script URLs (506). The script code execution controller 344 may instruct the web page examiner 360 to execute the entire script code or only the relevant portions of the script code. Script URLs are thus resolved by the script code execution. The script code execution controller 344 receives the resolved script URLs from the web page examiner 360, and sends the received script URLs back to the crawling controller 324 (508).
The website crawler 320 continues the crawling (510). It may continue crawling on web pages identified by the resolved script URLs. The website crawler 320 may crawl those web pages immediately when the resolved script URLs are returned, or put them in a queue for crawling at a later time. The website crawler 320 may crawl multiple web pages in parallel.
The process of
The steps of crawling a website (500) to executing script code (506) are similar to those shown in
When the execution of the script code is unsuccessful, a failure result is output by the script URL resolution component 340 (530). The failure result is also returned to the website crawler 320. In the website crawler 320, the failure result is logged (532), and the crawling of the current website is continued (534).
The process is repeated until crawling of the original website is completed.
The failure results logged at step 532 may be presented to users during and/or after the scanning.
Referring now to
The website crawler 820 has a script URL gatherer 822 for gathering script URLs from the advanced web page examiner 860. The advanced web page examiner 860 has a web page loader 862 for loading web pages, and a script code examiner 864 for executing script code in the loaded web pages. The advanced web page examiner 860 may be a part of the URL resolution system 800 or a component external to the system 800.
In operation, the website crawler 820 crawls a website 810. For each URL found on each crawled web page, the script URL gatherer 822 calls a function on the advanced web page examiner 860. It also calls the function for the URL of each web page that the website crawler 820 crawls. The function takes the received URL as an input parameter and activates the web page loader 862 to load the contents of the web page identified by the received URL.
Then the function activates the script code examiner 864 to examine the loaded web page to obtain any script URLs created by script code in the web page. For example, during the examination of the loaded web page, the script code examiner 864 executes script code found in the loaded web page to obtain script URLs if any. The script code examiner 864 may execute all script code in the loaded web page or only script code that is used to create one or more script URLs. Also, the script code examiner 864 may execute the entire script code or only relevant portions of script code.
The function returns a collection of zero or more resolved script URLs as an output parameter to the script URL gatherer 822. The website crawler 820 may crawl web pages identified by the resolved script URLs. The crawling of those web pages may be carried out immediately or later. The website crawler 820 may crawl those pages in parallel with other web pages.
The website crawler 820 may have a crawling controller similar to crawling controller 324 shown in
The advanced web page examiner 860 may be a part of the URL resolution system 900 or a component external to the system 900.
In the embodiments shown in
In the above embodiments, the elements of the URL resolution system are described separately; however, two or more elements may be provided as a single element, or one or more elements may be shared with other components in a computer system in which the URL resolution system is installed. For example, in the embodiment shown in
The web crawler system 2000 allows the automated web crawler 120 to find script related information generated by execution of scripts embedded in web pages. The script related information may be URLs generated by scripts, HTML content generated by scripts, cookies generated by scripts, HTTP requests initiated by scripts, and/or other information associated with the information generated by script execution.
The web crawler system 2000 is described further using JavaScripts embedded in web pages. Different embodiments may be applied to different script languages.
As shown in
The HTML transformer 2012 provides HTML to XML transformation. The web page contains one or more HTML documents. Each HTML document may contain one or more scripts. Scripts in HTML documents are typically written in JavaScript or a similar script language. In order for JavaScripts to be provided programmatic access to elements of an HTML document, the virtual browser 2010 parses each HTML document into a tree structure, as is done by a web browser 112. To simplify the parsing process, the virtual browser 2010 uses the HTML transformer 2012 to transform or convert each HTML document into an XML document. XML documents can be easily parsed into a tree structure.
In order to perform the HTML to XML transformation, the HTML transformer 2012 matches the case of start and end tags, terminates empty elements, closes non-empty elements, resolves tag nesting problems, adds missing quotes around attribute values, removes duplicate attributes, and expands attributes that have no value, e.g., CHECKED, by providing a value. The HTML transformer 2012 places script blocks containing unparseable characters in an XML data section, e.g., a CDATA section, in the XML document. The HTML transformer 2012 also transforms specific characters, such as <, >, &, " and ', within the HTML document into an appropriate XML character entity. For example, the HTML transformer 2012 transforms < to the character entity &lt;.
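A few of these cleanup rules can be sketched as follows (a simplified illustration with an assumed function name; a real transformer handles many more cases, including script-block CDATA wrapping and character entity substitution):

```javascript
// Simplified sketch of a few HTML-to-XML cleanup rules.
function toXmlFragment(html) {
  return html
    // Lower-case element names so start and end tag case matches.
    .replace(/(<\/?)([A-Za-z]\w*)/g, function (m, p, tag) {
      return p + tag.toLowerCase();
    })
    // Give valueless attributes a value: CHECKED -> checked="checked".
    .replace(/\s(checked|selected|disabled)(?=[\s>])/gi, function (m, a) {
      return " " + a.toLowerCase() + '="' + a.toLowerCase() + '"';
    })
    // Add missing quotes around attribute values: align=center -> align="center".
    .replace(/(\w+)=([^\s"'>]+)/g, '$1="$2"')
    // Terminate simple empty elements: <br> -> <br/>.
    .replace(/<(br|hr)>/gi, "<$1/>");
}

// For example:
// toXmlFragment('<P align=center>Hello<BR></P>')
//   returns '<p align="center">Hello<br/></p>'
```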
In order to resolve tag nesting problems to create a tree structure, the HTML transformer 2012 may use heuristic algorithms or processes used by an existing web browser, e.g., heuristic algorithms from a Mozilla web browser. By using these heuristics, the HTML transformer 2012 can convert HTML documents to XML documents in a manner that simulates a web browser's handling of these issues.
The result of the HTML to XML transformation is an in-memory object that represents the HTML page as an XML document object. A single HTML page is typically represented as a single XML document object. HTML pages containing multiple documents (framesets) may be represented as a single XML document object or as a set of XML document objects. The DOM builder 2014 builds a DOM based on the XML document object. The DOM has a tree structure representing how elements or objects in the HTML web page, such as text, images, headers and links, are represented by the XML document object. The DOM also defines what attributes are associated with each object, and how the objects and attributes can be manipulated.
The DOM builder 2014 builds the DOM so that the resultant XML document object is capable of being queried to find executable scripts, and queried during the execution of scripts for data as required. Also, the XML document object is capable of being updated by the execution of scripts, so that it may be dynamically modified by the execution of scripts.
The DOM builder 2014 may also provide the DOM to the information analyzer 2024 so that the XML document object is made available to other parts of the automated crawler 120 for further analysis unrelated to JavaScript execution.
The script extractor 2016 identifies and extracts a relevant script or scripts from the DOM.
As shown in
The script location list 2040 is a list of potential locations for a script to reside in a DOM. For instance, the list includes scripts associated with specified tags, such as inline scripts contained inside SCRIPT tags and scripts contained in separate files included using SCRIPT or LINK tags, and various event handlers, such as onclick, onchange and onmouseover event handlers.
The location query set 2042 is a set of location queries that permit the extraction of script contained in event handlers. Location queries are typically XPath queries that identify and extract XML elements for processing.
Some samples of location queries are:
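The sample queries themselves do not survive in this text. Queries of the kind described, given here as illustrative XPath expressions (assumed for this description, not taken from the original listing), might be:

```
//script                        inline scripts in SCRIPT elements
//script[@src]/@src             references to external script files
//*[@onclick]/@onclick          scripts in onclick event handlers
//*[@onchange]/@onchange        scripts in onchange event handlers
//*[@onmouseover]/@onmouseover  scripts in onmouseover event handlers
```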
The script locator 2030 identifies scripts that could potentially be executed using the script location list 2040.
The script extraction handler 2032 extracts the identified scripts. The mechanism used for extracting script depends on the script location. The script extraction handler 2032 may extract scripts contained in SCRIPT tags and LINK tags as the DOM is built. The script extraction handler 2032 may extract scripts contained in event handlers out of the DOM by performing relevant location queries using the location query set 2042.
The BOM provider 2018 provides a Browser Object Model (BOM) containing objects and methods that can be used by a script as it is executed. The BOM provider 2018 provides an implementation of the BOM that is used by typical web browsers 112.
The BOM provider 2018 also implements interfaces for the BOM objects that are exposed by the virtual browser 2010 to JavaScripts to run the JavaScripts found in a web page effectively. The interface of relevant BOM objects, i.e., its external appearance, provided by the BOM provider 2018 is substantially identical to that of a typical web browser 112 so that the script execution engine 2020 can execute scripts in substantially the same manner as a typical web browser 112 executes the scripts.
The BOM objects implemented by the BOM provider 2018 have different behaviours from those of a typical web browser 112. A web browser 112 provides various functions. Thus, the BOM objects of such a web browser 112 provide various behaviours, some of which may be irrelevant or undesirable for performing web crawling. The BOM objects implemented by the BOM provider 2018 of the virtual browser 2010 provide a means for capturing information that is generated by scripts. The BOM objects implemented by the BOM provider 2018 of the virtual browser 2010 also provide a means for the script to retrieve information contained in the DOM and a means for adding or modifying information in the DOM. Also implemented by the BOM provider 2018 is the XmlHttpRequest object. This object is exposed as part of the BOM in some web browsers and as an additional ActiveX object in other web browsers. The BOM objects provided in the virtual browser 2010 do not have behaviours that are irrelevant or undesirable for performing web crawling.
The BOM provider 2018 exposes the BOM into the script execution environment in order to obtain meaningful results when script is executed.
The script execution engine 2020 executes the extracted scripts using the BOM.
The script execution engine 2020 determines entry points for the script execution. For instance, the script execution engine 2020 determines, as entry points, script that is not enclosed in a function within a SCRIPT tag, and script in event handlers.
The script execution engine 2020 executes each entry point. During the execution, the script execution engine 2020 allows the associated script to make calls into BOM objects, which results in the detection of script related information, such as URLs, HTTP requests, cookies, and/or changes of document content. Changes of document content may be additions, deletions, modifications or retrieval of document content.
The information handler 2022 interfaces with the BOM objects and captures the script related information generated by the script execution.
For instance, a JavaScript that invokes document.cookie calls into the cookie property on the document object, provided as part of the virtual browser 2010 in the BOM. The implementation of the document object in the BOM of the virtual browser 2010 allows the information handler 2022 to capture the name, value and other information of the cookie generated by the script, such that the captured information can be used by the automated web crawler 120.
The script execution engine 2020 also updates the DOM based on the execution of the scripts using the BOM. It is possible for scripts to modify content in the DOM, to delete content in the DOM, or to add new content to the DOM. The BOM provides objects that work closely with objects in the DOM. When a JavaScript calls BOM methods that cause changes in the document content, the BOM provider 2018 interacts with the DOM in order to update the DOM as required. Similarly, if a JavaScript seeks to retrieve information from the DOM by calling a BOM method, the BOM provider 2018 interacts with the DOM in order to return the required information to the script.
The DOM itself provides methods that allow data within the DOM to be retrieved, modified, deleted and added. The BOM also provides methods that allow data within the DOM to be retrieved, modified, deleted, and added. When a BOM method to retrieve, modify, delete or add data to the document is invoked by executing a script, the BOM method calls the corresponding method on the DOM in order to effect the necessary change in the DOM.
The virtual browser 2010 receives a web page HTML document from the automated web crawler 120 (2060), and performs HTML to XML transformation to transform the HTML document into an XML document using the HTML transformer 2012 (2062). The DOM builder 2014 of the virtual browser 2010 builds a DOM having a tree structure representing elements of the HTML document using the XML document (2064).
The script extractor 2016 extracts from the DOM one or more scripts that may potentially be executed (2066). The script extraction may be carried out by identifying potentially executable scripts using the script location list, and extracting the identified scripts as the DOM is built, or by performing one or more location queries, depending on the type of the scripts as described above.
The BOM provider 2018 provides a BOM (2068).
The virtual browser 2010 loads and exposes the extracted scripts into the script execution environment along with the BOM (2070). In the script execution environment, the script execution engine 2020 of the virtual browser 2010 determines entry points and executes each entry point. During the execution, the associated script makes calls into BOM objects that result in the detection of script related information, such as URLs, HTTP requests, cookies, and/or changes of document content.
The virtual browser 2010 interfaces with the BOM objects and captures the name, value and/or other script related information detected during the script execution so that the captured information can be used by the automated web crawler 120 (2072).
The virtual browser 2010 also updates the DOM based on the execution of scripts through the BOM (2074).
The virtual browser 2010 may also make the DOM available to other parts of the automated crawler for further analysis unrelated to JavaScript execution (2076).
Thus, the virtual browser 2010 replicates the script processing capabilities of typical web browsers, and allows automated web crawling without actually navigating through web pages using the web browser 112.
The script extraction (2036) is further described using the following example of a script that may be found in a web page, in which line numbers are added for the convenience of the description:
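The example listing itself does not survive in this text. The following is an illustrative reconstruction, not the original listing, made consistent with the line numbers referenced in the description (a variable assignment on Line 6, a document.write call on Line 7, and a cookie created on Lines 10 to 12):

```
 1. <html>
 2. <body>
 3. <p>Example page</p>
 4.
 5. <script>
 6. var url = "http://www.example.com/page2.html";
 7. document.write('<a href="' + url + '">Next</a>');
 8. </script>
 9. <script>
10. var name = "visited";
11. var value = "yes";
12. document.cookie = name + "=" + value;
13. </script>
14. </body>
15. </html>
```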
The script extractor 2016 has a list 2040 listing possible locations where a script is allowed in an HTML document. For instance, there is an entry in the list 2040 that indicates that a script may be expected to be found inside a <SCRIPT> tag.
Using this entry in the list 2040, the script extractor 2016 extracts the first script in the example, which is:
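The excerpt does not survive in this text; an illustrative stand-in consistent with the line references (a variable assignment on Line 6 and a document.write call on Line 7) is:

```
 5. <script>
 6. var url = "http://www.example.com/page2.html";
 7. document.write('<a href="' + url + '">Next</a>');
 8. </script>
```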
Line 6 can be executed in the JavaScript engine 2020 without any external objects. However, the objective of the virtual browser 2010 is to determine the content that is written to the HTML document. To achieve this objective, the engine 2020 also executes Line 7. Since the JavaScript code was originally written to be executed inside a web browser 112, the script code makes use of the objects and methods provided by a web browser 112 through its BOM. In this case, the script code is written to use the document object and the write method of the BOM of a web browser 112. In order to execute Line 7 successfully, the virtual browser 2010 provides a BOM containing its own version of the document object with a write method. While the behaviour of the document object of the BOM of the virtual browser 2010 differs from the document object of the BOM provided by the browser 112, the interface of the object, i.e., its external appearance, of the virtual browser 2010 is substantially identical to that of the browser 112. Because its interface is substantially identical, the script execution engine 2020 can execute the script. The behaviour of the object is different because the virtual browser 2010 needs simply to capture the content that is generated by the script, rather than actually navigating to the related web page by the browser 112. Actual navigation to related web pages by the browser 112 involves various features, such as invocation of pop up windows, which are often irrelevant to web crawling.
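The capture behaviour described above can be sketched as follows (a minimal illustration with invented names; the actual implementation is not disclosed here):

```javascript
// Minimal sketch of a virtual-browser document object: the write
// method records what the script writes instead of rendering it.
function makeVirtualDocument() {
  return {
    captured: [],                 // content captured for the crawler
    write: function (content) {   // same interface as the browser BOM
      this.captured.push(content);
    }
  };
}

// Execute the page script against the virtual document.
var document = makeVirtualDocument();
var url = "http://www.example.com/page2.html";
document.write('<a href="' + url + '">Next</a>');

// The crawler can now read the dynamically generated content from
// document.captured, including any URLs it contains.
```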
Likewise, the script extractor 2016 extracts the second script:
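The second excerpt does not survive in this text; an illustrative stand-in consistent with the line references (variable assignments on Lines 10 and 11 and a document.cookie assignment on Line 12) is:

```
 9. <script>
10. var name = "visited";
11. var value = "yes";
12. document.cookie = name + "=" + value;
13. </script>
```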
Lines 10 and 11 can be executed in the JavaScript engine 2020 without any external objects. However, the objective of the virtual browser 2010 is to determine the cookie that is created by this second script. To achieve this objective, the virtual browser 2010 also executes Line 12 in the JavaScript engine 2020. Since this JavaScript code was originally written to be executed inside a web browser 112, it makes use of the objects and methods provided by the web browser 112: in this case the document object and its cookie property. In order to execute Line 12 successfully, the virtual browser 2010 provides its own version of the document object with a cookie property. While the behaviour of the document object provided by the virtual browser 2010 differs from the document object provided by the browser 112, the interface of the object provided by the virtual browser 2010 is substantially identical to that provided by the browser 112. Since the interface is substantially identical, the script execution engine 2020 can execute the script. The BOM object of the virtual browser 2010 provides the behaviour simply to capture the cookie that has been generated by the script.
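The cookie capture can likewise be sketched as follows (a minimal illustration with invented names; the actual implementation is not disclosed here):

```javascript
// Minimal sketch of a virtual document object whose cookie property
// records cookies created by scripts instead of storing them.
var capturedCookies = [];

var virtualDocument = {
  // A setter with the same external interface as document.cookie,
  // but whose behaviour is simply to capture the cookie.
  set cookie(value) {
    capturedCookies.push(value);
  }
};

// The page script from the example, run against the virtual document:
var name = "visited";
var value = "yes";
virtualDocument.cookie = name + "=" + value;

// capturedCookies now holds "visited=yes" for use by the crawler.
```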
Likewise, for scripts that make use of objects to initiate HTTP requests, the virtual browser 2010 provides BOM objects that allow the information handler 2022 to intercept the request URLs. The interception of the request URLs is described using the following example of JavaScript, in which line numbers are added for convenience of description:
In order to capture the URL of an HTTP request initiated from the JavaScript, the virtual browser 2010 provides BOM objects that replicate the external interfaces of the XMLHttpRequest object. The following three lines in this example create the object that the script uses to initiate HTTP requests:
The scripts are written this way to provide compatibility with multiple web browsers.
In order to execute this script without errors and to eventually obtain the correct URL for the HTTP request, the virtual browser 2010 provides a replica or facsimile of the object expected to be created by the scripts contained in Lines 9, 16 and 19. For Line 9, the virtual browser 2010 provides a BOM object that has substantially the same interface as XMLHttpRequest, so that the script execution engine 2020 can execute the JavaScript. The behaviour of the BOM object representing XMLHttpRequest implemented by the virtual browser 2010 is not to initiate a request, but rather to capture the URL provided in the call to the open method on Line 27:
27. req.open("GET", url, true);
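A minimal illustrative sketch of such a replica follows. The name capturedUrls and the sample URL are hypothetical; the replica exposes the real XMLHttpRequest interface (open, send), but its open method records the URL instead of initiating a network request:

```javascript
// Illustrative sketch of a BOM object replicating the external
// interface of XMLHttpRequest; capturedUrls is a hypothetical name.
var capturedUrls = [];

function XMLHttpRequest() {}

// Same signature as the real open method, but the behaviour is to
// capture the request URL for the crawler rather than open a request.
XMLHttpRequest.prototype.open = function (method, url, async) {
  capturedUrls.push(url);
};

// Intentionally a no-op: the crawler only needs the URL, not the response.
XMLHttpRequest.prototype.send = function (body) {};

// The script under analysis executes unchanged against the replica:
var req = new XMLHttpRequest();
var url = '/ajax/update?item=42';
req.open('GET', url, true);
req.send(null);
```

Because only the interface must match, the replica never touches the network, yet every URL the script would have requested is made visible to the information handler.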
As described above, the virtual browser 2010 allows the automated web crawler 120 to more accurately simulate the navigation behaviour of a human user using a web browser 112 to navigate a web site. The virtual browser 2010 allows the content that is created by scripts to be discovered. The automated web crawler 120 is able to perform the same analyses on this “dynamic content” as are applied to traditional “static content”. The virtual browser 2010 also allows cookies that are created by scripts to be discovered. The automated web crawler 120 is able to perform the standard analyses on these discovered cookies, and to send these cookies with future HTTP requests in order to improve the automated web crawl. The virtual browser 2010 also allows HTTP requests initiated by scripts to be detected. Web applications broadly referred to as “AJAX applications” use JavaScript to initiate HTTP requests in order to update state on the web server and to obtain updated data. The virtual browser 2010 allows the automated web crawler 120 to discover these HTTP requests in order to simulate, within an automated web crawler 120, the content and behaviour of an “AJAX” web application.
The web crawler system and virtual browser of the present invention may be implemented by any hardware, software or a combination of hardware and software having the above described functions. The software code, either in its entirety or a part thereof, may be stored in computer readable memory. Further, a computer data signal representing the software code which may be embedded in a carrier wave may be transmitted via a communication network. Such a computer readable memory, a computer data signal and a carrier wave are also within the scope of the present invention, as well as the hardware, software and the combination thereof.
While particular embodiments of the present invention have been shown and described, changes and modifications may be made to such embodiments without departing from the true scope of the invention.
This application is a Continuation-in-Part of U.S. application Ser. No. 10/064,176, filed on Jun. 19, 2002, which is incorporated herein by reference in its entirety.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 10064176 | Jun 2002 | US |
| Child | 11367752 | Mar 2006 | US |