The present invention relates to identification of elements of a web page and more particularly to a system and method to identify and detect a particular element that may have changed some of its attributes.
Web pages and web sites in general are composed of a large number of smaller elements. An element can be a portion of text, a picture or photograph, another type of graphic, a portion of the screen where data can be entered, a link, a background, or any other type of static or dynamic entity that can be displayed on the screen.
Web pages are typically described in a language known as HTML (Hypertext Markup Language) at the level of page display (even though pages and sites may be developed and deployed in different languages and with different tools). The HTML version of a web page is the form that is transmitted from a server to a client computer over a network, where it is displayed by a program on the client computer known as a browser.
An element is an entity that typically can be identified by a human viewer even if it is slightly changed. For example, if the font of a text box is changed, a human can recognize that it is the same element. If a photo is repositioned to a different location on the screen, a human can immediately find and identify the element.
It would be very desirable in a variety of fields of endeavor to be able to detect and identify an element from the point of view of a human viewer, and then be able to recognize that a somewhat changed version of the element appearing on a different page or on the same page at a different time is really the known element.
Each element on a page has a set of HTML attributes that describe how it should be displayed including, among many others, the attributes of size and location on the screen. Prior art systems identify an element simply by the collection of its HTML attributes, or by a particular subset of its attributes. However, this can lead to unsatisfactory results since the set of attributes can change significantly for different representations of the same element.
It would be advantageous to assign an identification to an element that is not based on its HTML attributes alone, but rather uses artificial intelligence to attempt to identify the element as a human would, namely by its overall appearance. This way, if the element is moved to a different position, resized, re-colored, or given a different font, it can be detected and identified as the same element.
The present invention represents a system and method for detecting and identifying a previously known element when some of its attributes have changed. Machine-learning is used in the form of a neural network or other machine learning system to detect differences between elements. This avoids the problem of having to have a different neural network for each element of interest.
Potential candidate elements for a match with a known element are elements for which: 1) one of the selectors matches, 2) one or more of the attributes match, 3) the element is located in the same position, or 4) the content (text, graphic) inside the element is the same.
The present invention generally requires more than one condition to match. In a search of a web page for a known element, a set of candidate elements appearing on that page is returned. If the set is reasonably small, a probability for each candidate is returned. If the probability exceeds a predetermined threshold, a match can be declared.
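By way of illustration only, a minimal sketch of this candidate-and-threshold logic is shown below; the threshold value, the scoring function, and all names are assumptions for clarity, not part of the claimed implementation.

```typescript
// Illustrative sketch of the candidate-and-threshold logic; the threshold value and the
// scoring function are assumptions for illustration, not the claimed implementation.
const MATCH_THRESHOLD = 0.85; // assumed predetermined threshold

interface Candidate<T> {
  element: T;                // the candidate element found on the page
  conditionsMatched: number; // how many of the four match conditions it satisfies
}

function findKnownElement<T>(
  candidates: Candidate<T>[],
  probabilityOfMatch: (candidate: Candidate<T>) => number // e.g. a trained classifier
): T | null {
  // Generally require more than one condition to match before scoring a candidate.
  const plausible = candidates.filter(c => c.conditionsMatched > 1);

  let best: { element: T; probability: number } | null = null;
  for (const c of plausible) {
    const p = probabilityOfMatch(c);
    if (best === null || p > best.probability) {
      best = { element: c.element, probability: p };
    }
  }
  // Declare a match only if the best probability exceeds the predetermined threshold.
  return best !== null && best.probability >= MATCH_THRESHOLD ? best.element : null;
}
```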
The present invention “understands” differences between two elements. Instead of presenting the problem in the form of “given these attributes, what is the probability that this is the same element,” the present invention asks the question “given the differences between the candidate element and the known element, what is the probability that this is the same element?” This avoids the problem of multiple neural networks (NN). The present invention teaches one NN which differences between elements are usually considered SIMILAR or NOT SIMILAR.
Attention is now directed to several drawings that illustrate features of the present invention.
Several figures and illustrations have been provided to aid in understanding the present invention. The scope of the present invention is not limited to what is shown in the figures.
The present invention relates to a utility that helps users identify web page elements by their look and content, in the manner a human would decide. Human perception of similarity is based on the visual representation of the element, while the browser expects specific attributes (chosen by the developer) to match in order to consider an element as “similar.” The present invention attempts to imitate human perception of element similarity with the following steps:
When an element is chosen, information regarding the element is collected in order to later choose the potential candidate elements, and to query a neural network (NN) to make the prediction as accurate as possible.
The Problems with the Prior Art:
In order to identify DOM elements on a loaded HTML page, the prior art usually uses either a CSS selector or the element's XPath. CSS/jQuery selector: some advanced platforms' backend systems build and return their elements with different selection attributes on each page load or for each user that loads the page. [Note: a DOM element is an element of the Document Object Model (DOM). The DOM is the way JavaScript sees its containing page's data. It is an object that includes how the HTML/XHTML/XML is formatted, as well as the browser state].
For example, a server-side developer decides to construct the ‘id’ fields (the ‘id’ is considered a very “strong” selector) of a certain module with a prefix of the user's unique system identifier. Every user that logs in will therefore receive a different ‘id’ attribute. When a page's content is slightly changed, or if there is a change of design or layout, or if different dynamic content is loaded onto the page, the detection of the element the system is looking for changes, and the XPath that the system initially “captured” will not necessarily point to the same element.
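As a hedged illustration of this fragility (the markup, selector, and id values below are hypothetical), a selector captured for one user session may not resolve in another:

```typescript
// Hypothetical illustration of a brittle 'id'-based selector.
// Session for user 1001 renders: <button id="user1001-checkout">Checkout</button>
// Session for user 1002 renders: <button id="user1002-checkout">Checkout</button>

const capturedSelector = "#user1001-checkout"; // captured while user 1001 was logged in

// At runtime for user 1002 the same visual element exists on the page,
// but the captured selector no longer matches it.
const element = document.querySelector(capturedSelector); // null for user 1002
if (element === null) {
  console.log("Element not found: the 'id' selector is user-specific and therefore unstable.");
}
```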
Because of the problems named above with using one type of selector, some companies/competitors that need to detect elements and later match them at runtime have developed algorithms that take more than one identifier and, by using a weighted calculation and thresholds, try to detect the element. An editor can sometimes enter a preference for one identifier/selector over another.
Other problems may occur with responsive design and detecting an element across different resolutions, or with a slight change of text or page structure.
Potential candidate elements are elements that match at least one of the following conditions:
To overcome performance issues, the system ignores potential candidate elements that match only one condition—the typical philosophy of the present invention is that if an element is the “correct” element, it should be returned as a candidate by more than one condition; furthermore, the system will not accept result sets containing more than a predetermined number of candidate elements (such as 15 in some embodiments).
The present invention generates a neural network (NN) that “understands” differences between two elements, and avoids a solution that would require a neural network “per element.” Instead of asking the NN “Given these attributes, what is the probability that this is the same element?”—which would require a NN per element, since the machine would need to be trained on what “this element” refers to for each element—the system asks the NN “Given these differences, what is the probability that this is the target element?” The system teaches ONE machine which elements' differences are usually considered SIMILAR or NOT-SIMILAR, instead of asking questions regarding a specific element, which would require teaching the machine about each specific element.
The invention trains the machine learning algorithm to “understand” what the odds are that two elements are the same element, even if some things change in the elements.
A client administrator enters the system editor to create a walkthrough. He chooses elements on the screen to attach an “action” (text bubbles, visual coach-marks, highlights, or any other visual action) element to (to be target elements). The system captures information from the elements and saves it. An end-user receives a codeline, and retrieves the rule engine and all the files along with the machine learning trained result set and weighted nodes from the trained set. The end-user runs the walkthroughs.
The system gathers attributes regarding the state of the browser such as:
The “Element Identifier” consists of the following main components:
Machine-Learning Algorithm
In order to predict the probability that two samples represent the same element, the present invention can use a multi-layer perceptron neural network (MLP NN).
The Element Identifier is a classification algorithm that is built to receive as input a difference (“diff”) object, which is a product of the distance between two element samples, and to output the probability that the two elements are the same element.
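For illustration, a toy forward pass of such an MLP classifier is sketched below; the network size and the weight values are placeholder assumptions, since in practice the weights come from the trained result set.

```typescript
// Minimal sketch of querying a trained MLP classifier with a numeric "diff" vector.
// The weights below are placeholders; in practice they come from the trained result set.
type Matrix = number[][];
type Vector = number[];

const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

function dense(input: Vector, weights: Matrix, biases: Vector, activation: (x: number) => number): Vector {
  return weights.map((row, i) =>
    activation(row.reduce((sum, w, j) => sum + w * input[j], biases[i]))
  );
}

// Hypothetical trained parameters for a tiny network: 3 diff features -> 4 hidden -> 1 output.
const hiddenW: Matrix = [[0.4, -0.2, 0.1], [0.3, 0.5, -0.7], [-0.1, 0.2, 0.6], [0.8, -0.4, 0.2]];
const hiddenB: Vector = [0.1, -0.2, 0.0, 0.3];
const outW: Matrix = [[0.6, -0.5, 0.4, 0.7]];
const outB: Vector = [-0.1];

// Returns the probability that the two elements behind the diff are the same element.
function sameElementProbability(diff: Vector): number {
  const hidden = dense(diff, hiddenW, hiddenB, Math.tanh);
  return dense(hidden, outW, outB, sigmoid)[0];
}

// Example: a small diff vector (normalized distances) produces a probability in [0, 1].
console.log(sameElementProbability([0.1, -0.3, 0.05]));
```

The input encodes only differences between two samples, so one trained network can be reused for every element of interest.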
The model is trained with both data generated by real users and data generated by automation scripts.
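For example (a hedged sketch only; the feature layout and the values are illustrative), a labeled training pair could look like this:

```typescript
// Hedged sketch of labeled training examples: a diff between two samples of the SAME element
// (e.g. the same button at two resolutions) labeled SIMILAR, and a diff between two different
// elements labeled NOT-SIMILAR. Feature layout and values are illustrative assumptions.

interface TrainingExample {
  diff: number[];   // normalized differences between two element samples
  similar: boolean; // true if both samples came from the same element
}

const trainingSet: TrainingExample[] = [
  // Same element, slightly moved and restyled: small normalized distances -> SIMILAR.
  { diff: [0.05, 0.1, 1, -0.02], similar: true },
  // Different elements that merely share a tag name: large distances -> NOT-SIMILAR.
  { diff: [0.9, 0.7, -1, 0.8], similar: false },
];

console.log(`training examples: ${trainingSet.length}`);
```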
In order to identify an element in the future, the system generates a sample that combines information regarding the style/display of the element with its position and structure in the DOM.
This sample will later (when the system wants to check the probability that an element is the same as the sample) be used to produce a diff object (Identification Diff object) that can be used to query the NN.
When sampling an element, the “Element Identifier” looks for elements with similar structure in its ancestors in the DOM.
In case multiple elements are detected with the same structure, it is required to provide an element ‘preference’ (a value other than AUTO for the biasing algorithm; for more information, see the Biasing algorithm below).
The “diff” object is the product of calculating the distance (edit distance [physical separation on page, font difference, etc.], color distance, numeric distance) between two identification object samples. These values basically represent the distance between the elements, or how different they are from each other.
String (that results as Boolean): checking whether the values are similar (comparison). String (that results as distance): checking the distance between the strings (Levenshtein distance, sift4).
Color (distance): distance between colors (dE76, dE94, CMC l:c, dE00).
Number (that results as Boolean): checking whether the values are similar (comparison).
Number (that results as distance): the distance between the values.
Distance: values are normalized using the hyperbolic tangent.
Boolean: values are normalized to −1 and 1. (A sketch of these normalization and distance calculations is given below.)
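The sketch below illustrates, under stated assumptions, the normalization and distance calculations named above; the tanh scaling constant and the example values are assumptions, and only one of the listed color-distance formulas (dE76) is shown.

```typescript
// Hedged sketch of the per-type normalization described above; scaling factors are assumptions.

// Boolean comparisons are normalized to -1 (different) and 1 (similar).
const normalizeBoolean = (similar: boolean): number => (similar ? 1 : -1);

// Distance values are normalized with the hyperbolic tangent so they fall in (-1, 1)
// regardless of the raw magnitude. The divisor is an assumed scaling constant.
const normalizeDistance = (distance: number, scale = 10): number => Math.tanh(distance / scale);

// Levenshtein edit distance between two strings (one of the string distance options named above).
function levenshtein(a: string, b: string): number {
  const rows = a.length + 1;
  const cols = b.length + 1;
  const d: number[][] = Array.from({ length: rows }, (_, i) =>
    Array.from({ length: cols }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i < rows; i++) {
    for (let j = 1; j < cols; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      d[i][j] = Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost);
    }
  }
  return d[rows - 1][cols - 1];
}

// CIE76 (dE76) color distance: Euclidean distance between two colors in Lab space.
interface Lab { L: number; a: number; b: number; }
const deltaE76 = (c1: Lab, c2: Lab): number =>
  Math.sqrt((c1.L - c2.L) ** 2 + (c1.a - c2.a) ** 2 + (c1.b - c2.b) ** 2);

// Example: build part of a diff vector from two samples' text, color, and tag comparison.
const textDiff = normalizeDistance(levenshtein("Checkout", "Check out"));
const colorDiff = normalizeDistance(deltaE76({ L: 50, a: 10, b: 20 }, { L: 52, a: 12, b: 18 }));
const tagSame = normalizeBoolean(true);
console.log([textDiff, colorDiff, tagSame]);
```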
The following are the algorithms used for each calculation type:
It is very common for a page in a website to have multiple elements that share the same appearance and element structure (e.g., a navigation menu, a table structure, a tiles layout).
Trying to detect such elements results in the following issues:
Biasing Attribute States:
The identification process using the “Element Identifier” is about 10-30 times as costly (in terms of performance) and, due to that, takes longer than the average element query using the browser API (with the functions document.getElementById or document.querySelector).
To solve this performance issue and to minimize the performance difference between the “Element Identifier” and the native browser behavior, the present invention uses a caching algorithm.
The caching algorithm tries to reduce the usage of the NN and of normalization as much as possible by generating a unique id derived from the element's current ‘Sample object’.
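A minimal sketch of this caching idea follows; the cache key derivation (JSON serialization of the sample objects) and all names are assumptions for illustration.

```typescript
// Hedged sketch of the caching idea: key the NN result by a unique id derived from the
// current sample objects, so repeated lookups skip the costly NN query. Names are illustrative.

type Sample = Record<string, unknown>;

const probabilityCache = new Map<string, number>();

function cachedProbability(
  knownSample: Sample,
  candidateSample: Sample,
  queryNN: (known: Sample, candidate: Sample) => number // the costly diff + NN path
): number {
  // The unique id consists of the samples' current state (serialized).
  const cacheKey = JSON.stringify(knownSample) + "|" + JSON.stringify(candidateSample);
  const cached = probabilityCache.get(cacheKey);
  if (cached !== undefined) {
    return cached; // cache hit: skip normalization and the NN query entirely
  }
  const probability = queryNN(knownSample, candidateSample);
  probabilityCache.set(cacheKey, probability);
  return probability;
}

// Usage with a stand-in scoring function:
console.log(cachedProbability({ id: "a" }, { id: "a" }, () => 0.97));
```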
The “Identification object” is a JSON object that describes an HTML element in its current state (current browser, current resolution, current attributes, etc.). [Note: JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy for humans to read and write and easy for machines to parse and generate]. This element's JSON object is later used in real time, on an end-user's machine while executing the detection algorithm, to compare and determine the probability that a different sample element is indeed the same element.
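As a hedged example only (the property names shown are illustrative and not the complete claimed property set), an “Identification object” might resemble the following:

```typescript
// Hedged example only: property names are illustrative, not the full claimed property set.
interface IdentificationObject {
  tagName: string;
  attributes: Record<string, string>; // normalized HTML attributes
  style: Record<string, string>;      // normalized computed style values
  text: string;                       // normalized inner text
  rect: { x: number; y: number; width: number; height: number }; // position on screen
  browser: { name: string; viewportWidth: number; viewportHeight: number };
}

const sample: IdentificationObject = {
  tagName: "BUTTON",
  attributes: { id: "checkout", class: "btn btn-primary" },
  style: { color: "rgb(255, 255, 255)", backgroundColor: "rgb(0, 102, 204)", fontFamily: "Arial" },
  text: "Checkout",
  rect: { x: 640, y: 480, width: 120, height: 36 },
  browser: { name: "Chrome", viewportWidth: 1366, viewportHeight: 768 },
};
```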
Composing the “Identification object” requires first overcoming the differences in the value of some properties between browsers and browser versions.
To address the issue of differences between browsers and browser versions, the sampling algorithm ‘normalizes’ all values based on their type (Color, String, Number).
Below is a list of all the properties used by the “Element Identifier,” the way each is normalized, and the way the diff object is created when comparing two objects.
The three tables (HTML element attributes, HTML style attributes, and additional custom attributes) shown in the figures list these properties.
When an end user watches a walkthrough, and the system needs to attach a visual/button/text bubble/any other element to an existing element or to the HTML, the system uses algorithms to limit the number of elements it passes to the machine learning algorithm in order to reduce performance issues.
The system checks multiple selectors such as:
The system then iterates over all the candidate elements, and for each candidate element it normalizes the difference between the current iteration element's attributes and the original target element's attributes (that were captured through the editor) into values that it can pass to the machine learning algorithm or NN.
The system also buffers the elements. For each normalized comparison, the machine learning algorithm will return the probability that the two elements are the same element. The system will then choose the element with the highest probability as the right element. If the highest probability is under a certain threshold, the system will consider the element as not having been found.
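A simplified sketch of this comparison loop is shown below; the per-attribute normalization is deliberately reduced to an equal/not-equal comparison here, and the threshold value and names are assumptions.

```typescript
// Hedged sketch of the runtime comparison step: for each candidate, the differences between
// its attributes and the captured target's attributes are normalized into a numeric vector
// that can be passed to the NN. Attribute handling and the scoring call are illustrative.

type AttributeMap = Record<string, string>;

// Normalize the per-attribute differences into values suitable for the NN input
// (here simplified to 1 when equal, -1 when different).
function normalizedDiff(target: AttributeMap, candidate: AttributeMap): number[] {
  const keys = Array.from(new Set([...Object.keys(target), ...Object.keys(candidate)])).sort();
  return keys.map(key => (target[key] === candidate[key] ? 1 : -1));
}

// Choose the candidate with the highest NN probability, or report "not found" below a threshold.
function pickBestCandidate(
  target: AttributeMap,
  candidates: AttributeMap[],
  nnProbability: (diff: number[]) => number,
  threshold = 0.8 // assumed value
): AttributeMap | null {
  let bestIndex = -1;
  let bestProbability = -Infinity;
  candidates.forEach((candidate, i) => {
    const p = nnProbability(normalizedDiff(target, candidate));
    if (p > bestProbability) {
      bestProbability = p;
      bestIndex = i;
    }
  });
  return bestProbability >= threshold ? candidates[bestIndex] : null;
}
```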
The training dataset is composed of an array of “one-to-many” elements. Each element is compared to multiple elements, creating a “diff” object that teaches the NN whether the given diff was generated from the same element or not. The training dataset is automatically generated by using tools/libraries of the system (Protractor, WebDriver, Selenium, Grunt over Node.js). Data is generated on trivial cases where a traditional CSS selector would suffice, and additional data is generated based on scenarios of element detection that failed using a traditional CSS selector method but could easily be recognized by a human eye. Some of the scenarios the NN is trained on are:
The following illustrates the operation of the present invention:
The present invention represents a system and method for detecting and identifying a previously known element when some of its attributes have changed. Machine-learning is used in the form of a neural network to detect differences between elements. This avoids the problem of having to have a different neural network for each element of interest.
It is clear that the system and method of the present invention can be implemented on a processor executing stored instructions from a memory. The processor may be any type of computer including a PC, laptop, smartphone, tablet, microprocessor, microcontroller or any other type of computing circuit, including analog computing and direct-wired logic. The neural network can be implemented in hardware or in software running on a separate processor or on the main processor. The memory can be any type of memory including semiconductor memory, disk, tape, mass storage, read-only memory (ROM), random access memory (RAM) or any other type of storage device.
Several descriptions and illustrations have been presented to aid in understanding the present invention. One with skill in the art will realize that numerous changes and variations may be made without departing from the spirit of the invention. Each of these changes and variations is within the scope of the present invention.
This application is related to, and claims priority from, U.S. Provisional Patent Application No. 62/545,821 filed Aug. 15, 2017. Application 62/545,821 is hereby incorporated by reference in its entirety.