This application is a national stage application under 35 U.S.C. 371 and claims the benefit of PCT Application No. PCT/CN2018/124812 having an international filing date of 28 Dec. 2018, which designated the United States, the entire contents of which are incorporated herein by reference.
With increasing use of computer systems, it becomes increasingly important to have reliable software. As such, software applications are often subjected to extensive testing to detect and eliminate errors. For software applications with graphical user interfaces (GUIs), such testing may involve interacting with screen elements in different combinations and/or scenarios. For example, a tester may use a computer mouse to select various screen elements such as buttons, sliders, hyperlinks, menus, and so forth.
Some implementations are described with respect to the following figures.
Testing of computer applications with graphical user interfaces (GUIs) can be time-consuming and expensive. For example, human testers may have to manually perform various GUI input commands, such as mouse clicks, keyboard inputs, and so forth. As such, the ability to automate the testing of GUIs may reduce the time and cost associated with software validation. Such automated testing may require information about the screen locations of the GUI elements that receive user inputs. However, some GUIs may include input elements that do not have pre-defined screen locations. Accordingly, it may be difficult to perform automated testing of GUIs with input elements that do not have pre-defined screen locations.
As described further below with reference to
As shown, the computing device 100 can include processor(s) 110, memory 120, and machine-readable storage 130. The processor(s) 110 can include a microprocessor, microcontroller, processor module or subsystem, programmable integrated circuit, programmable gate array, multiple processors, a microprocessor including multiple processing cores, or another control or computing device. The memory 120 can be any type of computer memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM), etc.).
In some implementations, the machine-readable storage 130 can include non-transitory storage media such as hard drives, flash storage, optical disks, etc. As shown, the machine-readable storage 130 can include a GUI analysis module 140, a testing module 150, and target patterns 160.
As shown in
In one or more implementations, the GUI analysis module 140 may automatically identify input elements within a GUI screen. In some examples, the GUI analysis module 140 may identify input elements in GUIs having some characteristics that may interfere with automated identification of input elements. Examples of such GUIs are described below with reference to
In one or more implementations, the GUI analysis module 140 may perform a blob detection analysis to identify potential input elements of a GUI. As used herein, “blob detection” refers to computer image analysis that detects regions that are different from surrounding regions in terms of one or more properties (e.g., brightness, color, etc.). For example, blob detection may be based on differential analysis (e.g., using derivatives of a function relative to position), on the local maxima and minima of a function, and so forth. The term “blob” may refer to a detected region that is different from its surrounding regions. In some implementations, the detected blobs may be identified as screen elements that could possibly be input elements (referred to as “potential input elements”). In one or more implementations, the GUI analysis module 140 may form rows and columns based on subsets of potential input elements, and may determine the intersections of the rows and columns. Furthermore, the GUI analysis module 140 may identify the intersections as the set of input elements to be used for testing. In some implementations, the testing module 150 may perform automated testing of a GUI using the identified set of input elements. The functionality of the GUI analysis module 140 is described further below with reference to
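The blob detection described above can be illustrated with a simplified sketch. The sketch below performs connected-component labeling on a binarized screen image, where each connected foreground region is reported as a blob bounding box. The function name, the binary-image input format, and the 4-connectivity choice are assumptions for illustration, not details from the source; a production implementation might instead use a library detector such as OpenCV's SimpleBlobDetector.

```python
# A simplified, illustrative sketch of blob detection via connected-component
# labeling. Each blob is a region of "different" pixels (value 1) surrounded
# by background pixels (value 0), reported as an (x, y, width, height) box.

def detect_blobs(binary_image):
    """Return bounding boxes (x, y, w, h) of connected foreground regions.

    binary_image: list of rows, each a list of 0/1 pixel values, where 1
    marks a pixel that differs from its surroundings (e.g., by brightness).
    """
    rows, cols = len(binary_image), len(binary_image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for y in range(rows):
        for x in range(cols):
            if binary_image[y][x] and not seen[y][x]:
                # Flood-fill this connected region and track its extent.
                stack = [(y, x)]
                seen[y][x] = True
                min_x = max_x = x
                min_y = max_y = y
                while stack:
                    cy, cx = stack.pop()
                    min_x, max_x = min(min_x, cx), max(max_x, cx)
                    min_y, max_y = min(min_y, cy), max(max_y, cy)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary_image[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append((min_x, min_y,
                              max_x - min_x + 1, max_y - min_y + 1))
    return blobs
```

In this sketch, each returned bounding box corresponds to one potential input element; the binarization step (deciding which pixels differ from their surroundings) is assumed to have been performed beforehand.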
Referring now to
Assume that one or more characteristics of the GUI 200 may cause difficulty in performing automated testing. For example, the input elements 215 may be numeric keys that are displayed in a randomized order each time the GUI 200 is displayed. Such randomization may be performed to improve password security (e.g., to make it difficult to determine the PIN code based on observations of inputs on various portions of the GUI 200). However, because the numeric keys are not presented in consistent locations, it may be difficult to automate the entry of numeric codes in the GUI 200. Further, the presence of other screen elements that are not related to entry of the numeric code may interfere with the automatic determination of the locations of the input elements 215. For example, a computer vision algorithm may incorrectly identify an advertisement as an input element, thereby causing errors during automated testing.
Referring now to
Assume that one or more characteristics of the GUI 260 may cause difficulty in performing automated testing. For example, the separations between the input elements 265 may vary when the GUI 260 is displayed on device screens of different sizes, aspect ratios, and/or resolutions. Accordingly, because the input elements 265 are not presented in consistent locations, it may be difficult to automate the entry of swipe gestures that are defined according to the input elements 265. Further, the presence of other screen elements that are not related to entry of swipe gestures may interfere with the automatic determination of the locations of the input elements 265.
Referring now to
Block 310 may include identifying, based on a blob detection analysis, a plurality of potential input elements in a graphical user interface (GUI). For example, referring to
In one or more implementations, the detected blobs may be used to define a set of potential input elements (i.e., candidate regions that could potentially be input elements of the GUI screen). For example, a potential input element may be defined in terms of the location, shape, and/or size of a detected blob. In some examples described herein, the potential input elements may be represented by a mapping to respective regions of the GUI screen.
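One way to represent such a mapping is a small record type that carries a blob's bounding box and exposes the screen points an automated test would need. The class and field names below are hypothetical, chosen only to illustrate the idea of defining a potential input element in terms of a blob's location, shape, and size.

```python
# Illustrative representation of a potential input element as a screen
# region derived from a detected blob. Names are assumptions, not from
# the source document.
from dataclasses import dataclass


@dataclass(frozen=True)
class PotentialInputElement:
    x: int       # left edge of the blob's bounding box, in pixels
    y: int       # top edge of the bounding box
    width: int
    height: int

    @property
    def center(self):
        """Screen point an automated test could click or tap."""
        return (self.x + self.width // 2, self.y + self.height // 2)

    def contains(self, px, py):
        """True if screen point (px, py) falls inside this region."""
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)
```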
Referring now to
Referring again to
Referring again to
Referring again to
In some implementations, the GUI analysis module 140 may use the target patterns 160 as part of (or to assist in) identifying the set of input elements. For example, the target patterns 160 may store data indicating desired or expected patterns of input element locations within a GUI (e.g., a three-by-three grid, a four-by-three grid, etc.). In some examples, the GUI analysis module 140 may optionally compare the potential input elements located at the intersections 470 to a desired target pattern 160, and may ignore a potential input element that fails to conform to the target pattern 160. Stated differently, a potential input element that does not match the target pattern 160 is not included in the identified set of input elements for use in testing.
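The pattern comparison above can be sketched as follows, with a target pattern represented simply as an expected grid shape (rows by columns). The clustering tolerance and the grid-shape representation are assumptions for illustration; the source does not specify how a target pattern 160 is encoded.

```python
# Hedged sketch: compare candidate element centers against an expected
# grid pattern (e.g., 3x3) and ignore candidates that do not conform.

def filter_by_target_pattern(centers, grid_rows, grid_cols, tol=5):
    """Return only the centers that lie on the expected grid pattern."""
    def cluster(values):
        # Group sorted 1-D coordinates that lie within `tol` of a neighbor.
        groups = []
        for v in sorted(values):
            if groups and v - groups[-1][-1] <= tol:
                groups[-1].append(v)
            else:
                groups.append([v])
        return groups

    col_groups = cluster([x for x, _ in centers])
    row_groups = cluster([y for _, y in centers])
    if len(col_groups) < grid_cols or len(row_groups) < grid_rows:
        return []  # the expected pattern cannot be formed at all
    # Treat the most-populated coordinate clusters as the grid lines.
    col_xs = {x for g in sorted(col_groups, key=len, reverse=True)[:grid_cols]
              for x in g}
    row_ys = {y for g in sorted(row_groups, key=len, reverse=True)[:grid_rows]
              for y in g}
    # A candidate conforms only if both of its coordinates fall on grid
    # lines; stray blobs (e.g., an advertisement) are ignored.
    return [(x, y) for (x, y) in centers if x in col_xs and y in row_ys]
```

For example, nine candidates arranged three-by-three plus one stray candidate would yield only the nine grid-conforming elements.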
In some implementations, the GUI analysis module 140 may perform optical character recognition (OCR) to determine any text characters associated with the identified set of input elements. For example, referring to
Referring again to
Referring now to
Instruction 510 may be executed to identify, based on a blob detection analysis, a plurality of potential input elements in a graphical user interface (GUI). For example, referring to
Instruction 520 may be executed to determine a set of rows including potential input elements that are in a horizontal alignment and in a same size range. For example, referring to
Instruction 530 may be executed to determine a set of columns including potential input elements that are in a vertical alignment and in a same size range. For example, referring to
Instruction 540 may be executed to determine a set of input elements comprising multiple potential input elements that are located at intersections of the identified set of rows and the identified set of columns. For example, referring to
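Instructions 520 through 540 can be sketched together in a short example: elements are grouped into rows by horizontal alignment and similar size, into columns by vertical alignment and similar size, and only elements belonging to both a row and a column survive as intersections. The alignment and size tolerances, and the (x, y, w, h) tuple shape, are assumptions for illustration.

```python
# Illustrative sketch of instructions 520-540: identify elements that sit
# at the intersections of rows (horizontally aligned, similar size) and
# columns (vertically aligned, similar size).

def elements_at_intersections(elements, align_tol=5, size_tol=5):
    """elements: list of (x, y, w, h) bounding boxes.

    Returns the subset of elements that belong to both a row and a
    column, each containing at least two elements.
    """
    def similar_size(a, b):
        return (abs(a[2] - b[2]) <= size_tol
                and abs(a[3] - b[3]) <= size_tol)

    def aligned(a, b, axis):
        # axis=1 compares y-centers (rows); axis=0 compares x-centers
        # (columns). a[axis + 2] is the box extent along that axis.
        ca = a[axis] + a[axis + 2] / 2
        cb = b[axis] + b[axis + 2] / 2
        return abs(ca - cb) <= align_tol and similar_size(a, b)

    in_row = {i for i, e in enumerate(elements)
              if any(aligned(e, o, 1)
                     for j, o in enumerate(elements) if j != i)}
    in_col = {i for i, e in enumerate(elements)
              if any(aligned(e, o, 0)
                     for j, o in enumerate(elements) if j != i)}
    # Intersections: elements that are members of both a row and a column.
    return [elements[i] for i in sorted(in_row & in_col)]
```

A stray blob that aligns with no row and no column (for example, an oddly sized banner) is excluded from the returned set, which is what makes the intersection step robust against unrelated screen elements.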
Instruction 550 may be executed to perform an automated testing of the GUI using the determined set of input elements. For example, referring to
Referring now to
Instruction 610 may be executed to identify, based on a blob detection analysis, a plurality of potential input elements in a graphical user interface (GUI). Instruction 620 may be executed to determine a set of rows including potential input elements that are in a horizontal alignment and in a same size range. Instruction 630 may be executed to determine a set of columns including potential input elements that are in a vertical alignment and in a same size range. Instruction 640 may be executed to determine a set of input elements comprising multiple potential input elements that are located at intersections of the identified set of rows and the identified set of columns. Instruction 650 may be executed to perform an automated testing of the GUI using the determined set of input elements.
Note that, while
In accordance with some implementations, examples are provided for automated identification of input elements in a GUI. In some examples, the identification technique may include performing a blob detection analysis on an image of the GUI. The blob detection analysis may identify potential input elements that can be used to form rows and columns. The intersections of the rows and columns may be used to automatically identify a set of input elements for automated testing of the GUI. Accordingly, some implementations may provide improved automated testing of GUIs.
Data and instructions are stored in respective storage devices, which are implemented as one or multiple computer-readable or machine-readable storage media. The storage media include different forms of non-transitory memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; optical media such as compact disks (CDs) or digital video disks (DVDs); or other types of storage devices.
Note that the instructions discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes. Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.
In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2018/124812 | 12/28/2018 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/133201 | 7/2/2020 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5708730 | Itonori | Jan 1998 | A |
9008443 | Dejean | Apr 2015 | B2 |
9424167 | Lee | Aug 2016 | B2 |
9465726 | Kozhuharov | Oct 2016 | B2 |
9792895 | Khintsitskiy | Oct 2017 | B2 |
9984471 | Becker et al. | May 2018 | B2 |
10339206 | Hoford | Jul 2019 | B2 |
10733754 | Dayanandan | Aug 2020 | B2 |
10936864 | Janardhanan | Mar 2021 | B2 |
20110131551 | Amichai | Jun 2011 | A1 |
20120124495 | Amichai | May 2012 | A1 |
20130159196 | DiZoglio | Jun 2013 | A1 |
20130343658 | Dejean | Dec 2013 | A1 |
20140165040 | Augustin | Jun 2014 | A1 |
20140366005 | Kozhuharov | Dec 2014 | A1 |
20150339213 | Lee | Nov 2015 | A1 |
20160171329 | Khintsitskiy | Jun 2016 | A1 |
20170337321 | Hoford | Nov 2017 | A1 |
20180024901 | Tankersley | Jan 2018 | A1 |
20180203571 | Dayanandan | Jul 2018 | A1 |
20190377942 | Janardhanan | Dec 2019 | A1 |
Number | Date | Country |
---|---|---|
WO-2020133201 | Jul 2020 | WO |
Entry |
---|
International Search Report and Written Opinion prepared by the ISA/CN on Sep. 12, 2019, for International Application No. PCT/CN2018/124812. |
“DNS Firewall,” Cloudflare Inc., retrieved Aug. 15, 2018, 10 pages [retrieved online from: www.cloudflare.com/dns/dns-firewall]. |
“Image Segmentation with Watershed Algorithm,” OpenCV, Aug. 15, 2018, 4 pages [retrieved online from: docs.opencv.org/ref/master/d3/db4/tutorial_py_watershed.html]. |
Abastillas “Real-Time Hand Gesture Detection and Recognition Using Simple Heuristic Rules,” Massey University, Jun. 2011, 56 pages. |
Acevedo-Avila et al. “A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors,” Sensors, Jun. 2016, vol. 16, No. 6, Article 782, 25 pages. |
Gupta et al. “Study on Object Detection using Open CV—Python,” International Journal of Computer Applications, Mar. 2017, vol. 162, No. 8, pp. 17-21. |
International Search Report/Written Opinion; PCT/CN2018/124812; Mailed Sep. 26, 2019; 6 pages. |
Patil et al. “Blob Detection Technique Using Image Processing for Identification of Machine Printed Characters,” International Journal of Innovations in Engineering Research and Technology, Oct. 2015, vol. 2, No. 10, 8 pages. |
Prince “Announcing 1.1.1.1: the fastest, privacy-first consumer DNS service,” Cloudflare Inc., Apr. 1, 2018, 13 pages [retrieved online from: blog.cloudlfare.com/announcing-1111]. |
Number | Date | Country |
---|---|---|
20220107883 A1 | Apr 2022 | US |