The present application claims priority to Chinese Patent Application No. 202210879401.3, filed on Jul. 25, 2022 and entitled “Apparatus and Method for Generating Testing Plans,” which is incorporated by reference herein in its entirety.
The field relates generally to information processing, and more particularly to management of information processing systems.
Software development processes typically include multiple environments, such as one or more development environments, an integration testing environment, a staging environment, and a production environment. New software code may be created by individual developers or small teams of developers in respective ones of the development environments. The integration testing environment provides a common environment where software code from the multiple developers is combined and tested before being provided to the staging environment. The staging environment is designed to emulate the production environment and may be used for final review and approval before new software code is deployed in production applications in the production environment.
Illustrative embodiments of the present disclosure provide techniques for generating testing plans including execution order for test cases and mapping of test cases to test bed pools.
In one embodiment, an apparatus comprises at least one processing device comprising a processor coupled to a memory. The at least one processing device is configured to perform the steps of identifying a plurality of test cases and a plurality of test beds on which the plurality of test cases are configured to run, the plurality of test beds comprising information technology assets of an information technology infrastructure, and creating a plurality of test bed pools, wherein each of the plurality of test bed pools is associated with one of the plurality of test cases and comprises at least one of the plurality of test beds, and wherein a given one of the plurality of test bed pools associated with a given one of the plurality of test cases comprises at least a subset of the plurality of test beds having test bed configurations matching one or more test bed specifications of the given test case. The at least one processing device is also configured to perform the steps of determining a priority level of each of the plurality of test cases, wherein a given priority level of the given test case is determined based at least in part on one or more test case property specifications of the given test case, and determining a dependency degree of each of the plurality of test beds, wherein a given dependency degree for a given one of the plurality of test beds is determined based at least in part on a number of the plurality of test bed pools that the given test bed is part of. The at least one processing device is further configured to perform the step of generating a testing plan for testing a given product, the testing plan comprising a test case execution order for the plurality of test cases and a mapping of the plurality of test cases to the plurality of test beds, wherein the test case execution order is determined based at least in part on the priority levels of the plurality of test cases, and wherein the mapping of the plurality of test cases to the plurality of test beds is determined based at least in part on the dependency degrees of the plurality of test beds. The at least one processing device is further configured to perform the step of executing the testing plan for testing the given product.
These and other illustrative embodiments include, without limitation, methods, apparatus, networks, systems and processor-readable storage media.
Illustrative embodiments will be described herein with reference to exemplary information processing systems and associated computers, servers, storage devices and other processing devices. It is to be appreciated, however, that embodiments are not restricted to use with the particular illustrative system and device configurations shown. Accordingly, the term “information processing system” as used herein is intended to be broadly construed, so as to encompass, for example, processing systems comprising cloud computing and storage systems, as well as other types of processing systems comprising various combinations of physical and virtual processing resources. An information processing system may therefore comprise, for example, at least one data center or other type of cloud-based system that includes one or more clouds hosting tenants that access cloud resources.
The IT assets 106 of the IT infrastructure 105 may host applications that are utilized by respective ones of the client devices 102, such as in accordance with a client-server computer program architecture. In some embodiments, the applications comprise web applications designed for delivery from assets in the IT infrastructure 105 to users (e.g., of client devices 102) over the network 104. Various other examples are possible, such as where one or more applications are used internal to the IT infrastructure 105 and not exposed to the client devices 102. It should be appreciated that, in some embodiments, some of the IT assets 106 of the IT infrastructure 105 may themselves be viewed as applications or more generally software or hardware that is to be tested. For example, ones of the IT assets 106 that are virtual computing resources implemented as software containers may represent software that is to be tested. As another example, ones of the IT assets 106 that are physical computing resources may represent hardware devices that are to be tested.
The testing plan design system 110 utilizes various information stored in the testing database 108 in designing testing plans for use in testing the IT assets 106, applications or other software running on the IT assets 106, etc. In some embodiments, the testing plan design system 110 is used for an enterprise system. For example, an enterprise may subscribe to or otherwise utilize the testing plan design system 110 for generating and running testing plans (e.g., on the IT assets 106 of the IT infrastructure 105, on client devices 102 operated by users of the enterprise, etc.). As used herein, the term “enterprise system” is intended to be construed broadly to include any group of systems or other computing devices. For example, the IT assets 106 of the IT infrastructure 105 may provide a portion of one or more enterprise systems. A given enterprise system may also or alternatively include one or more of the client devices 102. In some embodiments, an enterprise system includes one or more data centers, cloud infrastructure comprising one or more clouds, etc. A given enterprise system, such as cloud infrastructure, may host assets that are associated with multiple enterprises (e.g., two or more different businesses, organizations or other entities).
The client devices 102 may comprise, for example, physical computing devices such as Internet of Things (IoT) devices, mobile telephones, laptop computers, tablet computers, desktop computers or other types of devices utilized by members of an enterprise, in any combination. Such devices are examples of what are more generally referred to herein as “processing devices.” Some of these processing devices are also generally referred to herein as “computers.” The client devices 102 may also or alternately comprise virtualized computing resources, such as virtual machines (VMs), containers, etc.
The client devices 102 in some embodiments comprise respective computers associated with a particular company, organization or other enterprise. Thus, the client devices 102 may be considered examples of assets of an enterprise system. In addition, at least portions of the information processing system 100 may also be referred to herein as collectively comprising one or more “enterprises.” Numerous other operating scenarios involving a wide variety of different types and arrangements of processing nodes are possible, as will be appreciated by those skilled in the art.
The network 104 is assumed to comprise a global computer network such as the Internet, although other types of networks can be part of the network 104, including a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, or various portions or combinations of these and other types of networks.
The testing database 108, as discussed above, is configured to store and record various information that is used by the testing plan design system 110 in designing testing plans for use in testing the IT assets 106, applications or other software running on the IT assets 106. Such information may include, but is not limited to, information regarding test bed requirements for different test cases (e.g., where the test bed requirements represent hardware, software and configuration requirements or other limitations for where test cases may be run), information regarding test case properties for different test cases (e.g., representing factors or criteria that may be used in determining a prioritization among different test cases), etc. The testing database 108 in some embodiments is implemented using one or more storage systems or devices associated with the testing plan design system 110. In some embodiments, one or more of the storage systems utilized to implement the testing database 108 comprises a scale-out all-flash content addressable storage array or other type of storage array.
The term “storage system” as used herein is therefore intended to be broadly construed, and should not be viewed as being limited to content addressable storage systems or flash-based storage systems. A given storage system as the term is broadly used herein can comprise, for example, network-attached storage (NAS), storage area networks (SANs), direct-attached storage (DAS) and distributed DAS, as well as combinations of these and other storage types, including software-defined storage.
Other particular types of storage products that can be used in implementing storage systems in illustrative embodiments include all-flash and hybrid flash storage arrays, software-defined storage products, cloud storage products, object-based storage products, and scale-out NAS clusters. Combinations of multiple ones of these and other storage products can also be used in implementing a given storage system in an illustrative embodiment.
Although not explicitly shown in FIG. 1, one or more input-output devices such as keyboards, displays or other types of input-output devices may be used to support one or more user interfaces to the testing plan design system 110, as well as to support communication between the testing plan design system 110 and other related systems and devices not explicitly shown.
The client devices 102 are configured to access or otherwise utilize the IT infrastructure 105. In some embodiments, the client devices 102 are assumed to be associated with system administrators, IT managers or other authorized personnel responsible for managing the IT assets 106 of the IT infrastructure 105 (e.g., where such management includes performing testing of the IT assets 106, or of applications or other software that runs on the IT assets 106). For example, a given one of the client devices 102 may be operated by a user to access a graphical user interface (GUI) provided by the testing plan design system 110 to manage testing plans (e.g., create, review, execute, etc.). The testing plan design system 110 may be provided as a cloud service that is accessible by the given client device 102 to allow the user thereof to manage testing plans. In some embodiments, the IT assets 106 of the IT infrastructure 105 are owned or operated by the same enterprise that operates the testing plan design system 110 (e.g., where an enterprise such as a business provides support for the assets it operates). In other embodiments, the IT assets 106 of the IT infrastructure 105 may be owned or operated by one or more enterprises different than the enterprise which operates the testing plan design system 110 (e.g., a first enterprise provides support for assets that are owned by multiple different customers, businesses, etc.). Various other examples are possible.
In other embodiments, the testing plan design system 110 may provide support for testing of the client devices 102, instead of or in addition to providing support for the IT assets 106 of the IT infrastructure 105. For example, the testing plan design system 110 may be operated by a hardware vendor that manufactures and sells computing devices (e.g., desktops, laptops, tablets, smartphones, etc.), and the client devices 102 represent computing devices sold by that hardware vendor. The testing plan design system 110 may also or alternatively be operated by a software vendor that produces and sells software (e.g., applications) that runs on the client devices 102. The testing plan design system 110, however, is not required to be operated by any single hardware or software vendor. Instead, the testing plan design system 110 may be offered as a service to provide support for computing devices or software that are sold by any number of hardware or software vendors. The client devices 102 may subscribe to the testing plan design system 110, so as to provide support for testing of the client devices 102 or software running thereon, for testing hardware or software products that are to be deployed as the IT assets 106 and/or the client devices 102, etc. Various other examples are possible.
In some embodiments, the client devices 102 may implement host agents that are configured for automated transmission of information regarding test cases, test beds and test case execution (e.g., test bed tags and test case property tags as discussed in further detail below, results of test case attempts, etc. which are periodically provided to the testing database 108 and/or the testing plan design system 110). Such host agents may also or alternatively be configured to automatically receive from the testing plan design system 110 commands to execute remote actions (e.g., to run various testing plans or portions thereof on the client devices 102 and/or the IT assets 106 of the IT infrastructure 105, such as instructions to attempt test cases on particular test beds hosted on the client devices 102 and/or the IT assets 106 of the IT infrastructure 105). Host agents may similarly be deployed on the IT assets 106 of the IT infrastructure 105.
It should be noted that a “host agent” as this term is generally used herein may comprise an automated entity, such as a software entity running on a processing device. Accordingly, a host agent need not be a human entity.
The testing plan design system 110 in the FIG. 1 embodiment is assumed to be implemented using at least one processing device, and illustratively comprises testing plan generation logic 112 and testing plan execution logic 114. The testing plan generation logic 112 is configured to generate testing plans (e.g., including test case execution orders and mappings of test cases to test beds), and the testing plan execution logic 114 is configured to execute the generated testing plans.
It is to be appreciated that the particular arrangement of the client devices 102, the IT infrastructure 105 and the testing plan design system 110 illustrated in the FIG. 1 embodiment is presented by way of example only, and alternative arrangements can be used in other embodiments.
At least portions of the testing plan generation logic 112 and the testing plan execution logic 114 may be implemented at least in part in the form of software that is stored in memory and executed by a processor.
The testing plan design system 110 and other portions of the information processing system 100, as will be described in further detail below, may be part of cloud infrastructure.
The testing plan design system 110 and other components of the information processing system 100 in the FIG. 1 embodiment are assumed to be implemented using at least one processing platform comprising one or more processing devices each having a processor coupled to a memory.
The client devices 102, IT infrastructure 105, the testing database 108 and the testing plan design system 110 or components thereof (e.g., the testing plan generation logic 112 and the testing plan execution logic 114) may be implemented on respective distinct processing platforms, although numerous other arrangements are possible. For example, in some embodiments at least portions of the testing plan design system 110 and one or more of the client devices 102, the IT infrastructure 105 and/or the testing database 108 are implemented on the same processing platform. A given client device (e.g., 102-1) can therefore be implemented at least in part within at least one processing platform that implements at least a portion of the testing plan design system 110.
The term “processing platform” as used herein is intended to be broadly construed so as to encompass, by way of illustration and without limitation, multiple sets of processing devices and associated storage systems that are configured to communicate over one or more networks. For example, distributed implementations of the information processing system 100 are possible, in which certain components of the system reside in one data center in a first geographic location while other components of the system reside in one or more other data centers in one or more other geographic locations that are potentially remote from the first geographic location. Thus, it is possible in some implementations of the information processing system 100 for the client devices 102, the IT infrastructure 105, IT assets 106, the testing database 108 and the testing plan design system 110, or portions or components thereof, to reside in different data centers. Numerous other distributed implementations are possible. The testing plan design system 110 can also be implemented in a distributed manner across multiple data centers.
Additional examples of processing platforms utilized to implement the testing plan design system 110 and other components of the information processing system 100 in illustrative embodiments will be described in more detail below in conjunction with FIGS. 15 and 16.
It is to be appreciated that these and other features of illustrative embodiments are presented by way of example only, and should not be construed as limiting in any way.
It is to be understood that the particular set of elements shown in FIG. 1 is presented by way of illustrative example only, and in other embodiments additional or alternative elements may be used.
An exemplary process for generating testing plans including execution order for test cases and mapping of test cases to test bed pools will now be described in more detail with reference to the flow diagram of FIG. 2.
In this embodiment, the process includes steps 200 through 210. These steps are assumed to be performed by the testing plan design system 110 utilizing the testing plan generation logic 112 and the testing plan execution logic 114. The process begins with step 200, identifying a plurality of test cases and a plurality of test beds on which the plurality of test cases are configured to run, the plurality of test beds comprising IT assets of an IT infrastructure.
A plurality of test bed pools are created in step 202. Each of the plurality of test bed pools is associated with one of the plurality of test cases and comprises at least one of the plurality of test beds. A given one of the plurality of test bed pools associated with a given one of the plurality of test cases comprises at least a subset of the plurality of test beds having test bed configurations matching one or more test bed specifications of the given test case. A given test bed configuration for a given one of the plurality of test beds comprises at least one of a hardware and a software configuration of a given one of the IT assets of the IT infrastructure on which the given test bed runs. The one or more test bed specifications of the given test case may comprise at least one of one or more hardware configuration requirements and one or more software configuration requirements.
In step 204, a priority level of each of the plurality of test cases is determined. A given priority level of the given test case is determined based at least in part on one or more test case property specifications of the given test case. The one or more test case property specifications of the given test case may specify a type of testing performed during the given test case. The type of testing may comprise at least one of regression testing, new feature coverage testing, and benchmark testing. The one or more test case property specifications of the given test case may also or alternatively specify one or more results of previous attempts to perform the given test case. The one or more results of the previous attempts to perform the given test case may indicate at least one of: whether the given test case has passed during the previous attempts to perform the given test case; and bugs encountered during the previous attempts to perform the given test case.
The given priority level for the given test case may be determined as a weighted average of weights assigned to the one or more test case property specifications. The given priority level may be determined utilizing a time-based analytic hierarchy process which takes into account a current one of a plurality of testing stages of a test life cycle of the testing plan. The weight values assigned to the one or more test case property specifications may be dynamically updated at different ones of the plurality of testing stages of the test life cycle of the testing plan. The time-based analytic hierarchy process may utilize a dynamic judgment matrix, and the weighted average may be computed by determining a geometric mean of each row vector of the dynamic judgment matrix and normalizing the weight values of the one or more test case property specifications.
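By way of a non-limiting illustration, the following Python sketch shows one way the weighted-average priority computation might be realized, with a separate weight table per testing stage. The tag names, stage labels and weight values are hypothetical placeholders rather than values taken from this disclosure.

    # Hypothetical per-stage weight tables for the test case property tags.
    # Each table sums to 1, mirroring w1 + w2 + . . . + wr = 1.
    STAGE_WEIGHTS = {
        "early": {"benchmark": 0.30, "regression": 0.25, "most_bugs": 0.20,
                  "new_feature_coverage": 0.15, "never_passes": 0.05, "gr_gate": 0.05},
        "late":  {"benchmark": 0.30, "regression": 0.10, "most_bugs": 0.10,
                  "new_feature_coverage": 0.10, "never_passes": 0.20, "gr_gate": 0.20},
    }

    def priority(test_case_tags: set, stage: str) -> float:
        """Priority p(tci): sum of the weights of the property tags the case carries."""
        weights = STAGE_WEIGHTS[stage]
        return sum(w for tag, w in weights.items() if tag in test_case_tags)

    # Example: a benchmark test case that also covers a new feature, early stage.
    print(priority({"benchmark", "new_feature_coverage"}, "early"))  # 0.45

A tag-indicator weighted sum of this kind is only one of the possible scoring functions consistent with the description above.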
The FIG. 2 process continues with step 206, determining a dependency degree of each of the plurality of test beds. A given dependency degree for a given one of the plurality of test beds is determined based at least in part on a number of the plurality of test bed pools that the given test bed is part of. In step 208, a testing plan for testing a given product is generated, the testing plan comprising a test case execution order for the plurality of test cases and a mapping of the plurality of test cases to the plurality of test beds, wherein the test case execution order is determined based at least in part on the priority levels of the plurality of test cases and the mapping is determined based at least in part on the dependency degrees of the plurality of test beds. The testing plan for testing the given product is executed in step 210.
Test execution plays an important role in product development and release, where the products being tested may include IT assets, such as physical and virtual computing resources, firmware, software, etc. With the continuous addition of new features for products, the number of test cases required also increases. Within an organization, a project management team may formulate a testing plan and expect that all test cases in the testing plan can or will be executed or at least attempted in time (e.g., prior to product release), especially for important or high priority test cases which can impact whether the product releases on time. From test execution experience, however, it is often the case that not all test cases in a testing plan are able to be executed on time. There are various reasons that test cases in a testing plan are not able to be executed in time, including but not limited to blocking issues and environmental issues.
Blocking issues are typically encountered at the beginning of a test life cycle, as test cases that are executed in the early stage of product development may generate critical product problems which block or prevent execution of other test cases in later stages of product development. Environmental issues may be encountered throughout the test life cycle. Consider, as an example, storage system testing where a given test case may be executed on a given test bed which includes one or more storage products (e.g., hardware and/or software storage IT assets) and associated network configurations. Various environmental issues may happen on the given test bed, such as storage product or network interface reconfiguration, hosts being down, services being down on host restart, IO tool upgrades, etc. Test engineers can solve such environmental issues from time to time throughout the test life cycle, but this can occupy a significant amount of test execution time which can prevent some test cases in the testing plan from being attempted on time.
From a project management perspective, the objectives of testing include: maximizing the execution rate of test cases in a given time period; and completing test cases with higher importance or priority as early as possible. Illustrative embodiments provide technical solutions for enabling smart test execution process optimization. In some embodiments, analytic hierarchy process (AHP) and linear programming algorithms are leveraged to improve test execution from multiple stages to meet the objectives of maximizing (or at least increasing or improving) the execution rate of test cases in a given time period, and completing test cases with higher importance or priority as early as possible.
In a product development process, a project management team may arrange the execution of test cases manually in a testing plan, and then make some temporary adjustments to the scheduling of test cases in the testing plan according to project progress. The test execution team will run the tests accordingly. Such an approach, however, is not intelligent when facing blocking issues which can impact test case attempt rate, and lacks optimization of the test execution ordering and processing. The technical solutions described herein provide smart test execution process optimization approaches which can improve test execution during product development from multiple levels, including: increasing test case attempt rate by creating test bed pools for test cases; and prioritizing important test cases such that they are executed as early as possible. In some embodiments, AHP is leveraged to update test case priority dynamically along with project progress and linear programming is leveraged to generate a test case execution ordering in a testing plan.
During a product development process, there are a number of test cases that need to run at each development stage. Normally, one test case reserves one test bed, and the test case releases that test bed after the test case is attempted (e.g., which may result in the test case passing or failing).
Illustrative embodiments provide technical solutions for smart test execution process optimization. To do so, some embodiments are able to refine the simple mapping relationship between test cases 401 and test beds 405 shown in FIG. 4 by creating test bed pools 503 for the test cases as shown in FIG. 5.
The creation of the test bed pools 503 can save considerable resources, and is efficient to implement. In practice, there are usually no or very few test cases that can only be run on one specific test bed. In other words, for any given test case, there are normally multiple different test beds on which the given test case may be attempted. When designing a test case, test bed requirements (e.g., multiple hardware, software and configuration requirements or other limitations) may be specified for the test beds on which the test case may be attempted. For example, a given test case may require 32G Fibre Channel (FC) installed on a target storage product. Thus, if a given test bed has 32G FC installed, the given test bed matches the test bed requirements of the given test case and can be added to the given test case's test bed pool. It should be appreciated that test cases may have multiple such test bed requirements.
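As a concrete illustration of this matching, the following Python sketch builds a test bed pool for each test case by checking whether a test bed's configuration tags include all of the test case's test bed requirement tags. The dataclass layout and the "32G_FC" tag string are illustrative assumptions, not names defined by this disclosure.

    from dataclasses import dataclass, field

    @dataclass
    class TestBed:
        name: str
        config_tags: set = field(default_factory=set)   # installed hardware/software

    @dataclass
    class TestCase:
        name: str
        hw_tags: set = field(default_factory=set)       # [hw] test bed requirements

    def build_test_bed_pools(test_cases, test_beds):
        """ptb(tci): every test bed whose configuration satisfies all of the
        test case's test bed requirement tags."""
        return {
            tc.name: [tb.name for tb in test_beds if tc.hw_tags <= tb.config_tags]
            for tc in test_cases
        }

    # Example using the 32G FC requirement discussed above.
    beds = [TestBed("tb1", {"32G_FC", "NVMe"}), TestBed("tb2", {"16G_FC"})]
    cases = [TestCase("tc1", {"32G_FC"})]
    print(build_test_bed_pools(cases, beds))  # {'tc1': ['tb1']}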
Test bed pools allow mapping of test cases to more available test beds, which will increase the attempt rate for test cases as compared to the one-to-one mapping example of FIG. 4.
The creation of test bed pools for test cases provides a first level of improvements, with smart test execution planning providing additional improvements at a second level. As mentioned above, smart test case execution planning has a goal of running important test cases at a higher priority. Test cases may have test bed tags that specify test bed requirements (e.g., hardware, software and configuration requirements and other limitations). Test cases may also have test case property tags, where such test case property tags may be used to determine prioritization or importance of test cases. Various types of test case properties may be tagged to test cases. As an example, a test case which has never passed before may have a test case property tag of “never passes.” As another example, an important or high priority test case may have a test case property tag of “benchmark,” a test case with the most encountered bugs may have a test case property tag of “most bugs,” etc. Various other examples of test case property tags will be described below.
To achieve the goal of smart test execution planning, it is desired to increase the test case attempt rate and ensure that important or high priority test cases are executed first or earlier in the test execution process. Illustrative embodiments provide technical solutions for finding optimal matches between test cases and test beds, and for determining test case execution order. It should be noted that for a test case, its tags may change along the test life cycle. For example, a test case may have a test case property tag of “never passes” if it has not passed in previous test cycles. If that test case passes in a current test cycle, however, the “never passes” tag will be removed for subsequent test cycles. Thus, the priority of a test case is not necessarily fixed.
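One possible, deliberately simplified way to refresh a test case's property tags after an attempt, consistent with the "never passes" example above, is sketched below; the function name, the result dictionary keys and the bug-count policy are assumptions made for illustration.

    def update_property_tags(tags: set, attempt_result: dict) -> set:
        """Refresh a test case's [sw] tags after a test case attempt."""
        updated = set(tags)
        if attempt_result.get("passed"):
            updated.discard("never_passes")   # tag no longer applies once the case passes
        if attempt_result.get("bugs_found", 0) > 0:
            updated.add("most_bugs")          # crude stand-in for a bug-count threshold policy
        return updated

    print(update_property_tags({"never_passes", "regression"}, {"passed": True}))
    # {'regression'}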
Test cases may be continually generated along the product development life cycle. Determining how to execute as many test cases as possible, including the more important test cases, is a significant technical problem in project management. The technical solutions described herein provide an approach for generating smart test execution plans which improve test execution at two levels (e.g., through the creation of test bed pools, and through intelligent ordering and mapping of test cases to test beds in the test bed pools) to achieve the goals of increasing test case attempt rate and prioritizing important test cases for execution first.
In some embodiments, the technical solutions for smart test plan generation comprehensively consider multiple important factors, including but not limited to test case priority (e.g., based on test case property tags) and test bed dependency degree. The technical solutions may use a dynamic analytic hierarchy process (AHP) to calculate the priorities of test cases, and may use linear programming mathematical modeling techniques to generate test execution plans (e.g., an ordering of test cases and assignment of test cases to test beds within test bed pools that the test cases are mapped to).
In the description below, TC is used to denote a test case set, where TC=[tc1, tc2, . . . tcm]. TB is used to denote a test bed set, where TB=[tb1, tb2, . . . tbn]. Test tags are denoted as {[hw], [sw]}, where [hw] denotes the test bed tags and [sw] denotes the test case property tags. The test bed tags [hw] represent test case requirements for test beds (e.g., hardware, software and configuration requirements and other limitations). The test case property tags [sw] represent test case “soft” requirements or test case properties, where [sw] tags may be updated along a test case's life cycle. ptb(tci) denotes the test bed pool for test case tci, d(tbj) denotes the dependency degree of test bed tbj, and ptc(tbj) denotes the test case pool for test bed tbj.
The priorities of the test cases in TC are calculated in step 1007 based on the [sw] tags of the test cases. As discussed above, various test case property tags may be predefined for the test cases, such as: “never passes” denoting a test case which has never passed before; “most bugs” denoting a test case that finds many bugs; “benchmark” denoting a test case that ensures one or more basic product functions work as expected; “new feature coverage” denoting a test case designed for one or more new product features; “regression” denoting a test case for regression testing; and “GR gate” denoting a test case that is a golden run (GR) gate test which needs to be attempted before the golden run. Each of the [sw] tags may have an associated weight value. Assuming there are r different [sw] tags, the weight values may be represented as W=[w1, w2, . . . wr], where w1+w2+ . . . +wr=1.
In some embodiments, the weights may be distributed evenly (e.g., each weight is assigned the same value). In other embodiments, the weights may be dynamically assigned and updated throughout different testing stages, as the meaning or importance of different ones of the [sw] tags may be different in different testing stages. For example, in the whole test life cycle across testing stages, test cases with the “benchmark” tag are important and are expected to be 100% attempted. At earlier test stages, test cases with “regression,” “most bugs” and “new feature coverage” tags may be more important than other test cases without those tags, although there may be exceptions such as test cases with the “benchmark” tag. At later test stages, test cases with “never passes” or “GR gate” tags may be more important than other test cases without those tags, although again there may be exceptions such as test cases with the “benchmark” tag. It should be appreciated that this is just an example of the differing importance or priority of test case property tags, and that other embodiments may use various other test case property tags and test case property tag weighting in addition to or in place of one or more of these examples.
AHP is a structured technique for organizing and analyzing complex decisions, and provides an accurate approach for quantifying the weights of decision criteria such as the test case property tags. Analysis of the test case property tags gives the insight that conditions vary over time, such that making a good decision regarding test case priority involves judging what is more likely or more preferred over different time periods (e.g., different testing stages).
In some embodiments, the relative importance of the r test case property tags at a given time t is captured in an r×r dynamic judgment matrix A(t)=[aij(t)], where aij(t)>0, aji(t)=1/aij(t), and aii(t)=1.
The geometric mean of each row vector of the matrix A(t) is determined (e.g., using a square root method) and normalized. The weight of each tag, and thus the eigenvector W, is obtained according to:

wi(t) = (Πj=1..r aij(t))^(1/r) / Σk=1..r (Πj=1..r akj(t))^(1/r)
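A minimal numeric sketch of this geometric-mean weighting, assuming NumPy is available, is given below; the 3×3 judgment matrix values are illustrative only.

    import numpy as np

    def ahp_weights(A: np.ndarray) -> np.ndarray:
        """Derive tag weights W from an r x r judgment matrix A(t): take the
        geometric mean of each row, then normalize so the weights sum to 1."""
        geo_means = np.prod(A, axis=1) ** (1.0 / A.shape[0])
        return geo_means / geo_means.sum()

    # Example: tag 1 judged 3x as important as tag 2 and 5x as important as tag 3.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(ahp_weights(A).round(3))  # approximately [0.648, 0.23, 0.122]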
Suppose that there are a total of 12 test cycles planned for a test life cycle. The weight values assigned to the test case property tags may then be updated dynamically across those test cycles as testing moves through the different testing stages.
The process flow 1000 continues with step 1009, calculating the dependency degrees of test beds in the test bed pools. Each test case tci has an associated test bed pool, ptb(tci). A given test bed, however, may be added to multiple test bed pools. Thus, each test bed tbj has an associated test case pool ptc(tbj). To ensure that important test cases are executed first or earlier in the test life cycle, two conditions should be met: important test cases should have higher priority (e.g., an important test case tci has higher p(tci) than less important test cases); and test beds with minimal dependency degree should be selected for important test cases (e.g., for the most important test case tci, the test bed tbj in ptb(tci) with minimal dependency degree is assigned, where the minimal dependency degree means that the test bed tbj is more free or stable than other test beds in ptb(tci)). The dependency degree of test bed tbj, d(tbj), may be determined according to the following equation:

d(tbj) = |ptc(tbj)|

i.e., the number of the test bed pools that the test bed tbj is part of.
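A simple Python sketch of this counting, reusing the pool dictionary produced by the earlier pool-building sketch, might look as follows.

    def dependency_degrees(pools: dict) -> dict:
        """d(tbj): the number of test bed pools ptb(tci) that contain test bed tbj."""
        degrees = {}
        for beds in pools.values():
            for tb in beds:
                degrees[tb] = degrees.get(tb, 0) + 1
        return degrees

    print(dependency_degrees({"tc1": ["tb1", "tb2"], "tc2": ["tb1"]}))
    # {'tb1': 2, 'tb2': 1}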
A Z-score normalization method is used to normalize p(tci) and d(tbj) according to the following equations:

p(tci)′ = (p(tci) − μp)/σp

d(tbj)′ = (d(tbj) − μd)/σd

where μp and σp denote the mean and standard deviation of the test case priority levels, and μd and σd denote the mean and standard deviation of the test bed dependency degrees.
The process flow 1000 continues with step 1011, generating a test execution table using an objective function that is based on test case priority and test bed dependency degree. Linear programming mathematical modeling techniques are used in some embodiments to determine the optimal test execution process order. In some embodiments, the following objective function is used:
max E=ωp*p(tci)′+ωd*(1−d(tbj)′)
where ωp and ωd denote weights of p(tci) and d(tbj), respectively, and where ωp+ωd=1. By tuning the weights, better overall balancing results may be achieved.
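As one possible realization of this step, the following sketch treats the problem as an assignment problem and uses SciPy's linear_sum_assignment in place of a general linear programming solver; the weight values, helper names and the large negative score used to forbid out-of-pool pairings are assumptions made for illustration.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def plan_assignment(p_norm, d_norm, pools, w_p=0.6, w_d=0.4):
        """Assign each test case to at most one test bed so that the summed
        objective E = w_p * p' + w_d * (1 - d') is maximized."""
        cases, beds = list(p_norm), list(d_norm)
        score = np.full((len(cases), len(beds)), -1e9)  # forbid beds outside the pool
        for i, tc in enumerate(cases):
            for j, tb in enumerate(beds):
                if tb in pools[tc]:
                    score[i, j] = w_p * p_norm[tc] + w_d * (1.0 - d_norm[tb])
        rows, cols = linear_sum_assignment(score, maximize=True)
        return {cases[i]: beds[j] for i, j in zip(rows, cols)}

    # Example with two test cases and two test beds (z-score normalized inputs).
    p = {"tc1": 1.2, "tc2": -0.5}
    d = {"tb1": 0.8, "tb2": -0.8}
    pools = {"tc1": ["tb1", "tb2"], "tc2": ["tb2"]}
    print(plan_assignment(p, d, pools))  # {'tc1': 'tb1', 'tc2': 'tb2'}

Because tc2 can only run on tb2, the optimal assignment gives tb2 to tc2 and routes the higher-priority tc1 to tb1, illustrating how the dependency-degree term steers important test cases toward less contended test beds.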
In step 1013, the test execution table is followed to start the test life cycle and begin executing test cases. Test cases may run in parallel, sequentially, or in combinations of parallel and sequential execution according to exclusivity (e.g., whether a given test case requires exclusive use of its assigned test bed). The test execution table may be refreshed as test cases are attempted.
As the test cases are executed in step 1013, the step 1015 determination may be performed. In some embodiments, the step 1015 determination is performed continually, or after each test case is attempted. The step 1015 determination may also or alternatively be performed periodically on some defined schedule, in response to explicit user requests, in response to detecting some designated conditions (e.g., that testing has moved from one testing stage to another, such that weight or tag updates should be performed), etc. In step 1015, a determination is made as to whether all test cases in the test execution plan have been attempted. If the result of the step 1015 determination is no, the process flow 1000 proceeds to step 1017 where a determination is made as to whether any of the tags for any of the test cases in TC or test beds in TB are to be updated. If the result of the step 1017 determination is yes, then the process flow 1000 proceeds to step 1019 where TC and TB are updated. Following step 1019, the process flow 1000 may return to step 1003. If the result of the step 1017 determination is no, then the process flow 1000 returns to step 1007. The process flow 1000 may continue until the result of the step 1015 determination is yes, at which point the process flow 1000 proceeds to step 1021 where testing is complete.
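Tying the pieces together, a simplified driver for the process flow 1000 might look like the following sketch. Here attempt(), refresh_tags() and current_stage() are hypothetical hooks into the surrounding test infrastructure, the earlier sketches (build_test_bed_pools, priority, dependency_degrees, plan_assignment) are assumed to be in scope, and each TestCase is assumed to carry an sw_tags attribute holding its property tags.

    import statistics

    def normalize(values: dict) -> dict:
        """Z-score normalization of a {name: value} mapping, per the equations above."""
        mean = statistics.mean(values.values())
        std = statistics.pstdev(values.values()) or 1.0  # guard against zero variance
        return {k: (v - mean) / std for k, v in values.items()}

    def run_test_life_cycle(test_cases, test_beds, max_cycles=12):
        attempted = set()
        all_names = {tc.name for tc in test_cases}
        for _ in range(max_cycles):
            if attempted == all_names:
                break                                       # steps 1015/1021: testing complete
            pools = build_test_bed_pools(test_cases, test_beds)
            prios = {tc.name: priority(tc.sw_tags, current_stage())   # step 1007
                     for tc in test_cases}
            degrees = dependency_degrees(pools)                       # step 1009
            table = plan_assignment(normalize(prios), normalize(degrees), pools)  # step 1011
            for tc_name, tb_name in table.items():                    # step 1013
                result = attempt(tc_name, tb_name)
                attempted.add(tc_name)
                refresh_tags(tc_name, result)                         # steps 1017/1019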
It is to be appreciated that the particular advantages described above and elsewhere herein are associated with particular illustrative embodiments and need not be present in other embodiments. Also, the particular types of information processing system features and functionality as illustrated in the drawings and described above are exemplary only, and numerous other arrangements may be used in other embodiments.
Illustrative embodiments of processing platforms utilized to implement functionality for generating testing plans including execution order for test cases and mapping of test cases to test bed pools will now be described in greater detail with reference to FIGS. 15 and 16.
The cloud infrastructure 1500 comprises multiple virtual machines (VMs) and/or container sets 1502-1, 1502-2, . . . 1502-L implemented using virtualization infrastructure 1504. The cloud infrastructure 1500 further comprises sets of applications 1510-1, 1510-2, . . . 1510-L running on respective ones of the VMs/container sets 1502-1, 1502-2, . . . 1502-L under the control of the virtualization infrastructure 1504. The VMs/container sets 1502 may comprise respective VMs, respective sets of one or more containers, or respective sets of one or more containers running in VMs.
In some implementations of the FIG. 15 embodiment, the VMs/container sets 1502 comprise respective VMs implemented using virtualization infrastructure 1504 that comprises at least one hypervisor.
In other implementations of the FIG. 15 embodiment, the VMs/container sets 1502 comprise respective containers implemented using virtualization infrastructure 1504 that provides operating-system-level virtualization functionality, such as support for containers running on bare metal hosts or containers running in VMs.
As is apparent from the above, one or more of the processing modules or other components of information processing system 100 may each run on a computer, server, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.” The cloud infrastructure 1500 shown in FIG. 15 may represent at least a portion of one processing platform; another example of such a processing platform is the processing platform 1600 shown in FIG. 16.
The processing platform 1600 in this embodiment comprises a portion of information processing system 100 and includes a plurality of processing devices, denoted 1602-1, 1602-2, 1602-3, . . . 1602-K, which communicate with one another over a network 1604.
The network 1604 may comprise any type of network, including by way of example a global computer network such as the Internet, a WAN, a LAN, a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, or various portions or combinations of these and other types of networks.
The processing device 1602-1 in the processing platform 1600 comprises a processor 1610 coupled to a memory 1612.
The processor 1610 may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a central processing unit (CPU), a graphical processing unit (GPU), a tensor processing unit (TPU), a video processing unit (VPU) or other type of processing circuitry, as well as portions or combinations of such circuitry elements.
The memory 1612 may comprise random access memory (RAM), read-only memory (ROM), flash memory or other types of memory, in any combination. The memory 1612 and other memories disclosed herein should be viewed as illustrative examples of what are more generally referred to as “processor-readable storage media” storing executable program code of one or more software programs.
Articles of manufacture comprising such processor-readable storage media are considered illustrative embodiments. A given such article of manufacture may comprise, for example, a storage array, a storage disk or an integrated circuit containing RAM, ROM, flash memory or other electronic memory, or any of a wide variety of other types of computer program products. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. Numerous other types of computer program products comprising processor-readable storage media can be used.
Also included in the processing device 1602-1 is network interface circuitry 1614, which is used to interface the processing device with the network 1604 and other system components, and may comprise conventional transceivers.
The other processing devices 1602 of the processing platform 1600 are assumed to be configured in a manner similar to that shown for processing device 1602-1 in the figure.
Again, the particular processing platform 1600 shown in the figure is presented by way of example only, and information processing system 100 may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, servers, storage devices or other processing devices.
For example, other processing platforms used to implement illustrative embodiments can comprise converged infrastructure.
It should therefore be understood that in other embodiments different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform, or each such element may be implemented on a separate processing platform.
As indicated previously, components of an information processing system as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. For example, at least portions of the functionality for generating testing plans including execution order for test cases and mapping of test cases to test bed pools as disclosed herein are illustratively implemented in the form of software running on one or more processing devices.
It should again be emphasized that the above-described embodiments are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. For example, the disclosed techniques are applicable to a wide variety of other types of information processing systems, testing plans, testing tasks, testing actions, etc. Also, the particular configurations of system and device elements and associated processing operations illustratively shown in the drawings can be varied in other embodiments. Moreover, the various assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the disclosure. Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.