Software code sharing is important; the current state of the art allows for the sharing of subroutines (sometimes called methods) and libraries of subroutines. The term “subroutine” in computer science typically refers to a named block of code that may have a parameter list and a return value. This block of code can be invoked from within another code block via its name and parameter list. A subroutine can contain a significant amount of code, yet sharing a portion of a subroutine is not possible unless the to-be-shared code portion is itself a subroutine. Rather than requiring that the entire subroutine be shared, it is more efficient to share only that portion of the subroutine that is actually required.
Furthermore, in prior-art software development environments, code and software design quickly become disassociated, making it difficult to maintain the association between code and design and between files/databases and design.
The introduction of any new technology requires a bridging mechanism between past solutions and new capability. The present method forms a bridge between conventional programming and an advanced programming method by analyzing existing source code for process and control elements, then encapsulating the control elements as augmented state machines and process elements as kernels. The new elements can then have meta-data attached, allowing software code sharing at the sub-subroutine level and automatic code/file/database upgrading, thus transforming the older technology into advanced technology.
Automatic code-design and file/database-design association allows a developer to simply perform the design, while locating and associating code or files/databases become automatic. Contrast this with source-code sharing models that require the developer to first find, then analyze, and finally associate blocks of code or locate and verify files and databases. Once code/files/databases and design can be reliably associated, then new, better code/files/databases can also be automatically located and used to replace existing code blocks, effectively allowing automatic code/file/database upgrading.
The following terms and concepts used herein are defined below.
Data transformation—A data transformation is a task that accepts data as input and transforms the data to generate output data.
Control transformation—A control transformation evaluates conditions and sends and receives control to/from other control transformations and/or data transformations.
Control bubble—A control bubble is a graphical indicator of a control transformation. A control bubble symbol indicates a structure that performs only transitions and does not perform processing.
Process bubble—A process bubble is a graphical indicator of a data transformation.
Control Kernel—A control kernel is a software routine or function that contains only the following types of computer language constructs: declaration statements, subroutine calls, looping statements (for, while, do, etc.), decision statements (if-else, etc.), arithmetic statements (including increment and decrement operators), relational operators, logical operators, type declarations, and branching statements (goto, jump, continue, exit, etc.).
Process Kernel—A process kernel is a software routine or function that contains the following types of computer language constructs: assignment statements, looping statements, arithmetic operators (including increment and decrement operators), and type declaration statements. Information is passed to and from a process kernel via global memory using RAM.
Function—A software routine or, more simply, an algorithm that performs one or more transformations.
Node—A node is a processing element comprising a processing core (or processor), memory, and communication capability.
Metadata—Metadata is information about an entity, rather than the entity itself.
MPT Algorithm—An MPT algorithm comprises control kernels, process kernels, and other MPT algorithms.
MPT Data Transfer Model—The MPT data transfer model comprises a standard model for transferring information to/from a process kernel. The model includes a key, a starting address, a size, and a structure_index. The key is the current job number, the starting address is the information starting address, the size is the number of bytes the data construct uses, and the structure_index points to the struct definition that is used by the process kernel to interpret the memory locations accessed.
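The data transfer model above can be sketched as a small C structure. This is an illustrative sketch only; the field and function names (`mpt_transfer_descriptor`, `mpt_describe`) are assumptions, not names taken from the present system.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the MPT data transfer model: each transfer is
   described by a key (the current job number), a starting address, a size
   in bytes, and a structure_index selecting the struct definition the
   process kernel uses to interpret the memory it accesses. */
typedef struct {
    uint64_t key;              /* current job number */
    void    *starting_address; /* where the data construct begins */
    size_t   size;             /* number of bytes the construct uses */
    int      structure_index;  /* selects the struct definition to apply */
} mpt_transfer_descriptor;

/* Build a descriptor for a block of global memory. */
mpt_transfer_descriptor mpt_describe(uint64_t job, void *addr,
                                     size_t bytes, int struct_idx) {
    mpt_transfer_descriptor d = { job, addr, bytes, struct_idx };
    return d;
}
```

A process kernel receiving such a descriptor needs no ordered parameter list; everything it must know about the data is carried in the descriptor itself.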
MPT State Machine—An MPT state machine is a two-dimensional matrix that links together all relevant control kernels into a single non-language construct that calls process kernels. Each row in an MPT state machine consists of an index, the subroutine to be called (or the symbol “NOP”), a conditional statement, an index to the next accessible row when the condition is true (or an end-of-job symbol), and an index to the next accessible row when the condition is false (or an end-of-job symbol). Process kernels form the “states” of the state machine, while the activation of those states forms the state transitions. This eliminates the need for software linker-loaders.
State Machine Interpreter—for the purpose of the present document, a State Machine Interpreter is a method whereby the states and state transitions of a state machine are used as active software, rather than as documentation.
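The two definitions above can be combined into a minimal sketch of interpreting the state-machine matrix as active software. This is not the patented implementation; the row layout follows the textual description (index, kernel or NOP, condition, true-row, false-row), and the use of `-1` as an end-of-job marker and all identifier names are assumptions.

```c
/* Illustrative sketch of a state machine interpreter: each row holds a
   process kernel to call (or NULL for "NOP"), a condition evaluated after
   the kernel runs, and the next row for the true and false outcomes.
   An index of -1 is used here as the end-of-job marker. */
typedef int (*kernel_fn)(void);
typedef int (*cond_fn)(void);

typedef struct {
    int       index;      /* row number */
    kernel_fn kernel;     /* process kernel to call, or NULL for NOP */
    cond_fn   condition;  /* evaluated after the kernel runs */
    int       next_true;  /* next row when condition is true (-1 = end) */
    int       next_false; /* next row when condition is false (-1 = end) */
} sm_row;

/* Walk the matrix from row 0 until an end-of-job index is reached.
   Returns the number of kernel activations performed. */
int run_state_machine(const sm_row *rows) {
    int row = 0, steps = 0;
    while (row != -1) {
        if (rows[row].kernel) {  /* NOP rows skip the call */
            rows[row].kernel();
            steps++;
        }
        row = rows[row].condition() ? rows[row].next_true
                                    : rows[row].next_false;
    }
    return steps;
}

/* Demonstration: a one-row machine that increments a counter until it
   reaches 3, then transitions to end-of-job. */
static int counter = 0;
static int inc(void)    { return ++counter; }
static int under3(void) { return counter < 3; }

static const sm_row demo[] = {
    { 0, inc, under3, 0, -1 },
};
```

Because the kernels are reached through the matrix rather than through compiled call sites, no linker-loader step is needed to rewire which kernels run; changing a row changes the program flow.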
Computing Environment
System 100 is coupled to a host management system 145, which provides management of system functions, and issues system requests. Algorithm execution module 125 initiates execution of kernels invoked by algorithms that are executed. Algorithm execution system 135 may comprise any computing system with multiple computing nodes 140 which can execute kernels stored in system 100. Management system 145 can be any external client computer system which requests services from the present system 100. These services include requesting that kernels or algorithms be added/changed/deleted from a respective library within the current system.
The software for system services that are indicated below as being initiated by various corresponding ‘buttons’ is stored in data and program storage area 190.
In addition, management system 145 can request that a kernel/algorithm be executed. It should be noted that the present system is not limited to the specific file names, formats and instructions presented herein. The methods described herein may be executed via system 100, or other systems compatible therewith.
Software Functional Structure
Standard software is constructed using functions (sometimes also called methods, routines, or algorithms) and code segments to instantiate application concepts. A code segment comprises one or more code statements. Functions typically contain code segments bound together with branching or looping structures, as illustrated in the exemplary diagram of
Table 1, below, shows the branching and looping commands used by the C language, for example.
There are two additional types of statements in the C language: storage declaration and operator, as respectively shown in Table 2 and Table 3, below. Note that although C language code is shown in all examples, any programming language can be analyzed similarly.
At step 325, meta-data 360 is then associated with these newly-created control and process design elements. The meta-data can be used to associate the newly extracted design elements with code other than the original code used in the extraction process, as described further below.
Example Source Code for MPT Algorithm
Extracting Subroutines
All procedural computer languages have the concept of a subroutine. A subroutine is a sequence of instructions for performing a particular task; this sequence can be called from multiple places within a computer program. Subroutines can call other subroutines, including themselves (recursion). Subroutines that are called primarily for their return value are known as functions. In object-oriented programming, subroutines or functions with limited execution scope are called methods. Because programs can call subroutines which can in turn call other subroutines, the hierarchical decomposition structure of a computer program is obtained by tracking the subroutine calls in that program. In the present system, a single linear transformation having no control flow is called a process kernel. Multiple process kernels connected via flow control are called algorithms. Algorithms can contain other algorithms as well as kernels. This means that an algorithm is equivalent to a subroutine.
As shown in
Extracting Variables
Almost all control structures require accessing variables, pointers, and/or arrays. The control (looping) statement below is an example:
for (index = 0; index < sizeof(buffer_info); index++)
The statement above requires that the variable index be accessed. Accessing variables, pointers, and arrays requires determining their starting address and type. Therefore, at step 410, the starting address and type are determined for each of these entities.
In the case of “buffer_info”, it also requires running “malloc( )” and “sizeof( )” functions prior to running the entire code segment to determine the number of bytes used by the “buffer_info” data structure.
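The address/size discovery step can be sketched in C. The structure layout and the helper name `record_entity` below are assumptions for illustration; the sketch only mirrors what the text describes the detector function doing, namely recording an entity's name, starting address, and byte count after allocation.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the buffer_info structure of the example. */
typedef struct {
    int  buffer1[4];
    int  buffer2[4];
    char test[10];
} buffer_info;

/* Record one entity's name, starting address, and size in bytes, in the
   same spirit as the detector function described in the text.
   Returns the size recorded. */
size_t record_entity(const char *name, const void *addr, size_t bytes) {
    printf("Variable Name: %s Address: %p Size: %zu\n",
           name, (void *)addr, bytes);
    return bytes;
}
```

Running `malloc( )` first and then recording the returned pointer together with `sizeof(buffer_info)` yields exactly the (address, size) pair the data dictionary needs.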
In the C and C++ languages, the use of the following commands creates the required dynamic memory allocation: “malloc ( )”, “calloc ( )”, “realloc ( )”, and “new type ( )”. In addition, there are arrays that are dynamically allocated at runtime. All of these structures dynamically allocate heap space. Thus, for every command that dynamically allocates memory, the required dynamic memory allocation is created for each routine for each program thread. The C language also has the ability to take the address of any variable and write any value starting at that address.
Table 5, below, shows the extracted variables, constants, structures, and #defines (all of which are highlighted) for the example code segment shown in Table 4. This table is known as the Variables and Constants Table or VCT 412.
bufferinfo->sample_buffer2 | 4 | 4
bufferinfo->sample_buffer2->buffer1 | 4,194,304 | 4,194,304
bufferinfo->sample_buffer2->buffer2 | 4,194,304 | 4,194,304
bufferinfo->sample_buffer2->test | 10 | 10
bufferinfo->buffer1 | 4,194,304 | 4,194,304
bufferinfo->buffer2 | 4,194,304 | 4,194,304
bufferinfo->test | 10 | 10
sampleinfo->test1 | 4 | 4
sampleinfo->test2 | 4 | 4
sampleinfo->test3 | 4 | 4
The variables, pointers, and arrays shown in Table 5 are constructed variables. Constructed variables are all possible variables that can be constructed using the structure definitions given. Not all constructed variables are used in the present sample code, but all are possible.
Before variables can be extracted, the “#defines” and “structs” are extracted by parsing these elements from the source code, at step 415, wherein the source code file is opened and scanned for any “#defines” or “structs”. Any found items are placed into a file 402 with the same name as the source code file but with an “.ETR” file name extension. In Table 6, below, the found “#defines” and “structs” are indicated by italics.
Table 7, below, shows the placement of a function that is used within the source code file of the example code to update the “ETR” file 402. In the present example, the function “mptStartingAddressDetector( )” (or equivalent), highlighted in bold text below, is used to determine the starting address of the “malloc( )'ed” variables. The starting addresses are then stored by the system. The newly augmented source code file 403 uses the same name as the source code segment file 401 with the file extension changed to “.AUG”.
At step 425, control and memory-allocation statements are separated: each “if” control statement that contained a “malloc( )” command is modified so that the “malloc( )” call is moved out of the “if” statement.
char mptFile[256];
FILE *fileNamePointer;

strcpy(mptFile, argv[0]);
strcat(mptFile, ".ETR");
mptStartingAddressStart(mptFile, fileNamePointer);
mptStartingAddressDetector(fileNamePointer, "index",
    (uint)&index);
mptStartingAddressDetector(fileNamePointer,
    "test_string", (uint)&test_string);
bufferinfo = (buffer_info *) malloc(sizeof(buffer_info));
if (bufferinfo == NULL) {
}
mptStartingAddressDetector(
    fileNamePointer,
    "bufferinfo",
    (uint) bufferinfo);
mptStartingAddressDetector(
    fileNamePointer,
    "bufferinfo->test",
    (uint) bufferinfo->test);
bufferinfo->sample_buffer2 = (sample_buffer *) malloc(
    sizeof(sample_buffer));
if (bufferinfo->sample_buffer2 == NULL) {
    mptStartingAddressEnd(fileNamePointer);
}
mptStartingAddressDetector(
    fileNamePointer,
    "bufferinfo->sample_buffer2",
    (uint) bufferinfo->sample_buffer2);
mptStartingAddressDetector(
    fileNamePointer,
    "bufferinfo->sample_buffer2->buffer1[ ]",
    (uint) bufferinfo->sample_buffer2->buffer1);
mptStartingAddressDetector(
    fileNamePointer,
    "bufferinfo->sample_buffer2->buffer2[ ]",
    (uint) bufferinfo->sample_buffer2->buffer2);
mptStartingAddressDetector(
    fileNamePointer,
    "bufferinfo->sample_buffer2->test",
    (uint) bufferinfo->sample_buffer2->test);
sampleinfo = (sample_buffer1 *) malloc(sizeof(sample_buffer1));
mptStartingAddressDetector(
    fileNamePointer,
    "sampleinfo",
    (uint) sampleinfo);
if (sampleinfo == NULL) {
}
index = 0;
MPTForLoop1:
if (index < sizeof(buffer_info)) {
    index++;
    goto MPTForLoop1;
}
mptStartingAddressEnd(fileNamePointer);
}

mptStartingAddressStart(char *fileName, FILE *mptFilePointer) {
    if (fileName == NULL) {
        printf("illegal file name");
        exit(10000);
    }
    else {
        if ((mptFilePointer = fopen(fileName, "a")) == NULL) {
            printf("Cannot open file");
            exit(10001);
        }
    }
    return(0);
}

mptStartingAddressDetector(FILE *fileNamePointer, char *variableName,
    uint address)
{
    fprintf(fileNamePointer, "Variable Name: %s Address: %u\n",
        variableName, address);
    return(0);
}

mptStartingAddressEnd(FILE *fileNamePointer) {
    fclose(fileNamePointer);
}
Next, “for loops” are converted into an “if . . . goto” form, at step 430. The “if . . . goto” form exposes the process kernel and a control vector.
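The conversion of step 430 can be illustrated with a minimal before/after pair. The function names are illustrative; the point is that the “if . . . goto” form turns the loop's initialization, test, body, and increment into individually addressable statements, exposing the body as a process kernel and the control statements as a control vector.

```c
/* Original "for" form. */
int sum_for(const int *a, int n) {
    int i, sum = 0;
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

/* Equivalent "if ... goto" form produced by the conversion. */
int sum_goto(const int *a, int n) {
    int i, sum = 0;
    i = 0;               /* initialization (control) */
loop:
    if (i < n) {         /* test (control) */
        sum += a[i];     /* body (process kernel) */
        i++;             /* increment (control) */
        goto loop;
    }
    return sum;
}
```

Both forms compute the same result; only the second makes the control elements separable from the process element.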
At step 435, the function “mptStartingAddressStart( )” is inserted at the beginning of the code segment 401. When “mptStartingAddressStart( )” is called, it opens the ETR file with the same name as the source code file but with the file extension set to “.ETR”. Prior to any program exit or return call, the “mptStartingAddressEnd( )” function is called, which closes the ETR file (see Table 7). All language-defined functions/methods are treated as part of the language, rather than as user-defined functions or methods. In the case of the C language, this means that code blocks are not extracted from the function types listed in Table 8, below, which shows the C language functions:
Extracting Process and Control Kernels
At step 440, the present system accesses the “.AUG” file 403 and creates a set of kernel files. Each kernel file includes the source code file name concatenated with either the letter P (for process) or the letter C (for control), along with consecutive numbering. Examples of kernel file names are shown below:
sourceCodeFile_P1 ( ), sourceCodeFile_P2( ), . . . , sourceCodeFile_PN( )
or
SCF_P1 ( ), SCF_P2( ), . . . , SCF_PN( )
sourceCodeFile_C1( ), sourceCodeFile_C2( ), . . . , sourceCodeFile_CN( )
or
SCF_C1( ), SCF_C2( ), . . . , SCF_CN( )
Each added kernel indicates that it has completed, using the MptReturn kernel tracking variable. In an exemplary embodiment, this tracking variable is a sixty-four bit integer variable that saves the same process number as is placed on the kernel file name. The kernel number is placed prior to exiting the kernel. The “MptReturn” kernel variable is used by the MPT state machine to perform linear kernel transitions. The structural difference between a kernel and a function (in the C language) occurs at the parameter level.
A function has a parameter list, that is, an ordered group of input/output variables used by other functions and the main program to communicate with the target function. The information is communicated using either pass-by-reference or pass-by-value techniques. The only difference between the two techniques is that a copy of the data is created and made accessible when the pass-by-value technique is used, while a pointer to the actual data location is used during pass-by-reference.
The ordered-list nature of the parameter list adds a barrier to using a particular function. A kernel uses a parameter set, not a parameter list, so the order of the parameters makes no difference. Before a kernel can be made, the functions that will become the kernels must be generated. These functions are called proto-process kernels, and the example in Table 9, below, shows how they are extracted.
int64_t MptReturn = 0;
int main(int argc, char *argv[]) {
    mptStartingAddressDetector(argv[0], ".ETR", "index", &index);
    mptStartingAddressDetector(argv[0], ".ETR", "test_string",
        &test_string);
    if (MptReturn == 0) SCF_P1(bufferinfo);
    if (bufferinfo == NULL) {
    }
    mptStartingAddressDetector(
        argv[0],
        ".ETR",
        "bufferinfo",
        bufferinfo);
    mptStartingAddressDetector(
        argv[0],
        ".ETR",
        "bufferinfo->test",
        bufferinfo->test);
    if (MptReturn == 1) SCF_P2(bufferinfo->sample_buffer2);
    if (bufferinfo->sample_buffer2 == NULL) {
    }
    mptStartingAddressDetector(
        argv[0],
        ".ETR",
        "bufferinfo->sample_buffer2",
        bufferinfo->sample_buffer2);
    mptStartingAddressDetector(
        argv[0],
        ".ETR",
        "bufferinfo->sample_buffer2->buffer1[ ]",
        bufferinfo->sample_buffer2->buffer1);
    mptStartingAddressDetector(
        argv[0],
        ".ETR",
        "bufferinfo->sample_buffer2->buffer2[ ]",
        bufferinfo->sample_buffer2->buffer2);
    mptStartingAddressDetector(
        argv[0],
        ".ETR",
        "bufferinfo->sample_buffer2->test",
        bufferinfo->sample_buffer2->test);
    if (MptReturn == 2) SCF_P3(sampleinfo);
    mptStartingAddressDetector(
        argv[0],
        ".ETR",
        "sampleinfo",
        sampleinfo);
    if (sampleinfo == NULL) {
    }
    if (MptReturn == 3) SCF_P4(index);
MPTForLoop1:
    if (index < sizeof(buffer_info)) {
        if (MptReturn == 4) SCF_P5(bufferinfo, index);
        goto MPTForLoop1;
    }
    if (MptReturn == 5) SCF_P6(bufferinfo, sampleinfo);
}

int SCF_P1(buffer_info *bufferinfo) {
    bufferinfo = (buffer_info *) malloc(sizeof(buffer_info));
    MptReturn = 1;
}

int SCF_P2(sample_buffer *sample_buffer2) {
    bufferinfo->sample_buffer2 = (sample_buffer *) malloc(
        sizeof(sample_buffer));
    MptReturn = 2;
}

int SCF_P3(sample_buffer1 *sampleinfo) {
    sampleinfo = (sample_buffer1 *) malloc(sizeof(sample_buffer1));
    MptReturn = 3;
}

int SCF_P4(int index) {
    index = 0;
    MptReturn = 4;
}

int SCF_P5(buffer_info *bufferinfo, int index) {
    bufferinfo->sample_buffer2->buffer1[index] = index;
    bufferinfo->sample_buffer2->buffer2[index] = index + 1;
    index++;
    MptReturn = 5;
}

int SCF_P6(buffer_info *bufferinfo, sample_buffer1 *sampleinfo) {
    bufferinfo->sample_buffer2->test = "testtesttest";
    bufferinfo->test = "testtesttest";
    sampleinfo->test1 = 1;
    sampleinfo->test2 = 2;
    MptReturn = 6;
}
Once the proto-process kernels are identified, their parameter lists are transformed into a parameter set, completing the kernel extraction process.
The proto-process kernel parameters lists are converted into parameter sets as follows:
Groups of proto-process kernels that are linked together with control flows are considered algorithms. Groups of algorithms that are linked together with control flows are also considered algorithms.
All parameters are now associated with input and output data-flows. All input and output data-flows are associated with kernels and algorithms.
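The list-versus-set distinction can be sketched concretely. Modeling a parameter set as name/value pairs looked up by name is an assumption for illustration, not the patented mechanism; the point it demonstrates is that with a set, the order in which parameters are supplied no longer matters.

```c
#include <string.h>

/* One named parameter in a parameter set. */
typedef struct { const char *name; int value; } param;

/* Find a parameter by name in a set of n parameters; returns 0 if absent. */
int param_get(const param *set, int n, const char *name) {
    for (int i = 0; i < n; i++)
        if (strcmp(set[i].name, name) == 0)
            return set[i].value;
    return 0;
}

/* A "kernel" consuming a parameter set: the caller may supply the
   width and height parameters in any order. */
int area_kernel(const param *set, int n) {
    return param_get(set, n, "width") * param_get(set, n, "height");
}
```

A conventional function `area(int width, int height)` would force callers to remember the argument order; the set-based kernel does not, which is the barrier-removal the text describes.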
At step 445, kernels are transformed into kernel-type processes (which do not decompose) and, at step 450, algorithms are transformed into algorithm-type processes (which do decompose). These processes are used to generate a high-level design, such as that shown in the graph in
At step 455, kernel and algorithm code is extracted and saved as components comprising separately executable code 460 and associated meta-data 360 (e.g., keyword list 1407).
If a parameter resolves to an address then that parameter represents a pass-by-reference. In the “C” programming language this is indicated by an asterisk in the parameter definition. Since a pass-by-reference requires that the data be copied to separate data store variables, the mptStartingAddressDetector( ) function obtains the addresses, types and sizes of all variables for the data dictionary, described in the following section.
All of the interface, data movement, data storage, and control found in the original software are represented in the example decomposition diagrams. As can be seen, the example 0.0 decomposition shown in
If an input/output parameter uses pass-by-value technology, the receiving routine has an additional kernel attached called, for example, “MPTCopyValue” which performs the pass-by-value copy, as shown in the decomposition example 800 of
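A copy kernel of the kind described can be sketched in a few lines. The name “MPTCopyValue” is taken from the text, but this signature and implementation are assumptions; the sketch only shows the essential behavior, namely that the receiving side copies the data into its own store so later writes cannot affect the caller's value.

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of an "MPTCopyValue"-style kernel: copy `bytes` bytes from the
   caller's data into freshly allocated storage owned by the receiver.
   Returns NULL if allocation fails. */
void *MPTCopyValue(const void *src, size_t bytes) {
    void *copy = malloc(bytes);
    if (copy)
        memcpy(copy, src, bytes);
    return copy;
}
```

After the copy, the receiving kernel operates on its private store, giving pass-by-value semantics on top of the address-based data movement used elsewhere.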
Sharing Sub-Subroutine Level Software
If a system design is functionally decomposed until it reaches the point where the lowest decomposition level consists of only the “Basic Blocks” (herein called McCabe code blocks) of a program as defined in McCabe's cyclomatic complexity analysis, and as described above with respect to
Decomposition to McCabe Code Blocks
The following are decomposition rules of the present method, which are used to generate the
Automatic Code/File/Database Search/Test/Design Association
Metadata
For automatic association of code with database search/test/design in accordance with the present method, code-associated metadata comprises a keyword list 1407 for each McCabe code block and a list of all inputs and outputs to/from the code block. Similarly, in an exemplary embodiment, each decomposition design element (process bubble) also has an associated keyword list, input/output list (from the design), and associated test procedures.
Once a code block has been displayed on screen 1500 in block 1509, a decomposition object function, such as “Add keyword list”, is selected in drop-down box 1506. In response, a list 1507 of keywords (or other appropriate data) to be associated with the code block is entered in block 1508. When the user has finished entering the desired information (such as a group of keywords), the association between the entered information and the selected object is stored in keyword list 1507 in digital memory (e.g., in data and program storage area 190). Loop values for a process can be set and viewed by selecting loop symbol 1503, and I/O metadata in a data flow can be set and viewed by selecting the corresponding arrow 1504.
With both the code block and the transformation process having associated keyword lists 1407 and 1507 respectively, a list of candidate code blocks may be created for any particular transformation process.
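The candidate-list construction can be sketched as a keyword-coverage test. The matching rule used here, that a code block qualifies when its keyword list contains every keyword attached to the transformation process, is an assumption for illustration; the text states only that the two keyword lists are compared.

```c
#include <string.h>

/* Returns 1 if keyword kw appears in the n-entry list. */
static int has_keyword(const char **list, int n, const char *kw) {
    for (int i = 0; i < n; i++)
        if (strcmp(list[i], kw) == 0)
            return 1;
    return 0;
}

/* Returns 1 when the code block's keywords cover all of the
   transformation process's keywords, making it a candidate. */
int is_candidate(const char **block_kw, int nb,
                 const char **proc_kw, int np) {
    for (int i = 0; i < np; i++)
        if (!has_keyword(block_kw, nb, proc_kw[i]))
            return 0;
    return 1;
}
```

Running this test over every stored code block yields the candidate list for a given transformation process.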
List 1610 is normally longer than needed, since only one code block name is ultimately required.
Unlike traditional systems, the present method associates test procedures not with code but with transformation processes. Associating test procedures with the design allows one test procedure to be run against all remaining code blocks. Since a test procedure consists of inputs and associated expected outputs, it can determine which code blocks generate the correct answers and which do not.
After step 1330, there are typically only a few code blocks left. To further decrease the number of code blocks to a single one, an additional step may be performed, in which developer goals are evaluated. Here, the developer defines the overall goal to be achieved with the design. This goal is defined by a list of possible goals, examples of which are shown in Table 11 below.
A developer can mix and match goals to produce a desired result. At step 1335, the code block that best meets the selected goals is selected, via a comparison of developer goals, such as those shown in Table 12 below, with metadata for the remaining code blocks 1710.
The final selection criteria indicated by the developer are compared against candidate code blocks 1710 to yield the code block closest to the developer's goals. Automatically associating a code block with a design element means that code and design can no longer drift apart. Not being able to associate a design element with a code block means either the code must be re-written or the design must be further decomposed.
Data Store Extension
A data store is equivalent to a “C” or “C++” language data structure. What is still desired is a method for attaching FILES and DATABASES to processes. Attaching files and databases to processes is accomplished via a new data store type, the “F” (file) type. An example of an F-type object symbol is shown below:
F-Type Data Store Definition
A file definition list, such as that shown in Table 13, below, may be displayed in response to a user request.
Flat File Selection
Once the flat file has been defined, the present system can serialize any input dataset properly and save the data in a cloud or other environment. This data can then be used by any design by selecting the correct file name with the correct keyword list 1901 and field names/types. Standard file calls are treated as if they were database queries.
Database Selection
At step 1910, a developer associates a database file with one or more keywords. Selection of a ‘database’ or equivalent button causes the database information description to be displayed as shown in Table 15 below.
Select Database Type
Selecting the Database Type option causes a list of supported database types to be shown. An example of this list is shown in Table 16 below.
Schema
At step 1915, the developer enters the database schema for each selected database type, as shown in Table 17 below.
The first time a table is defined, it is placed into the selected database using, for example, the SQL CREATE TABLE command (for SQL databases) or a similar command for noSQL databases. Adding data to an existing database table is performed using the SQL UPDATE command (for SQL databases) or a similar command for noSQL databases. Changing the SQL schema is accomplished using an ALTER, DROP, DELETE, or TRUNCATE command for SQL databases.
Queries
At step 1920, selection of ‘queries’ allows the developer to enter a numbered list of queries to access the current database. A query can be accessed from the program by selecting the query number corresponding to the required query as a dataflow into the database, with the return value returning on the return data flow, as shown in Table 18 below.
The first time data is placed into the selected database, an SQL CREATE TABLE command (for SQL databases) or a similar command (for noSQL databases) is generated. Adding data to an existing database causes an SQL UPDATE command (for SQL databases) or a similar command (for noSQL databases) to be generated. Changing the schema causes an ALTER command to be generated for SQL databases.
A set of queries is attached to any database so that the database can be tested for correctness. An exemplary set of test queries is shown below in Table 19.
An exemplary set of file ‘queries’ is shown in Table 20 below.
Automatic Attachment of Databases to Design Element
Since a file or a database can exist outside of a program, it is very useful to be able to locate the proper file or database. Note that the file format (for flat files), the schema (for SQL databases), and the keys (for key-value-type noSQL databases) all define how to access the data. These data access methods can also be used to find the correct file or database.
As shown in
List 2110 is further culled, as shown in
Having described the invention in detail and by reference to specific embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the invention defined in the appended claims. More specifically, it is contemplated that the present system is not limited to the specifically-disclosed aspects thereof.
This application is a continuation of U.S. patent application Ser. No. 13/490,345, filed Jun. 6, 2012, which is a continuation-in-part of U.S. patent application Ser. No. 13/425,136 entitled “Parallelism From Functional Decomposition”, filed Mar. 12, 2012. Each of the above-mentioned applications is incorporated herein by reference.
This application published as US 2014/0304684 A1, Oct. 2014 (US). Parent application: Ser. No. 13/490,345 (child: the present application, Ser. No. 14/312,639); parent application: Ser. No. 13/425,136 (child: Ser. No. 13/490,345).