This invention relates generally to the field of computer programming, and more specifically to the assessment of programming techniques and adherence to programming standards for secure system design and application execution.
There are a myriad of testing and assessment techniques for validating various properties of software applications and network implementations. However, one of the most critical processes for ensuring that the deployment of software does not expose an organization to unacceptable risks is security and vulnerability testing. Conventional techniques used to perform such testing include static analysis (automated code review), dynamic analysis (automated penetration testing), and manual analyses such as code review, design review, and manual penetration testing. All of these analysis techniques are aimed at finding security weaknesses and vulnerabilities in an application, and the results are typically provided in report form to the programmers, product managers, and quality assurance (QA) staff. The report can provide detailed results (e.g., program names, line numbers, variable names, data connections, etc.) as well as a summary of the results. The report may be a conventional document such as a text file or a structured XML file.
To assist developers in steering clear of many of the well-known pitfalls, system security professionals have developed, over time, a number of best practices. These best practices are typically published as documents, textbooks, wiki pages, or other reference materials. The best practices can include, for example, adherence to certain secure coding standards, use of enhanced-security code libraries, avoidance of code constructs or libraries known to be risky, etc.
There are a number of tools that attempt to identify potential or actual security problems in application code, thus providing “negative feedback” to the developers on suspect code and, in some cases, suggesting potential steps to improve the code. However, to date there have not existed any automated mechanisms for explicitly identifying the developer's affirmative use of more-secure best practices, or for providing “positive feedback” to the developer on their coding. As such, developers who implement certain well-designed coding or design techniques may not fully benefit from a comprehensive knowledge base regarding particular best practices.
The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent upon a reading of the specification and a study of the drawings.
In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention.
The techniques and supporting systems described herein provide a comprehensive and customizable approach to identifying certain best practices used during the design and development of software applications, as well as recommending additional enhancements or courses of action that may be implemented to further improve the application. As referred to hereinafter, software applications may include (but are not necessarily limited to) any sort of instructions for a machine, including, for example, without limitation, a component, a class, a library, a script, an applet, a logic table, a data block, or any combination or collection of one or more of these.
The appropriate type of software security analysis and best practice implementation depends on many factors, including (but not necessarily limited to) the technical details of an application (e.g., the language in which it is written and the platform on which it is to be deployed) as well as the business context in which the application operates. For a non-limiting example, an application that is “customer-facing” and facilitates high-volume, secure transactions such as banking or ecommerce will require rigorous testing to ensure that customer data is not jeopardized. Conversely, applications such as document-control systems or desktop applications that are implemented entirely within an organization and operated behind secure firewalls require less stringent testing. Therefore, balancing the added costs of executing additional security assessments and testing against the risk of potential losses is critical.
In general, various embodiments of the analysis engine 125 provide a software system, and methods of using the system, that examine the received code of the target software application and identify specific application security best practices that are applicable to it. The analysis engine 125 then identifies locations in the target application's code where the various best practices ought to be implemented and determines, for each location, whether the relevant best practices appear to have been implemented. At each location, the analysis engine 125 determines to what extent the relevant best practices appear to have been implemented correctly, and to what extent they may be implemented incompletely or incorrectly, and provides positive feedback to the developers for what appears to be a correct implementation of best practices.
In some embodiments, the analysis engine 125 interacts with various testing engines and code review modules, as well with assessment and threat databases, and includes benchmarking and reporting capabilities for comparing assessment results among applications, developers, teams and/or organizations.
In one embodiment, for a non-limiting example, the analysis engine 125 interacts with a dynamic testing engine 130, a static testing engine 135, a pen testing engine 140 and a module for performing manual code review 145. In some embodiments, the dynamic analysis engine 130 interacts with the target software application 110 as an external entity and executes the application 110 in a manner that mirrors or emulates the runtime environment in which it operates. In some embodiments, the dynamic analysis engine 130 receives a description of the interfaces to the application 110, sends test and/or simulation data to the application via the interfaces, and analyzes the received responses. The test data may be application-specific (e.g., provided with the application as a library, data file, or structured input) or application-agnostic, such as data and/or scripts known to exploit application vulnerabilities. Based on the responses, the dynamic analysis engine 130 determines whether any security defects exist in the application 110 and the extent to which it may be vulnerable to certain threats. The defects and best practices may be reported in real-time (e.g., via the communications server 120) and/or stored in a database for subsequent analysis and reporting.
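By way of a non-limiting illustration only, the probe-and-observe loop of such a dynamic analysis might be sketched as follows; the probe strings, the `probe_interface` helper, and the toy `vulnerable_echo` interface are hypothetical stand-ins, not part of the platform described herein.

```python
# Hypothetical sketch of a dynamic-analysis probe loop; all names here
# are illustrative, not part of the platform described above.

# Application-agnostic test data known to exploit common vulnerabilities.
GENERIC_PROBES = [
    "' OR '1'='1",                  # SQL injection
    "<script>alert(1)</script>",    # cross-site scripting
    "../../etc/passwd",             # path traversal
]

def probe_interface(handler, probes=GENERIC_PROBES):
    """Send each probe to an application interface (modeled here as a
    callable) and flag responses that suggest a security defect."""
    findings = []
    for payload in probes:
        response = handler(payload)
        # A payload reflected verbatim, or a leaked database error,
        # suggests the input was not sanitized.
        if payload in response or "SQL syntax" in response:
            findings.append((payload, response))
    return findings

# Toy interface that unsafely reflects its input back to the client.
def vulnerable_echo(data):
    return "<html>You searched for: " + data + "</html>"

findings = probe_interface(vulnerable_echo)
print(len(findings))  # all three probes are reflected verbatim
```

In a real deployment the handler would wrap the application's actual network interfaces, and the findings would be stored for subsequent analysis and reporting.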
In some embodiments, the static analysis engine 135 receives a binary or bytecode version of the target software application 110 as input. For example, a high-level semantic model of the application 110 is created containing control-flow and data-flow graphs of the application 110, and this model is then analyzed for the use of best practices and/or quality defects, including security flaws, by a set of analysis scans.
In some embodiments, the pen testing engine 140 performs penetration testing of the application 110. Penetration testing includes, for example, simulating and analyzing various web-based interactions between a client and the server on which the application 110 operates. This includes executing standard HTTP commands such as GET and POST, analyzing FORM elements and scripting elements (both client and server-side), and manipulating inputs to elicit known vulnerabilities.
In some embodiments, the analysis engine 125 may also receive input from manual review processes executed using a manual code review module 145. Manual review processes typically include a human operator visually reviewing source code to determine if proper coding form and standards have been followed, and looking for “extra” functions often left in applications such as trap doors, Easter eggs, and similar undocumented functionality.
In some embodiments, the data, scripts and functions used to identify the best practices and operate the various testing engines and the analysis engine 125 may be stored in a security-threat database 150. The database 150 may be operated as a stand-alone server or as part of the same physical server on which the analysis engine 125 operates. Portions of the threat database 150 may, in some cases, be provided on a subscription basis by entities other than the entity operating the platform 105, allowing the database 150 to be kept up to date as threats and malware evolve over time. Likewise, the results of each test and the overall analysis process may be stored in an assessment-results database 155. In some embodiments, the applications and analysis results are stored in an encrypted format using a unique key provided to the owner of the analyzed application 110 such that only the owner can access and review the results of the analysis. In such cases, decryption of the analysis is limited to authorized personnel and all traces of the analysis are deleted from memory (other than the database 155) following completion. Non-limiting examples of database applications that may provide the necessary features and services include the MySQL Database Server by Sun Microsystems, the PostgreSQL Database Server by the PostgreSQL Global Development Group of Berkeley, Calif., or the ORACLE Database Server offered by ORACLE Corp. of Redwood Shores, Calif.
In some embodiments, the examination of the target application by the analysis engine 125 can be done through parsing of the application source code, the compiled bytecode (as in Java, .NET, and others), or binary executable code (e.g., a compiled C/C++ application). In one non-limiting instantiation, the analysis engine 125 examines the application through a combination of source code and binary code parsing. In addition to parsing the structure of the program, the analysis engine 125 constructs control-flow and data-flow graphs that represent the behavior of the program. These “deep analyses” allow the analysis engine 125 to analyze the intended and actual behavior of even complex, object-oriented programs.
In some embodiments, the analysis engine 125 may identify which best practices might or should apply to a particular target application in a number of ways. In one particular embodiment, the mapping of the best practices that apply to an application may be expressed by the analysis engine 125 as a series of IF-THEN rules, such as “IF the target application communicates with a database, THEN the database security rules apply.” The rules may relate to the technical architecture or environment of the application (e.g., the operating system used, the communication protocol(s) used, whether encryption is used, etc.) and/or the business implementation of the application. For a non-limiting example, certain rules may be used to identify good coding practices for consumer-facing financial transaction systems (ecommerce, banking, etc.) whereas others may be used for less risky back-office applications such as document management, workflow processing, inventory control, etc. In another embodiment, the use profile of an application (heavy transactional versus content delivery, mobile application, etc.) may trigger the application of certain rules. The rules may be applied by the analysis engine 125 in a “forward-chaining” approach, a hierarchical approach, or a multi-path approach such that the applicability of certain rules may be dependent upon the evaluation of other higher-order rules.
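For illustration only, the IF-THEN mapping might be sketched as a small forward-chaining evaluator; the rule names and application facts below are assumptions for the sake of the example, not the platform's actual rule base.

```python
# Illustrative forward-chaining sketch of the IF-THEN mapping; rule
# names and facts are assumptions, not the platform's actual rule base.

RULES = [
    # (condition over known facts, ruleset selected when the rule fires)
    (lambda f: "communicates_with_database" in f, "database_security_rules"),
    (lambda f: "customer_facing" in f and "financial_transactions" in f,
     "ecommerce_coding_rules"),
    # A higher-order rule whose applicability depends on another rule
    # having already fired (the hierarchical/multi-path case).
    (lambda f: "database_security_rules" in f, "sql_cleanser_rules"),
]

def applicable_rulesets(facts):
    """Forward-chain: keep firing rules until no new ruleset is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, consequence in RULES:
            if condition(facts) and consequence not in facts:
                facts.add(consequence)
                changed = True
    return facts

facts = applicable_rulesets({"communicates_with_database",
                             "customer_facing", "financial_transactions"})
print(sorted(facts))
```

Note that the third rule fires only because the first one did, which is how the applicability of certain rules can depend on the evaluation of higher-order rules.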
Once a set of rules has been identified that apply to a particular application, the analysis engine 125 identifies locations in the target application where the certain best practices might be applied. While there may be numerous methods for identifying the locations, the analysis engine 125 may adopt one method, which uses a series of pattern-matching rules, such as “At every location ‘L’ in the target application where the target application sends a command string ‘S’ to a database, and where the command string ‘S’ could potentially include data from an untrusted source ‘U’, a best practice dictates that a known-good ‘cleanser function’ ‘F’ be applied to the command string ‘S’ at a point in the dataflow between the untrusted source ‘U’ and the location ‘L’ where the command string ‘S’ is sent to the database. Furthermore, no additional untrusted data, whether from ‘U’ or otherwise, should be allowed to enter the command string ‘S’ between when it is cleansed and when it is sent to the database.” The above-described application can be modeled as:
Location ‘L1’:
Code §A: Untrusted data ‘U1’ enters the system
Code §B: (cleanser function ‘F’ MUST be applied here)
Code §C: A database command string ‘S’ is prepared, including data from ‘U1’
Code §D: (NO further untrusted data ‘U2’ may be allowed to enter ‘S’)
Code §E: The command string ‘S’ is sent to the database
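One illustrative way to mechanize this model is to represent a location as an ordered list of dataflow events and check the cleanser constraint over that list; the event names below are hypothetical, and a real implementation would operate on dataflow graphs rather than flat lists.

```python
# An illustrative mechanization of the model above: a location becomes
# an ordered list of dataflow events, and the cleanser constraint is
# checked over that list. Event names are hypothetical.

def check_location(events):
    """True when the cleanser is applied between the entry of untrusted
    data and the database send, with no new untrusted data afterwards."""
    tainted = False
    cleansed = False
    for event in events:
        if event == "untrusted_enters":       # Code A (or a later 'U2')
            tainted = True
            cleansed = False                  # new taint voids the cleanse
        elif event == "cleanser_applied":     # Code B
            cleansed = True
        elif event == "send_to_database":     # Code E
            return cleansed or not tainted
    return True  # the string is never sent, so nothing to check

good = ["untrusted_enters", "cleanser_applied",
        "build_command", "send_to_database"]
bad = ["untrusted_enters", "cleanser_applied",
       "untrusted_enters", "send_to_database"]
print(check_location(good), check_location(bad))  # True False
```

The `bad` sequence fails precisely because additional untrusted data enters the command string between the cleanse and the send, which is the second half of the best-practice rule.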
Based on the results of the evaluation of the rules, the analysis engine 125 can determine whether the relevant best practices appear to have been implemented. Again, the approach depends on the particular best practice in question. Taking the example above, the analysis engine 125 may scan the code of the target software application for the presence of the cleanser function ‘F’ in the sections of code ‘between’ (in a data-flow and control-flow sense, not merely in terms of physical code layout) Code §A where the untrusted data ‘U1’ enters the system, and Code §C where the database command string ‘S’ is assembled. In this case, the presence of the cleanser function ‘F’ might indicate that the developer had attempted to implement the relevant “best practice.”
Determining to what extent the relevant best practices appear to have been correctly and completely implemented may also depend on the particular best practice in question. Taking the same example above, the analysis engine 125 scans the code of the target software application for common errors of correctness or completeness, such as (1) additional untrusted data ‘U2’ being added to the command string ‘S’, or (2) control-flow paths in the target application that cause execution to jump from Code §A to Code §C, thereby bypassing execution of the cleanser function ‘F’.
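The second check, detecting control-flow paths that bypass the cleanser, might for illustration be sketched as a search over a control-flow graph; the toy graph and function names below are assumptions for the example.

```python
# Illustrative sketch of the bypass check: in a control-flow graph, ask
# whether any path from Code A (untrusted entry) to Code C (command
# assembly) avoids Code B (the cleanser). The graph is a toy example.

def path_avoiding(graph, start, goal, forbidden):
    """Depth-first search for a start-to-goal path that never visits
    `forbidden`, i.e., an execution that skips the cleanser."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node in seen or node == forbidden:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return False

# Code A branches: one edge runs through the cleanser B, but another
# jumps straight to C -- exactly the bypass being searched for.
cfg = {"A": ["B", "C"], "B": ["C"], "C": []}
print(path_avoiding(cfg, "A", "C", forbidden="B"))  # True: a bypass exists
```

If the direct A-to-C edge is removed, no such path remains and the implementation would pass this particular check.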
In one embodiment of the methodology, if no implementation errors are detected, the analysis engine 125 flags the implementation of the best practice using a moniker such as P+OK, “apparently present” (P) and “apparently correct” (OK). If implementation errors are detected, the implementation of the best practice may be flagged as P+Err, “apparently present” (P) and “incorrect” (Err), or as P+Inc, “apparently present” (P), and “incomplete” (Inc), or as P+Err+Inc, a combination of incorrect and incomplete implementation. If the wrong best practice for the particular situation appears to have been implemented (e.g., cleanser function ‘F1’ is required, but cleanser function ‘F2’ is found instead), the implementation may be flagged as P+Mis, “apparently present”, and “mismatched for the situation” (Mis).
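For illustration, the flagging scheme might be sketched as a small classifier; the error-category strings passed in are assumptions about what upstream checks could emit, not defined elsewhere in this description.

```python
# Illustrative classifier for the flagging scheme above; the error
# categories passed in are assumptions about what upstream checks emit.

def flag_best_practice(present, errors=()):
    """Combine scan results into the P+... monikers described above."""
    if not present:
        return None                 # no attempted implementation to flag
    flags = ["P"]                   # "apparently present"
    if "mismatched" in errors:
        flags.append("Mis")         # wrong practice for the situation
    else:
        if "incorrect" in errors:
            flags.append("Err")
        if "incomplete" in errors:
            flags.append("Inc")
        if len(flags) == 1:
            flags.append("OK")      # no implementation errors detected
    return "+".join(flags)

print(flag_best_practice(True))                               # P+OK
print(flag_best_practice(True, {"incorrect", "incomplete"}))  # P+Err+Inc
print(flag_best_practice(True, {"mismatched"}))               # P+Mis
```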
Once the scan and analysis are complete, the analysis engine 125 provides positive feedback to developers for apparently complete and correct implementation of a “best practice.” Certain locations are flagged as P+OK (implementation apparently present and apparently correct and complete) and communicated to the developers, either as individual locations or in aggregate, depending on the particular “best practice,” the number of instances occurring in the target application, and possibly other factors.
In some embodiments, the analysis engine 125 provides mixed positive and negative feedback to the developers for locations where it appears that the developers attempted to implement a certain best practice, but the implementation is either incomplete or incorrect. Reports are provided by the analysis engine 125, which identifies locations flagged as P+Err, P+Inc, P+Err+Inc, or P+Mis to the developers, either as individual locations or in aggregate, depending on the particular best practice, the number of instances occurring in the target application, and possibly other factors. In effect, this informs the developers that they have used correct security and coding practices but that the code needs additional work for these features to have their full and intended effect.
Reporting to the developer may take many forms. As one non-limiting example, the results may be reported in separate documents, emails, web pages, or other messages to the developer. In some cases, however, the report may take the form of visual indicators added into the developer's development environment such as green indicators or icons next to the lines of code where a best practice has been completely and accurately utilized, or shading the background of the “good” code light green. Other colors (yellow, red, etc.) may be used to indicate or highlight code that used the best practices but needs additional attention, or places where no attempt was made to implement the best practices.
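By way of illustration only, such flags might drive the visual indicators through a simple mapping like the following; the color scheme is an assumption based on the description above, not a prescribed implementation.

```python
# Illustrative mapping from best-practice flags to IDE highlight colors;
# the flag names follow the monikers described above, and the colors
# are assumptions based on the description.

def indicator_color(flag):
    """Map a best-practice flag to a highlight color for the editor."""
    if flag == "P+OK":
        return "green"       # complete and correct use of the practice
    if flag is not None and flag.startswith("P+"):
        return "yellow"      # practice attempted but needs attention
    return "red"             # practice needed but no attempt found

print(indicator_color("P+OK"), indicator_color("P+Err+Inc"),
      indicator_color(None))
```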
So as not to generate a surfeit of gratuitous positive feedback, and thus dilute the value of well-deserved positive feedback, the analysis engine 125 may explicitly exclude certain situations. In particular, the analysis engine 125 may not give positive feedback in situations where there is no need for implementation of a best practice. If there is no actual security threat in a particular area of the code, no feedback is given there, regardless of what the developer has implemented.
Using various embodiments, the analysis engine 125 may “package” or “bound” the security analysis and vulnerability testing results to the actual software they describe. As used herein, “software” may refer to a single program, a component, a page of mark-up language (e.g., HTML), a server instance (either actual or virtual), a script, a collection of programs, or an entire application. In some cases, the software may be a commercially-available product delivered via traditional methods such as CD-ROM or download, whereas in other cases the software may be a website or collection of websites that provide the software and/or services over the Internet, commonly referred to as software as a service, or “SaaS”. In still other cases, software may refer to a collective of otherwise unrelated applications and services available over the internet, each performing separate functions for one or more enterprises, (i.e., “cloud” computing). By linking the report to the software itself, downstream users of the software can access information about the software, make informed decisions about implementation of the software, and analyze the security risk across an entire system by accessing all (or most) of the reports associated with the executables running on the system and summarizing the risks identified in the reports.
The methods and techniques described above may be implemented in hardware and/or software and realized as a system for producing, storing, retrieving and analyzing security and vulnerability reports for software applications. For example, the platform 105 may be implemented as a collection of data-processing modules for ingesting, reviewing, and attacking software applications, a module for producing the security report, and a data storage appliance (or series of appliances) to store the reports. The platform 105 may in some cases also include a digital rights management module that creates, certifies and confirms hash values, secure keys, and other user- and use-based restrictive elements that ensure the authenticity of the users and that the reports are bound to the correct software components. In some embodiments the module may set aside portions of a computer as random access memory to provide control logic that effects the processes described above. In such an embodiment, the program may be written in any one of a number of high-level languages, such as FORTRAN, PASCAL, C, C++, C#, Java, Tcl, or BASIC. Further, the program can be written in a script, macro, or functionality embedded in commercially available software, such as EXCEL or VISUAL BASIC. Additionally, the software could be implemented in an assembly language directed to a microprocessor resident on a computer. For example, the software can be implemented in Intel 80x86 assembly language if it is configured to run on an IBM PC or PC clone. The software may be embedded on an article of manufacture including, but not limited to, a computer-readable program means such as a floppy disk, a hard disk, an optical disk, a magnetic tape, a PROM, an EPROM, or a CD-ROM, as well as appliances, partitions (either physical or virtual) of single appliances, or combinations thereof.
One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting of the invention described herein.
This application claims priority to and the benefit of U.S. provisional patent application Ser. No. 61/601,720, filed on Feb. 22, 2012, the entire disclosure of which is incorporated herein by reference.
Number | Date | Country
---|---|---
20130227516 A1 | Aug 2013 | US
Number | Date | Country
---|---|---
61601720 | Feb 2012 | US