Modern computers allow a program to be decomposed into multiple activities, or threads, that execute concurrently. Emerging big-data applications in science, health, social domains, and engineering require such concurrency to deliver results in a timely manner. The key to matching the computational needs of these applications to the available computing resources is a workforce trained to develop, maintain, and configure concurrent programs. Existing toolkits for teaching concurrency cannot, on their own, meet the demand for such instruction because the instruction is particularly labor-intensive: concurrent programs are notoriously difficult to write, and substantial instructor effort is required to evaluate their performance and correctness and to identify potential problems. This project will extend an existing instructional toolkit with a new software framework that automates the assessment of concurrent programs, and will validate the extended toolkit through instructional workshops and university courses. Successful execution of the project will improve workforce development and promote the progress of science.

The main research question we are exploring is: what should be the nature of a rule-based software framework for assessing concurrent programs written in multiple programming languages that improves the productivity of trainers and the learning of trainees? The key novel steps we are taking to explore this question are (a) development of a semi-automatic assessment model in which manual evaluation, integrated with automatic rules, reduces the false positives and false negatives of the automated checks; (b) identification of new protocols and associated architectures that leverage several powerful existing tools not previously applied to this question; (c) creation of new techniques based on the insight that solutions to a concurrent programming assignment often follow a prescribed code structure and algorithm; (d) support for layered techniques that allow rule writers to trade off assessment quality against rule-writing effort; (e) development of a meta-assessment framework to train the trainers to write rules; (f) use of the meta-assessment and assessment frameworks in instructional workshops and university course offerings, respectively; and (g) evaluation of the usability, programmability, effectiveness, and learning gains of the frameworks through diverse mechanisms, including pre-post surveys, course exit interviews, and focus groups.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
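
To make the rule-based, layered assessment model concrete, the following is a minimal Python sketch, assuming a hypothetical framework in which a rule inspects a submission's source text and may defer uncertain cases to manual evaluation. All names here (Verdict, lock_discipline_rule, needs_review) are illustrative only and are not taken from any existing tool described in the abstract.

    import re
    from dataclasses import dataclass

    @dataclass
    class Verdict:
        passed: bool        # did the automated check succeed?
        needs_review: bool  # defer to manual evaluation when uncertain
        message: str

    def lock_discipline_rule(source: str) -> Verdict:
        """Cheap structural check for a prescribed code structure: every
        lock() acquisition should be paired with an unlock() release
        inside a finally block. Ambiguous cases are escalated to the
        instructor instead of being auto-graded."""
        locks = len(re.findall(r"\block\(\)", source))
        unlocks = len(re.findall(r"\bunlock\(\)", source))
        if locks == 0:
            return Verdict(False, True, "no lock acquisition found; review manually")
        if locks == unlocks and "finally" in source:
            return Verdict(True, False, "lock/unlock pairing looks consistent")
        # The cheap layer cannot decide; escalating to manual review
        # reduces the false positives and false negatives of the check.
        return Verdict(False, True,
                       f"{locks} lock() vs {unlocks} unlock(); review manually")

    if __name__ == "__main__":
        submission = "lock.lock();\ntry { counter++; } finally { lock.unlock(); }"
        print(lock_discipline_rule(submission))

A cheap structural layer such as this trades assessment quality for low rule-writing effort; a rule writer could add a deeper layer (for example, parsing or dynamic analysis) where higher fidelity is warranted, and the needs_review flag reflects the semi-automatic model by routing ambiguous submissions to the instructor.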