Investigating How to Enhance Scientific Argumentation through Automated Feedback in the Context of Two High School Earth Science Curriculum Units

Information

  • NSF Award
    1418019
  • Award Id
    1418019
  • Award Effective Date
    9/1/2014
  • Award Expiration Date
    8/31/2018
  • Award Amount
    $1,952,837.00
  • Award Instrument
    Continuing grant

Abstract

With the current emphasis on learning science by actively engaging in the practices of science, and the call for integration of instruction and assessment, new resources, models, and technologies are being developed to improve K-12 science learning. Student assessment has become a nationwide educational priority due, in part, to the need for relevant and timely data that inform teachers, administrators, researchers, and the public about how all students perform and think while learning science. This project responds to the need for technology-enhanced assessments that promote the critical practice of scientific argumentation--making and explaining a claim from evidence about a scientific question and critically evaluating sources of uncertainty in the claim. It will investigate how to enhance this practice through automated scoring and immediate feedback in the context of two high school curriculum units--climate change and fresh-water availability--in schools with diverse student populations. The project will apply advanced automated scoring tools to students' written scientific arguments, providing individual students with customized feedback and teachers with class-level information to assist them in improving scientific argumentation. The key outcome of this effort will be a technology-supported assessment model of how to advance the understanding of argumentation and the use of multi-level feedback as a component of effective teaching and learning. The project will strengthen the program's current set of funded activities on assessment, focusing these efforts on students' argumentation as a complex science practice.

This design and development research targets high school students (n=1,940) and teachers (n=22) in up to 10 states over four years. The research questions are: (1) To what extent can automated scoring tools, such as c-rater and c-rater-ML, diagnose students' explanations and uncertainty articulations as compared to human diagnosis? (2) How should feedback be designed and delivered to help students improve scientific argumentation? (3) How do teachers use and interact with class-level automated scores and feedback to support students' scientific argumentation with real data and models? (4) How do students perceive their overall experience with the automated scores and immediate feedback when learning core ideas about climate change and fresh-water availability through scientific argumentation enhanced with modeling? In Years 1 and 2, the project will conduct feasibility studies to build automated scoring models and design feedback for previously tested assessments in the two curriculum units. In Year 3, the project will implement design studies to identify effective feedback through random assignment. In Year 4, a pilot study will investigate whether effective feedback should be offered with or without scores. The project will employ a mixed-methods approach. Data-gathering strategies will include classroom observations; screencast and log data of teachers' and students' interactions with automated feedback; teacher and student surveys with selected- and open-ended questions; and in-depth interviews with teachers and students. All constructed-response explanation and uncertainty items will be scored by automated scoring engines using fine-grained rubrics.
Data analysis strategies will include multiple criteria to evaluate the quality of automated scores; descriptive statistical analyses; analysis of variance to investigate differences in outcomes on the design studies' pre/posttests and embedded assessments; analysis of covariance to investigate student learning trajectories; two-level hierarchical linear modeling to account for the clustering of students within classes; and analysis of screencasts and log data.
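
As a concrete illustration of the agreement and multilevel analyses named above, the following is a minimal sketch in Python. It assumes hypothetical data files and column names (human_score, auto_score, pretest, posttest, condition, class_id) and does not reproduce the project's actual c-rater/c-rater-ML pipeline or its specific evaluation criteria.

# Minimal sketch (not the project's actual pipeline): comparing automated
# scores with human scores, then fitting a two-level model of students
# nested within classes. All file and column names are hypothetical.
import pandas as pd
from sklearn.metrics import cohen_kappa_score
import statsmodels.formula.api as smf

# Item-level scores with one human and one automated score per response.
scores = pd.read_csv("scored_responses.csv")

# Agreement between human raters and the automated engine on the rubric scale.
exact = (scores["human_score"] == scores["auto_score"]).mean()
qwk = cohen_kappa_score(scores["human_score"], scores["auto_score"],
                        weights="quadratic")
print(f"exact agreement = {exact:.2f}, quadratic weighted kappa = {qwk:.2f}")

# Two-level hierarchical linear model: posttest scores for students (level 1)
# nested within classes (level 2), with pretest and feedback condition as
# fixed effects and a random intercept for each class.
students = pd.read_csv("student_outcomes.csv")
model = smf.mixedlm("posttest ~ pretest + condition",
                    data=students, groups=students["class_id"])
result = model.fit()
print(result.summary())
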

  • Program Officer
    Julio E. Lopez-Ferrao
  • Min Amd Letter Date
    8/4/2014
  • Max Amd Letter Date
    7/11/2016
  • ARRA Amount

Institutions

  • Name
    Educational Testing Service
  • City
    Princeton
  • State
    NJ
  • Country
    United States
  • Address
    Center for External Research
  • Postal Code
    08540-2218
  • Phone Number
    (609) 683-2734

Investigators

  • First Name
    Hee-Sun
  • Last Name
    Lee
  • Email Address
    hlee@concord.org
  • Start Date
    8/4/2014
  • First Name
    Amy
  • Last Name
    Pallant
  • Email Address
    apallant@concord.org
  • Start Date
    8/4/2014
  • First Name
    Ou
  • Last Name
    Liu
  • Email Address
    lliu@ets.org
  • Start Date
    8/4/2014

Program Element

  • Text
    DISCOVERY RESEARCH K-12
  • Code
    7645