Scientific Reasoning Competency 2012-2013

In 2012, the Office of Institutional Assessment and Studies initiated planning for a third assessment of undergraduate scientific reasoning competency, the previous assessments having occurred in 2004-2005 and 2009-2010. The 2009-2010 committee had defined scientific reasoning, proposed six learning outcomes and standards, designed a test to address the six outcomes, and administered it to first- and fourth-year students in 2010. The 2012-2013 committee was then charged with improving the instrument and with assessing both student competency and student learning (value-added). Please direct all questions about the assessment to Sarah Schultz Robinson (982-2321).

Definition

Scientific reasoning is a mode of thought that:

  • draws on systematic observation and description of phenomena;
  • employs established facts, theories, and methods to analyze such phenomena;
  • draws inferences and frames hypotheses consistent with that body of public knowledge and understanding;
  • subjects explanations to empirical tests, including scrutiny of their declared and latent assumptions; and
  • allows the possibility of changes in explanations as new evidence emerges.

Student Learning Outcomes

UVa expects all undergraduates to be able to employ scientific reasoning for their own purposes but especially for the purpose of evaluating the quality of scientific information, argument, and conclusions. Graduating fourth-year students at the University of Virginia are expected to:

  1. understand that, while scientific statements are in principle tentative, criteria exist by which they can be judged, including consistency with the body of scientific theory, method, and knowledge;
  2. display a grasp of experimental and non-experimental research design, including the notion of control, the idea of statistical significance, and the difference between causation and association;
  3. interpret quantitative data presented in graphical form;
  4. acknowledge the possibility of alternative accounts of events and properties and judge their relative plausibility by standard criteria;
  5. identify sources of error in scientific investigation, including errors of measurement and ambiguity of judgment;
  6. recognize unsound conclusions. 

Standards

The following standards were established for graduating fourth-years:

  • 25% of students highly competent  
  • 75% competent or above  
  • 90% minimally competent or above  
  • 10% not competent  

Methodology

Instrument

The overall assessment in 2012-2013 entailed three analyses that built on the results of the 2009-2010 assessment.

  • Longitudinal (value-added): Using the original test and a longitudinal design, fourth-year students' performance was compared with their own performance three years earlier as first-years. This analysis, requested by the 2009-2010 committee, would inform decisions about test revision. Thirty-five students who had taken the original test as first-years returned to take it again as fourth-years.
  • Cross-sectional (value-added): 292 students completed the revised test and were included in the representative sample: 109 first-years, 156 fourth-years, and 27 graduate students. The graduate students were included to calibrate upper-bound performance on the instrument for undergraduate students.
  • Competency: Using the revised test and comparison with graduate student performance, fourth-year students' competency was assessed.
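The longitudinal analysis rests on a simple paired comparison: each returning student's fourth-year score minus that same student's first-year score. A minimal sketch of that calculation is below; the function name and all scores are hypothetical, since the report does not publish individual results.

```python
def mean_gain(first_year, fourth_year):
    """Average paired gain for students tested at both time points.

    first_year[i] and fourth_year[i] must belong to the same student,
    as in the longitudinal design described above.
    """
    assert len(first_year) == len(fourth_year)
    gains = [y4 - y1 for y1, y4 in zip(first_year, fourth_year)]
    return sum(gains) / len(gains)

# Hypothetical rubric scores for four returning students.
first = [10, 12, 9, 13]
fourth = [13, 12, 11, 16]
print(mean_gain(first, fourth))  # 2.0
```

Because the comparison is within-student, each student serves as their own control, which is what distinguishes this value-added estimate from the cross-sectional comparison of different first- and fourth-year cohorts.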

Sampling, Confidentiality, and Compensation

Sixty-six fourth-year students who had taken the test as first-years were invited by email to participate in an assessment and were offered gift cards as compensation. Students who consented to take the test were assured of confidentiality. For the subsequent assessments, a stratified random sample of 1,411 first-year and 1,522 fourth-year students received an email invitation from the Vice Provost for Academic Affairs to participate. The invitations did not specify the topic of the assessment.
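Stratified sampling here means drawing a fixed number of invitees independently from each class-year stratum, without replacement. The sketch below illustrates the idea under stated assumptions: the roster names and stratum sizes are invented, and only the invitation counts (1,411 and 1,522) come from the text.

```python
import random

def stratified_sample(population, strata_sizes, seed=None):
    """Draw a stratified random sample: a fixed number of invitees
    from each stratum (e.g., class year), sampled without replacement."""
    rng = random.Random(seed)
    invited = []
    for stratum, n in strata_sizes.items():
        invited.extend(rng.sample(population[stratum], n))
    return invited

# Hypothetical rosters keyed by class year; IDs and roster sizes are illustrative.
population = {
    "first-year": [f"fy{i}" for i in range(3000)],
    "fourth-year": [f"4y{i}" for i in range(3500)],
}
invited = stratified_sample(
    population, {"first-year": 1411, "fourth-year": 1522}, seed=0
)
print(len(invited))  # 2933 invitations in total
```

Sampling within strata guarantees both class years are represented in the planned proportions rather than left to chance, which matters when results are reported separately for first- and fourth-years.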

Scoring

In scoring sessions, scorers applied the question-specific rubrics to each of the short-answer and experimental design questions after norming on each question. Each answer received two independent readings, and scoring reliability met established standards.
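One common reliability check for double-read answers is the exact agreement rate between the two readings. A minimal sketch is below; the report does not state which reliability statistic was used, so this function and the sample scores are purely illustrative assumptions.

```python
def percent_agreement(reader_a, reader_b):
    """Exact agreement rate between two independent readings
    of the same set of answers (illustrative statistic only)."""
    assert len(reader_a) == len(reader_b)
    matches = sum(1 for a, b in zip(reader_a, reader_b) if a == b)
    return matches / len(reader_a)

# Hypothetical rubric scores (0-4) for ten answers from two readers.
a = [3, 2, 4, 1, 3, 2, 2, 4, 0, 3]
b = [3, 2, 3, 1, 3, 2, 2, 4, 1, 3]
print(percent_agreement(a, b))  # 0.8
```

Chance-corrected statistics such as Cohen's kappa are often preferred over raw agreement, since two readers can agree by accident on a coarse scale; raw agreement is shown here only because it is the simplest to illustrate.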

Findings

  1. Fourth-year students, on the whole, are capable of a respectable level of rigor in scientific reasoning.
  2. Fourth-year students’ ability in scientific reasoning varies with field of study.
  3. Evidence of value-added appears to be mixed, an observation that is likely a consequence of test limitations.
  4. Graduate students’ test results serve well to calibrate the instrument and inform interpretation of undergraduate test results.
  5. While individual test questions appear to provide a valid measure of students’ competence in scientific reasoning, the test overall may underestimate student competence. 

Committee Members

  • Tony Baglioni, McIntire School of Commerce
  • Bobby Beamer, School of Continuing and Professional Studies
  • Jeanne Erickson, School of Nursing
  • Victor Luftig, Department of English, College of Arts and Sciences
  • Kirk Martini, School of Architecture
  • Aaron Mills, Department of Environmental Sciences, College of Arts and Sciences
  • Michael Palmer, Department of Chemistry, College of Arts and Sciences
  • Karen Schmidt, Department of Psychology, College of Arts and Sciences