Plan Methods

Outcomes → Methods → Analyze and Use Results → Document

How, who, when, and where

The learning outcomes that you defined, if specific enough, will determine in large part how your assessment is organized. However, you still have several decisions to make to ensure that your assessment is both effective and feasible to conduct.

  1. Use direct assessment techniques whenever possible; use indirect techniques (e.g., survey results) to provide additional, explanatory information (see below).
  2. Identify where your data will come from: which courses or experiences, which population of students, which student work, which assessment tools.
  3. Specify your standards, benchmarks, or targets for student performance.
  4. Determine who will conduct the assessment. Especially at first, be mindful of time, effort, and goodwill required.

Will you employ direct evidence of student work? Indirect evidence? Or both?

We encourage you to rely on direct assessments of student learning as much as possible, but to supplement your direct assessments with information from indirect assessments.

"Direct evidence of student learning is tangible, visible, self-explanatory, and compelling evidence of exactly what students have and have not learned" Linda Suskie, pg 95).  

Direct assessment may focus on:

  • Student work such as capstone projects, reports, written coursework, or presentations  
  • Exam responses
  • Standardized tests
  • Licensure/certification exams
  • Ratings of student performance, such as those made by internship supervisors or fellow students using rubrics
  • Analyses of online class discussions
  • Student portfolios (see the Tools: Portfolios page)

"Indirect evidence provides signs that students are probably learning, but the evidence of exactly what they are learning is less clear and less convincing." (Linda Suskie, pg 95). Students' self-assessments or opinions about their learning or their satisfaction with their education are commonly sought as indirect evidence of learning. The use of well-designed and administered surveys, interview schedules, or focus group procedures increases the likelihood that the evidence will be useful. (Worth noting: thanks to inexpensive on-line survey tools, students are over-surveyed to the point that they are becoming less and less likely to respond to survey requests, especially if there is no incentive for them to participate. They are more likely to respond if they care about the sender, the program, or the issue.)

Direct and indirect evidence can complement each other. Indirect evidence may yield insights into students' experiences, ideas for assessment, or information that helps to interpret direct assessment results or guide application of results. Likewise, direct evidence can be brought to bear to test the validity of students' opinions or self-assessments.  

Where and how does the program curriculum address the learning outcome?

You’ve identified which learning outcomes to assess. Now, where in the curriculum are those outcomes taught, and in which courses or at what junctures are students expected to demonstrate mastery? A curriculum map displays where in the curriculum each learning outcome is taught, and to what end. Through curriculum mapping, faculty can identify and link course content, learning outcomes, assignments, and assessments. Curriculum mapping can also reveal gaps in the curriculum – places where learning outcomes are not covered or not covered adequately.

Curriculum maps can be general, associating learning outcomes with courses, or more detailed, associating them with specific assignments. Below is a sample curriculum map that indicates which courses address each learning outcome and with what expectation of students (Introduce the topic; Practice; Demonstrate mastery). This curriculum map also includes a concluding exam, which could be a comprehensive exam in a graduate program or a licensing exam in a professional program.

Learning Outcome | Course 1  | Course 2  | Course 3  | Course 4    | Course 5    | Comprehensive Exam
#1               | Introduce |           | Practice  | Practice    | Demonstrate | Demonstrate
#2               |           | Introduce |           | Practice    |             | Demonstrate
#3               |           |           | Introduce | Practice    | Demonstrate | Demonstrate
#4               |           | Introduce | Practice  | Demonstrate | Demonstrate | Demonstrate
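If you maintain the curriculum map electronically, a short script can check it for gaps. Below is a minimal Python sketch, assuming the sample map above is stored as a dictionary; the checking logic is illustrative, not a prescribed tool.

```python
# A minimal sketch of the sample curriculum map above, stored as a dictionary.
# Keys are learning outcomes; values map each course or exam to the expectation.
curriculum_map = {
    "#1": {"Course 1": "Introduce", "Course 3": "Practice",
           "Course 4": "Practice", "Course 5": "Demonstrate",
           "Comprehensive Exam": "Demonstrate"},
    "#2": {"Course 2": "Introduce", "Course 4": "Practice",
           "Comprehensive Exam": "Demonstrate"},
    "#3": {"Course 3": "Introduce", "Course 4": "Practice",
           "Course 5": "Demonstrate", "Comprehensive Exam": "Demonstrate"},
    "#4": {"Course 2": "Introduce", "Course 3": "Practice",
           "Course 4": "Demonstrate", "Course 5": "Demonstrate",
           "Comprehensive Exam": "Demonstrate"},
}

# Flag outcomes missing any stage of the Introduce -> Practice -> Demonstrate
# progression; these are potential gaps in the curriculum.
for outcome, coverage in curriculum_map.items():
    missing = {"Introduce", "Practice", "Demonstrate"} - set(coverage.values())
    if missing:
        print(f"Outcome {outcome}: no course at stage(s) {sorted(missing)}")

# In this sample map every outcome reaches all three stages, so nothing prints.
```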

Word to the wise: Programs and courses evolve. Revisit your curriculum map periodically.

What student work will provide good, solid evidence of student learning?

To demonstrate summative learning at the end of the program, you will likely assess graduating students’ work. To assess specific skills or knowledge taught in a specific course, you may assess student work from that course regardless of students’ class year (e.g., third-years, fourth-years).

As much as possible, use student work products – exams, projects, papers, portfolios, presentations, etc. – that are already built into courses and the curriculum. This "course-embedded assessment" takes advantage of students’ clear incentive to do their best work when completing assignments for a course or requirements for a degree program (e.g., comprehensive exam).  

In addition to or instead of course-embedded assessment, you may want to use standardized tests or licensure/certification exam results to assess learning outcomes.

Determine whether the student work selected calls for objective or subjective assessment. Will you assess exam answers that are clearly right or wrong (as on a math exam), or projects and papers that require a more subjective assessment (as on an essay test, research paper, or capstone project)? If subjective assessment is called for, consider using a scoring rubric to guide the assessment (see the Tools: Rubrics page).

Course grades are generally insufficient measures of student learning outcomes. Often, grades are not useful in identifying particular areas of strength or weakness with respect to a program’s learning outcomes, e.g., the ability to construct well-supported, clearly-articulated arguments. Moreover, grades often include factors not directly related to a program’s learning outcomes, such as class attendance and participation. Finally, grading policies and practices may vary by faculty member.

Which tools will serve best for the assessment?

The three most commonly used assessment tools are exams, grading rubrics applied to qualitative student work, and requests for student opinion through surveys, interviews, and focus groups.

Exams

Exams can double as both a classroom assessment tool and a program assessment tool. Faculty should match particular exam questions or sections to each student learning outcome being assessed.  
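One way to organize that matching is to tag each exam item with the outcome it measures and then aggregate scores by outcome. A minimal Python sketch, with hypothetical item numbers and scores:

```python
from collections import defaultdict

# Hypothetical mapping of exam items to the learning outcome each item measures.
item_to_outcome = {1: "Outcome 1", 2: "Outcome 1", 3: "Outcome 2",
                   4: "Outcome 2", 5: "Outcome 3"}

# One student's item scores, expressed as fractions of the points available.
item_scores = {1: 1.0, 2: 0.5, 3: 1.0, 4: 0.0, 5: 0.75}

# Group item scores by outcome, then average within each outcome.
by_outcome = defaultdict(list)
for item, outcome in item_to_outcome.items():
    by_outcome[outcome].append(item_scores[item])

for outcome in sorted(by_outcome):
    scores = by_outcome[outcome]
    print(f"{outcome}: {sum(scores) / len(scores):.2f}")
```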

Rubrics

Use a rubric to assess student work that does not have concrete right or wrong answers.

"A rubric is a scoring guide: a simple list, chart, or guide that describes the criteria that you and perhaps your colleagues will use to score or grade an assignment." (Suskie, pg. 124)

There are two general types of rubrics: holistic and analytic. Holistic rubrics provide a single score based on the overall impression of the student's work; they are best suited to tasks that can be evaluated as a whole or that do not require extensive feedback. Analytic rubrics specify the criteria to be assessed at each performance level, elicit a separate score for each criterion (which can be weighted by its relative importance), and provide a composite score; they are more appropriate for evaluating complex tasks or where more detailed, specific feedback to students is desired. See the Tools: Rubrics page for information on creating a rubric and for sample rubrics.
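To make the weighted composite concrete, here is a minimal Python sketch of an analytic rubric score; the criteria, weights, and ratings are hypothetical.

```python
# Hypothetical analytic rubric: each criterion carries a weight reflecting its
# relative importance. The weights sum to 1.0 so the composite score stays on
# the same 4-point scale as the per-criterion ratings.
weights = {"thesis": 0.30, "evidence": 0.40, "organization": 0.20, "mechanics": 0.10}

# One rater's scores for one paper, on a 4-point performance scale.
scores = {"thesis": 3, "evidence": 2, "organization": 4, "mechanics": 3}

# Composite score: the weighted sum across criteria.
composite = sum(weights[c] * scores[c] for c in weights)
print(f"Composite score: {composite:.2f}")  # 0.90 + 0.80 + 0.80 + 0.30 = 2.80
```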

Surveys, interviews, or focus groups

For assessment of learning outcomes, these methods are useful at two stages in the assessment process. They can complement more direct measures of student learning by asking what respondents experience and perceive about their education, and they can serve as investigatory tools when programs are developing strategies to improve student performance. See the Surveys page.

When and how will you collect the data?

Ahead of time, before the end of the term or the due date for course assignments, coordinate with faculty to ensure that student work will be collected and saved for later assessment. It is best practice to inform students that their work will be assessed for the purpose of gathering information to improve the program. Inform them that their identifying information will be deleted from their work and that the assessment will not affect their grades or academic records.

Decide whether you will gather work from all of your students or from a sample of your students. Your decision may hinge on feasibility. For instance, you may not have the resources and time to assess 100 research papers.

If you decide to sample, you will need to determine 1) the appropriate sample size, and 2) a sampling procedure:

The appropriate sample size will depend on the size of the population being sampled, the acceptable margin of error, and the desired level of confidence. An online sample size calculator can assist you in determining the appropriate sample size.

To select the sample, you can use a random number generator. Give each student (or each paper or exam) a number from 1 to x, then have the random number generator draw your desired sample size from the range 1 to x. Find the students with those numbers; they comprise your sample.
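If you would rather script both steps, the sketch below uses Cochran's formula with a finite population correction, the standard calculation behind most online sample size calculators, and Python's random module for the draw; the population size, margin of error, and confidence level are illustrative.

```python
import math
import random

def sample_size(N, e=0.05, z=1.96, p=0.5):
    """Cochran's formula with finite population correction.
    N = population size, e = margin of error, z = z-score for the desired
    confidence level (1.96 for 95%), p = 0.5 for the most conservative size."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)     # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # shrink for a finite population

N = 250                    # illustrative: 250 student papers in the population
n = sample_size(N)
print(f"Assess {n} of {N} papers")  # 152 papers at 95% confidence, +/- 5%

# Number the papers 1 to N, then draw n at random without replacement.
random.seed(42)            # fix the seed so the draw can be reproduced
sample_ids = sorted(random.sample(range(1, N + 1), n))
print(sample_ids)
```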

Collect clean copies of the student work, without instructors’ comments, corrections, or grades, and without student identifying information. A unique ID number should be added to each piece of work collected.

What are your standards, expectations, or targets for student performance given the assessment focus, tool, and data identified?

For each stated student learning outcome, how will you interpret the evidence of learning? What standards will you apply in order to reach a conclusion about students’ learning? The standards or targets should be stated in terms of percentages, percentiles, averages, or other quantitative measures.  

If you are not sure what to expect, you may want to use an assessment to set a benchmark or to provide information about the range and distribution of student performance. This information can be useful for setting standards for the next assessment.
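As a concrete illustration, suppose the target is "at least 80% of students score 3 or higher on a 4-point rubric." A minimal Python sketch of that check, with hypothetical scores:

```python
# Hypothetical rubric scores for a sample of student papers (4-point scale).
scores = [4, 3, 2, 3, 4, 3, 1, 3, 4, 2, 3, 3]

TARGET = 0.80  # target: at least 80% of students score 3 or higher

proportion = sum(s >= 3 for s in scores) / len(scores)
status = "met" if proportion >= TARGET else "not met"
print(f"{proportion:.0%} of students scored 3 or higher; target {status}")
```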

Who will assess the student work?

Determine who will conduct the analysis of students' work and what training or guidance they may require.

For assessments of complex student work that calls for judgment, such as essays, research projects, comprehensive exam answers, or undergraduate theses, two raters should assess each piece of student work. Two raters are often sufficient, with a third rater brought in when there is a large discrepancy between the first two ratings.

It is best practice to begin with a norming session to ensure that raters do not differ in their interpretation of the criteria or apply different standards. In a norming session, all raters assess the same one or two papers, and the assessments are then compared to ascertain if, where, and how much the raters differ in their judgments. Subsequent discussion clarifies definitions and standards. Assessment should not proceed until the raters have reached consensus on the criteria and standards to apply.
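Once rating begins, a short script can flag the papers whose two ratings diverge enough to warrant a third rater. A minimal Python sketch, assuming a hypothetical threshold of more than one point on a 4-point scale and the common (but not prescribed) convention of averaging agreeing ratings:

```python
# Hypothetical data: (paper ID, rater A score, rater B score) on a 4-point scale.
ratings = [("P01", 3, 3), ("P02", 4, 2), ("P03", 2, 3), ("P04", 1, 4)]

THRESHOLD = 1  # more than a one-point gap triggers a third rating

for paper, a, b in ratings:
    if abs(a - b) > THRESHOLD:
        print(f"{paper}: rater A = {a}, rater B = {b}; send to a third rater")
    else:
        print(f"{paper}: final score = {(a + b) / 2}")  # average the two ratings
```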

Faculty (and sometimes graduate students) generally perform the assessment of student work. If they have already graded the student work in one of their courses or if they are able to identify the author of the work, they should recuse themselves from the assessment.

Assessments take time away from other pressing duties. Incentives should be offered to encourage participation.

Resources

Suskie, Linda. Assessing Student Learning: A Common Sense Guide. 2nd ed., 2009. (1st edition available on loan through IAS)