Article:
Are They Learning?
Author: Doug Lederman
Publication: Inside Higher Ed
Date: September 25, 2015
In this article from IHE, Doug Lederman examines the recently completed first year of a faculty-driven pilot study by the Multi-State Collaborative and the AAC&U. The pilot aims to establish a set of commonly adopted Student Learning Outcomes and rubrics (based on the VALUE rubrics adopted by MCLA and institutions nationwide) for use in authentic assessment of a wide variety of undergraduate work. Lederman asserts that the pilot may be a good way to begin the process of allowing federal higher education policy to focus more on student learning:
The question of student learning outcomes has been largely relegated to the back burner of public policy in the last few years, displaced by recession-driven concerns over whether students are emerging from college prepared for jobs....
That's not entirely by choice, though; administration officials noted in a policy paper accompanying the Scorecard that while learning outcomes are "an important way to understand the results and quality of any educational experience … there are few recognized and comprehensive measures of learning across higher education, and no data sources exist that provide consistent, measurable descriptions across all schools or disciplines of the extent to which students are learning, even where frameworks for measuring skills are being developed."
Lederman's article asserts that the MSC pilot (in which Massachusetts institutions have participated as part of the Vision Project) may be a good way to begin bridging the gap between college faculty (who are given the power to select the student work they view as "important" and to perform the grading) and academic policy makers who want to see student progress assessed in common ways across institutions. Additionally, the pilot gives institutions an opportunity to demonstrate growth over time.
Read the original article for (largely positive) reactions from faculty and policy-makers involved in the study, as well as a summary of the results of the assessment.
For another look at the pilot, you can also refer to the article Faculty Members See Promise in Unified Way to Measure Student Learning from the Chronicle of Higher Education.
But it was the subcategories within each broad skill area that were often more revealing, several faculty members said.... Such detailed feedback is particularly useful because it directly relates to actual course work, said Jeanne P. Mullaney, assessment coordinator for the Community College of Rhode Island. The results can help faculty members change their assignments, guided by a shared conception of a particular skill area. "The great thing with rubrics," she said, "is the strengths and weaknesses are readily apparent."