Monday, October 19, 2015

HBR: A Tool to Help You Reach Your Goals in 4 Steps

Article: A Tool to Help You Reach Your Goals in 4 Steps
Author: Heidi Grant Halvorson
Publication: Harvard Business Review
Date: October 7, 2015

This article and its accompanying webtool are aimed at audiences hoping to break down business goals into actionable plans, but with a little tweaking, they can serve as a great way to transition from departmental Student Learning Outcomes to assessment plans across courses.

The article and webtool walk the user through breaking down a goal into four components, each of which has an analog in the assessment world:

  1. Goal - A departmental Student Learning Outcome.
  2. SubGoals - topics relating to the SLO that may be covered in individual courses.
  3. Actions & Who-When-Where - the actions are the measurable skills students will be able to demonstrate upon mastering the topics outlined in step 2, and the Who-When-Where identify the courses in which those skills are taught and assessed.
  4. If-Then Plans - the assessments in which the skills from step 3 will be measured.

After completing the process, users can download a .pdf version of their assessment plan.  I'm including an example below for a sample SLO for Biology (click to view the large version):


Try out the webtool here to see how it can apply to your own departmental SLOs.

Tuesday, October 6, 2015

Chronicle: 5 Tips for Handling Grading in Large Online Classes

Article: 5 Tips for Handling Grading in Large Online Classes [comment]
Author: Anastasia Salter
Publication: Chronicle of Higher Education
Date: October 1, 2015

I share this not for the article itself (which you should feel free to read as well!), but for a comment by user greilly, which offers a wonderful idea for streamlining assessment and feedback. I will quote it here in full:
Consider a general feedback for an assignment. I grade all of the class's assignments and then type up one document that summarizes the common mistakes and errors I found. I provide examples from student papers (anonymous of course) of both good answers and bad, and I point out the differences between them. I then post this document online for the class to read. Only failing papers get personalized feedback automatically; the rest of the class is told that they can request personalized feedback if after reading the general feedback they still aren't sure why they got the grade they did. It used to take me a number of days to type up feedback for each paper, even when I used boilerplate. Now I can type up that one document in an evening. I also imagine it is more helpful than individual feedback because it shows both good and bad examples of the work.
This is a great way both to give students feedback and to help teachers notice patterns in student work, while saving everyone some time and energy. The important part of this method is using it to truly close the loop: going over the summary with students as well as reflecting on the patterns it uncovers for possible changes to the curriculum or the assignment going forward.  While it is intended as advice for teachers of large online classes, something similar could certainly be adapted for assignments in a liberal arts context, perhaps with the balance swung a bit more toward individual feedback.

You can read the article in its entirety here for some more tips on grading best practices.

Monday, October 5, 2015

IHE: Assessment and the Value of Big, Dumb Questions

Article: Assessment and the Value of Big, Dumb Questions
Author: Matt Reed
Publication: Inside Higher Ed
Date: October 4, 2015

In this opinion piece, Reed describes the importance of outcomes assessment in defending and measuring the offerings of a curriculum.  He asserts that assessment initiatives can sometimes become bogged down with "false precision," in which the temptation is to choose student learning outcomes that are too narrowly focused.  Reed offers an alternative:
Instead, I’m a fan of the “few, big, dumb questions” approach. At the end of a program, can students do what they’re supposed to be able to do? How do you know? Where they’re falling short, what are you planning to do about it?
I think this article leads us to ask a crucial question that all assessment should start with: what do we choose to measure about our program, and does that reflect our values?  By thinking deeply about the essential outcomes of a program, assessment can become more streamlined and, even more importantly, more meaningful.  The trick, then, becomes defining student learning outcomes so they are both broadly applicable AND measurable with some validity.  See your friendly neighborhood assessment director for help exploring wordings that can satisfy both criteria and best fit the needs of your program or course.

Click here to read more of Reed's take on outcomes assessment.

Thursday, October 1, 2015

TiHE Podcast: Grading Exams with Integrity

Podcast Episode: Grading Exams with Integrity
Author: Bonni Stachowiak with guest Dave Stachowiak
Publication: Teaching in Higher Ed
Date: October 1, 2015

On this podcast episode, Stachowiak and Stachowiak discuss strategies for reducing bias in the grading of exams.  Many of these strategies can be extended to other forms of assessment for which readers may want to use "blind" scoring.  The techniques discussed include blind grading, grading by item instead of by student, ensuring inter-rater reliability among multiple graders, and providing transparent frameworks for grading practices.  They also discuss the importance of returning feedback to students to allow for adjustment and improvement.

Check out the page for the podcast episode to listen or download, and to access related links recommended by the podcast's hosts.  You can also subscribe to the podcast on iTunes.

Wednesday, September 30, 2015

IHE: Admissions Revolution

Article: Admissions Revolution
Author: Scott Jaschik
Publication: Inside Higher Ed
Date: September 29, 2015

In this article, Jaschik describes a revolutionary approach to college admissions adopted by more than 80 institutions of higher learning.  Developed by the newly formed Coalition for Access, Affordability, and Success, this new protocol seeks to set up a more holistic application process for prospective students.

While the majority of the article is not directly related to assessment in higher education, I found it notable that one major component of the initiative is an online platform on which high school students will build electronic admissions portfolios, beginning in their ninth grade year:
The high school student's portfolio: This would be offered to all high school students, free, and they would be encouraged to add to it, starting in ninth grade, examples of their best work, short essays on what they are most proud of, descriptions of their extracurricular activities and so forth. Students could opt to share or not share all or part of their portfolios, but college admissions leaders would provide regular prompts, appropriate for grades nine and up, and questions students should ask about how they are preparing for college.
Not only does this initiative reinforce the importance of portfolio assessment in education, it may also in time provide a commonly accepted framework for what a "successful" ePortfolio looks like.  This would be of great use to higher education institutions looking to expand upon or create portfolio or capstone forms of assessment.  Furthermore, starting in the 2019-2020 school year, institutions of higher learning will begin enrolling students who come to college with a ready-made four-year portfolio.  How can colleges build upon these efforts?

Read the original article for more details on the other aspects of the new admissions process.

Tuesday, September 29, 2015

IHE: Are They Learning?


Article: Are They Learning?
Author: Doug Lederman
Publication: Inside Higher Ed
Date: September 25, 2015

In this article from IHE, Doug Lederman examines the recently completed first year of a faculty-driven pilot study by the Multi-State Collaborative and the AAC&U.  This pilot hopes to bring about a set of commonly adopted Student Learning Outcomes and rubrics (based on the VALUE rubrics adopted by MCLA and institutions nationwide) for use in authentic assessment of a wide variety of undergraduate work.  Lederman asserts that the pilot may be a good way to begin the process of allowing federal higher education policy to focus more on student learning:
The question of student learning outcomes has been largely relegated to the back burner of public policy in the last few years, displaced by recession-driven concerns over whether students are emerging from college prepared for jobs.... That's not entirely by choice, though; administration officials noted in a policy paper accompanying the Scorecard that while learning outcomes are "an important way to understand the results and quality of any educational experience … there are few recognized and comprehensive measures of learning across higher education, and no data sources exist that provide consistent, measurable descriptions across all schools or disciplines of the extent to which students are learning, even where frameworks for measuring skills are being developed."
Lederman also asserts that the MSC pilot (in which Massachusetts institutions have participated as part of the Vision Project) may help bridge the gap between college faculty (who are given the power to select the student work they view as "important" and to perform the grading) and academic policy makers who want to see student progress assessed in common ways across institutions.  Additionally, it provides an opportunity for institutions to demonstrate growth over time.

Read the original article for (largely positive) reactions from faculty and policy-makers involved in the study, as well as a summary of the results of the assessment.

For another look at the pilot, you can also refer to the article Faculty Members See Promise in a Unified Way to Measure Student Learning from the Chronicle of Higher Education:
But it was the subcategories within each broad skill area that were often more revealing, several faculty members said....  Such detailed feedback is particularly useful because it directly relates to actual course work, said Jeanne P. Mullaney, assessment coordinator for the Community College of Rhode Island. The results can help faculty members change their assignments, guided by a shared conception of a particular skill area. "The great thing with rubrics," she said, "is the strengths and weaknesses are readily apparent."

Tuesday, August 25, 2015

Chronicle: Does Assessment Make Colleges Better?

Article: Does Assessment Make Colleges Better?  Let Me Count the Ways
Author: Joan Hawthorne
Publication: Chronicle of Higher Education
Date: August 19, 2015

In this commentary column, Hawthorne lays out the ways in which she believes outcomes-based assessment has benefited higher education.  In particular, she sees some of the greatest benefits as an emphasis on being transparent and explicit about what faculty members want students to learn, and a shift in focus to student "doing" rather than simply student "knowing."  Additionally, she credits the resulting departmental discussions about the most authentic ways to measure students' ability to "do" with advancing both assessment and curriculum development.  In summary, she says:
Regardless of the scale of a fix, assessment is effective for promoting greater thoughtfulness and purpose in teaching — and for focusing our attention on learning. That matters. On that basis alone, assessment works.
Read the original column for more of Hawthorne's views on the importance of assessment in higher education.