Monday, October 19, 2015

HBR: A Tool to Help You Reach Your Goals in 4 Steps

Article: A Tool to Help You Reach Your Goals in 4 Steps
Author: Heidi Grant Halvorson
Publication: Harvard Business Review
Date: October 7, 2015

This article and accompanying webtool are aimed at audiences hoping to break down business goals into actionable plans, but with a little tweaking they can serve as a great way to move from departmental Student Learning Outcomes (SLOs) to assessment plans across courses.

The article and webtool walk the user through breaking a goal down into four components, each of which has an analog in the assessment world:

  1. Goal - a departmental Student Learning Outcome.
  2. SubGoals - topics relating to the SLO that may be covered in individual courses.
  3. Actions & Who-When-Where - the actions are the measurable skills students will be able to demonstrate upon mastering the topics outlined in step 2, and the Who-When-Where identifies the courses in which those skills are taught and assessed.
  4. If-Then Plans - the assessments in which the skills from step 3 will be measured.

After completing the process, users can download a .pdf version of their assessment plan.  I'm including an example below for a sample Biology SLO:


Try out the webtool here to see how it can apply to your own departmental SLOs.
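
For readers who like to see the structure spelled out, here is a minimal sketch of how the four components might nest inside one assessment plan.  The SLO, courses, skills, and assessments below are entirely invented for illustration - this is my own rough rendering of the mapping above, not the webtool's actual output format:

```python
# A purely illustrative sketch (invented SLO, courses, and assessments)
# of how the webtool's four components might map onto an assessment plan.

assessment_plan = {
    # 1. Goal: a departmental Student Learning Outcome
    "goal": "Biology majors can design and interpret controlled experiments",

    # 2. SubGoals: topics related to the SLO, covered in individual courses
    "subgoals": [
        {
            "topic": "Hypothesis formation and experimental design",

            # 3. Actions & Who-When-Where: measurable skills, plus the
            # courses in which those skills are taught and assessed
            "actions": [
                {
                    "skill": "State a testable hypothesis with controls",
                    "who_when_where": "BIO 101, fall semester, weekly labs",

                    # 4. If-Then Plans: the assessment measuring the skill
                    "if_then": "If students complete Lab 4, then score "
                               "their reports with the design rubric",
                },
            ],
        },
    ],
}

# Walk the plan to list each skill alongside where it is assessed.
for subgoal in assessment_plan["subgoals"]:
    for action in subgoal["actions"]:
        print(f"{action['skill']} -> {action['who_when_where']}")
```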

Tuesday, October 6, 2015

Chronicle: 5 Tips for Handling Grading in Large Online Classes

Article: 5 Tips for Handling Grading in Large Online Classes [comment]
Author: Anastasia Salter
Publication: Chronicle of Higher Education
Date: October 1, 2015

I share this not for the article itself (which you should feel free to read as well!), but for a comment by user greilly, which offers a wonderful idea for streamlining assessment and feedback.  I will quote it here in full:
Consider general feedback for an assignment. I grade all of the class's assignments and then type up one document that summarizes the common mistakes and errors I found. I provide examples from student papers (anonymous, of course) of both good answers and bad, and I point out the differences between them. I then post this document online for the class to read. Only failing papers get personalized feedback automatically; the rest of the class is told that they can request personalized feedback if, after reading the general feedback, they still aren't sure why they got the grade they did. It used to take me a number of days to type up feedback for each paper, even when I used boilerplate. Now I can type up that one document in an evening. I also imagine it is more helpful than individual feedback because it shows both good and bad examples of the work.
This is a great way both to give students feedback and to help teachers notice patterns in student work, all while saving everybody some time and energy. The important part of this method is using it to truly close the loop - going over the document with students as well as reflecting on the patterns it uncovers for possible changes to the curriculum or the assignment going forward.  While it is intended as advice for teachers of large online classes, something similar could certainly be adapted for assignments in a liberal arts context, perhaps with the balance swung a bit more toward individual feedback.

You can read the article in its entirety here for some more tips on grading best practices.

Monday, October 5, 2015

IHE: Assessment and the Value of Big, Dumb Questions

Article: Assessment and the Value of Big, Dumb Questions
Author: Matt Reed
Publication: Inside Higher Ed
Date: October 4, 2015

In this opinion piece, Reed describes the importance of outcomes assessment in defending and measuring a curriculum's offerings.  He asserts that assessment initiatives can sometimes become bogged down in "false precision," in which the temptation is to choose student learning outcomes that are too narrowly focused.  Reed offers an alternative:
Instead, I’m a fan of the “few, big, dumb questions” approach. At the end of a program, can students do what they’re supposed to be able to do? How do you know? Where they’re falling short, what are you planning to do about it?
I think this article leads us to a crucial question that all assessment should start with: what do we choose to measure about our program, and does that choice reflect our values?  By thinking deeply about a program's essential outcomes, assessment can become more streamlined and, even more importantly, more meaningful.  The trick, then, becomes defining student learning outcomes so they are both broadly applicable AND measurable with some validity.  See your friendly neighborhood assessment director for help exploring wordings that satisfy both criteria and best fit the needs of your program or course.

Click here to read more of Reed's take on outcomes assessment.

Thursday, October 1, 2015

TiHE Podcast: Grading Exams with Integrity

Podcast Episode: Grading Exams with Integrity
Author: Bonni Stachowiak with guest Dave Stachowiak
Publication: Teaching in Higher Ed
Date: October 1, 2015

On this podcast episode, Bonni and Dave Stachowiak discuss strategies for reducing bias in the grading of exams.  Many of these strategies can be extended to other forms of assessment for which readers may want to use "blind" scoring.  The techniques discussed include blind grading, grading by item instead of by student, ensuring inter-rater reliability among multiple graders, and providing transparent frameworks for grading practices.  They also discuss the importance of returning feedback to students to allow for adjustment and improvement.
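
As a concrete (and entirely invented) sketch of how two of these techniques fit together, the snippet below hides a set of exam submissions behind random codes and then walks them question by question rather than student by student.  The data and structure are my own assumptions for illustration, not anything taken from the episode:

```python
import random

# A minimal sketch of two techniques named above: blind grading (replace
# student names with random codes before scoring) and grading by item
# instead of by student (score every exam's question 1, then every
# question 2, and so on, to keep the grading standard consistent).

submissions = {
    "Student A": ["answer 1", "answer 2", "answer 3"],
    "Student B": ["answer 1", "answer 2", "answer 3"],
}

# Blind the submissions: map each student to an anonymous code.  The key
# is kept separately and not consulted until all grading is finished.
codes = list(range(1, len(submissions) + 1))
random.shuffle(codes)
key = {f"exam-{code:03d}": name for code, name in zip(codes, submissions)}
blinded = {code: submissions[name] for code, name in key.items()}

# Grade item by item: all responses to question 1, then question 2, etc.
num_items = 3
for item in range(num_items):
    for code, answers in blinded.items():
        print(f"Question {item + 1}, {code}: {answers[item]}")
```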

Check out the page for the podcast episode to listen or download, and to access related links recommended by the podcast's hosts.  You can also subscribe to the podcast on iTunes.