Friday, June 3, 2011

Course Design Rubric

As a follow-up to my last post presenting the idea of a design rubric, here is the first real draft of the course design rubric.

I haven't worried yet about putting the rubric items in any specific order or weighting them in importance at all. That will come later. I will probably group them together somehow so related rubric items are next to each other. They will also be fleshed out with better descriptions of what it takes to get a certain score. There's only so much space in each little box.

The way a rubric like this works is that the rater starts at 0 and moves towards 3, stopping whenever criteria are no longer being met. So you can't meet the criteria for a 3 if the criteria for a 2 were not met. In working to validate the rubric, I'll look at the overall score out of 21 (the sum of the 7 individual scores), at correlations between individual rubric items and external criteria, and at correlations/dependencies among the rubric items themselves.
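To make the scoring mechanics concrete, here is a minimal sketch of that cumulative scoring rule in Python. The item names and ratings below are purely hypothetical examples, not the actual rubric items; the real rubric has 7 items, each scored 0-3 and summed for a total out of 21.

# A rough sketch only -- item names and ratings here are hypothetical,
# not the actual rubric contents.

def score_item(levels_met):
    # levels_met lists whether the criteria for levels 1, 2, and 3 are met,
    # in order. The rater starts at 0 and moves up, stopping at the first
    # level whose criteria are not met, so a 3 requires meeting 1 and 2 first.
    score = 0
    for met in levels_met:
        if not met:
            break
        score += 1
    return score

# Hypothetical ratings for a few of the 7 items (True = criteria met).
ratings = {
    "Clear and Measurable Objectives": [True, True, False],  # scores 2
    "Learning Path":                   [True, True, True],   # scores 3
    "Builds on Student Experience":    [True, False, True],  # scores 1
}

item_scores = {item: score_item(met) for item, met in ratings.items()}
total = sum(item_scores.values())  # out of 21 once all 7 items are rated
print(item_scores, total)

Note how the third item illustrates the stopping rule: even though the level-3 criteria happen to be met, the score stays at 1 because level 2 was not.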

There were a few times that I debated which condition should score higher on a given criterion when one was not naturally subsumed by the other. For example, what if the recommended learning path does build effectively but is not flexible? Or what if learning activities provide new experiences as a foundation for the content to be presented but do not build at all on student experiences prior to the course? Those are obvious holes that will need to be patched before this is production ready.

As I hinted I might do in my last post, I did decide to combine the Measurable and Clarity items into Clear and Measurable Objectives. That was a more interesting decision to make than it seems. As I wrote last time, the goal was to write a rubric that allowed for rating the design of a course before it was developed. What I've been leaning more towards, however, is a rating of a developed course. Most of the items could still be used in a design-only situation (and really should be if you're not shooting from the hip and developing without designing first), but combining those two items related to writing good objectives in effect serves to weight that portion less than some of the other items that apply more naturally to a developed course.

Of course, the real power of using a rubric is not so much that you can use it to come up with a grade but rather that it can be used during the creation process to improve the quality of the end product.

So what's missing? What's redundant? What's out of order? What can't be measured?

5 comments:

Doug Holton said...

Under Wiggins & McTighe you have:
"Assessment activities
build on learning
activities"

but I thought in their model (backward design), you create the assessments before choosing the learning activities: http://pixel.fhda.edu/id/six_facets.html

Under Learning Path, I just read a note that sort of names some of these types of paths:

Individualized: Kids see the same buckets of content but they move through them at their own pace.

Differentiated: Kids see the same buckets of content but it's presented in different ways (from lectures, to quizzes and games, to inquiry-based projects, to you-name-it).

Personalized: Kids experience both common buckets of content, delivered in a variety of ways at their own pace--in addition to information that may be uniquely interesting to them.

You might also want to include how well a course incorporates other techniques that have been shown to increase learning and understanding, such as formative assessment (Paul Black), blended learning, interactive engagement techniques, and so forth:
http://serc.carleton.edu/introgeo/models/IntEng.html

Here are some more resources on course design & redesign:

http://www.thencat.org/ (scroll down to NCAT resources)

http://serc.carleton.edu/NAGTWorkshops/coursedesign/tutorial/index.html

http://www.qmprogram.org/

robmba said...

You're totally right on the backwards design model, Doug. So when designing, you create the assessment first and work backwards so the instruction fits. However, when the students work through a course, they'll go forward, hence the instruction will build up to the assessment. That's one of those things I'll have to build out in the documentation so it's clear.

Thanks for the other links. I'll check them out.

Jeremy said...

My grad students identified a weakness in backwards design - not a fatal one, but one that a rubric like this may help guard against. The importance of an educational outcome is independent of its ease of measurement. As the saying goes, not everything that counts can be counted.

In “frontwards design,” important-but-immeasurable (or just really-difficult-to-measure) objectives are left in the instruction; they’re not cut until the assessment. In backwards design, those objectives are lost at the assessment step, which is before the instruction is planned.

I’ll reiterate that this is not a fatal flaw; it is possible to work around it. Your rubric would be one way to do so by offering a “significance” alternative to “measurable.”

robmba said...

Interesting comment, Jeremy. Some questions come to mind that I'll have to ponder a bit.

The first is whether we want to be in the business of teaching things that can't be assessed. Obviously there is a lot of learning that can't be assessed, but I'm afraid that higher ed has suffered for too long from poor assessments created by faculty with no expertise in assessment itself, leading to grades that bear little resemblance to a measure of learning. Knowing one's field inside and out doesn't mean knowing how to assess others' knowledge of that field in a way that scales to the number of students in the average class. We need to train faculty in proper assessment or else hand assessment off to someone else. Good luck with either of those. :)

Assuming we either work out or ignore the first issue, there follows the question of how to add the gravy (important-but-immeasurable) objectives. The claim is that the gravy objectives will be excluded if a course design is based on the assessment instead of the other way around. Is there a way to assess at a level that includes a more realistic richness of content, which would exclude less of the gravy (PBL, for example)? Who is to say the gravy can't be added after the assessment is designed anyway? Also, students are pretty adept at figuring out what they'll need to do to pass the test, so they will often skip much of the extra stuff that won't help them on it. If the students purposely skip the gravy anyway after we took the care to add it, does it matter that it was ever added to begin with?

I'm picturing the demonstration of the jar with a pile of gravel and sand. If you add the sand first, the gravel won't fit, but if you add the gravel first, the sand will fill in the gaps. If you start with the important measurable objectives and then fill in the gravy, you may be able to fit it all in, but at least you know you got the central, measurable stuff. If you start with immeasurable objectives and then try to cram assessment in later, you end up with what we have today - grades and student satisfaction ratings that do not really measure the quality of student learning.

I'd be really interested in seeing any studies looking at whether important objectives are really excluded more by the backwards design model. If it's a trade-off between measurable and unmeasurable objectives, which do we choose to include/exclude? The conversation should include improving assessment methods so they can measure some of the objectives considered unmeasurable now.

Great conversation. I'm not going to be able to sleep tonight. :)

robmba said...

I got a little off topic responding to your comment, Jeremy, but I think you did identify part of what I was trying to do: include the importance of measurement while balancing it against other components. If there is something that is realistic and useful to students but difficult to measure, those points will balance each other out on the rubric.

Or, as I was getting at in my last comment, it will ideally drive innovation in assessment to find ways of scoring highly on both.