As a follow-up to my last post presenting the idea of a design rubric, here is the first real draft of the course design rubric.
I haven't worried yet about putting the rubric items in any specific order or weighting them by importance. That will come later. I will probably group related rubric items together. They will also be fleshed out with better descriptions of what it takes to earn a certain score; there's only so much space in each little box.
The way a rubric like this works is that the rater starts at 0 and moves toward 3, stopping as soon as a level's criteria are no longer met: you can't earn a 3 if the criteria for a 2 were not met. In working to validate the rubric, I'll look at an overall score out of 21 from adding up the 7 individual scores, the correlations between individual rubric items and external criteria, and the correlations/dependencies among rubric items.
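To make the scoring rule concrete, here is a minimal sketch of that sequential stop-when-unmet logic. The function names and the representation of criteria as simple booleans are my own placeholders, not part of the rubric itself; the real rubric has 7 items, each scored 0–3.

```python
def score_item(levels_met):
    """Score one rubric item: start at 0 and advance one level at a
    time, stopping as soon as a level's criteria are not met.

    levels_met: list of 3 booleans, one per level (1, 2, 3),
    indicating whether that level's criteria are satisfied.
    """
    score = 0
    for met in levels_met:
        if not met:
            break  # a higher level can't be earned if a lower one isn't
        score += 1
    return score

def score_course(items):
    """Sum the 7 item scores into an overall score out of 21."""
    return sum(score_item(levels) for levels in items)

# An item meeting levels 1 and 2 but not 3 scores 2; an item that
# misses level 2 scores 1 even if level 3's criteria happen to hold.
print(score_item([True, True, False]))         # → 2
print(score_item([True, False, True]))         # → 1
print(score_course([[True, True, True]] * 7))  # → 21
```

The second example shows the key property of this kind of rubric: the levels are cumulative, so a "skipped" level caps the score.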
There were a few times that I debated which items would score higher on a given criterion when one was not naturally subsumed by the other. For example, what if the recommended learning path does build effectively but is not flexible? Or what if learning activities provide new experiences as a foundation for the content to be presented but do not build at all on student experiences prior to the course? Those are obvious holes that will need to be patched before this is production-ready.
As I hinted I might do in my last post, I did decide to combine the Measurable and Clarity items into Clear and Measurable Objectives. That was a more interesting decision than it seems. As I wrote last time, the goal was to write a rubric that allowed for rating the design of a course before it was developed. What I've been leaning toward, however, is a rating of a developed course. Most of the items could still be used in a design-only situation (and really should be, if you're not shooting from the hip and developing without designing first). Combining those two items related to writing good objectives, though, in effect weights that portion less than some of the other items that apply more naturally to a developed course.
Of course, the real power of using a rubric is not so much that you can use it to come up with a grade but rather that it can be used during the creation process to improve the quality of the end product.
So what's missing? What's redundant? What's out of order? What can't be measured?