|Figure 1 (Click to Enlarge)|
I have elected to take part in their Fellowship program, which grants me free access to the software for one year. They offer three additional license levels, as described in Figure 2.
|Figure 2 (Click to Enlarge)|
Once I began working within the software and viewing the tutorials, however, I saw clear strengths and weaknesses. Although Waypoint makes an appealing promise, the software is not infallible.
One strength of the software is that the rubric is easy to use and customize. To grade a student’s paper, the teacher quickly works through the rubric (see Figure 3 for an example) and selects the radio button that corresponds to the student’s achievement in each category. Each radio button automatically adds a “boilerplate” comment to the student’s memo outlining that level of achievement.
|Figure 3 (Click to Enlarge)|
One trouble with this setup, however, is that my current rubric (Figure 4) does not transfer easily to the Waypoint format. My rubric does not assume all papers begin as an A; instead, it assumes all papers are a C until proven otherwise. I take this approach because, as Glenn and Goldthwaite explain, assuming all papers begin as an “A” requires that assessment focus only upon seeing “what is wrong with essays” (115).
|Figure 4 (Click to Enlarge)|
Where, you might ask, is the student’s actual paper throughout all of this? What about in-text annotations? The current Waypoint Outcomes software adds very little to the process of in-text annotation. At the bottom of the rubric is an “Attach” button through which the faculty member can attach a document with in-text comments.
The professor has two options: he or she can use a word processor like Microsoft Word to add comments that stand out (i.e., with visual markers like highlighting), then copy and paste the marked document into Waypoint Outcomes and attach it. Or the grader can use a Waypoint tool that operates much like the Comments feature in Word.
Unfortunately, the process and time needed to add in-text comments is essentially the same as it is without the software; in this case, the faculty member additionally has to copy and paste the document into a new system. One of my chief complaints about grading is the time it takes me to add in-text comments, so I would most value a tool that helps in this area. An upcoming version of Waypoint Outcomes is rumored to have a more robust in-text editing component, which might help in this matter. I look forward to reviewing the tool’s effectiveness again at that point.
Another advantage of this software, however, might be its ability to let one rubric be shared among numerous users within one institution. As Glenn and Goldthwaite explain, grading standards are often established through “a serious attempt to reduce grade inflation and standardize grades within the composition program” (114). If everyone used the same rubric as the means through which grades were determined, it might help ensure that grading standards were more uniform. However, it could be argued that the same norming could be achieved through a common printed rubric.
Perhaps the tool’s greatest strength, however, is not the day-to-day ease of grading but the ability to collect data about student outcomes. In the premium versions of Waypoint, data can be collected on students’ achievement in specific areas or outcomes. Pedagogical interventions might then be planned for the areas with the lowest achievement scores. Likewise, administrators would be able to identify the greatest areas of need within the department as a whole. Workshops and professional development opportunities might then be planned to generate discussion and strategies for increasing achievement in the areas where the greatest weaknesses are identified.
Glenn, Cheryl, and Melissa A. Goldthwaite. The St. Martin's Guide to Teaching Writing. 6th ed. Boston: Bedford/St. Martin's, 2008. Print.
Waypoint Outcomes. N.d. Web. 15 Feb. 2011.