Direct Assessment Methods

Updated March 2015

Competency Exams or Performance Tasks

These work better in some disciplines than others, but where they are appropriate, they can provide very clear data about student learning. The exams used vary: some are national instruments (Physics, Modern Languages and Literatures, Chemistry), some are designed by an instructor (Geology), and some are designed by the department (Economics, Religious Studies, Music). In general, these exams have demonstrated significant student learning in relation to departmental learning goals. In a couple of instances, they allow departments to calibrate learning against a national sample. They also help departments identify specific concepts or skills that need more emphasis in a course or in the departmental curriculum.

Departments have taken a number of approaches to when the exams are administered: giving pre- and post-tests in a single course (Geology, Physics, Music), using a test multiple times throughout the curriculum (Modern Languages and Literatures, Religious Studies), or administering a test at the end of the entry-level course and again at the end of the senior year (Economics). A key question in making this determination is whether the content is specific to one course or appears in multiple places in the curriculum.

The range of exams and performance tasks is also wide: in some instances students take standardized tests (Chemistry, Physics, Modern Languages and Literatures); in others they write about a piece of music and what they bring to listening to it, or explore concepts within a field in short essay form (Religious Studies). When departments design these instruments themselves, they must consider format and scoring. But the process of determining which concepts an exam or performance task will focus on generates important conversation about what a department wants students to learn. Economics began this process by having each member of the department write a couple of questions for the competency exam that the department could then discuss. Geology asked faculty members teaching key courses to design their own questions, since each had the greatest knowledge of that subfield. Regardless of who designs the test or performance assessment, it is critical to consider whether students will be graded on the assessment and where it will take place, since both influence the seriousness with which students approach it.

In addition to exams that measure student learning in a specific area, there are assessment instruments that are illuminating for both institution-wide and departmental assessment. For example, several departments have used the Research Practices Inventory to assess departmental courses (Government, Global Studies, and History, as well as FYS). Because, as an institution, we intend to administer this survey (which has both direct and indirect measures) every three years to pre-First-Year students and to students at the end of the First Year Seminar, we encourage departments to also consider using the instrument as a tool for assessing the information literacy of their majors in those years. Starting with the administration in spring 2015, departments will also have the opportunity to add questions that better enable them to assess discipline-specific skills in which they are interested. As we have done in the past, we will ask departments to evaluate how important particular skills are for their students, and we will share comparative data with departments so that they can get a better sense of the learning occurring in their department relative to other departments on campus.

Evaluation of Student Work Using Rubrics

Rubrics have been used at St. Lawrence to assess many aspects of student learning: competence in writing (English, Art and Art History), oral communication (Math, Computer Science, and Statistics), and research; intercultural understanding (Anthropology); understanding of key concepts in a field (Art and Art History); and students’ ability to assess their own work and provide feedback to other students (Performance and Communication Arts). Please see the “rubrics” section of the assessment website if you would like to see examples of rubrics.

A number of departments (Philosophy, Global Studies, Sociology, English, Anthropology) are using a rubric to assess senior-level work. These departments are doing holistic analyses to see whether their seniors have developed the overall skills and perspectives defined in departmental learning goals. Philosophy, for example, assesses work from seniors and evaluates a subset of departmental learning goals each year.

Other departments have identified courses pivotal to their curriculum and assessed student work in those courses. History has used this method in a sophomore course (HIST 299) that is pivotal for the development of research and writing skills, and Government has developed a tool for this assessment in GOVT 290, which focuses on writing and research skills in the discipline. As departments have begun to assess these courses, they have also found that the rubrics are effective for assessing senior-level work.

Oral presentations have been evaluated to assess student learning in relation to departmental content goals (Global Studies) or to both institutional and departmental goals (Chemistry, Math, CS, and Stats). This assessment has focused on senior presentations and has been done by faculty present for the presentations. The Director of Rhetoric and Communication also recently completed a pilot project assessing oral communication in the First Year Seminar: speeches by students in eight First-Year Seminars were videotaped, and a random sample from each seminar was later assessed using the rubric.

Rubrics can be challenging to develop; in fact, it is rare for a department to develop a rubric and not have to modify it after the first use. A number of departments have learned that they were trying to assess too much, either at one time or within a single category. For example, Art and Art History found that they needed to recognize variation within a single portfolio, since students might be more successful with some projects than with others. They also discovered that a rubric designed for the overall curriculum might not be specific enough for assessing work in a single, entry-level course. Performance and Communication Arts similarly found that their initial rubric for assessing peer evaluation tried to capture too much. The process of developing a rubric is itself likely to foster important conversation about goals. Despite the complexity of developing and norming rubrics, they have been quite informative for a number of departments, showing both what students are doing well and where departments might focus more attention.

Over the past few semesters, the institution has run two experiments to gauge how effectively faculty can assess student learning as they grade student work. One pilot involved a First Year Seminar in spring 2013: the instructor used the research and writing rubric designed for FYS courses, and the Director of the WORD Studio then scored the same papers. We found that the scores were virtually identical for the writing, but that the instructor’s scores were lower on questions connected to the research. The two raters talked through the results and largely concluded that the instructor’s knowledge of the research in the field meant her scores were most likely a more accurate reflection of the students’ use of sources in the area. A similar test was done with the pilot assessment of FYS speeches, where we found good agreement between instructor and non-instructor assessments in most cases. These two tests indicate that, with well-designed and tested rubrics and a process of norming, one can get good assessment results on skills from course instructors. In instances where knowledge of a field or of research in an area is important, we believe that instructors may provide better assessment. This suggests that one way of building assessment into courses is to have instructors assess as they grade.