Over the past week, I attended back-to-back annual conferences for regional accrediting bodies. Even for someone who truly enjoys talking about assessing student learning, and about how best to provide evidence that an institution is meeting the goals it sets for its students and itself, it was a lot of standards, criteria and subcriteria for one person to keep straight.
One theme did resonate across both conferences, however: the growing need for reference points within accreditation that are stronger than GPAs, test scores, graduation and retention rates, and student satisfaction. The talk turned, as it often does, to the need for more direct evidence of student learning.
Direct assessment of student learning outcomes is valuable and important. I do believe that institutions should be doing more direct assessment and discussing the results both within the institution and with other, similar institutions. But I also think that if accrediting agencies continue to emphasize direct assessments above all else, institutions will pour considerable energy and resources into those projects and neglect other tools that offer valuable evidence of student learning.
When I started a direct assessment project on first-year writing at a small, private liberal arts college, I was able to convince the faculty that beginning with writing was important because student and faculty survey results showed that student growth and faculty expectations did not match. Because I had these tools to help me better understand student and faculty behaviors and expectations, I was able to facilitate a conversation among faculty that addressed this discrepancy, along with other issues that mattered to students and faculty.
Unfortunately, the information gleaned from tools such as surveys is often referred to as "indirect evidence," which may imply that it doesn't hold the same value as more "direct evidence." I disagree with that notion.
When I got back to the office, I ran my thoughts by Charlie Blaich, director of both the Center of Inquiry at Wabash College and the Higher Education Data Sharing (HEDS) Consortium. He told me that the institutions he has worked with that assess only outcomes, without looking carefully at what students bring to college or at their experiences in and out of the classroom, struggle to use the results of direct assessments for improvement. To him, assessment can only drive improvement if an institution measures outcomes in both the teaching and the learning environment.
Armed with both types of information, institutions have an arsenal of tools to draw from (rather than just one) if outcome measures aren’t changing as planned.
Fixing the Problem
Accrediting agencies should advocate that institutions create a portfolio of evidence (direct assessments of student learning outcomes, survey results, grades, course-taking patterns and so on) and then push institutions to look carefully at all of that evidence to determine how students learn, what patterns emerge and what kinds of changes are needed. Only then will an institution have a truly complete picture of its student body and faculty, and be able to make informed decisions that result in real, tangible improvements on campus.