I’ve just returned from co-leading a workshop for the Higher Education Data Sharing Consortium (HEDS) with Jillian Kinzie. Ten schools had put together teams composed of an institutional researcher, a faculty member or faculty development person and someone from academic leadership. The goal of the workshop was for the teams to review, discuss and identify one or two important patterns in their schools’ CIRP and NSSE results that they wanted to explore, and to develop a specific plan for faculty/staff development activities—based on the survey evidence—that they would implement over the coming year.
As anyone who knows me can attest, as I started to prepare for the workshop (which would involve two plenary sessions), I experienced that telltale moment of panic. What had I gotten myself into? I've used assessment evidence to bring about change at several colleges, and I talk with schools about how they use their CIRP results almost daily, so I should be fine, right?
My anxiety about speaking on the use of CIRP results is at the heart of what Charlie Blaich, the director of HEDS, is committed to changing: he believes that posting good data on an IR website is not enough to stimulate campuses to use the data to improve student learning.
This is exactly why Jillian and I came together, particularly because we are discussing the use of survey results. After survey results arrive on campus, they need to be publicized and pushed out to faculty, staff and students in terminology that does not require an advanced degree in statistics to understand. The second part of this process, in which we have conversations about the results, reflect on their meaning, and take action on campus, is not always given the respect (and time) it deserves.
If we want survey results to be used to improve the learning environment, then we need to describe those results in clear terms and explain why they matter to the campus. We should work with faculty and staff to think about additional analyses and other sources of evidence that might contextualize, elaborate, or refine our understanding of the issue. We should consider how to involve students in making sense of and responding to survey results. And we should endeavor to keep the conversation going over time, and to identify two or three actions we can take based on what we are learning.
Good institutional research and assessment is not simply about having the most graphs or the most complex multivariate analysis. Rather, it's about communicating the patterns you see in results clearly and in a way that underscores why they matter on campus; engaging different constituencies in conversations about those results; and giving them the opportunity to connect their work to the findings. The ultimate goal is aligning the programs, practices and opportunities on campus to foster a better learning environment.