Collecting Evidence or Using Results?
October is always an interesting time here at CIRP. The 2011 Freshman Survey has closed, and we are all anxiously awaiting the data file so we can, like the researchers we are, dig into the data and see what we can learn about this year's incoming first-years.
While we are waiting (very patiently), I like to take a look at what institutions are doing with results from our surveys: how they talk about the results, who is talking about them, and, perhaps most importantly, what changes they are implementing as a result of what they learned. It serves as an important frame of reference for me. If I know what issues are showing up in discussions on campuses, I can think about what that might look like on a national level. On my annual pilgrimage through institutional websites and in conversations with IR and assessment directors, I have noticed some similarities among institutions that consistently make good use of survey results.
1) The focus of sharing results is not on the results themselves. Rather than presenting “The Results of the 2011 CIRP Freshman Survey,” these institutions use results to explore the experiences of students and their growth and development. Moreover, they present results in a manner that contributes to discussions of how to use them to inform decisions that improve programs or policies, and thus improve the student experience.
2) Inquiry into the results is an ongoing, organized process. The issues being investigated on campus are complex and multifaceted. Institutions consistently ask questions about the results, connect them back to the issues at hand, and investigate additional sources of information on campus to further explicate the issue. Survey results have clear and consistent routes they travel within the institution.
3) Conversations about improving student learning are collaborative. By involving people across departments and divisions in discussing the findings and making meaning of them, institutions promote collective ownership of the results and collaboration on improving programs and policies.
4) Results are “packaged” around issues that the institution really cares about. Connecting the results as closely as possible to what the institution cares about allows meaningful conversations to occur that not only inform important decisions and improve the student experience, but also keep discussions focused on evidence of learning and change rather than on indicators that have little to do with student learning or program improvement.
5) Results are shared in ways that are transparent and understandable. It might be graphs, it might be tables, it might even involve cartoons, but institutions that make good use of survey results display findings in ways that can be understood outside the IR office. This makes conversation possible and creates partnerships that are important to the ongoing improvement of the student experience.