Reflecting on Reaccreditation
The following is taken from my statement to the U.S. Department of Education’s National Advisory Committee on Institutional Quality and Integrity on February 3, 2010.
About a dozen years ago I was appointed to a committee to write sections of Dartmouth College’s reaccreditation self-report. I ran a small student-affairs research office at Dartmouth at the time and worked for Lee Pelton, now the outgoing president of Willamette University and then the Dean of the College at Dartmouth. Lee had hired me to provide his division with information on the student experience and with evaluations of various programs and policies, all focused on institutional improvement. I frankly knew little about accreditation. Luckily, the upcoming North East Association for Institutional Research conference had a session on accreditation with a staff member of NEASC, and I dutifully attended that session, the only one at that conference on the topic of reaccreditation. Bob Froh, from NEASC, started out with the statement that times had changed. Accreditation in the Northeast was more rigorous, he said, and it was not sufficient to coast on reputation, as one prominent institution had done, and limit your discussion to “We hire excellent faculty, we admit exceptional students, and then we get out of the way.” That institution, he told the audience, to my chagrin, was Dartmouth College.
Certainly that should not have been sufficient, and indeed we worked hard on that next report. We mined the data sources I had been using: the Cooperative Institutional Research Program’s Freshman and Senior Surveys, Don McCabe’s Academic Integrity Survey, some surveys from the Consortium on the Financing of Higher Education, combined with institutional data from the registrar and other sources.
I am sure that you are aware, and others speaking to you have already pointed out, that accreditation is now the driving force behind assessment on college campuses. The NILOA 2009 report “More Than You Think, Less Than We Need: Learning Outcomes Assessment in American Higher Education” provided survey results from over 1,500 institutions of higher education indicating that accreditation was the primary use of assessment data on campus. Institutional improvement was lower down on the list.
As the director of CIRP, I interact with a wide variety of faculty, institutional researchers, and other administrators on campuses across the country. My experience is that there is a difference between conducting assessment for accountability and conducting it for institutional improvement.
Contrast my experience at the North East Association for Institutional Research a dozen years ago with a current-day experience at the Southern Association for Institutional Research conference. That conference is dominated by presentations that deal with some aspect of going through SACS reaccreditation. If I present on research findings from CIRP, people tell me they wish they could come, but they have to attend some session on SACS. I finally gave in and presented on how to use CIRP data in SACS accreditation, and then people came.
My point is not that people need to come to my presentations, but that the office with major responsibility for research that can be used to improve the educational experience for students, the Institutional Research Office, is so busy with two types of activities that it cannot adequately branch out and conduct, or participate in, the kind of work we might consider academic scholarship. These offices are desperately trying, and in many cases failing, to get students to respond to “direct” assessments of student learning, such as the Collegiate Learning Assessment. Not only does this take up a great deal of time, it also consumes a majority of the budget. I recently attended a presentation by a school showing CLA and other data and discussing all the effort it took to gather the information, which the presenters felt did not give them any real insight into the student learning process; they calculated that it cost them about $150,000 in direct costs and staff time to complete the study. The stakes are high, and so even though institutions might not feel these assessments are the best measures, or easy to administer, or the best use of limited resources, they do it anyway, because they know that the institution down the road passed with the CLA, or CAAP, or the test formerly known as MAPP, and they are not going to break out of the mold. When they end up spending so much of their resources on establishing the direct learning of 100 students, that leaves many, many other potential research projects aimed at institutional improvement as just wishful thinking. And it leaves very little time to think about the results from those 100 students, as opposed to simply reporting on them.
The other huge time sink that gets in the way of interesting and innovative research into student learning at the local level is all the time spent responding to ranking surveys like U.S. News & World Report. Huge amounts of time and money are spent responding to these requests and putting systems in place to track information that is used to help sell magazines, on your institutional dime. Then the next sink of time is trying to figure out why you went from number 77 in the rankings to number 78. Our data on the importance of rankings to the actual students looking to attend your institutions show that it is fairly minor: only 17% of incoming first-year students this year reported that rankings were very important in deciding where to go to college. In my experience, the people who pay the most attention to these rankings, which deserve a healthy dose of skepticism, are presidents and boards, not students and families trying to figure out where to go to college. So this is time better spent, in my opinion, on local research into student learning. And if accreditation were more than a pass/fail result, we might not have to endure so many private ranking systems.
I will also say that I think the academic community should be leading the research on direct measures of student learning and establishing the connections between direct and indirect measures. When the director of student health prescribes aspirin for a headache, she does not then conduct a study on the effects of aspirin on headaches. That has been established in the medical literature; it would be a waste of time and money to do so, and it would impair her ability to actually interact with students and serve their health needs. One might say the same in the case of student learning. We must further the academic research that links direct and indirect measures. Frankly, it is the indirect measures, such as faculty-student interaction, student engagement with learning, and the use of more modern pedagogy in the classroom, that lead to institutional improvement. If all you have is a demonstration of unchanging aggregate CLA scores, and you cannot tie that back to student behaviors and attitudes toward learning, or to interaction with faculty, or to pedagogy, then all you have is the need to improve, not the how to improve.
Yes, I do believe that it is important to assess and demonstrate student learning. But the emphasis on this has resulted in a lot of money and time being spent in this area, with little gained in understanding how to improve. I would much rather see resources going into large-scale projects that seek to demonstrate the connections between direct and indirect measures, similar to drug trials, and then have local researchers free to focus on institutional improvement. Projects like Richard Arum and Josipa Roksa’s Academically Adrift, Charlie Blaich’s Wabash Project, and the work that we do at CIRP and that NSSE does are critical to moving us forward in improving student learning. But so are the efforts of hundreds of researchers at the local level. If all we do is have them spend their time administering the CLA, completing forms, and figuring out why we dropped from 77 to 78 in the rankings, we will not be able to dedicate much effort to institutional improvement.
John H. Pryor
Director, Cooperative Institutional Research Program
Higher Education Research Institute at UCLA