Why Use National Surveys?
This question has come up recently from several people who would like to use one or more of the CIRP surveys as part of their overall institutional assessment strategy, but have run up against resistance from some corner of their institution that prefers instruments created locally.
There are significant advantages to using a local instrument designed at your own college or university for your own specific purposes. The main advantage is that local questions can be crafted to target your specific issues exactly, in language your students, faculty, or staff will readily understand. A secondary benefit is that creating an instrument with other people on campus builds “buy-in”: the survey’s authors have a stake in its perceived legitimacy. (On the other hand, it might also create less buy-in from anyone not on that committee who thinks they should be.) I am a great advocate of local surveys, having done many myself over the years.
That said, it also makes a lot of sense to participate in national survey projects.
With a local survey, you often lack a context within which to interpret the results you have gathered. Sure enough, someone will say, “How does this result compare to others… I don’t know if this result is good or bad.” A key feature of a national survey is the ability to compare your results with those of comparable institutions. This, in fact, was one of the pillars upon which the Cooperative Institutional Research Program (CIRP) was built, and it remains a key component of the program. Because the same questions are asked in the same way at every school using the instrument, your results are directly comparable.
Another advantage, though one that applies only to some national surveys, like CIRP, is that our questions have been crafted by both survey design experts and content experts and refined through years of experience. All of our questions are grounded in decades of published research. Having such a history can make it easier to persuade others on campus of the usefulness of the results. This is not always the case with local surveys.
A theory-driven instrument can make the difference between action-oriented results and results that are merely interesting. For instance, on our Diverse Learning Environments Survey (DLE), we ask questions that reflect validation theory (see my earlier blog post for more detail). Validation directly relates to retention and appears to explain the retention of underrepresented minority populations better than traditional retention theories do. Using the validation items from the survey, schools can examine how students experience the learning environment, with an eye toward improving the areas that allow all students to succeed in college. All of our surveys also connect certain types of student-faculty interaction with the gains students make in college, again providing actionable information about which types of faculty interaction should be encouraged.
Finally, designing, implementing, and reporting on a survey takes a massive amount of time and effort. With resources at a premium on campus, sometimes you simply do not have the time or tools to take on a local survey project. It is also much less likely that a local instrument will have been examined for validity and reliability.
As an aside, one best-of-both-worlds scenario is when a national survey also lets institutions add local questions. On each of the CIRP surveys, institutions can add up to 20 additional questions of their own design, combining the benefits of a national survey with those of a local one.
In summary, there is a place for both locally created and national surveys at colleges and universities. A well-crafted assessment plan will include both in an effort to drive the use of information in decision making.