Why create Constructs?

When examining the college experience, we are often interested in understanding something about an aspect of that experience that is fairly complicated. For instance, we know that faculty and student interactions are important to foster because they often lead to gains in a number of areas during college. But if we are trying to measure this with a survey, we don’t simply ask students to tell us whether they interacted with faculty. We ask about many different types of faculty-student interaction because there are many different ways that students and faculty can interact.

We then create sets of questions that we think encompass the types of interaction that are important to measure. For faculty-student interaction, we ask whether faculty provided students with advice and guidance about their academic programs, feedback on their academic work, and opportunities to work on research, to name a few. Although we are interested in those specific interactions in and of themselves, we are really interested in the broad concept of faculty-student interaction. To get at this broad concept, we combine individual survey items into one global measure of student-faculty interaction. We call this one measure a construct.

A construct can only be created when our analysis of the combination of items tells us that they actually measure a single trait or aspect of a student’s life. In building the CIRP Constructs, we are not simply adding together responses from items that are not really related. The CIRP Constructs are based upon decades of research on college students as well as on rigorous, state-of-the-art statistical methods.


How are the CIRP Constructs created?

To score the constructs, we use Item Response Theory (IRT), a modern psychometric method that uses response patterns to derive construct score estimates. Computing an individual’s construct score in IRT involves deriving a maximum likelihood score estimate based on the pattern of the person’s responses to the entire set of construct questions (or to the subset of questions that were answered). Items that tap into the trait more effectively are given greater weight in the estimation process. Student construct scores are thus not simple arithmetic means or weighted sums, but rather the estimated scores that are most likely, given how respondents answered the set of questions. A technical or “white paper” with more detailed information about how the CIRP Constructs were created is available here.
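As a rough illustration of this scoring idea (not CIRP’s exact implementation, which is documented in the white paper), the sketch below estimates a single respondent’s score by maximum likelihood under a simple two-parameter logistic model. The item parameters and responses are made up for demonstration; items with higher discrimination carry more weight, and an unanswered item is simply skipped.

```python
# Minimal, illustrative sketch of IRT maximum likelihood scoring.
# Assumes dichotomous items under a two-parameter logistic (2PL) model;
# the item parameters and responses below are hypothetical.
import numpy as np

def p_endorse(theta, a, b):
    """2PL probability of endorsing an item at trait level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def log_likelihood(theta, responses, a, b):
    """Log-likelihood of a response pattern at trait level theta.
    Missing responses (None) are skipped, mirroring scoring from the
    subset of questions a student actually answered."""
    ll = 0.0
    for x, ai, bi in zip(responses, a, b):
        if x is None:
            continue
        p = p_endorse(theta, ai, bi)
        ll += x * np.log(p) + (1 - x) * np.log(1 - p)
    return ll

def ml_score(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum likelihood estimate of the trait level."""
    lls = [log_likelihood(t, responses, a, b) for t in grid]
    return grid[int(np.argmax(lls))]

# Hypothetical calibrated discriminations (a) and difficulties (b).
a = [1.8, 1.2, 0.7, 1.5]     # higher a = item taps the trait more effectively
b = [-0.5, 0.0, 0.3, 1.0]
responses = [1, 1, 0, None]  # one unanswered item

print(f"ML trait estimate: {ml_score(responses, a, b):.2f}")
```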


How can I use the Constructs?

In IRT, a construct and its meaning are independent of the items. This means that CIRP Constructs offer institutions more flexibility in interpretation. A high score on the Student-Faculty Interaction Construct indicates a high level of student-faculty interaction. The CIRP Construct Report that comes with your survey results compares your institution’s construct scores with scores for your institutional comparison groups. This allows you to determine whether the traits and experiences of your students differ in a statistically significant and meaningful way from those of the average student at other institutions like yours. You can also use the CIRP Constructs to compare different groups of students at your institution. For instance, you might expect that students with high scores in student-faculty interaction would show greater gains in educational goals. Lastly, CIRP Constructs can be entered into statistical analyses.
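As an example of the kind of group comparison described above, here is a minimal sketch, using simulated scores rather than CIRP data, that compares construct scores for two hypothetical groups of students with Welch’s t-test and an effect size.

```python
# Illustrative comparison of construct scores between two hypothetical
# student groups; the scores are simulated, not CIRP data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=52.0, scale=9.0, size=200)  # e.g., worked on faculty research
group_b = rng.normal(loc=49.5, scale=9.0, size=200)  # e.g., did not

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
cohens_d = (group_a.mean() - group_b.mean()) / np.sqrt(
    (group_a.var(ddof=1) + group_b.var(ddof=1)) / 2.0
)  # effect size with an averaged variance (groups are the same size here)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```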


So, everything I need to know is in a Construct?

No. The CIRP Constructs, while based upon decades of CIRP research, are being created with Item Response Theory for the first time. IRT measures the relationships between survey items more precisely, and we are finding that, in order to have the best possible constructs, we may need to revise some items on the surveys. We are providing you with only the most stable Constructs right now.

Sometimes items that provide useful and valuable information about a topic are not suitable for inclusion in a construct, either because they do not tap into the trait being represented by a construct, or because a construct has not yet been developed. Further CIRP research focused on construct development will result in a broader range of constructs in the future.


Why aren’t all of the relevant items included in the construct?

At the beginning of the construct development process, all survey items relevant to a particular topic area were considered for inclusion. However, as detailed in our technical report, each item had to meet a series of statistical criteria to be included in the final construct. Items that did not make it into a CIRP Construct, but nevertheless shed light on an area of interest, are included in our CIRP Themes. Please see the CIRP Construct Technical Report and CIRP Theme Reports for more details. For a more detailed discussion of IRT, see our article in the Review of Higher Education.


A Construct has a different number of items in it this year than it did last year. Is it a different construct?

The answer to this question is no: the underlying trait being measured by a Construct is not tied to a specific set of items. CIRP is using Item Response Theory to tap into each construct, and this theory assumes that the construct “exists” independently of any specific items. Any relevant set of calibrated items, in any number, can tap into the construct of interest. Therefore, removing or adding an item does not change what is being measured. The only measurement property affected by the addition or removal of an item is precision, because with more or fewer questions there will be more or less information available about each student’s level of the trait of interest. However, the precision of CIRP Constructs across all common levels of the traits is high. If an item is removed from a Construct, we are careful to ensure that the removal does not significantly alter the measurement precision of the Construct as a whole.
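To make the precision point concrete, the sketch below, which again assumes a simple two-parameter logistic model with hypothetical item parameters, shows how the test information, and therefore the standard error of a score, changes when one item is dropped.

```python
# Illustrative test information under a 2PL model with hypothetical parameters:
# removing an item lowers the information available at a given trait level,
# which widens the standard error, but does not change what is measured.
import numpy as np

def item_information(theta, a, b):
    """Fisher information contributed by a single 2PL item at trait level theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

a = np.array([1.8, 1.2, 0.7, 1.5])   # hypothetical discriminations
b = np.array([-0.5, 0.0, 0.3, 1.0])  # hypothetical difficulties
theta = 0.0                          # an average trait level

info_full = item_information(theta, a, b).sum()
info_reduced = item_information(theta, a[:-1], b[:-1]).sum()  # drop the last item

# Standard error of measurement = 1 / sqrt(test information)
print(f"SE with 4 items: {1.0 / np.sqrt(info_full):.3f}")
print(f"SE with 3 items: {1.0 / np.sqrt(info_reduced):.3f}")
```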

Why do the Constructs that have different numbers of items in them this year have the same parameters? Don’t you need to re-estimate the parameters because you removed or added an item?

The answer to this question is also no: new parameters are not necessary. As we discussed in the answer above, with Item Response Theory any relevant set of calibrated items, in any number, can tap into the construct of interest. “Calibrated” in this sense means that parameters for the items have been estimated. Once estimated, parameters do not need to change, because Item Response Theory assumes that an item will always tap into the construct in the same way. Only if there is reason to believe that an item relates to the underlying trait somewhat differently than in the past would CIRP investigate whether re-parameterization is necessary.


What about the Construct Alpha levels?

Cronbach’s Alpha is a statistic that is commonly used in traditional, Classical Test Theory-based evaluations of psychometric scales. Alpha is a measure of internal consistency, or “the overall degree to which the items that make up a scale are intercorrelated” (Clark & Watson, 1995, p. 315). Although it is often used this way, alpha can serve only as part of the assessment of unidimensionality, not as its final or sole determination (Gardner, 1995). As Cortina (1993) explains, alpha measures “the extent to which items in a test have high communalities and thus low uniquenesses.” He continues, “It is also a function of interrelatedness, although one must remember that this does not imply unidimensionality…a set of items…can be relatively interrelated and multidimensional” (p. 100).
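For reference, alpha itself is simple to compute from an item-by-respondent matrix; the sketch below applies the standard formula to simulated Likert-style responses (the data are made up for illustration).

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
# The responses below are simulated 5-point Likert-style data, for illustration only.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array with rows = respondents and columns = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
trait = rng.normal(size=300)
items = np.column_stack([
    np.clip(np.round(3 + trait + rng.normal(scale=0.8, size=300)), 1, 5)
    for _ in range(4)
])
print(f"alpha = {cronbach_alpha(items):.2f}")
```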

As the above points note, Alpha can be a misleading statistic. At CIRP we are well aware of this, and we have chosen statistics other than Alpha to assess the internal consistency and unidimensionality of our Constructs. Specifically, we have decided to use an iterative factor-analytic technique to assess whether each Construct’s set of items is unidimensional. Our Technical Report describes this procedure in more detail.
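CIRP’s exact procedure is described in the Technical Report; as a rough, generic illustration of a unidimensionality check, the sketch below inspects the eigenvalues of the inter-item correlation matrix for simulated data, where a dominant first eigenvalue is commonly read as evidence of a single underlying dimension.

```python
# Generic unidimensionality check (not CIRP's exact iterative procedure):
# examine the eigenvalues of the inter-item correlation matrix. The items
# below are simulated so that they all load on one underlying trait.
import numpy as np

rng = np.random.default_rng(2)
trait = rng.normal(size=500)
items = np.column_stack([trait + rng.normal(scale=1.0, size=500) for _ in range(6)])

corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

print("eigenvalues:", np.round(eigenvalues, 2))
print("first/second ratio:", round(eigenvalues[0] / eigenvalues[1], 2))
```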

For more information on Cronbach’s Alpha, see:
Clark, L.A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3), 309-319.
Cortina, J. (1993). What is coefficient alpha? An examination of theory and application. Journal of Applied Psychology, 78(1), 98-104.
Gardner, P. (1995). Measuring attitudes to science: Unidimensionality and internal consistency revisited. Research in Science Education, 25(3), 283-289.


What are Factors?

CIRP Factors, which are used in the Diverse Learning Environments (DLE) survey, are designed to capture the rich student experiences and outcomes that institutions are often most interested in understanding. We conduct factor analysis using classical test theory to create factors that succinctly capture these experiences. Factors are particularly useful for benchmarking: they allow you to determine whether the experiences and outcomes of your students change over time or differ from those of your comparison groups.
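As a generic illustration only (not the DLE scoring specification), the sketch below fits a one-factor model to simulated Likert-style responses and produces one factor score per respondent, the kind of summary score that can then be tracked over time or compared across groups.

```python
# Illustrative factor-score computation on simulated data; the items,
# loadings, and sample are made up and do not reflect actual DLE factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
trait = rng.normal(size=400)
items = np.column_stack([
    w * trait + rng.normal(scale=1.0, size=400) for w in (0.9, 0.8, 0.7, 0.6)
])

fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(items)  # one factor score per respondent

print("estimated loadings:", np.round(fa.components_.ravel(), 2))
print("first five factor scores:", np.round(scores[:5, 0], 2))
```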