By Rachel Schwimmer
A new study by the GG+A Survey Lab analyzed 2,083 donors to a private university, comparing the amount of giving with respondents’ feelings of connection. The donors’ self-reported level of connection with the university related to their giving in a stepped rather than linear pattern, with clear tipping points at the midpoint and high end of the connection scale. In other words, giving rises as connection increases, but the relationship is not a direct linear correlation.
Feelings of connection to the university were measured on a 1-to-10 scale (10 = very connected), and levels of giving were grouped into five ranges: less than $1,000; $1,000 to $24,999; $25,000 to $99,999; $100,000 to $249,999; and $250,000 or more. The data show dramatic jumps in giving between connection ratings 5 and 6 and between 9 and 10. Between points 5 and 6, giving increases by approximately $141,000, compared with a cumulative climb of just $5,000 from points 1 to 5. Between points 9 and 10 the increase is a remarkable $308,000, while giving rises only about $10,000 from points 6 to 9. The question we pose is whether these tipping points reflect genuinely distinguishable differences in connection at these points, or a selective rating bias in which respondents shade their answers toward their known level of giving. In other words, do donors give at different levels because they feel a certain degree of connection, or do they rate themselves at a particular level of connection because they want that rating to match their level of giving?
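The stepped pattern described above can be sketched numerically. The following Python snippet uses illustrative average-giving figures loosely reconstructed from the jumps reported in the article (not the actual GG+A dataset) and flags ratings where the step up in giving is far larger than the typical step:

```python
import statistics

# Hypothetical average giving (USD) by self-reported connection rating
# (1-10). Figures are illustrative, chosen only to reproduce the jumps
# described in the article; they are not the actual GG+A data.
avg_giving = {1: 1_000, 2: 2_000, 3: 3_000, 4: 4_000, 5: 6_000,
              6: 147_000, 7: 150_000, 8: 153_000, 9: 157_000, 10: 465_000}

# Jump in average giving between each pair of adjacent ratings.
jumps = {f"{r}->{r + 1}": avg_giving[r + 1] - avg_giving[r]
         for r in range(1, 10)}

# Flag "tipping points": jumps an order of magnitude above the median jump.
threshold = 10 * statistics.median(jumps.values())
tipping_points = {k: v for k, v in jumps.items() if v > threshold}
print(tipping_points)  # the 5->6 and 9->10 steps stand out
```

With these illustrative numbers, the 5-to-6 and 9-to-10 steps dwarf every other step, which is the stepped (rather than linear) shape the study describes.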
One proposed method of untangling the relationship between connection and giving is to adjust the rating scale that respondents use. Research has shown that the number of response categories in a survey plays a significant role in the reliability, validity, and discriminating power of responses. While fewer response categories (2 to 4) are easier and quicker for respondents to use, additional categories (7 to 10) yield results that are more discriminating and valid. Research by Preston and Colman suggests that scales with 7, 9, or 10 response categories are ideal: large enough for respondents to differentiate between categories, yet not so large that the differences between categories become essentially meaningless.
These findings can be applied to our tipping-point question by re-administering the survey on a 7-point scale instead of the original 10-point one, then comparing the responses to see whether the correlation between connection and giving is more linear than in the original dataset. Unlike a 10-point scale, a 7-point scale does not divide cleanly into a top and bottom half: respondents can place themselves at a neutral, central point, which could soften the dramatic spike in the middle of the range seen with the 10-point scale. This reasoning does not apply to the jump in giving between ratings 9 and 10, but checking whether that tail-end spike holds while the midpoint spike flattens out would further support the argument that a 7-point scale mitigates some of the selective response bias seen with the 10-point scale.
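To see why the two scales behave differently around the middle, consider a naive linear rescaling from a 10-point response onto a 7-point one. This hypothetical sketch is not a substitute for re-administering the survey, as the article proposes; it only illustrates that the 7-point scale has a true midpoint (4) onto which both 5 and 6 on the 10-point scale collapse:

```python
def to_seven_point(rating10: int) -> int:
    """Map a 1-10 rating onto a 1-7 scale by linear rescaling.

    A rough, hypothetical comparison aid only: collected 10-point
    responses cannot stand in for responses actually given on a
    7-point scale.
    """
    return round((rating10 - 1) * 6 / 9) + 1

# The 10-point scale splits evenly at 5 vs. 6, with no neutral point;
# on the 7-point scale, both 5 and 6 land on the true midpoint, 4.
print(to_seven_point(5), to_seven_point(6))  # 4 4
print(to_seven_point(1), to_seven_point(10))  # 1 7
```

The collapse of 5 and 6 onto a single neutral category is exactly where the 10-point data showed its mid-scale spike, which is why a genuine 7-point re-survey could reveal whether that spike is an artifact of the forced top-half/bottom-half choice.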
It is also important to note that researchers choosing the optimal number of response categories for a rating scale have many factors to consider. Chief among them, and one largely neglected by academic studies, is respondents’ familiarity and ease with a 10-point scale. Rating things out of ten is a common part of our cultural vernacular, and therefore something most respondents find more natural than rating on a 7-point scale. For researchers, however, one advantage of moving to a 7-point scale would be simpler analysis and a cleaner display of data.
Future studies would do well to compare results from the same or similar surveys administered on 10-point and 7-point scales, particularly for self-assessment questions. By comparing the two datasets, we can draw conclusions about whether selective rating bias colors the relationship between a quantifiable variable (in our example, level of giving) and one evaluated by the respondent (level of connection).
For information about how the GG+A Survey Lab can help you understand your alumni, donors, subscribers, and members, contact Dan Lowman at email@example.com
About the Author:
Rachel Schwimmer is a research intern at the GG+A Survey Lab, where she focuses on developing meaningful insights from the Lab’s collection of survey results. She is a senior at the University of Illinois at Chicago.