Academically Adrift has reignited the debate about how much value attending college adds to a student's learning. Its conclusions--that many students do not learn much during college, and that this is largely because so little work is asked of them--are both sad and unsurprising.
The book's reliance on data from the CLA has received much less comment than its conclusions. That is partly because the CLA has, over time, become a non-controversial assessment tool, and partly because few people outside higher ed know how the CLA actually works on a particular campus. For our campus, though, the CLA, regardless of what it says about our students' performance (and the news is sometimes quite good), is always of questionable value.
The CLA purports to measure the "value-add" of a college by giving students a real-world critical thinking and writing challenge, and then measuring how students perform on that test compared to how the CLA (using a sophisticated algorithm) predicts they should have done. If seniors perform better than predicted--a prediction based both on the performance of the college's freshmen and on the seniors' own aptitude scores--that improvement is the "value-add." Schools receive a report from the CLA that breaks down student performance by field of study and compares the college's value-add to that of other institutions. Institutions are free to do with the data what they want. In my experience, what they most want to do is see how they stack up against their peer and aspirant institutions on the "value-add" measure.
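To make that logic concrete, here is a minimal sketch of the "actual minus expected" arithmetic behind a value-add score. The numbers and the simple regression are invented for illustration; the CLA's actual model is proprietary and far more sophisticated.

```python
# Illustrative only: invented scores and a simple linear fit standing in
# for the CLA's proprietary prediction model.
import numpy as np

# Hypothetical entering-ability scores (e.g., SAT) and CLA scores for a
# freshman sample; these set the "expected performance" line.
fresh_ability = np.array([1050, 1120, 1200, 1280, 1350, 1400])
fresh_cla     = np.array([1010, 1080, 1150, 1230, 1290, 1340])
slope, intercept = np.polyfit(fresh_ability, fresh_cla, 1)

# Hypothetical senior sample from the same institution.
senior_ability = np.array([1100, 1180, 1250, 1330, 1420])
senior_cla     = np.array([1150, 1240, 1310, 1400, 1480])

expected = slope * senior_ability + intercept
value_add = (senior_cla - expected).mean()  # positive = better than predicted
print(f"Estimated value-add: {value_add:.1f} CLA points")
```

A positive result is the "value-add"; a negative one means the seniors did worse than the model predicted they would.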
Westminster has used the CLA for six years. Each year's report is met with the same reactions. First, whether our value-add is high or low, we always wonder what accounts for it. Because the CLA reports data only at the major and campus level, and because it is a cross-sectional study, it is never possible to be sure what causes student performance gains or losses. Is there a particular class that made last year's philosophy majors stand out so much? Is there a particular gap in the liberal education curriculum that leads our students to perform poorly on the "make an argument" portion of the test? Did some change in advising or curriculum have an effect? Do we just happen to have an outstanding bunch of freshmen whose scores make our seniors' performance look bad? How would we know?
Second, and most importantly, our results are always compromised by small sample sizes. One year we were able to get the entire graduating class of philosophy majors to take the test. They performed very well. But there were six of them, so the statistical significance of the finding is questionable. Some years we are able to get only 14 nursing majors to take the test, and the standard error around their average is correspondingly huge. Every year there are outliers who make our scores look either great or awful.
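To see why six scores tell you so little, consider a back-of-the-envelope calculation (the scores below are invented): the 95% confidence interval around the average of six test-takers spans roughly two hundred points, which swallows any plausible value-add.

```python
# Rough illustration of the uncertainty in a six-student average.
import numpy as np
from scipy import stats

def mean_ci(scores, confidence=0.95):
    """Return the sample mean and the half-width of its confidence interval."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    sem = scores.std(ddof=1) / np.sqrt(n)            # standard error of the mean
    t_crit = stats.t.ppf((1 + confidence) / 2, n - 1)
    return scores.mean(), t_crit * sem

six_majors = [1180, 1340, 1275, 1420, 1220, 1390]    # hypothetical CLA scores
mean, half_width = mean_ci(six_majors)
print(f"Mean {mean:.0f} +/- {half_width:.0f}")       # roughly +/- 100 points
```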
The message here isn't that the CLA is a bad tool, or that Academically Adrift's conclusions are dubious. It is, instead, that small colleges and universities are poorly served by assessment tools that sample, aggregate data, and make comparisons at the campus level. The numbers in our sample groups will always be too small to be trustworthy, and the unit of measure (the entire college experience) too large to do something about.
Instead of these sorts of abstract measures, small colleges and universities need to become much more serious about tracking and influencing individual student performance over time. There is no reason a campus could not administer a measure of student learning to each student each year and track that particular student's performance. Then, if a student performs poorly on critical thinking, the student's advisor could recommend a particular course, or a particular shift in study habits, in response.
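What might that look like in practice? Here is a minimal sketch, with invented names and thresholds: one record per student, one score per year, and a simple flag an advisor can act on.

```python
# A minimal, hypothetical sketch of per-student longitudinal tracking.
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    name: str
    major: str
    # critical-thinking scores keyed by class year, e.g. {1: 1140, 2: 1170}
    critical_thinking: dict[int, float] = field(default_factory=dict)

    def record_score(self, year: int, score: float) -> None:
        self.critical_thinking[year] = score

    def needs_intervention(self, floor: float = 1100.0) -> bool:
        """Flag a student whose latest score is low or has declined."""
        if not self.critical_thinking:
            return False
        years = sorted(self.critical_thinking)
        latest = self.critical_thinking[years[-1]]
        declined = len(years) > 1 and latest < self.critical_thinking[years[-2]]
        return latest < floor or declined

student = StudentRecord("A. Nguyen", "Nursing")
student.record_score(1, 1150)
student.record_score(2, 1120)
if student.needs_intervention():
    print(f"Refer {student.name} to an advisor: recommend a critical-thinking course.")
```

Nothing here requires the psychometric machinery of the CLA--only that each student's results stay attached to that student from year to year.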
The assessment mantra of small colleges should be something like this: Disaggregate, don't aggregate. Do longitudinal studies, not cross-sectional ones. And most importantly, assess the learning of students as real living human beings, not as part of an abstraction of how the entire institution is doing.
Such steps will make it harder for small colleges to play in the rating and ranking game that measures like the CLA and NSSE allow. But they will let small colleges and universities link assessment and learning in the lives of individual students--the thing we say we do right now.
Saturday, February 26, 2011