What’s success? Is it the first-time, full-time students measured by the Integrated Postsecondary Education Data System (IPEDS) who matter most? Is it realistic to expect community college students to graduate in three years? What about six? And what about the manufacturing employee who completes company-sponsored training to advance on the job but never shows up in the data as a completer?
If you ask Paul Feist, vice chancellor for communications at the California Community Colleges Chancellor’s Office, success is closing the completion gap and helping students achieve their goals — no matter what those goals are. And to measure how the system’s 112 colleges are doing, it created the California Community Colleges Student Success Scorecard.
“The primary purpose is to provide a benchmark for the college itself,” Feist says. The idea is to give college leaders “an easy-to-read snapshot of these metrics, so they can look at how many of their students are completing certificates, achieving associate degrees, advancing past developmental education and more. There are all kinds of measures, and they all basically tell you how well the colleges are doing.”
But what does that look like, and how are California college leaders using it?
Persistence and results
Instead of picking one endpoint, the scorecard measures so-called momentum points as well as outcomes.
Momentum points: College leaders can check to see how many students moved from developmental education to college-level courses, how many students persisted in enrollment for three consecutive terms and how many students completed 30 units — an important benchmark — in the last six years.
Completion outcomes: Likewise, leaders can click through an interactive display of how many students completed degrees or certificates or finished workplace training in the past six years.
But it doesn’t stop there: If one clicks on the header for, say, Degree/Transfer results, the data gets granular, breaking down by gender, age, race or whether the students were prepared for college. This is not just data — this is information that can be acted on.
“We disaggregate the data to identify where performance gaps are,” Feist says. “The goal is to have the colleges use this data, analyze it and have conversations with their local boards of trustees, so they can strategize for improving student success at every level.”
Use as directed
And that seems to be what college leaders are doing. In a survey of college leaders that Feist shared with the American Association of Community Colleges, senior executives made up 89 percent of those who review the data. Of those respondents, only 36 percent used the results for external reporting, while 80 percent used the data in student equity planning and in strategic plans. Colleges also drew on the data for student retention and persistence studies and for student success stories.
College leaders also reported looking carefully at how their institutions were performing for students by race, college readiness and gender, as well as by economic and disability status.
“We think for the most part that the internal users in the colleges are using the scorecard the way it was intended,” Feist says. “Eighty percent said they were using this data in student-equity planning. That’s really what this was intended to do, by large measure.”
So far, it seems to be working. In the system’s State of the System Report, released in January, results showed that the system awarded 40 percent more certificates and degrees in the last academic year than it did four years prior. What’s more, the number of transfers to the California State University System doubled from one academic year to the next.
Feist hopes the data will allow colleges to continue to improve.
“If [college leaders] can zero in on where those gaps are,” he says, “they can strategically employ the information at the local level to reduce those gaps.”