How & Why All Student Affairs Professionals Should Be Measuring Co-Curricular Learning

June 30, 2021

The importance of assessment should not be lost on any student affairs professional. 

If it’s viewed as merely a nice-to-have activity at your institution, then you need to help reprioritize. Assessment is one of the areas most frequently cited as a concern or deficiency by accreditors and quality assurance entities. More specifically, institutions are often cited for a lack of engagement in, or insufficient evidence of, co-curricular and student affairs assessment.

I’ve encountered far too many student affairs professionals who do not believe that their work impacts their students’ learning. I remind them that learning involves any change in knowledge, skills, or behavior — not just classic academic learning. If your area or office engages with students in any way, then assessment of student learning includes you! 

Some areas may be more transactional and the learning outcomes may be pretty narrow or limited, but that’s ok. Every area plays a different role in the student experience, and assessment should be bounded by the scope of an area’s work.

It is worth pointing out that assessment for student affairs shouldn’t be all that different in its approach from efforts undertaken by academic affairs.

Academic affairs departments typically conduct specific learning interventions, including courses, capstone projects, and academic assignments. These interventions contain and are guided by purposeful content, such as the curriculum or college- and degree-program-specific information. The interventions also have integrated measures of assessment, including quizzes, tests, papers and projects, surveys, and evaluations.

Student affairs should also have interventions, purposeful content, and integrated measures to promote learning. Interventions can include advising sessions, workshops, support services, and more; the purpose varies by office, and the content may or may not directly relate to classroom learning. Student affairs professionals can conduct assessments of learning through surveys, rubrics, observations, and more.

It’s typical for institutions to measure operational elements of student affairs (such as needs, satisfaction, usage, and quality). Proper assessment of student learning, by contrast, should measure student actions, behaviors, and knowledge. Ideally, there should be a balance here: learning doesn’t happen in a vacuum, and the operational elements can provide important context.

As an example, imagine you held an event and had evidence of students demonstrating mastery of the expected learning outcomes, but only two students participated. The learning outcome data may be affirming, but it is tempered by the small sample size. And from an operational perspective, this may be especially disappointing if resources were allocated for an event designed to draw a large turnout.

Alternatively, you might know that 200 students attended an event but have no data on whether they learned anything. In other words, you have helpful operational information but no insight on learning outcome achievement. 

Ideally, you would want to have both operational data and learning outcome data so you can consider improvements to both the event itself and the intended outcomes for students.

It’s important to remember the purpose and intention of assessment work; you should strive to engage with and be informed by the process rather than get bogged down or intimidated by the details.

Institutions have to do more than share their intentions, however. If we only did that, then it would be all too easy to assume that everything is going according to plan without measures of quality assurance. 

And even when quality assurance is done on the front end or in relation to operations, those efforts do not account for whether student learning occurred. As such, we need to find evidence that our inputs, intentions, and assumptions are having the outputs, outcomes, and realities we anticipated.

I strongly push back on student affairs folks who argue against the relevance of assessment in general or against dedicating more resources to it. While assessment is often rolled out improperly or misunderstood in practice, it’s meant to be useful, to inform judgments about effectiveness, and to be meaningful to those involved. I’ve yet to encounter anyone (faculty, staff, administrators, external stakeholders, students) who didn’t want or need to answer the following questions (from Upcraft and Schuh):

  • How can we be better stewards of resources?
  • How can we improve quality where necessary?
  • Are we providing needed support for student success?
  • How can we articulate what we do to outside parties?
  • What are our students learning?

Those questions are relevant to all higher education professionals, but I share them here especially because assessment can provide the answers, complete with evidence!

Good educational practice calls for data-informed decision-making and for knowing whether our work is having its intended impact. Student learning assessment data are not the only data you need to guide strategic decisions (you should also be looking at operational data, budgets, and institutional goals), but assessment of student learning needs to be part of an institution’s key metrics and student success story.