Badges allow recognition of learning that occurs across contexts, but this presents challenges for assessment.
One of the strong selling points of digital badges is that they allow programs to recognize achievements that fell outside the scope of previous credentialing schemes. In formal higher education, students’ learning is regularly assessed within the context of each of their courses. The credential of higher education, the degree, is the result of adequate performance across many course-based assessments.
Adding badges to this environment means that schools may award credentials for achievements beyond those the existing system recognized. Students might earn badges for learning specific skills as assessed by their course instructors, for instance. Another way badges allow schools to create new credentials is in recognizing competencies developed across several courses or contexts. One of the earliest programs to start thinking about digital badges in higher education is UC Davis’ Sustainable Agriculture & Food Systems (SA&FS) undergraduate program. This case shows how badges can complement an existing credential structure by recognizing learning that students develop through connecting experiences in formal courses, internships, work experience, and personal pursuits.
The main badges in the SA&FS system recognize development in seven competencies, which are defined as important learning outcomes of the major and are based on an investigation of the skills, content, and experience valued by the sustainable agriculture industry, faculty, alumni, and students. These badges are awarded for Systems Thinking, Experimentation and Inquiry, Understanding Values, Interpersonal Communication, Strategic Management, Civic Engagement, and Personal Development. This is an example of a practice using the recognizing design principle “Align credential to standards,” specifically those standards “internal to community.” The SA&FS badge system allows students to collect evidence of development in each of these competency areas in a portfolio system. Their experiences from courses and internships, as well as life and work experiences outside of formal education, can all count toward earning a competency badge.
This is an ambitious recognition goal, and it has major consequences for the assessment practices that must be adopted to adequately measure student development across all these experiences. Additionally, many of the competencies SA&FS chose to recognize fall under the category of soft skills, which are chronically “missed” by traditional assessment techniques (Heckman & Kautz, 2012). Not only does SA&FS’s desire to recognize competencies developed across contexts make the existing assessments incomplete, but the competencies selected are also notoriously difficult to design assessments for.
This challenge deeply affected which assessment design principles the SA&FS system embodied. The badge system touched on seven of the ten general assessment principles including “Align assessment activities to standards,” “Enhance validity with expert judgment,” “Use e-portfolios,” “Use formative functions of assessment,” and “Promote hard and soft skill sets.” This indicates a complex interlocking assessment system that includes both self-assessment and many opportunities for feedback from instructors, other faculty, and peers.
SA&FS has not fleshed out the whole set of practices that will make up its formalized assessment system, but this convergence of many principles appears necessary to meet the ambitious challenge of recognizing competency development across contexts. Part of the process will involve considering competencies when designing assessment activities inside each context. Both for self-reflection and for expert assessment offered by instructors, students will need to know when to include their experiences under each competency section in their portfolios. SA&FS also intends to continue its tradition of using close relationships between students and advisors to offer more holistic assessment opportunities outside of classes. The badge system designers value the idea of students engaging in “back-and-forth mentoring conversations…throughout their experience in the major” in a recurring process of dialogue over competency development (DPD Follow-up Interview). The program hopes to have a strong offline community, complemented by online tools like the portfolio and badge system.
Joanna Normoyle, the program’s coordinator of experiential learning and a major driving force behind the badge system design, pointed out that the decision to recognize learning that occurs across different contexts does not necessarily mean that the assessments naturally embedded in each individual context are invalid for the purpose of the badge. Instead, the fact that there are assessments fit to the purpose of the classroom or the internship context means that there is “an opportunity to feed in different assessment data streams” that together could form “a rich picture of the work, a rich picture of all the assessment that is already happening” (Personal correspondence 26 February 2014). The challenge, instead, is to collect these inputs where they are relevant and to find the right mix of these different types and sources of assessment (DPD Follow-up Interview).
There is also the issue of finding where in the process different assessments fit best. For example, Normoyle says that implementing peer assessment to serve formative functions is a messy process, but “it’s the piece that has to work in order for everything else to work…It’s in getting this kind of feedback that makes everything else in the system feel valuable.” When students reach the point of more summative final reflection and are asking themselves whether they are ready to earn a badge, they can look back at all the data points they have already received to get a better sense of what they have achieved (DPD Follow-up Interview).
Besides building the social infrastructure to support collecting and distilling assessment information across contexts, there are also technical demands. The next step is developing the capability to collect peer feedback on portfolio artifacts to support discussion around how the artifacts connect to competency development. SA&FS reported in June 2013 that it had learned the importance of designing for each type of stakeholder who will be using the portfolio system, including building the tools faculty need to easily see and provide feedback on their students’ work (HASTAC Q&A).
Whether done well or poorly, the assessment practices have deep consequences for students’ motivation. When peer assessment comes together right, Normoyle says, “when people feel that they’re being recognized as effective in what they’re doing by their community, there are very few things that match that for motivation as human beings. Whatever we can do to provide experiences for our students that tap into that deeper motivation, giving them recognition for being effective, we’ve got to figure out how to do it” (DPD Follow-up Interview). SA&FS is committed to its goals of recognizing competency development across connected learning contexts, including prized soft skills.
HASTAC. (n.d.). Project Q&A With: The SA&FS Learner Driven Badges Project. Retrieved March 20, 2014, from http://www.hastac.org/dml-badges/SA%2526FS-Learner-Driven-Badges-Project
Heckman, J. J., & Kautz, T. (2012). Hard evidence on soft skills. Labour Economics, 19(4), 451–464.