
1.6: EVALUATION OF LEARNING


    “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.” —Samuel Beckett

    Few topics have generated more impassioned discussions among educators of health professionals than evaluation of learning. In many clinical practice settings, instructors are required to apply evaluation tools that they have not designed themselves. On one hand, criticisms of standardized assessment techniques for required professional competencies and skill sets note an over-emphasis on reproducing facts by rote or implementing memorized procedures. On the other hand, teachers may find themselves filling out extensive and perhaps incomprehensible checklists of criteria intended to measure critical thinking. How, then, can educators create evaluation approaches that advance required competencies for individual learners in complex practice environments?

    Expectations for learner achievements must be set out clearly before learning can be measured accurately. Within the clinical environment, the stakes are high for learners. Client safety cannot be compromised. Further, measurement considerations must not dominate the time educators might otherwise spend on creating meaningful instructional approaches. In his seminal Learning to Teach in Higher Education, Paul Ramsden (1992) establishes an important distinction between deep and surface learning. In his view, deep and meaningful learning occurs when assessment focuses on both what students need to learn and how educators can best teach them.

    Understanding the complexities in evaluating students and our teaching is an ongoing process. Approaching the process collaboratively, in ways that consistently involve learners as active participants rather than passive recipients, can support their success and inspire our teaching. In this chapter we introduce the vocabulary of evaluation and discuss methods of evaluating students and evaluating teaching. We suggest creative evaluation strategies that teachers can use in a variety of clinical practice settings.

    Vocabulary of Evaluation

    Educators may feel overwhelmed by measuring how learners create personal meaning and demonstrate understanding of the consensually validated knowledge they will need to practice competently in their health field. Measuring the efficacy of our own teaching in preparing learners to practice safely, ethically, and in accordance with entry-to-practice competencies is not straightforward either. However, whether we are seeking to appraise student learning or our own teaching, knowing the criteria for expected outcomes will help us understand what is being measured. Measurement, assessment, evaluation, feedback and grading are the terms used in appraising student learning and our own teaching.

    Measurement, Assessment and Evaluation

    Measurement determines attributes of a physical object in relation to a standard instrument. For example, just as a thermometer measures temperature, standardized educational tests measure student performance. Reliable and valid measurement depends on the skilful use of appropriate and accurate instruments. In 1943, Douglas Scales was one of the first to argue against applying the principles of scientific measurement to the discipline of education.

    The kind of science which seeks only the simplest generalizations may depart rather far from flesh-and-blood reality, but the kind of science which can be applied in the everyday work of teachers, administrators, and counselors must recognize the great variety of factors entering into the practical conditions under which these persons do their work. Any notion of science which stems from a background of engineering concepts in which all significant variables can be readily identified, isolated, measured, and controlled is both inadequate and misleading. Education, in both its theory and its practice, requires a new perspective in science that will enable it to deal with composite phenomena where physical science normally deals with highly specific, single factors. (Scales, 1943, p. 1)

    One example of a standardized measurement tool is a required student evaluation form. Most health professions programs provide clinical instructors with evaluation forms that have been designed to measure learning outcomes in relation to course objectives. These forms provide standardization in that they are implemented with all students in a course. They often focus on competencies such as safety, making them relevant to all members of the profession (Walsh, Jairath, Paterson & Grandjean, 2010). However, clinical instructors using the forms may have little or no input into their construction and may not see clear links to their own practice setting.

    Another example of a standardized measurement tool is a qualifying examination that all members of a profession must pass in order to practice. Similarly, skills competency checklists, rating scales, multiple-choice tests and medication dosage calculation quizzes can provide standardized measurement. Again, clinical instructors may have limited input into the design of these tools.

    Assessment obtains information in relation to a complex objective, goal or outcome. While the kinds of standardized measurements noted above can all contribute to assessing student performance, additional information is necessary. Processes for assessment require inference about what individuals do in relation to what they know (Assessment, n.d.). For example, inferences can be drawn about how students are applying theory to practice from instructor observations of students implementing client care, from student self-assessments, and from peer assessments.

    Evaluation makes judgments about value or worthiness in relation to an objective, goal or outcome. Evaluation requires information from a variety of sources gathered at different times. Evaluation of learners in clinical practice settings is considered subjective rather than objective (Emerson, 2007; Gaberson, Oermann & Shellenbarger, 2015; Gardner & Suplee, 2010; O’Connor, 2015).

    Formative evaluation is continuous, diagnostic and focused on both what students are doing well and areas where they need to improve (Carnegie Mellon, n.d.). As the goal of formative evaluation is to improve future performance, a mark or grade is not usually included (Gaberson, Oermann & Shellenbarger, 2015; Marsh et al., 2005). Formative evaluation, sometimes referred to as mid-term evaluation, should precede final or summative evaluation.

    Summative evaluation summarizes how students have or have not achieved the outcomes and competencies stipulated in course objectives (Carnegie Mellon, n.d.), and includes a mark or grade. Summative evaluation can be completed at mid-term or at the end of term. Both formative and summative evaluation consider context. They can include the measurement and assessment methods noted previously, as well as staff observations, written work, presentations and a variety of other measures.

    Whether the term measurement, assessment or evaluation is used, the outcome criteria or what is expected must be defined clearly and measured fairly. The process must be transparent and consistent. For all those who teach and learn in health care fields, succeeding or not succeeding has profound consequences.

    Creative Strategies

    The Experience of Being Judged

    Clinical teachers measure (quantify), assess (infer) and evaluate (judge). Tune in to a time in your own learning or practice when your performance was measured. The experience of having others who are in positions of power over us make inferences and judgments about what we know can be both empowering and disempowering. Reflect on an occasion when you were evaluated. Did the evaluation offer a balanced view of your strengths and weaknesses? Did you find yourself focusing more on the weaknesses than on the strengths? How can our own experiences with being judged help us become better teachers?

    Students also bring with them experiences of being judged. One helpful strategy may be to have them share their best and worst evaluation experiences. Focus a discussion on the factors that made these experiences their best or worst, to help learners reveal their fears. Consider asking learners to draw a picture of their experience before they reflect and discuss.


    This page titled 1.6: EVALUATION OF LEARNING is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sherri Melrose, Caroline Park, & Beth Perry (Athabasca University Press) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.