
15.2: Social and behavioural research in evaluation


    Social and behavioural research conducted during and after the trial may facilitate understanding and interpretation of the trial results. Two methodological approaches for this purpose are process evaluation (process documentation, process learning) and evaluation of pathways of change.

    2.1 Process evaluation to understand implementation

    Process evaluation is a term applied to a range of data collection activities conducted during the implementation of a trial to assess, at a minimum, whether the intervention is being implemented according to the study protocol. This is important to document and report, in order to determine whether an intervention’s apparent success or failure is attributable to the intervention’s concept or theory or to the way it was implemented.

    Table 15.2 shows six aspects of process evaluation that have been described by Saunders et al. (2005) to guide data collection activities.

    Each of the intervention components and its delivery methods should be subject to a process evaluation, resulting in documentation of the six aspects in Table 15.2. Data collection may be quantitative, such as the number of subjects who receive an information leaflet, or qualitative, such as perceptions of the political agenda behind an information leaflet that affect the ‘dose received’ of a particular message. Data may be collected through self-completion questionnaires; for example, trainers can record the amount of content actually delivered, the relative participation of different members of the group, and their impressions of the level of understanding of the various objectives of the training. Direct observations of activities can also provide an assessment of how well a particular intervention activity was delivered and can set the delivery in context, for example, by noting other activities or events occurring at the same time that could support, or conflict with, the trial intervention. Interviews may also be used, with both those delivering the intervention and its intended recipients, to understand what was delivered and what was received, and why some aspects of an intervention may have been more effective than others.

    The data collected can be used in the interpretation of the final trial outcomes. Quantitative data can be incorporated into the final analyses, for example, in dose-response or per-protocol analyses, while qualitative data can help to attribute any observed change to the intervention as it was delivered and received.
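    As a purely illustrative sketch of how quantitative process data might be turned into such indicators, the snippet below computes reach and mean dose delivered from simulated attendance records. All participant identifiers, counts, and thresholds are invented for illustration; they are not drawn from any trial described in this chapter.

    ```python
    # Hypothetical sketch: computing two simple process-evaluation indicators
    # (reach and dose delivered) from simulated trial records.
    # All data and field names are invented for illustration.

    enrolled = 200  # participants enrolled in the intervention arm

    # One record per participant who appears in the attendance log:
    # number of intervention sessions attended, out of 4 planned.
    sessions_attended = {"p001": 4, "p002": 3, "p003": 0, "p004": 4, "p005": 2}

    planned_sessions = 4
    participants_reached = sum(1 for n in sessions_attended.values() if n > 0)

    # Reach: proportion of enrolled subjects who received any of the intervention.
    reach = participants_reached / enrolled

    # Dose delivered: mean proportion of planned sessions actually attended,
    # averaged over those who were reached at all.
    dose = sum(n for n in sessions_attended.values() if n > 0) / (
        planned_sessions * participants_reached
    )

    print(f"Reach: {reach:.1%}")
    print(f"Mean dose delivered: {dose:.1%}")
    ```

    In practice these indicators would be computed per intervention component, and the qualitative data described above would be used alongside them to interpret why dose or reach fell short where it did.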

    Process evaluation can also identify difficulties with implementation that occurred and how these difficulties were addressed.

    Table 15.2 Six dimensions of process evaluation

    Fidelity (quality): The extent to which the intervention was implemented as planned
    Dose delivered (completeness): Amount or number of intended units of each intervention, or component of the intervention, that were delivered
    Dose received (exposure or adherence): Extent to which participants actively engage with, interact with, are receptive to, and/or use materials or recommended resources; can include initial and continued use
    Reach (participation rate or coverage): Proportion of subjects who receive or participate in the intervention; includes documentation of barriers to participation
    Recruitment and retention: Procedures used to approach and attract participants at individual or organizational levels; includes maintenance of participant involvement in the intervention
    Context: Aspects of the environment that may influence intervention implementation or study outcomes

    Adapted with permission from Saunders, R. P. et al., Developing a process-evaluation plan for assessing health promotion program implementation: a how-to guide, Health Promotion Practice, Volume 6, Number 2, pp. 134–47, Copyright © 2005 by Society for Public Health Education. Includes data from Steckler, A. and Linnan, L., pp. 1–24, in A. Steckler and L. Linnan (Eds.), Process evaluation for public health interventions and research, Jossey-Bass, San Francisco, USA, Copyright © 2002 by John Wiley & Sons; and Baranowski, T. and Stables, G., Process evaluations of the 5-a-day projects, Health Education and Behavior, Volume 27, Number 2, pp. 157–166, Copyright © 2002 by Society for Public Health Education. This table is not covered by the Creative Commons licence terms of this publication. For permission to reuse, please contact the rights holder.

    2.2 Evaluation of pathways of change

    In the evaluation of pathways of change, which is particularly relevant for behavioural interventions or multi-component interventions, the researcher aims to establish the relationship between any changes detected in trial outcome data and the intervention package, taking into account contextual factors that may have shaped the intervention and outcome variables. The objectives of an evaluation of pathways of change are to establish plausibility that outcomes are attributable to the intervention and to depict the mechanisms by which an intervention had effect, including identification of contextual factors considered significant in supporting these mechanisms. Two approaches can be taken to understanding pathways of change: hypothesis testing and hypothesis generating. These approaches are complementary and should be considered together to maximize understanding of the trial and generalizability of the results.

    2.2.1 Hypothesis testing research

    The hypothesis testing approach relies on prior specification of the intended pathway of change, for example, through a logic model. Steps along the pathway can be identified, and the relationships between these steps tested. For example, a multi-component trial in Uganda to enhance the quality of care at rural health facilities included a workshop series on patient-centred services. The hypothesized pathway of change was that health workers would attend the workshops and participate in individual reflection, conceptualization, experimentation, group reflection, and planning; that they would feel motivated and able to change their practice; that their interactions with care seekers would become more patient-centred; that care seekers would detect, and be more satisfied with, this style of communication; and that community members would subsequently be more attracted to attending the enhanced health facilities. The study included a process evaluation to document the attendance, participation, and learning, followed by a pathway evaluation to assess communication between health workers and care seekers using audio recordings, care seeker satisfaction with their interactions with health workers, and logs of attendance at health facilities (Chandler et al., 2013a).
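    One simple way a link between two adjacent steps on such a pathway might be examined quantitatively is to test the association between them, for example between workshop attendance and a patient-centredness score derived from audio recordings. The sketch below uses entirely simulated numbers and a basic Pearson correlation; it is not the analysis used in the cited Ugandan trial, and the variable names are invented.

    ```python
    # Hypothetical sketch: testing one link on a hypothesized pathway of change.
    # Simulated data only -- not the analysis from the cited trial.
    from math import sqrt

    # Per-health-worker records: number of workshops attended (0-5) and a
    # patient-centredness communication score (0-100) rated from recordings.
    attended = [0, 1, 2, 2, 3, 4, 4, 5]
    score    = [42, 48, 55, 51, 60, 66, 70, 74]

    def pearson(x, y):
        """Pearson correlation between two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    r = pearson(attended, score)
    print(f"Correlation between attendance and patient-centredness: r = {r:.2f}")
    ```

    A real pathway analysis would of course go further, testing each hypothesized link in turn (and adjusting for clustering and confounding), but the logic is the same: each step in the logic model becomes a measurable variable, and the relationships between adjacent steps become testable hypotheses.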

    2.2.2 Hypothesis-generating research

    A hypothesis-generating approach intends to understand ‘what happened’ from the perspective of the target population, from the time of intervention delivery to outcome evaluation activities. Here, unintended pathways of change can be captured, together with information on factors that affect the delivery, uptake, and use of an intervention in practice, as well as factors that may influence the outcomes of interest in the trial. Unstructured methods are best suited to this task to enable the research team to discover findings that may not have been hypothesized or depicted in the logic model. Project ethnography is one methodological approach to capture what actually happened. Here, an anthropologist, or someone similarly trained, carries out detailed participant observation, for example, working alongside the intervention implementation team for the trial, or even as a member of that team. Analysis of the in-depth data from these observations can provide insights into why and how members of the target community took up, adapted, or ignored different intervention components. Project ethnography can capture interpersonal relationships and power dynamics among the multiple actors involved and provide insights that would have ordinarily been missed. Evans and Lambert (2008) provide an excellent example of the value of project ethnography in illuminating key factors in the successful implementation of an intervention related to HIV. Other methods include in-depth interviews and focus group discussions with implementers, stakeholders, and the target population.

    Further information and examples about using social research to carry out formative studies and evaluations of pathways of change in LMICs can be found at <www.actconsortium.org/qualitativemethodsguidance>.


    This page titled 15.2: Social and behavioural research in evaluation is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Drue H. Barrett, Angus Dawson, Leonard W. Ortmann (Oxford University Press) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.