15.1: Purposes of social and behavioural research in intervention trials
Social and behavioural research is often conducted during the design and evaluation of health interventions. In the design phase, ‘formative research’ is conducted in the community in which the proposed trial is to be conducted, to explore the context in which the intervention will be delivered and to examine ways in which the intervention might be optimized. Examples are given later in this chapter. The outcome of such research should help define the content and delivery of the intervention package and ensure that the study protocol takes proper account of local conditions. In the evaluation phase, either during or after a trial, social and behavioural research is often used as part of a ‘process evaluation’ to understand aspects of the implementation of the intervention, such as intervention coverage and how the intervention was actually delivered, compared with how it was supposed to be delivered, and, in the case of behavioural intervention trials, to understand ‘pathways of change’ (i.e. which components of the intervention led, or did not lead, to behaviour change).
The methods applied derive from a variety of disciplines, including anthropology, sociology, and psychology. They include both qualitative and quantitative approaches. In Section 3, we outline qualitative methods that are commonly incorporated in the design and conduct of intervention trials. Rather than detailing all possible methods, examples are given of how different methods can be used in the context of such trials.
1.1 Formative research to define the intervention package
All of the component parts of an intervention (the ‘intervention package’) to be tested in the field trial and the method of delivering the intervention should be clearly defined. To maximize the potential for the intervention package to be effective, it should draw on local priorities and contexts, as well as best practice from elsewhere. With the potential exception of some ‘proof of concept’ trials, the intervention must have the realistic prospect of being affordable, given the resources available at the household level and to the local health system, either immediately after the trial or in the foreseeable future. It must be acceptable to the community, and it must be feasible to implement it through those charged with delivering the intervention in the trial (e.g. local health workers). If the intervention tested in the trial is poorly designed or does not have the potential to meet these criteria, this greatly reduces the chance that the intervention will be adopted into routine practice at the conclusion of the trial, even if it is found to be effective. To optimize evaluation methods, particularly in the context of ‘complex’ interventions, the intended mechanisms of effect should be clearly articulated in advance, for example, through a logic model (see Section 1.1.2).
In the context of this chapter, we use the term ‘intervention package’, rather than simply ‘intervention’, to emphasize that, even if the core intervention under evaluation is a single item, such as a vaccine or a drug, it will always be necessary to deliver it as part of an intervention package, which will have a number of different components that have to fit together for there to be a significant effect on the health outcomes of interest. An intervention package can be regarded as composed of the core intervention and complementary activities to promote uptake and use of the core intervention. Varying amounts and types of formative research are required, depending upon the nature of the core intervention and how fully it has already been defined. As discussed in Chapter 2, the core intervention under trial varies widely: from products or technologies, such as vaccines, drugs, and food supplements; to behaviours, such as hand-washing, exclusive breastfeeding, or care seeking from a health facility in response to danger signs; to different methods of delivering or managing health services, such as delivery of services through visits by community health workers to the home, rather than through visits by users of the services to a clinic or dispensary, or different methods of supervision of health workers. Whatever the intervention type, it is likely that the intervention package will require one or more components beyond the core idea or technology.
Formative research to define intervention packages typically involves fieldwork and review of the literature before an intensive period of design and pilot testing of the intervention package. An example of the role of formative research in intervention design is shown in Figure 15.1, which was developed in the context of trials of the delivery of drug treatment interventions against malaria (see <http://www.actconsortium.org>).
Figure 15.1 Example of the role of formative research in intervention design.
Reproduced courtesy of Claire Chandler. This image is distributed under the terms of the Creative Commons Attribution Non Commercial 4.0 International licence (CC-BY-NC), a copy of which is available at http://creativecommons.org/licenses/by-nc/4.0/.
1.1.1 Fieldwork
Formative research fieldwork aims to understand the ‘problem’ that will be targeted by the trial, to gain an understanding of the ‘audience’ for the intervention and to understand the context in which the intervention will take place. For example, the overdiagnosis of malaria by health workers has been identified as a major problem across malaria endemic countries. Prior to a trial to improve the diagnosis of fevers in northeast Tanzania, anthropological research at hospitals had described how clinicians operated through shared ‘mindlines’, rather than following clinical guidelines, shaped by perceived patient expectations and norms established with peers and historically in the wider medical community (Chandler et al., 2008). An intervention package designed to improve the diagnosis of malaria would require changing these norms in a manner that would not undermine clinical autonomy. The audience for the trial intervention was defined to be both clinicians and patients at dispensary level facilities. Further qualitative work was carried out with these groups to learn what existing ideas and situations supported the use of diagnostic tests and to discuss how these could be built upon to develop intervention activities and messages that would encourage a change in practice.
Table 15.1 outlines four areas for exploration in formative research: understanding the current policy and operational context of behaviour; understanding current practice in the local context; understanding current perceptions; and understanding whether the population of interest perceives a need for change, and their ideas for how this might be achieved. Each area may be explored, using different methods and with different participants.
The identification of existing practices, ideas, and scenarios upon which to build intervention design is important in formative research. Identifying only barriers to a particular health practice can be limiting in designing an effective intervention. An approach to identify existing beneficial practices is ‘positive deviance inquiry’, which uses multiple methods. For example, a research group may want to reduce child malnutrition by promoting beneficial child feeding behaviours that exist in the community but are only practised by a minority of households. Here, the study team might identify two sets of households with similar levels of material wealth and other characteristics, but with different levels of child nutrition. A small descriptive study, including structured observation of child feeding practices and interviews of various household members, can be carried out to try to identify potentially beneficial behaviours, which might subsequently be confirmed in a larger study with a representative sample. A detailed manual of how to apply this methodology is available (<www.positivedeviance.org>) (Sternin et al., 1998).
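The screening step of a positive deviance inquiry, in which households with unusually good child nutrition are identified among households of similar wealth, can be sketched computationally. The following Python sketch is purely illustrative: the household records, field names (`wealth_quintile`, `waz` for weight-for-age z-score), and the one-z-score margin are all invented assumptions, not part of the published methodology.

```python
# Illustrative sketch: flagging candidate 'positive deviant' households
# from survey data. Field names, records, and thresholds are invented.
from statistics import median

def positive_deviants(households, z_margin=1.0):
    """Return households whose child weight-for-age z-score exceeds the
    median of their wealth quintile by at least z_margin."""
    # Group households by wealth quintile, so that comparisons are made
    # between households of similar material wealth.
    by_quintile = {}
    for h in households:
        by_quintile.setdefault(h["wealth_quintile"], []).append(h)

    deviants = []
    for group in by_quintile.values():
        med = median(h["waz"] for h in group)
        deviants.extend(h for h in group if h["waz"] >= med + z_margin)
    return deviants

survey = [
    {"id": 1, "wealth_quintile": 1, "waz": -2.1},
    {"id": 2, "wealth_quintile": 1, "waz": -0.4},
    {"id": 3, "wealth_quintile": 1, "waz": -1.9},
    {"id": 4, "wealth_quintile": 2, "waz": -1.0},
    {"id": 5, "wealth_quintile": 2, "waz": -1.2},
]
candidates = positive_deviants(survey)
print([h["id"] for h in candidates])  # household 2 stands out in quintile 1
```

In practice, such flagged households would then be the subjects of the structured observations and interviews described above, rather than being taken at face value from survey data alone.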
An important characteristic of the core intervention to be explored at this stage may be its cost. In efficacy trials, the product or intervention is typically provided to research participants free of charge. In trials designed to mimic what might happen when an intervention is introduced into public health practice, a product may be sold to participants at the cost users will pay when the product eventually is available through routine distribution channels. One major focus of formative research at this stage may therefore be on evaluating not only the acceptability and feasibility of use, but also the willingness to pay for the intervention (see also Chapter 19).
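A simple way to summarize willingness-to-pay data of the kind mentioned above is to tabulate, for each candidate price, the proportion of respondents whose stated maximum price is at least that high. The sketch below is illustrative only; the prices and responses are invented, and real willingness-to-pay elicitation (see Chapter 19) involves considerably more careful survey design.

```python
# Illustrative sketch: summarizing stated willingness-to-pay responses
# into the share of respondents who would buy at each candidate price.
# All prices and responses here are invented for the example.
def uptake_by_price(stated_max_prices, candidate_prices):
    """For each candidate price, compute the proportion of respondents
    whose stated maximum willingness to pay is at least that price."""
    n = len(stated_max_prices)
    return {
        p: sum(wtp >= p for wtp in stated_max_prices) / n
        for p in candidate_prices
    }

wtp = [0.5, 1.0, 1.0, 2.0, 0.0, 1.5, 2.5, 1.0]  # stated maxima (invented units)
print(uptake_by_price(wtp, candidate_prices=[0.5, 1.0, 2.0]))
```

Such a tabulation gives the trial team a rough demand curve, which can inform the price at which the product is offered during the trial.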
1.1.2 Literature review
In addition to fieldwork, the formative stage of intervention design requires review of previous work. Systematic reviews of evidence of other interventions that have been more and less successful in achieving similar objectives are recommended as a first step (see Chapter 3 and Medical Research Council, 2008). In addition, identification and specification of the theory or theories used to guide the design of the intervention and its delivery are recommended, in order to strengthen the effectiveness of the intervention, as well as to enable evaluations to contribute to wider bodies of theory about ‘what works’. This is especially relevant for behavioural interventions. Care must be taken in identifying an appropriate behaviour change theory to ensure that the theory reflects well the situation found locally in formative fieldwork research. Certain cognitive-based models, such as the health belief model, that centre on replacing ‘beliefs’ with biomedical ‘knowledge’ and replacing ‘myths’ with ‘truth’ have been criticized for taking too little account of the local issues around health, care seeking, and care giving, and not relating these to their social, economic, and political contexts. Even in the absence of the explicit use of theory to guide intervention design, social science approaches are useful in enabling depiction of the implicit pathway of change (how change will be brought about) and the hypotheses embedded within this. Such a depiction is often termed a ‘logic model’, ‘theory of change’, or ‘impact model’ and can help to tighten up an intervention design, as well as to identify where evaluation activities are required, in order to test hypothesized pathways of change. For a discussion of these aspects, see National Institute for Health and Clinical Excellence (2007).
Figure 15.2 shows a framework for a logic model.
Analysis of the intervention details and the context in which it is implemented is important for the proper interpretation of trial outcomes, so that the applicability of the trial results in other situations can be assessed.
Table 15.1 Areas for exploration in formative research for the design of an intervention package to improve the diagnosis and treatment of fevers
1.1.3 Developing and pilot testing intervention delivery
Once the core intervention is defined, the details of the intervention’s delivery require development to promote understanding, acceptance, and utilization of the core intervention, or to improve physical, financial, and cultural access to the core intervention. Details to develop and pilot-test for the effective delivery of the intervention include activities, materials, and ‘purveyors’ (explained in the following paragraphs).
Activities to accompany a core intervention might include the design of workshops, media spots, or engagements with opinion leaders. When the intervention to be introduced is new to the potential recipients, a small-scale pilot introduction may be carried out. This can help to refine the activities and identify needs for materials and the optimal characteristics of the purveyor(s) who will deliver different components of the intervention package. An example is a pilot feasibility study carried out in rural Zimbabwe to design an intervention to target adolescent sexual health (Power et al., 2004). Teachers were trained in four schools to deliver weekly lessons on reproductive health. Feedback and responses to the materials and delivery were gained through questionnaires, in-depth interviews, focus group discussions, and participant observation with pupils, parents, teachers, and education officers. The research found that the intervention as originally conceived was unlikely to be deliverable because the classroom was not the appropriate context for delivering the intervention, the school infrastructure was not suitable to deliver the intervention materials, and existing materials were inadequate for the intervention. As a result, substantial changes were made to the design of the intervention prior to formal testing in a large community randomized trial.
Figure 15.2 Framework for a logic model of an intervention’s pathway of change.
Materials for the intervention delivery might include printed instructions and/or a film of how to use the product or how to perform the behaviour; vouchers to be provided to the poor who otherwise could not afford the product; materials for the channels through which the product will be sold or distributed such as pharmacies, shops, and health facilities; and print or audiovisual materials for communication activities such as radio broadcasts, protocols for community meetings, and posters. The development of these materials should draw on best practice in communication science, together with either information already gained from local formative research or participatory research at the design stage. Participatory, or ‘action’ research, can lead to the development of intervention materials that are more effective and acceptable to end-users. An example is the development of a treatment guideline for the effective case management of malaria in children at home by caregivers (Ajayi et al., 2009). Several forms of modified focus group discussion sessions were undertaken, with ideas depicted in illustrations by a graphic artist. The emerging guideline, in a cartoon format with a local language script, was subject to multiple rounds of pre-testing by end-users, during which edits were made to the pictures and text to increase comprehension and interpretation of the stories. Pre-testing of materials with community members is essential before finalizing them. Images, statements, and even colours can often portray different meanings to different people. To avoid misinterpretation, community members should be shown drafts of materials and systematically asked for their comprehension and interpretation of each element of a poster, video, or audio broadcast. An excellent manual for pre-testing that includes principles for clear communication has been produced for the WHO (Haaland, 2001).
Purveyors are the people who will deliver the intervention. Attention must be given to their selection, training, and supervision. These may include facility-based health workers, community health workers, traditional healers, private and informal sector providers, traditional birth attendants, women’s groups, and community or religious leaders. Small-scale studies can be conducted to investigate which type of person might be the most appropriate as the purveyors of the intervention. These might be based on either discussions of hypothetical options with the potential recipients of the intervention or pilot projects to implement one or more alternative options. Examples of projects with a comprehensive package of complementary activities and people to implement these activities in the field were a programme for the social marketing of bed-nets in Tanzania (Schellenberg et al., 2001) and an education and counselling programme on exclusive breastfeeding for HIV-infected mothers in a trial in Zimbabwe (Iliff et al., 2005).
1.2 Formative research to adapt the study protocol
1.2.1 Study design and procedures
Chapter 4 describes decisions to be made regarding study design such as selection of interventions, allocation of interventions and unit of randomization, and method of implementation. Often, such decisions are made far from the study site, and they will always benefit from detailed information about the study site. Formative research conducted to inform the study design may examine different topics, including:
Selection of study site: Typically, there are a number of possible locations at which a trial may be conducted. Requirements of the trial may include enrolment of people with specific characteristics, and long-term follow-up of those enrolled in the trial. The decision on the choice of site may be informed by analysis of existing census data or other datasets, by interviews on community characteristics, such as patterns of migration and economic activities, and by observation of health programmes already being implemented by local organizations (see Chapter 9).
Randomization: Qualitative data can inform decisions about the unit of randomization (individual, village, cluster of villages, sub-district, etc.) and the boundaries of the units for group randomization. An understanding of the social structure and the social context of the target behaviour is useful for identifying the importance of administrative or social groups. For example, if a target behaviour is known to be habitual to a group and the intervention relies on individuals making changes as part of a group, the unit of randomization should be that group, rather than individuals within the group. Another consideration may be defining boundaries to minimize the potential for contamination, due to interactions between those assigned to different trial arms. Formative research can reveal common interactions and social and logistic boundaries.
Promotion of trial participation: Prior to the start of a trial, its usefulness and the priority given to the research question should be established from the perspective of the hosting communities. If the question or methods are not aligned with local interests, changes to the intervention or evaluation may need to be made (see Chapter 9). Once a trial is launched, it is desirable that a high proportion of those eligible to participate in the trial agree to do so when invited. A high refusal rate may jeopardize the generalizability of the trial findings or may even threaten its viability. Thus, it is important to implement activities to promote understanding and acceptance of the research activities and create the conditions under which truly informed consent is possible. These might include community meetings to discuss why certain communities or persons will receive the intervention, while others will not, and print or counselling materials to explain the risks and benefits of participation. This component may also elicit community input to improve the trial protocol itself, as occurred in the design phase of a clinical trial on the safety and efficacy of antiretroviral and nutrition interventions to reduce postnatal transmission of HIV conducted in Malawi (van der Horst et al., 2009). Qualitative studies were conducted to assess the acceptability of three alternative efficacy study designs and the feasibility of participant recruitment for such study designs.
Participatory methods can be used to engage communities in the design and implementation of the trial interventions. For example, a feasibility study for a microbicide trial in Mwanza, Tanzania formed a city-level community advisory board (CAB), with representatives from among the potential trial participants elected from each ward. Through workshops and meetings, both with the CAB and wider groups of potential trial participants, many modifications were made to both the trial design and the study procedures. CAB members expressed concerns about the sale of blood specimens for witchcraft purposes, whether speculae for pelvic examinations would be reused and therefore be unclean, insufficient transport allowances for attending the trial assessments, and delayed reporting of laboratory test results. In response, the study team invited CAB members to observe the preparation and storage of blood specimens and the use of the autoclave in the laboratory, raised the amount for reimbursements, introduced HIV rapid testing, and accelerated the feedback of laboratory results (Shagi et al., 2008; Vallely et al., 2007).
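The group-randomization logic discussed above, in which whole villages or clusters, rather than individuals, are allocated to trial arms, can be sketched in a few lines of code. The sketch below is a minimal illustration, not a recommended trial procedure: real cluster randomized trials often use restricted randomization or stratification (see Chapter 4), and the village names and seed here are invented.

```python
# Illustrative sketch of group (cluster) randomization: whole villages,
# not individuals, are the unit of allocation. Names are invented.
import random

def randomize_clusters(clusters, arms=("intervention", "control"), seed=2024):
    """Randomly allocate whole clusters to trial arms in equal numbers."""
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    shuffled = list(clusters)
    rng.shuffle(shuffled)
    # Deal the shuffled clusters out across the arms in turn,
    # so arm sizes differ by at most one cluster.
    allocation = {arm: [] for arm in arms}
    for i, cluster in enumerate(shuffled):
        allocation[arms[i % len(arms)]].append(cluster)
    return allocation

villages = ["Village A", "Village B", "Village C",
            "Village D", "Village E", "Village F"]
allocation = randomize_clusters(villages)
for arm, members in allocation.items():
    print(arm, sorted(members))
```

Fixing the random seed, as here, allows the allocation to be audited and reproduced, which is useful when the randomization must be demonstrated publicly to host communities.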
1.2.2 Consent procedures and measurement tools
Obtaining truly informed consent for participation in an intervention trial is very challenging. The researcher’s perception of an intervention and its possible beneficial and adverse effects may be very different from those of potential trial participants. Social science investigations conducted in the trial community, prior to designing the informed consent procedures, may give the investigator a much better understanding of how the community is likely to view the proposed trial and will inform the ways in which the trial should be presented to potential participants to facilitate their understanding of both the potential risks and potential benefits and of why the trial is being conducted. Issues around informed consent are discussed further in Chapter 6, Section 2.4.
Social and behavioural research methods can also help inform the design of quantitative outcome measures for the trial. Tasks include formulation of questions and definition of appropriate forms of measurement. Some trials make the mistake of measuring outcomes through open questions, thinking that closed questions introduce bias. In addition to the fact that post-coding of open questions is very time-consuming (see Chapter 20), problems caused by incomplete responses to open questions may outweigh the limitations of closed questions. Also, open questions have lower test-retest reliability, leading to difficulties when pre-post comparisons are made. Nichter et al. (2002) outline a systematic process for informing the design of survey instruments through formative research.
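Test-retest reliability of a closed question, mentioned above, is commonly quantified with an agreement statistic such as Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below is illustrative: the responses are invented, and in practice one would use an established statistical package rather than hand-rolled code.

```python
# Illustrative sketch: test-retest reliability of a closed question,
# measured with Cohen's kappa. The responses below are invented.
def cohens_kappa(first, second):
    """Cohen's kappa for two sets of categorical responses given by the
    same respondents, in the same order, on two occasions."""
    assert len(first) == len(second), "responses must be paired"
    n = len(first)
    categories = set(first) | set(second)
    # Observed agreement: proportion of respondents answering identically.
    observed = sum(a == b for a, b in zip(first, second)) / n
    # Chance agreement: sum over categories of the product of the
    # marginal proportions on each occasion.
    expected = sum(
        (first.count(c) / n) * (second.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

responses_t1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
responses_t2 = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no"]
print(round(cohens_kappa(responses_t1, responses_t2), 2))  # 0.75
```

A kappa well below 1 on a pilot sample is a warning that the question wording is unstable, which is exactly the kind of problem that formative pre-testing of instruments, as described by Nichter et al. (2002), is designed to catch before the main trial.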