3.1 Intervention characteristics required
Several criteria should guide the assessment of whether a candidate intervention is suitable for evaluation in a large-scale field trial. The intervention, or package of interventions, should usually be one that could be introduced into a national or regional disease control programme (though this criterion might not apply for ‘explanatory’ or ‘proof of principle’ trials—see Chapter 2, Section 3.3). The dose (when applicable) should be ‘optimal’. Evidence would usually be required from smaller preliminary studies (sometimes called Phase I and II trials, particularly with respect to trials of drugs and vaccines) that the intervention is relatively safe and produces a convincing intermediate response, such as a good antibody response to a vaccine or a change in self-reported sexual behaviour for an intervention to prevent unwanted pregnancies.
When an intervention has to be repeated several times to be effective (for example, micronutrient supplements), there should be evidence that the interval between successive administrations is appropriate. For some interventions, such as the application of a diagnostic or screening test, the concept of dose is meaningless. Relevant evidence would then be required that the test is adequate (for example, previous studies indicating that it had good sensitivity, specificity, and predictive values). For continuous or repeated treatments, similar considerations apply to the duration of treatment. For example, with vitamin supplementation, the duration required will depend on whether the outcome of interest is the reversal of the acute effects of severe deficiency or of the chronic effects of more moderate deficiency. In addition to being safe and giving promise of being efficacious, the intervention must be acceptable to those to whom it is directed, relatively easy to deliver, and, at least eventually, of sufficiently low cost that it could be incorporated into the national disease control strategy if it is shown to be effective in the field trial.
3.2 Number of interventions compared
The choice of the number of different interventions to compare in a field trial is likely to be determined not only by the number of competing alternatives, but also by the implications the choice has for the size of the trial. This, in turn, depends on the frequency with which the outcome of interest occurs. ‘Rare’ outcomes require large trials (as discussed in Chapter 5). For example, in a trial of leprosy vaccines in South India, it was planned that each ‘arm’ (one of the alternative intervention assignments) included in the trial would require around 65 000 trial participants, in order for the trial to have the desired statistical power to detect effects that would be of public health importance (Gupte et al., 1998). Clearly, in this situation, a decision to add another arm would have had enormous cost and logistic consequences.
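The dependence of trial size on outcome frequency can be sketched with the standard normal-approximation formula for comparing two proportions. This is a simplified illustration only (the outcome proportions below are invented, and the full sample size methods are given in Chapter 5):

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.9):
    """Approximate participants per arm needed to detect a difference
    between outcome proportions p1 and p2 (normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance level
    z_beta = z(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Halving a rare outcome (0.2% -> 0.1%) needs tens of thousands of
# participants per arm; halving a common one (40% -> 20%) needs only
# around a hundred per arm.
print(round(n_per_arm(0.002, 0.001)))
print(round(n_per_arm(0.40, 0.20)))
```

Each extra arm multiplies the per-arm requirement, which is why adding an arm to a trial of a rare outcome has such large cost consequences.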
If the outcome is common, however, trials to compare more than two interventions may be undertaken more readily. For example, if seroconversion following vaccination is the outcome of interest, it may be straightforward to compare multiple vaccines or vaccination strategies in a single trial.
It is important to note, however, that many researchers try to build too many comparisons into a trial. There is often a tendency to subdivide groups after the sample size has been calculated, or to plan comparisons within groups, without going through the appropriate computations (as given in Chapter 5).
Comparisons within a single trial can always be made with much greater confidence than those between trials. Thus, if drug A is found to be 50% more effective than a placebo in one trial and drug B is found to be 50% more effective than a placebo in another trial, it will not necessarily be possible to conclude that A and B are equally effective, as the circumstances in which the two trials were conducted will not have been identical. A further trial may be necessary for a direct comparison of A and B. If the need for this trial could have been anticipated in advance, it would have been more efficient to conduct one trial involving both drugs A and B and a placebo. A trial like this may be more complex to organize and would probably have to be substantially larger than either of the ‘2-arm’ trials but would still tend to be smaller than the sum of the two trials.
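The saving from a shared control arm can be sketched with simple arithmetic. As a minimal illustration, assume the same per-arm size in every design (in practice, the combined trial's arms might need to be somewhat larger):

```python
n = 5000  # illustrative number of participants per arm

# Two separate placebo-controlled trials: (A vs placebo) + (B vs placebo)
two_separate_trials = 2 * (2 * n)   # four arms in total

# One 3-arm trial: A, B, and a single shared placebo arm
one_combined_trial = 3 * n

saving = 1 - one_combined_trial / two_separate_trials
print(two_separate_trials, one_combined_trial, saving)  # 20000 15000 0.25
```

The combined trial is larger than either 2-arm trial (3n versus 2n participants) but smaller than the two trials together (3n versus 4n), and it yields a direct randomized comparison of A and B that the separate trials cannot.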
When two interventions are being compared to a control intervention, and in situations where it would potentially be appropriate to apply both interventions to the same individual (or community), an efficient way of comparing both interventions with the control arm in the same trial is to design it as a ‘factorial’ trial. In such trials, some individuals receive the control intervention, others receive one or other of the new interventions, and some receive both interventions (typically 25% in each of four groups) (Montgomery et al., 2003). Although not commonly used, this design is very efficient, unless there is ‘interaction’ between the two interventions, i.e. the effect of both interventions applied at the same time differs from the simple sum of the separate effects of each intervention. Ayles et al. (2008), Awasthi et al. (2013a), and Awasthi et al. (2013b) provide examples of such trials.
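What ‘interaction’ means in a 2 × 2 factorial trial can be sketched numerically. All the risks below are invented for illustration; on this additive (risk-difference) scale, ‘no interaction’ means the effect in the combined arm equals the sum of the two separate effects:

```python
# Hypothetical outcome risks in the four arms of a 2x2 factorial trial
control = 0.20
a_only = 0.15    # intervention A alone: risk reduction of 0.05
b_only = 0.12    # intervention B alone: risk reduction of 0.08
both = 0.07      # observed risk when A and B are given together

# If the effects simply add, the risk in the combined arm should be
# the control risk minus both separate risk reductions.
expected_both = control - (control - a_only) - (control - b_only)
interaction = both - expected_both

print(round(expected_both, 2))  # expected risk under additivity
print(round(interaction, 2))    # effectively zero here: no interaction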
3.3 Combined interventions
For some diseases, there are several possible interventions that may reduce the disease impact on a population. For example, interventions against malaria include destruction of mosquito breeding sites, spraying of residual insecticide, personal protection measures (for example, use of bed-nets and repellents), drug prophylaxis, and drug treatment, and trials might be designed to evaluate each of these interventions individually. A malaria control programme may choose to use more than one intervention at the same time and may wish to evaluate the impact of the ‘package’ of interventions, rather than the individual components of it. In such a case, the trial might compare an integrated strategy incorporating several different interventions applied simultaneously with a control group in which only the routine interventions that were previously available would be applied.
Several trials of this kind have been conducted for the prevention of HIV. For example, a recent trial in Tanzania tested the effectiveness of a package of interventions targeted at young people. Those in the intervention group received HIV prevention education in school; health workers in their local health facilities were given special training and support to try to make their facilities more ‘youth friendly’; new suppliers who were thought to be particularly attractive to young people were trained and supported to sell condoms; and annual ‘youth health weeks’ were organized in their local communities (Ross et al., 2007). The advantage of this kind of trial is that it allows the testing of a package of interventions that might reasonably be expected to have a greater impact than any single component of the package. However, if no effect is seen, then although it may be reasonable to conclude that none of the components of the intervention (at least, as applied in the trial) would have been effective on its own, it is necessary to think carefully about whether the existence of several concurrent interventions might have diluted the effect of one component on its own, or even whether one component might have counteracted the effect of another. Another disadvantage is that, if an effect is demonstrated, it is not possible to be sure of the contribution of each of the various components of the intervention to the overall result.
3.4 Choice of comparison intervention
The best way to evaluate an intervention is to compare its effect with that of another intervention in the same population at the same time. Whenever possible, the allocation of individuals or groups of individuals to the different interventions should be ‘at random’ (see Section 4.1 and Chapter 11). In general, the intervention that is the current ‘best’ should be used as the comparison, but the choice of the ‘control’ intervention is not always straightforward and may involve difficult ethical considerations (see Chapter 6). When no effective intervention is known, the comparison must be with a group in which ‘no intervention’ is made; ideally, a placebo should be administered in order to preserve ‘blinding’ (see Section 4.1). For example, before the development of ivermectin, no safe and effective treatment for onchocerciasis existed. Thus, placebo-controlled trials of the drug were ethically acceptable, at least until the beneficial effects of ivermectin had been established. For most tropical diseases, however, some kinds of intervention already exist and may already be deployed by the health services or by a control programme in the area where a trial is planned. Only in very rare circumstances would it be ethical to withdraw these existing interventions for the purposes of a trial. A more complex issue concerns the extent to which such interventions should be introduced in the context of a trial. It is known that regular prophylaxis with anti-malarial drugs reduces morbidity from malaria, for example, so would it be necessary to give this intervention to all those in the ‘control’ arm of a malaria vaccine trial, even though, in normal circumstances, very few, if any, of them would otherwise have been on prophylaxis? Indeed, would it even be ethical to withhold prophylaxis from those who would be receiving a malaria vaccine whose efficacy was unknown? The optimistic reader will seek a definitive answer to these questions in Chapter 6!
Unfortunately, the search will be in vain, as there are no general definitive solutions to problems such as this; each situation has to be considered on its own merits, taking full account of the circumstances in which a particular investigation is planned. However, in Chapter 6, key principles are outlined that should be used when making such judgements.
In a leprosy vaccine trial in Venezuela, the new leprosy vaccine consisted of a mixture of BCG and killed Mycobacterium (M.) leprae bacilli. When the trial was designed, a choice had to be made between using BCG for the control arm (the efficacy of BCG alone against leprosy in Venezuela was unknown at the time) or using a placebo. BCG was chosen, even though doing this might reduce the chance of showing a protective effect (as BCG alone may have been protective). The inclusion of a third, placebo, arm would have allowed the protective effect of BCG alone to be evaluated, but the incidence of leprosy was too low for a third arm to be feasible within the trial. The major purpose of the trial that was conducted was therefore to evaluate whether a leprosy-specific vaccine (i.e. one which included M. leprae bacilli as well as BCG) was more effective than a non-specific vaccine (in this case, BCG). If the comparison had been with a placebo instead of BCG, any effect due to BCG could not have been distinguished from that due to the addition of M. leprae bacilli to the vaccine. In a larger trial of the same vaccine that was conducted in India, it was possible to include a placebo arm (Gupte et al., 1998).
The use of a placebo may be very important to derive an unbiased measure of effect (see Section 4.1 and Chapter 11, Section 4), but it requires careful ethical justification, and thought must be given to whether particular circumstances might lead to treatment being offered to participants, irrespective of their trial arm. In a placebo-controlled trial of vitamin A supplementation in Ghana, for example, the objective was to determine whether supplementation reduced child mortality. As eye signs of vitamin A deficiency are effectively treated by vitamin A supplements, all children in the trial were monitored for such signs and treated immediately if they were detected, even though this was likely to reduce the power of the trial to detect an impact of vitamin A supplementation on mortality.
A related issue concerns trials which do not test new interventions as such but evaluate new ways of delivering existing interventions. In a cluster randomized trial in Bangladesh, the Integrated Management of Childhood Illness (IMCI) strategy promoted improved ways of delivering interventions such as antibiotics for pneumonia, oral rehydration therapy, and vaccines; these interventions were also available from routine services in comparison areas. It was judged ethical not to change routine practices in the comparison areas, because these reflected what was already in place in the country as a whole (Arifeen et al., 2009).
3.5 Complex interventions
The design of a trial to evaluate the efficacy of a new vaccine or drug is relatively straightforward, in the sense that there are many past examples of such evaluations to draw upon when planning a new trial. However, the evaluation of some interventions, such as the deployment of a new procedure in the health service or public health practice, may involve consideration of several interacting components, including, for example, educational components and behavioural change. Such interventions pose special problems for evaluation, and these kinds of intervention have been called ‘complex’. Many of the extra problems relate to the difficulty of standardizing the design and delivery of the interventions, their sensitivity to features of the local context, the organizational and logistical difficulty of applying experimental methods to service or policy change, and the length and complexity of the causal chains linking intervention with outcome. See Chapter 2, Section 2.3.4 and the associated Box 2.1 for further discussion.