Randomization with learning is better than haphazard implementation without learning.
RCT proponents use variations on this refrain to defend against ethical questions about randomization. While I’m sure there are people who are troubled by the very thought of randomization, I’ve never met one of them. Most practitioners I know have larger questions about the efficacy and politics of RCTs than about the ethics. The ethical issue is a bit of a straw man: I see the defense far more than I see the critique. I’m getting tired of it, so I’ve decided to actually make the critique — but from the other direction.
In its most common form, the defense rests on an implicit assumption that the program (intervention, distribution, whatever) was going to be conducted in a haphazard manner among a larger population (people, households, whatever) than it could possibly serve. So why not spend a little bit of the funds to randomize and study the impact? The ethical tradeoff is a few people unserved, but massive learning that could lead to wiser program decisions in the future. In most cases, that’s a net positive.
But what if we remove the implicit assumption? What if we shift the ethical comparison away from "haphazard selection into the program" and instead recognize that NGOs and service providers often have very specific reasons for working with the individuals they do? Instead of spending funds on a randomization study, they take steps to target their work better: at those who need it most, or who stand the greatest chance of benefiting, or who can be served the most cost effectively, or who have been identified by the community, or who simply have to be served for some broader political or cultural reason.
This is what it means to respond to context in programs, and it might be at odds with randomization.
Despite accusations that they’re evidence-less or careless in this targeting, many NGO staff and donors incorporate the latest academic research into their program designs. Are RCT proponents suggesting that implementers should be less rigorous in their program design and targeting so that the pool remains large enough to have people left out, and then randomize?
If you use a well-targeted program as your ethical comparison — rather than lazily implying haphazard administration as your baseline — then it’s no longer merely a question of learning-or-not. It becomes a question of how-much-impact. And that’s an ethical question worth arguing over.