Promising malaria vaccines in the pipeline?

Efforts to design an effective malaria vaccine have long been stymied by variation among strains of the parasite, but two recent vaccine trials have given researchers cause for hope.

A small trial in Burkina Faso (involving only 45 children) found that vaccinated children 12-24 months of age were significantly less likely to acquire malaria than those who were unvaccinated. The trial was actually designed just to test the safety of the vaccine in preparation for a larger trial, but the encouraging outcome has been a boon to the research team, and an 800-person trial in Mali is now slated. In fairness, these results were published as a correspondence piece in the New England Journal of Medicine rather than as a fully peer-reviewed article. However, the upcoming 800-person trial will surely be subject to heightened scrutiny, and it is those results that will likely determine the future of this vaccine.

Previous attempts to eliminate malaria have been unsuccessful.

A much larger trial, involving 6,000 children 5-12 months of age at sites in 7 African countries, showed vaccine efficacy of ~56%. That figure is low by vaccine standards, but with 247 million cases of malaria and 881,000 deaths every year, it could still constitute a substantial reduction in morbidity and mortality. Some questions have arisen about the duration of the immunity provided by this vaccine; will it be considered a success if yearly boosters are required? It is also thought to be more expensive than the vaccine studied in Burkina Faso.
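
For context, here's a back-of-envelope calculation (a sketch only: it naively applies the efficacy figure to the global totals, assuming universal coverage and durable protection, which no real rollout would achieve):

```python
# Back-of-envelope: what a 56%-efficacious vaccine could mean at global scale.
# Assumes (unrealistically) universal coverage and uniform, durable efficacy;
# real impact depends on rollout, duration of protection, and transmission.
annual_cases = 247_000_000
annual_deaths = 881_000
efficacy = 0.56

print(f"Cases potentially averted:  {annual_cases * efficacy:,.0f}")
print(f"Deaths potentially averted: {annual_deaths * efficacy:,.0f}")
```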

In general, evaluating vaccines is challenging. Figuring out the appropriate dose, dosing schedule and required boosters is often a matter of educated guesses and inferences from data. Further, neither of these inoculations reduced the risk to zero, so insecticide-treated mosquito nets, repellents and prophylaxis remain as important as ever. However, given the numerous failed attempts to design an effective malaria vaccine, even imperfect candidates offer reason for optimism.

 


Most of the news that’s fit to print

Last week, Jason wrote a post questioning the effectiveness of Avahan, a Gates Foundation-funded program in India. What's interesting is how differently media outlets have portrayed the project's results:

The New York Times headline on October 10th reads: "India: Gates Foundation's AIDS Program in India Has Made Uneven Progress Over 8 Years."

The BBC’s headline on October 11th is less nuanced: “Bill Gates India Scheme ‘Spared 100,000 From HIV.'”

Hooray!

Not so fast...

As Jason points out, the article's results are not cut and dried. The program was more successful in the south of India, where HIV is primarily transmitted sexually; there was no observed effect in the north, where the virus is mostly spread by intravenous drug use. It was also more successful in areas that received more resources and had larger populations. But it's these details that make the role of journalists so crucial.

The reason the media reports on scientific and economic publications is that most people don't read the academic journals where the primary research appears. This is a great service: it transmits results to, and informs, the broader public. But it requires a savvy journalist capable of interpreting the results and presenting them in a more accessible fashion. In truth, most of us don't even get to the end of news articles summarizing these research studies. That's why proper framing, starting with the headline, is so essential.


Doubting the success of Avahan

There's a piece doing the rounds on various media wires claiming that the Gates-funded Avahan program prevented a large number of HIV transmissions in India, reducing the prevalence of the virus by as much as 13% in Karnataka, the state where it was most successful. The first thing to note is that that's 13 percent, not 13 percentage points. The prevalence of HIV in Karnataka is only 0.5%, so that's a drop of about 0.065 percentage points. The headline figure sounds a whole lot bigger than it really is.
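
A quick sanity check of that arithmetic:

```python
# Relative vs. absolute risk: a 13% relative drop in a 0.5% prevalence.
prevalence = 0.005          # HIV prevalence in Karnataka (0.5%)
relative_reduction = 0.13   # the reported 13% is relative, not absolute

absolute_drop = prevalence * relative_reduction
print(f"Absolute drop: {absolute_drop * 100:.3f} percentage points")
# -> 0.065 percentage points, i.e. prevalence falls from 0.500% to 0.435%
```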

Beyond the framing of the results, I'm also dubious that they've really shown this effect. It looks like what the study does is regress new HIV cases on Avahan spending at the clinic level. That's not a great way of determining the impact of the program: spending is presumably targeted at the places with the worst epidemics, so the estimates are confounded by that targeting. I would look at the article itself to confirm this but, infuriatingly, it is still gated at TheLancet.com, so all that can be seen is the abstract. This is especially aggravating because 1) my university has a subscription to The Lancet and I can generally see their entire archive, and 2) most of the articles on the site are ungated for everyone. Indeed, I just read an "online first" article from there the other day. If I were a cynic, and I am, I would point out that this gives them a nice grace period to tout their success without being subject to informed criticism.
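
To illustrate the worry, here's a toy simulation (synthetic data; this is my guess at the confounding story, not the Lancet paper's actual specification). If money flows to the clinics with the worst epidemics, a naive regression of cases on spending can come out positive even when the program works:

```python
# Toy simulation: naive OLS of new cases on spending at the clinic level
# conflates targeting with (lack of) impact. Synthetic data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
burden = rng.gamma(2.0, 1.0, n)                    # unobserved local HIV burden
spending = 1.0 * burden + rng.normal(0, 0.3, n)    # spending targets burden
true_effect = -0.5                                 # spending actually prevents cases
cases = 3.0 * burden + true_effect * spending + rng.normal(0, 0.5, n)

naive = sm.OLS(cases, sm.add_constant(spending)).fit()
print(f"true effect: {true_effect}, naive OLS estimate: {naive.params[1]:.2f}")
# The naive coefficient comes out strongly positive: targeting swamps the effect.
```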

Even if these results are real, the cost per averted infection is $2,500, which seems really high to me. Figures from Africa, where purchasing power is broadly similar, are around $300 per infection prevented by treating other STIs or by using nevirapine to prevent mother-to-child transmission.


Unsupported claim of the day: “economics is not an experimental science”

Adam is optimistic about the spread of RCTs into economics, and (like most people in development) I share his optimism. But there is strong resistance to the rise of experimental and quasi-experimental methods in economics. In development we mainly argue over RCTs, but in economics as a whole the focus is on "natural" experiments, where circumstances generate effectively random variation in a causal variable. One of my favorite examples is Doug Almond's paper on the effect of maternal fasting during early pregnancy on a child's health and economic outcomes, which draws in part on data from right here in Michigan. His identification exploits the fact that fasting during Ramadan is not required for pregnant women, but it's common for women not to realize they are pregnant early on and to fast anyway.
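
A sketch of the reduced-form logic (synthetic data; the variable names and effect sizes here are invented for illustration, not Almond's):

```python
# Natural-experiment logic: compare adult outcomes of people who were in utero
# during Ramadan with those who were not, treating conception timing as
# good-as-random. All numbers below are made up.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000
in_utero_ramadan = rng.binomial(1, 0.08, n)   # roughly one month out of twelve
# Hypothetical adult disability indicator, slightly elevated for the exposed.
disability = rng.binomial(1, 0.05 + 0.01 * in_utero_ramadan, n)

model = sm.OLS(disability, sm.add_constant(in_utero_ramadan)).fit()
print(f"exposure effect: {model.params[1]:.4f} (s.e. {model.bse[1]:.4f})")
# Because exposure is (assumed) random, this simple difference in means has a
# causal interpretation - no controls needed.
```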

Recent economics Nobel-winner Chris Sims is not a fan of natural experiments in economics; indeed, he won the prize in part for his work on vector autoregression models, which typify the model-everything, nothing-is-exogenous school of thought. This short comment is a pretty accessible overview of what he doesn’t like about the quasi-experimental approach, and focuses mostly on papers that analyze the deterrent effect of the death penalty. I highly recommend it for applied stat-heads like myself.
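
For the uninitiated, a VAR treats every variable as potentially endogenous, modeling each one as a function of lags of all of them. A minimal synthetic example (illustrative only):

```python
# Minimal vector autoregression: two series, each depending on lags of both,
# with nothing assumed exogenous. Synthetic data-generating process.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T = 300
y = np.zeros((T, 2))
for t in range(1, T):
    y[t, 0] = 0.5 * y[t-1, 0] + 0.2 * y[t-1, 1] + rng.normal(0, 1)
    y[t, 1] = 0.1 * y[t-1, 0] + 0.6 * y[t-1, 1] + rng.normal(0, 1)

results = VAR(y).fit(maxlags=1)
print(results.coefs[0])   # estimated lag-1 coefficient matrix
```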

However, I was pretty let down by Sims' complete failure to back up his central claim: "Because we are not an experimental science, we face difficult problems of inference." We're not? Says who, and why? Claiming that economics is not an experimental science seems to imply that we can learn nothing from experiments, which I think is obviously false. Perhaps he intends the weaker claim that there are some important things economists cannot learn from experiments. But by that standard medicine, which has never managed to run an RCT on the effect of smoking on health, is not an experimental science either.

Hat tip: Dan Hirschman


One more reason not to smoke

MethodLogical has long been a proponent of the anti-smoking movement (as evidenced by this piece I wrote a while back), a gutsy stance, I know. Now the British Medical Journal is outlining yet another risk of smoking.

According to a mathematical model published in the BMJ, failure to curb smoking rates could lead to a substantial rise in tuberculosis cases, as smoking appears to double a person’s chance of developing active TB and dying from it.

This is your lungs...

...this is your lungs on cigarettes.
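
The relative-risk arithmetic behind the headline is simple (a toy calculation with made-up prevalence and baseline-risk numbers, not the BMJ model, which projects trends over decades):

```python
# If smoking doubles the risk of active TB, excess cases scale with smoking
# prevalence. Every number below is hypothetical, for illustration only.
baseline_risk = 0.001       # assumed annual risk of active TB, nonsmokers
relative_risk = 2.0         # smokers' risk relative to nonsmokers (per BMJ)
smoking_prev = 0.30         # assumed smoking prevalence
population = 10_000_000

cases_nonsmokers = population * (1 - smoking_prev) * baseline_risk
cases_smokers = population * smoking_prev * baseline_risk * relative_risk
excess = population * smoking_prev * baseline_risk * (relative_risk - 1)
print(f"Annual cases: {cases_nonsmokers + cases_smokers:,.0f} "
      f"({excess:,.0f} attributable to smoking)")
```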

Because of heavy regulation of the tobacco industry in the United States, tobacco companies have focused more of their energy on the developing world. Unlike the U.S., many of these countries have high rates of TB. Many also have high prevalences of HIV (which increases the risk of developing TB) as well as significant air pollution (researchers have speculated that both indoor and outdoor air pollution increase the risk of TB), creating a perfect storm of assaults on people's lungs.

Public smoking bans are increasing in number worldwide, but now comes the hard part: enforcement.


Hormonal contraceptives and HIV transmission – do we *want* to separate biology from behavior?

A new piece in the Lancet by Heffron et al. finds that hormonal contraception (specifically the injection-based method most common in sub-Saharan Africa) roughly doubles the risk of HIV transmission between an infected partner and an uninfected one. My first instinct upon hearing the result was to question whether they were really measuring the biological effect of hormonal contraceptives, as opposed to behavior change. After all, why do people take up these methods in the first place? So they can stop using condoms, of course, which naturally increases their exposure to HIV. The answer, however, is that there almost surely is some biological effect going on: in their last section the authors report that women on hormonal birth control had higher levels of HIV RNA in their cervical canals. To me this is the lede, and the article totally buries it, leaving it out of the abstract.

On top of that, the study finds that controlling for condom use has little effect on the estimated risk from hormonal contraception. But the way they account for condom use is pretty problematic. While they report that they tried separate controls for the number of condom-protected and unprotected sex acts, those variables aren't in any of their tables, and their final regressions include only a dummy for "any unprotected sex". We know from previous studies of HIV transmission in Africa (e.g. Wawer et al. 2005) that inconsistent condom use has no predictive power for transmission – and inconsistent use (meaning that people sometimes do and sometimes don't use condoms with the same partner) is evidently pretty common in African countries: so common that Wawer et al. found that none of the couples in their study used condoms all the time. This means it's credible that the effect on condom use was "inframarginal", affecting how often condoms were used rather than whether they were used at all. Controlling for any use might not adequately adjust for the change in condom use that comes with uptake of hormonal contraceptives.

What can we do about this? One cheap way to get at the biological effect of hormonal contraceptives would be to run the regressions from the paper with a control for the level of endocervical HIV RNA. If that's a good proxy for the biological increase in HIV exposure, then in those regressions the effect of hormonal treatments should drop out unless there are also behavior changes going on. My take is that the authors should have stressed that the biological risks on their own aren't what matters for policy, and focused on regressions that don't control for sexual behavior. Imagine a policy where you promote hormonal birth control: in terms of HIV transmission risks, you'd see not just the biological impact that Heffron et al. are trying to estimate, but the sum of that and behavior changes. The paper basically controls for an outcome of hormonal birth control use (it leads to less condom use), which is almost never a good idea.
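
Here's a sketch of that check (synthetic data, with a simple logit standing in for the paper's actual survival analysis; for illustration, the data-generating process assumes the contraceptive's effect runs entirely through viral RNA):

```python
# If endocervical HIV RNA proxies the biological channel, the hormonal-
# contraception coefficient should shrink toward zero once RNA is included,
# unless behavior is also changing. Synthetic data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 3_000
hormonal = rng.binomial(1, 0.4, n)
rna = 1.0 + 0.8 * hormonal + rng.normal(0, 1, n)   # effect runs through RNA
p = 1 / (1 + np.exp(-(-3.0 + 0.9 * rna)))
seroconvert = rng.binomial(1, p)

without = sm.Logit(seroconvert, sm.add_constant(hormonal)).fit(disp=0)
with_rna = sm.Logit(
    seroconvert, sm.add_constant(np.column_stack([hormonal, rna]))
).fit(disp=0)
print(f"hormonal coef, no RNA control:   {without.params[1]:.2f}")
print(f"hormonal coef, with RNA control: {with_rna.params[1]:.2f}")  # ~0
```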

Hat tip: haba na haba

http://www.sciencedirect.com/science/article/pii/S147330991170247X


Don’t *artificially* restrict your experiment samples!

My colleague Adam rightfully approves of the spread of RCTs as a means of evaluating interventions, but raises a couple of important concerns about drawing inferences with them. His overall point about the generalizability of results is well taken: especially in economics, many people are dubious that we can learn anything broadly useful from experiments, even when the internal inferences of those experiments are valid. However, Adam's specific example of a source of these issues (artificially constraining the study population for randomized controlled trials) is troubling because it's completely unnecessary. If you're running a randomized experiment you don't need ex ante restrictions on your sample. They're not necessary for causal inference, and as Adam points out they limit the extent to which you can generalize your results. In fact, not only is such a restriction unnecessary, eliminating the need for restrictions is the entire point of experiments!

Think about doing a non-experimental evaluation of a blood pressure medication. That means you look at the people who took the drug and the ones who didn't and see who had better outcomes. The first thing you might worry about is that people with initially higher blood pressure take the drug, so even if it works your comparison might appear to show no effect – or even a negative one. The general notion here is that your treatment and control groups are unbalanced on observed factors. It's easy to think of lots of other factors that could affect the trend in blood pressure, like age and diet. If you don't run an experiment, you need to control for those factors in your analysis. We often do that through a linear regression, but another approach is matching: for each drug user, find a non-user who looks the same as a basis for comparison. The problem is that there are lots of factors we can't see because they aren't ever recorded in the data (genetics, environmental factors, behaviors, etc.). If those factors are correlated with the treatment, then our estimates will be biased.
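
A bare-bones version of the matching idea (synthetic data; real matching estimators also handle ties, calipers and standard errors):

```python
# For each treated patient, find the untreated patient closest on observed
# covariates (age, baseline BP) and compare outcomes. Synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000
age = rng.uniform(30, 70, n)
baseline_bp = rng.normal(140, 15, n)
# Sicker patients are likelier to take the drug (confounding by indication).
treated = rng.random(n) < 1 / (1 + np.exp(-(baseline_bp - 140) / 10))
drug_effect = -8.0
followup_bp = baseline_bp - 2.0 + drug_effect * treated + rng.normal(0, 5, n)

X = np.column_stack([(age - age.mean()) / age.std(),
                     (baseline_bp - baseline_bp.mean()) / baseline_bp.std()])
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
diffs = []
for i in t_idx:
    j = c_idx[np.argmin(((X[c_idx] - X[i]) ** 2).sum(axis=1))]  # nearest control
    diffs.append(followup_bp[i] - followup_bp[j])
print(f"matched estimate: {np.mean(diffs):.1f} (truth: {drug_effect})")
```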

To get around this problem, we just randomize! Assigning the treatment randomly ensures that with a big enough sample, the treatment and control groups will be balanced on all variables – unobserved ones but also observed ones. That means you never have to worry about heterogeneity in your experiment population – leave everyone in and let the random number generator sort them out. In fact, more heterogeneity is obviously better: in many cases the impact of the treatment will vary based on observed factors like race, sex or age, and if you include a diverse group you can look at those variable effects directly.
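
A quick simulation makes the point: with coin-flip assignment, the groups come out balanced even on a covariate that is never recorded anywhere.

```python
# Random assignment makes treatment independent of everything, including
# covariates we never measure, like an 'unobserved' genetic risk score.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
genetic_risk = rng.normal(0, 1, n)      # never recorded in any dataset
treated = rng.random(n) < 0.5           # coin-flip assignment

print(f"mean risk, treated:   {genetic_risk[treated].mean():+.3f}")
print(f"mean risk, untreated: {genetic_risk[~treated].mean():+.3f}")
# Both hover near zero: the groups are balanced on a variable we never saw,
# so a simple difference in outcomes estimates the effect without bias.
```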

That’s not to say that RCTs never have constrained populations. A lot of times study populations are restricted because it’s unavoidable in the experimental setup. For example, it’s hard to include Kenyans if I’m running a study in Malawi. Sometimes, on the other hand, a project will target a specific population because that’s the intervention they want to study – think about experiments on schoolchildren, which test the effectiveness of rolling out a policy to all schools. Ideally you want to run your experiment on a representative sample of the whole population that’s relevant for the intervention/policy/medication/etc. that you’re testing. How do we ensure that? You got it – we randomize. Specifically we choose a random sample of the entire population. In large enough samples that ensures that our study population looks like the overall population.
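
The same trick handles representativeness (a toy illustration):

```python
# Draw the study sample at random from the whole relevant population, and in
# expectation it mirrors that population's composition.
import numpy as np

rng = np.random.default_rng(6)
population_age = rng.gamma(shape=4, scale=10, size=1_000_000)  # skewed ages
sample = rng.choice(population_age, size=2_000, replace=False)

print(f"population mean age: {population_age.mean():.1f}")
print(f"sample mean age:     {sample.mean():.1f}")   # close, by design
```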

I don't think any economists are imposing artificial constraints on their experiment populations. If they are, then whoever taught their program evaluation course should be tarred and feathered. But the fact that Adam is worried about it implies that people in medicine may be committing this statistical sin. That's actually pretty understandable: in general, the more a discipline is able to run actual experiments, the less its practitioners need to know about statistics. Usually that limited knowledge makes little difference in a field like medicine, but this is a pretty important exception. Spread the word.
