Scientific Method; image from: scifiles.larc.nasa.gov
Last week, I read some exciting news about H.I.V. treatment and transmission. A New York Times article reported that a large clinical trial found that “[p]eople infected with the virus that causes AIDS are far less likely to infect their sexual partners if they are put on treatment immediately instead of waiting until their immune systems begin to deteriorate…” The study found that “[p]atients with H.I.V. were 96 percent less likely to pass on the infection if they were taking antiretroviral drugs…” These findings are overwhelmingly positive, and the implications for public health are huge.
The study details are fascinating, particularly the results. For example:
The $73 million trial, known as HPTN 052, involved 1,763 couples in 13 cities on four continents. One member of each couple was infected with H.I.V.; the other was not. In half the couples, chosen at random, the infected partner was put on antiretroviral drugs as soon as he or she tested positive for the virus.
In the other half, the infected person started treatment only when his or her CD4 count — a measure of the immune system’s strength — dropped below 250 per cubic millimeter.
In 28 of the couples, the uninfected person became infected with the partner’s strain of the virus. Twenty-seven of those 28 infections took place in couples in which the partner who was infected first was not yet getting treatment.
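A quick back-of-envelope check shows how those counts line up with the reported 96 percent figure. The exact arm sizes are an assumption here (the article only says the 1,763 couples were split at random), and the study itself would have used a more sophisticated time-to-event analysis, but the crude incidence ratio comes out the same:

```python
# Back-of-envelope check of the "96 percent less likely" figure.
# Assumption: the 1,763 couples were split roughly evenly between the two arms.
couples_per_arm = 1763 / 2              # ~881.5 couples per arm

rate_early   = 1  / couples_per_arm     # 1 infection with immediate treatment
rate_delayed = 27 / couples_per_arm     # 27 infections with delayed treatment

relative_risk  = rate_early / rate_delayed   # arm sizes cancel out: 1/27
risk_reduction = 1 - relative_risk
print(f"risk reduction ≈ {risk_reduction:.0%}")   # prints "risk reduction ≈ 96%"
```

Because the arm sizes cancel, the 1-versus-27 split alone gives a roughly 96 percent reduction, matching the reported result.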
What I found most interesting, however, was that the study was not completed. The reported findings were the preliminary results from the clinical trials. In fact, the article explained that “[t]he data was so convincing that the trial, scheduled to last until 2015, is effectively being ended early.”
The way these results were discovered and released during the course of the study was what intrigued me. Here’s how the study data was described:
“[U]nblinded” to an independent safety review panel, which is standard procedure in clinical trials. When the panel realized how much protection early treatment afforded, it recommended that drug regimens be offered to all participants. Although participants will still be followed, the trial is effectively over because it will no longer be a comparison between two groups on different regimens.
This means that the clinical trial was stopped before reaching completion so that all of the participating couples could receive treatment.
The implications of ending this trial early are complicated. For the participants, the decision can be nothing but positive: it offers them treatment that could dramatically decrease the likelihood of passing a potentially deadly disease to their partners. For many others around the world who have a partner with H.I.V., the early release of these results is likewise a boon for public health, since infected individuals may now be able to protect their partners from infection. However, the end of this study is not as clear-cut in terms of research and ethical implications as it might seem.
I first became aware that clinical studies are sometimes cut short in the mid-1990s. My father, an occupational health doctor who died in 1999, was involved in the CARET studies during that decade. This large-scale double-blind study looked at whether beta-carotene and retinyl palmitate could prevent lung cancer in heavy smokers and workers who had been exposed to asbestos. However, the study was ended prematurely based on interim results that suggested an adverse effect on the participants. Since I was only a high school student when the trial was ended, I did not know many of the details, but I understood the basic idea: if a medical research study is causing harm to the participants, it must be ended. Reading the recent news about the H.I.V. treatment study prompted me to learn more about how and when clinical studies are interrupted.
An article called “Stopping the active intervention: CARET” was enlightening about how and why the CARET studies were ended. It provides an overview that I found helpful in thinking about the current H.I.V. study:
The optimal design of a randomized clinical intervention trial, where the outcome is a disease endpoint, includes a set of criteria for stopping the active intervention before planned. These criteria, called “stopping rules,” guide the review of findings by key study scientists and an independent set of reviewers. If the pattern of outcome, effect or harm, is large enough to be attributed to the intervention, the trial is halted, regardless of the planned completion date or the readiness of staff to end the trial.
While the interim findings in the H.I.V. study were positive, rather than negative as they were in the CARET study, the procedure seems to be similar. The study allowed for data to be unblinded and examined by an independent panel, which then recommended that the trial be stopped early.
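The article doesn’t say which stopping boundary HPTN 052’s review panel used, so the rule and the numbers below are purely illustrative. One classic example of a “stopping rule” is the Haybittle-Peto boundary, which halts a trial at any interim look where the test statistic exceeds 3 standard errors. A sketch in Python, using event counts in the spirit of the article’s figures:

```python
import math

def interim_z(events_a, n_a, events_b, n_b):
    """Two-proportion z-statistic for one interim look at a binary endpoint."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def should_stop(z, boundary=3.0):
    """Haybittle-Peto style rule: halt at any look where |z| exceeds 3."""
    return abs(z) > boundary

# Illustrative counts: 27 vs. 1 infections in two arms of roughly 881
# couples each (the even split is an assumption, not a study figure).
z = interim_z(27, 881, 1, 882)
print(f"z = {z:.1f}, stop early: {should_stop(z)}")
```

With these numbers z comes out around 5, well past the boundary, so the rule would recommend halting, consistent with the panel’s decision. Real monitoring plans (O’Brien-Fleming boundaries, alpha-spending designs) are more elaborate, but the logic is the same: a pre-specified threshold triggers review regardless of the planned completion date.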
However, this does present two potential problems. One is that the study, scientifically speaking, did not run to its planned conclusion. It may not provide as much evidence as it would have if it had continued or if the study population had been larger. For example, the New York Times article mentioned the following:
Although the trial was relatively large, there are some limitations on interpreting the data.
More than 90 percent of the couples in the trial, who lived in Botswana, Brazil, India, Kenya, Malawi, South Africa, Thailand, the United States and Zimbabwe, were heterosexual.
“We would have liked to have a substantial number of men as potential study subjects, but they just weren’t interested,” Dr. Cohen said.
Although common sense suggests the results would be similar in the contexts of homosexual sex and sex between people who are not couples, strictly speaking, the results apply only to the type of people studied, Dr. Fauci said.
Another concern is that the results may be less scientifically reliable when trials are stopped early. A study published in JAMA entitled “Stopping Randomized Trials Early for Benefit and Estimation of Treatment Effects: Systematic Review and Meta-regression Analysis” explains:
Although randomized controlled trials (RCTs) generally provide credible evidence of treatment effects, multiple problems may emerge when investigators terminate a trial earlier than planned, especially when the decision to terminate the trial is based on the finding of an apparently beneficial treatment effect. Bias may arise because large random fluctuations of the estimated treatment effect can occur, particularly early in the progress of a trial. When investigators stop a trial based on an apparently beneficial treatment effect, their results may therefore provide misleading estimates of the benefit. Statistical modeling suggests that RCTs stopped early for benefit (truncated RCTs) will systematically overestimate treatment effects, and empirical data demonstrate that truncated RCTs often show implausibly large treatment effects.
(internal footnotes omitted)
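The overestimation the JAMA authors describe is easy to demonstrate with a toy simulation. None of the numbers below come from HPTN 052; they are invented simply to show the selection effect: trials are stopped the moment their estimate looks extreme, so the trials that stop early are exactly the ones whose estimates have fluctuated high.

```python
import math
import random

random.seed(42)

P_CONTROL, P_TREATED = 0.10, 0.07   # invented rates: true risk reduction = 30%
LOOKS = range(100, 1001, 100)       # interim analyses every 100 subjects per arm
Z_STOP = 2.5                        # naive stopping boundary for "benefit"

def run_trial():
    """Return (stopped_early, estimated risk reduction) for one simulated RCT."""
    c_events = t_events = n = 0
    for look in LOOKS:
        while n < look:
            n += 1
            c_events += random.random() < P_CONTROL
            t_events += random.random() < P_TREATED
        p_c, p_t = c_events / n, t_events / n
        pooled = (c_events + t_events) / (2 * n)
        se = math.sqrt(2 * pooled * (1 - pooled) / n) or 1e-9
        if (p_c - p_t) / se > Z_STOP and look < 1000:
            return True, 1 - p_t / p_c          # stopped early "for benefit"
    return False, (1 - p_t / p_c) if p_c else 0.0

results = [run_trial() for _ in range(2000)]
early = [rrr for stopped, rrr in results if stopped]
mean_early = sum(early) / len(early)
print(f"true reduction: 30%, mean estimate in early-stopped trials: {mean_early:.0%}")
```

Under this setup, the trials that happen to stop early report an average risk reduction well above the true 30 percent, which is exactly the “implausibly large treatment effects” pattern the JAMA paper documents empirically.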
So perhaps, if the trial had continued, the results would not have been as overwhelmingly positive. Perhaps the percentages of partners infected in the two groups would not have been as widely divergent. But perhaps they would – and would you want to gamble with someone’s life?
Have you ever been involved in a clinical research study that was ended early? Was it for positive or negative results? What should be done to maximize public health and safety while still providing the benefits of fully blinded research studies?