“We thought there would be a relatively small effect. But we knew that an effect of that size would save lives.”
That’s what Shivan Mehta, MD, associate chief innovation officer at Penn Medicine and an assistant professor of Medicine in the Perelman School of Medicine at the University of Pennsylvania, thought heading into a trial aimed at improving colorectal cancer screenings in a historically under-served community. The idea was to make screening as simple as possible for patients at a local community health center that primarily served people of color: mail fecal immunochemical test (FIT) kits directly to patients, rather than requiring them to proactively sign up for a kit or come in for a more involved colonoscopy.
The study, which ran in 2018, was designed so that a relative improvement of 60 percent, not a minuscule accomplishment, would be considered statistically significant when compared to the standard method of sending text reminders to patients overdue for the screenings.
But Mehta and his fellow researchers — including the study’s eventual lead author, Sarah Huf, MBBS, a former Commonwealth Fund Fellow at Penn — didn’t find a “relatively small effect.”
They found a giant one. The screening rate among patients who were mailed kits at home was ten times that of the standard texting group, an improvement of 1,000 percent.
“While we knew that mailing FIT kits was an effective form of outreach, we did not realize how much better it would be in this population, partially because the standard method of sending a text alone had such a low response,” said Mehta. “We needed to take extra steps to reduce the burden of responding.”
When researchers get results like this, it’s imperative to capitalize on them. Applying the lesson learned, that an easy, low-effort choice works, to future studies can extend an already impressive impact, particularly for under-served patients.
That’s what the researchers did for a recent Penn Medicine study attempting to improve hepatitis C screening rates. As with colon cancer, treatment options and overall outcomes for hepatitis C are better when the disease is caught sooner. But the newer study, conducted in mid-2020, targeted its no-effort choice not at patients but at their providers. For hospitalized patients who were due for a screening, a default order was automatically placed in the patient’s electronic health record. Physicians didn’t need to click even once to have their patient screened; they only needed to click if they wanted to cancel it.
This trial, which replaced a system that previously required physicians to click once to opt overdue patients in to screening, resulted in a near-doubling of screening rates. And it followed the FIT kit trial’s example: low effort, high reward.
On top of that, Mehta and colleagues recently conducted a study that attempted to increase mammogram rates by texting patients a link to self-schedule the test, framed as the default option. The results of this study are still being analyzed.
By and large, scientific research is a game of inches. Incremental change is common and important, and it often isn’t change in the expected or desired direction. That’s why it’s a big deal when studies turn out so well, especially ones that can lead directly to improvements in patient care. They’re not just flash-in-the-pan successes; each has a significant ripple effect.
“The biggest encouragement from our prior results is showing our clinical colleagues the value of incorporating experimentation and randomization in routine care delivery,” Mehta said. “We can both provide better care to patients that need it and learn from a research perspective; we don’t necessarily have to choose between research or clinical operations.”