By Greg Simon, MD, MPH, senior investigator, Kaiser Permanente Washington Health Research Institute, and psychiatrist, Washington Permanente Medical Group
Have you heard the one about the police officer helping a man look for his keys at night? They’re searching under a streetlight and after a while, the officer asks if the man is sure this is where he dropped them. “I lost them across the street,” the man says. “But the light’s better over here.”
I think about that joke whenever my research team designs a study and needs to reach participants who represent the real-world population we’re trying to help. If we don’t think clearly about who we aim to help, we might just be looking under the streetlight. We’ll miss people who are not being helped by the status quo.
Several years ago, Evette Ludman and I did a focus group study to learn why some people go to their first psychotherapy visit for depression and then don’t return. We contacted Kaiser Permanente Washington members who had gone to one therapy session and then stopped, and invited them to participate in our study.
Only about one-third of the people we invited agreed to speak with us, but that’s a pretty good success rate for focus group recruitment. We soon learned, however, that these were not the people we needed to hear from. We wanted people who had tried psychotherapy for depression for the first time and, for some reason, gave up. No one in our focus group fit that description.
Instead, the participants were experienced with psychotherapy. For some, the single visit that we noticed was a “refresher” visit. Others had been trying a new therapist. Our focus group turned into a hypothetical discussion of why other people might give up on therapy after just one visit.
In retrospect, we should have realized that we wouldn’t learn much from our approach. People living with depression who are frustrated or discouraged about treatment don’t tend to become motivated research volunteers. We should have published something about that experience, but I’m still waiting for someone to establish The Journal of Instructive Failure.
Still, that instructive failure shaped our research. We’re now studying how to reach out to people to increase their engagement in mental health treatment. In planning this work, we have to be careful not to study engagement among people who are already engaged. In fact, volunteering to participate in research to increase engagement should probably make someone ineligible. For example: If we hope to learn whether outreach to people at risk of suicide can reduce suicide attempts, we certainly shouldn’t limit our research to people who volunteer for that study. We appreciate them, but by stepping forward, they’ve shown they don’t need our additional outreach.
We aim to reach people who are lost. They are disconnected, discouraged, and convinced that treatment has nothing to offer. If we hope to find them, we have to look outside of the bright light under the lamppost.
So our studies of outreach or engagement interventions follow a “randomized encouragement” design. We’re using this design in our Mental Health Research Network (MHRN) pilot study of automated outreach to people who appear to have stopped taking a medication or going to psychotherapy for depression. After identifying these patients using electronic health records, we randomly assign some of them to receive extra outreach, such as messages and calls to offer support and help overcoming barriers to mental health care. The rest continue to receive their usual care.
Our MHRN Suicide Prevention Outreach Trial, testing two outreach interventions for people at risk of suicidal behavior, uses the same real-world research approach. This design answers the question we care about: Among people who appear to have unmet needs, will an outreach intervention increase engagement in treatment—and ultimately lead to better outcomes?
Real-world randomized encouragement requires extra steps, but they are features rather than bugs. First, we must be able to identify people with unmet needs before they ask us for help. That’s been a central focus of our MHRN research, including our recent study on predicting suicidal behavior. Second, we must be able to use health system records to assess impacts or benefits. Relying on traditional research interviews or surveys takes us back to the problem of measuring engagement in people who are already engaged. Third, we have to remember that any benefit of an outreach or engagement intervention is diluted by the absence of benefit among those who do not participate. That diluted effect is the true effect, if what we care about is the real-world impact of an outreach program.
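That dilution point can be put in simple arithmetic. A minimal sketch, with entirely hypothetical numbers: if only a fraction of the people offered outreach actually engage, the population-level (intention-to-treat) effect is roughly the engagement rate times the effect among those who engage.

```python
# Hypothetical illustration of effect dilution in an outreach trial.
# All numbers are made up for the example; this is not data from any study.

def intention_to_treat_effect(uptake_rate, effect_if_engaged):
    """Average effect across everyone offered outreach, assuming
    people who never engage receive no benefit from the offer."""
    return uptake_rate * effect_if_engaged

# Suppose outreach reduces some adverse outcome by 10 percentage points
# among the 30% of invited people who actually engage:
itt = intention_to_treat_effect(uptake_rate=0.30, effect_if_engaged=0.10)
print(round(itt, 4))  # 0.03 -- a 3-point effect across the whole population
```

A small effect measured this way isn’t a failure of the analysis; it’s the honest answer to the question an outreach program actually faces.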
As researchers, we have to remember: Outreach interventions might seem to work well right under the lamppost, but that’s not where people get lost or left out.