The Digital Behavioral Health Formula: Impact = Reach x Efficacy

 

It seems that every week there is a new smartphone app, artificial intelligence platform, or virtual reality program targeting behavioral health. But behavioral health is a broad term that covers services offered to address or prevent mental (e.g., depression), behavioral (e.g., overeating, medication noncompliance), and addictive concerns. Innovation in behavioral health is certainly needed, as disorders like depression and anxiety are already a leading cause of global disability, healthcare spending, and personal suffering. But which announcements of new digital behavioral health technologies offer true innovation today, as opposed to more aspirational claims about the future?

 

There is no clear answer, but in this article we offer a simple framework for approaching new digital behavioral health technologies. Although this is not intended as business advice, we highlight the points we often consider when trying to understand and contextualize new product or company announcements.

 

The overarching principle that drives our recommendations is a well-accepted behavioral health formula: impact = reach x efficacy. Without being effective in the actual setting of interest, a digital health product is not a solution. Without being accessible and engaging to its user group (patients and clinicians), a digital health product does not have enough reach. And without impact, it is not likely to be profitable.
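As a rough illustration (the numbers below are hypothetical, not drawn from any product), the formula can be sketched in a few lines of code:

```python
def impact(reach: float, efficacy: float) -> float:
    """Impact = reach x efficacy: if either factor is zero, impact is zero.

    reach:    fraction of the target population that adopts and stays
              engaged with the product (0.0 - 1.0)
    efficacy: fraction of engaged users who improve in the real-world
              setting (0.0 - 1.0), not the figure from a controlled trial
    """
    return reach * efficacy

# Hypothetical comparison: a highly effective product that few people use
# versus a moderately effective product with broad, sustained engagement.
print(impact(0.05, 0.80))  # ~0.04: strong efficacy, negligible reach
print(impact(0.40, 0.30))  # ~0.12: modest efficacy, far greater impact
```

The point of the sketch is simply that the two factors multiply: tripling reach at somewhat lower efficacy can still triple overall impact.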

 

Many companies suggest that they have products that work and will integrate smoothly into patients’ lives and into the existing healthcare infrastructure. One focus has been on identifying payers who will cover the cost. Once payers are on board, the assumption is that doctors will recommend the product, patients will download and use it, and efficacy will be similar to that observed in highly controlled clinical trials. It is also assumed that doctors will track user data (an additional administrative burden that is typically not reimbursable and potentially introduces some liability).

 

These recommendations break down aspects of the clinical implementation process that, in our experience, indicate whether these assumptions are reasonable for a particular product:

Do clinicians (psychologists, psychiatrists) with expertise in digital health work for the company?

It is difficult to develop a product for a market that you do not know inside and out. While many companies have behavioral health providers as subject matter experts and consultants, fewer have in-house clinician-scientists who are integral to the product development and improvement process. Behavioral health clinicians receive years of training in how to produce the types of change digital platforms seek to make, and they are vetted by licensing boards.

 

Companies without this expertise seem likely to miss important opportunities to drive clinical innovation.

Related to this point, many companies employ non-behavioral health clinicians (e.g., primary care doctors) or doctoral-level scientists (e.g., neuroscientists) as their key clinical contributors. Unfortunately, this is similar to employing a chemical engineer to build a robot: the engineer may transfer pieces of their skillset, but they are not a robotics engineer who has been specifically trained and vetted in that area.

 

 

Has the product been evaluated in its intended use setting?

 

Many products boast strong outcomes generated in randomized controlled trials (RCTs). RCT results, however, do not always translate into real-world efficacy, especially in behavioral health. RCTs typically involve a high level of participant outreach, select for highly motivated participants, and incentivize engagement with the platform (e.g., compensation for study visits or module completion).

 

Just because a depressed patient completes 16 sessions of a digital health program in an RCT does not mean that the same patient, let alone a less motivated patient, would complete 16 sessions in a routine healthcare setting, where the reinforcement is usually entirely intrinsic and study staff are not calling regularly to check in. To date, there are few real-world RCTs for behavioral health technologies, although, where available, they certainly offer valuable information.

 

Promising companies show the efficacy of their products in actual healthcare settings, where the products are recommended by actual providers or payer systems and no external incentives or study staff encourage engagement. This evidence should be more than anecdotes or simple endorsements; it should be rigorous and objectively collected. It is also worth carefully checking a company’s website to determine whether the evidence cited is for general behavioral health principles (e.g., cognitive behavioral therapy) or for treatment delivered by the actual product (e.g., cognitive behavioral therapy on a particular smartphone app).

 

What does user engagement look like in the real world?

 

A related but separate consideration is the extent to which the program itself draws users in. For a product to be viable, it needs to be something that users actually engage with initially and stay engaged with over time. In real-world (non-research) settings, it is standard for habituation to occur and engagement to drop off, even within days. No one would expect someone who took three days’ worth of a 12-day course of antibiotics to get better. Without user engagement, digital health interventions cannot produce change.

 

So, when evaluating a company or product, we look at engagement metrics. Has the company evaluated engagement? If so, what engagement levels do we see in real-world settings, where the artificial incentives common in research are absent? App download numbers may seem impressive, but downloading an app is often unrelated to how many users actually engage with it in a meaningful way. There is even evidence that the number of app downloads, or star ratings on app marketplaces, does not correlate well with the usability or clinical validity of behavioral health apps.
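To make the downloads-versus-engagement distinction concrete, a back-of-the-envelope retention calculation (with entirely made-up usage data) might look like this:

```python
# Hypothetical usage log: user id -> days since install on which the app
# was opened. Every user here counts as a "download," but most stop
# opening the app within a few days.
sessions = {
    "u1": [0, 1, 2, 7, 14],   # sustained engagement
    "u2": [0],                # opened once, never returned
    "u3": [0, 1],             # dropped off after two days
    "u4": [0, 3, 10, 30],     # sustained engagement
}

def retention(sessions: dict, day: int) -> float:
    """Fraction of downloaders who still open the app on or after `day`."""
    active = sum(1 for opens in sessions.values() if any(d >= day for d in opens))
    return active / len(sessions)

print(len(sessions))           # 4 downloads
print(retention(sessions, 7))  # 0.5 -- only half are still active at one week
```

A download count of four looks like four users, but a week-seven retention check shows half of them are already gone; it is this second number, not the first, that predicts whether the intervention can produce change.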

 

What demands does the product place on clinicians?

 

Tracking options and clinician interfaces have become common in digital health products. While appealing in theory, in practice they may place more burdens on providers than the market can withstand.

 

Clinicians tend to already be overburdened with administrative tasks – tasks that, while important, distract from patient care and do not drive their salaries or the clinic’s bottom line. When digital health products add to this burden by providing another piece of information to check, they place strain on an already strained system, which means something has to give.

Simple additions to a clinician’s schedule, such as requiring them to log onto an interface that is separate from the electronic medical record, reduce clinicians’ likelihood of engaging.

 

Clinician engagement is key, since studies suggest it may drive patient engagement. Low clinician engagement may also leave clinics and the digital health company vulnerable to legal ramifications if patients provide key health information with the understanding that it is being monitored when, in fact, it is not.

 

In the future, we may see models in which providers are reimbursed for time spent on administrative work related to digital health treatment. That could open many doors. Until then, additional administrative burden is unlikely to be well received within healthcare systems.

 

Does the product offer value even if patients and providers do not engage with it?

 

If we assume that many of these products will not generate ongoing engagement from patients and providers, we are left asking what else they can do. The answer may be providing alerts when data collected via passive sensors suggest a problem. It may be tracking patterns of behavior that are reviewed in routine appointments (e.g., step counts for cardiovascular patients advised to increase activity). It may be nothing.

 

If it is nothing, then the product’s value rests entirely on user engagement. And, as discussed above, user engagement is likely the biggest challenge in digital behavioral health, given that low motivation is often at the core of behavioral health concerns.

 

When the answer to the questions above is yes, we expect to see high reach and high efficacy. In short, we expect to see high impact. While the field of behavioral health, and especially digital technology for behavioral health, is always full of surprises, in-house clinician expertise, real-world evidence, low burden on healthcare providers, and value delivered even when engagement is low are useful points to consider in your own assessment.

 

About the Authors

 

Jessica Lipschitz, PhD is the Associate Director of the Program in Behavioral Informatics and eHealth within the Department of Psychiatry at Brigham and Women’s Hospital (BWH), a Harvard Medical School-affiliated teaching hospital. Lipschitz also provides clinical services as part of BWH’s Cognitive Behavioral Therapy (CBT) Program and supervises residents in the Harvard Longwood Psychiatry Residency program on how to conduct CBT. She has a background working within the VA Healthcare System and consulting for digital health startups.

 

John Torous, MD is co-director of the digital psychiatry program at Beth Israel Deaconess Medical Center, a Harvard Medical School-affiliated teaching hospital, where he also serves as a staff psychiatrist and clinical informatics fellow. He has a background in electrical engineering and computer sciences and received an undergraduate degree in the field from UC Berkeley before attending medical school at UC San Diego.

 

Source: https://medcitynews.com/2018/03/5-factors-consider-evaluating-new-digital-behavioral-health-tech/?rf=1

 
