Besides the common questions I addressed in my last note, many readers raised questions about the logistics of implementing FIT (feedback-informed treatment, in which therapists use computer surveys and algorithms to track clients’ progress and predict whether they’re at risk of deteriorating). This reader says my article ultimately convinced her it could work, though she was skeptical at first:
One thing I wish the author had discussed is how the computer assessment is presented to the patient. As I read the article, I realized that my university's student counseling office must have been using this method: Every time I went for a session, I had to go to a computer station and enter a great deal of personal information, including scaled responses to the kinds of questions this article mentions.
I would have loved to know that my answers were used to check my therapist's perception of my psychological state over time. Instead, no one ever volunteered information about the procedure. I wasn't sure my therapist ever looked at it, and I resented it as an unnecessary task that took time away from my session. If the counseling office and my therapist had been transparent about using the information to improve each of my sessions, I think it would have noticeably increased my confidence in the treatment I received there.
Indeed, FIT works best when therapists explain the process to clients and are personally invested in using the system. This article explores that topic.
On a similar note, this reader, who alludes to negative past experiences with therapy, raises two very important points:
I think it's an ethical MUST for such a program to also give the person who takes the test power—i.e. information on how they are being judged. I would not see a therapist using FIT (as it is) to be honest. It seems like a way to "extract" information I don't want to give, which can be used in ways I don't like and don't even find ethical—being locked away, being kicked out of therapy, being forced to use medication or being expected to work harder than I want to.
First, the client should be a collaborative partner in the FIT process, and clients should have the right to decline to participate. Second, FIT data should never be used in isolation to make clinical decisions such as medication changes, hospitalization, or terminating therapy. Those decisions should draw on many other sources of information, foremost collaborative discussions with clients, along with therapists’ clinical judgment. FIT is just one data point (like a thermometer). Barry Duncan has written about this topic in his book On Becoming a Better Therapist.
But FIT only works if agency directors support the process by making time and financial resources available, and this reader raises an important point about feasibility:
Something like this sounds fantastic, but I can imagine the difficulties of actually implementing it. First, not every agency can afford iPads. So people will be taking surveys on paper, and someone is going to have to enter that data. Everyone at your typical mental health agency is already overburdened with work. Are patients going to show up early to take a survey before every session? If not, is that time going to come out of their session? Who is going to train clinicians on how to actually use the data collected?
In my state, most agency-based clinicians work on a fee-for-service basis, carry big caseloads in order to meet productivity quotas, and already have tons of paperwork to complete, so I could see why there would be a lot of pushback on something like this, especially if it wasn't immediately obvious as to how this would benefit patients.
All of this is true. But the same could be said of every new medical assessment tool ever invented, from the thermometer to the MRI. If our society finds the time and money to make hundreds of new medical tools feasible, then we can find resources for FIT in therapy.
What’s more, the time investment for FIT is less daunting than some might think. This reader worried about the burden on clients, particularly those who might have less stable employment or participate in free outreach programs:
Early detection of risks of deterioration or early drop-out is wonderful. However, an additional 45 minutes answering surveys on an iPad, on top of the one-hour session, may pose serious logistical problems for many clients who have to take time off work to attend therapy sessions.
Another aspect of this issue is that outcome measures are sometimes implemented in conjunction with attempts to limit numbers of allowed sessions for each client. While efficiency of mental health services is a very important issue, unfortunately, institutional policies sometimes interfere with the attention each patient deserves.
Completing a FIT measure typically takes just a few minutes before a session. And certainly, FIT data should be used to improve the quality of therapy, not to limit therapy services.
That’s also the takeaway from this reader, who warns that misuse of FIT data could encourage therapists to game the system:
In baseball, the batting average and other standardized statistics are not tools for the player; they’re tools for the manager--to decide which players to hire, which to keep, and which to fire. This is not good for the player: It puts him under constant accountability for producing results, and his painstaking work at creating social bonds with the team, the owners of the team, and the fans is nearly useless.
Baseball is a good arena for generating clean statistics--everybody has hundreds of at-bats in a season against largely the same pitchers. But if you start firing therapists because of poor success rates, there will be a huge incentive to not take on the patients who are least likely to get better.
Miranda Wolpert, a researcher in England, wrote about this topic here. It’s a real risk, which is why therapists must be collaborative partners in any use of their FIT data.
If you’re a therapist or patient who’s worked with FIT, what challenges have you encountered, and how have you been able to address them? We’d like to hear about your experience: email@example.com.