Facebook’s recent experiments in social media mood contagion got us thinking about user-based testing in general and especially how that applies in healthcare technology that is intended to influence behavior.
How Do You Test Behavior Change?
It’s an interesting question for those involved in healthcare, and in particular for those trying to help people modify their behavior. In our case, at Wellpepper we are helping people be more adherent to home treatment programs. To do that we use a number of motivating factors, including personalization and notifications. As part of building our application we test which features are effective in motivating people. We continually improve and change the application based on what we learn. Is this testing on human subjects? Yes. Did we get permission? Yes. This is part of our terms of use, and it is also an essential part of how the industry builds software that people will use: by testing that software with real users. When people start using our software, they use it to help them with a specific problem, and they are happy when we make improvements that make it more effective at solving that problem. We encourage user feedback and implement new features based on it. So while we may test new features, that is part of the implicit agreement of delivering software to users. (If you’ve ever used software that was not tested with real end-users, you’ll know the difference.)
When we test and add features that improve the user experience and help people become more adherent to their treatment programs, users are happy, because we have helped them with their goals for using our software and honored the implicit contract with them. If we started testing and adding features that made them less adherent, or that changed some other behavior they weren’t trying to change with our application, we would have broken that contract, and they might vote with their feet (or in this case, fingers) and stop using the application.
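To make that kind of feature testing concrete, here is a minimal sketch of how an application might deterministically split users into a control group and a variant group and then compare an adherence metric between them. This is purely illustrative: the experiment name, field names, and metric are hypothetical and are not a description of Wellpepper’s actual implementation.

```python
import hashlib

def assign_group(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into 'control' or 'variant'.

    Hashing the user id together with the experiment name keeps the
    assignment stable across sessions without storing extra state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 == 0 else "control"

def adherence_rate(completed: int, prescribed: int) -> float:
    """Fraction of prescribed home-exercise sessions actually completed."""
    return completed / prescribed if prescribed else 0.0

# Hypothetical usage records for a "motivational_notifications" experiment.
users = [
    {"id": "u1", "completed": 8, "prescribed": 10},
    {"id": "u2", "completed": 5, "prescribed": 10},
    {"id": "u3", "completed": 9, "prescribed": 10},
]

by_group = {"control": [], "variant": []}
for u in users:
    group = assign_group(u["id"], "motivational_notifications")
    by_group[group].append(adherence_rate(u["completed"], u["prescribed"]))

for group, rates in by_group.items():
    mean = sum(rates) / len(rates) if rates else float("nan")
    print(f"{group}: n={len(rates)}, mean adherence = {mean:.2f}")
```

The point of the sketch is simply that a feature test of this kind measures the thing the user signed up for — adherence to their treatment program — which is what keeps it inside the implicit contract described above.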
What’s Your Implied User Contract?
The same thing could happen with Facebook, and it comes back to what their intention is with this research. The unfortunate thing is that they probably have enough data to have figured out that positive newsfeeds make you happy and negative newsfeeds make you unhappy without actually manipulating the feeds. The fact that they did this, and did it without consent, raises a bigger question about what their intention is, and what exactly the implicit contract you have with Facebook is. What exactly is their motive in trying to manipulate your emotions? For marketing experiments of this type the motive is pretty clear: get you to consume more of their product. For Facebook it might be the same, but the fact that they tested negative messages does cause some alarm. Let’s hope they use their power for good.
Guidelines for User Testing in Consumer Healthcare Applications
When you are testing specific features, these guidelines can help make sure you respect your end users as testers:
- Unless you have explicit consent, all user testing must be anonymous. If you are dealing with PHI and have signed a HIPAA BAA, you have agreed to access PHI only when absolutely necessary. If you need to know the demographics of your users for user testing, you should err on the side of getting their explicit consent. This could be via a consent form, or simply a non-anonymous feedback form in your application or on your website: by providing you with direct feedback, the user has agreed to not be anonymous. (The good thing here is that patients can do whatever they want with their own data, so if they give you consent to look at it, you have it.) That said, if you are working with healthcare organizations you will also have an agreement with them about contacting their patients: you need to make sure they have agreed to this as well. When possible, err on the side of making data anonymous before analyzing it (see the sketch after this list).
- Think about the implicit contract you have with the user. If you are providing them with an application that does one thing, but you discover it may have applications for something else, don’t test features for that something else without getting consent. That is breaking the contract you have with them. Let’s look at a purely hypothetical example: at Wellpepper we have an application that increases patient adherence to home treatment programs for those undergoing physical rehabilitation. If we found out that people in physical rehabilitation are also often fighting with their spouses, and started adding features or asking questions about the user’s relationship with his or her spouse, users would find this both unnerving and intrusive, because helping them with marital issues was not what they expected when they signed up for the application. Obviously this is a bit far-fetched, but you get the point.
- Don’t get in the middle of human-to-human communication. This is essentially where Facebook broke the implicit contract with users: by manipulating the newsfeed. Your expectation with Facebook is that it’s a way for you to communicate with people (and sometimes organizations) you like. By changing what showed up in your feed, Facebook got in the middle of this. In healthcare this is even more important: don’t get between healthcare professionals and their patients. Make sure it’s clear when it’s you (the application, the company) talking and when it’s the caregiver and the patient.
- Consider where you’d get more value by partnering with a research organization. Sure, it will take longer and may require more effort, but you will be able to learn a lot more about why and how people are using your features by getting explicit research consent. I’m not sure whether it’s a coincidence, but about a month ago I noticed that my Facebook newsfeed was full of extremely depressing stories. I remember wondering what was going on, both with Facebook and with the world in general, and I remember wanting to post something depressing but then thinking, “No, I don’t want to add to this. I will only post positive things.” It’s possible that I was part of another study by Facebook, and if so, they didn’t get the full picture that they would have if they’d been upfront about it, gotten my consent, and been able to ask me questions later about my thought process.
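As a concrete illustration of the first guideline (making data anonymous before analyzing it), here is a minimal sketch of stripping direct identifiers and replacing the patient ID with a salted one-way hash before a usage record is handed to feature analysis. The field names and salt handling are assumptions made for illustration only; strictly speaking, salted hashing is pseudonymization rather than full anonymization, and free-text fields would need separate handling.

```python
import hashlib
import os

# In practice the salt would come from configuration or a secrets store;
# an environment variable with a fallback is used here only so the
# example is self-contained.
SALT = os.environ.get("ANALYTICS_SALT", "example-salt")

# Direct identifiers to drop entirely before analysis.
IDENTIFYING_FIELDS = {"name", "email", "phone", "date_of_birth", "address"}

def pseudonymize_id(patient_id: str) -> str:
    """Replace a patient id with a salted one-way hash."""
    return hashlib.sha256(f"{SALT}:{patient_id}".encode()).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Return a copy of a usage record that is safer to hand to analysis."""
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    cleaned["patient_id"] = pseudonymize_id(str(record["patient_id"]))
    return cleaned

# Example with a made-up record.
raw = {
    "patient_id": "12345",
    "name": "Jane Doe",
    "email": "jane@example.com",
    "exercise_completed": True,
    "notification_opened": False,
}
print(anonymize_record(raw))
```

Scrubbing records as early as possible, before they ever reach an analytics pipeline, is the simplest way to honor the “only access PHI when absolutely necessary” commitment that comes with a BAA.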
There is no doubt that we will see more discussion of ethics and consent in the space of user testing, especially as it relates to consumer-facing health applications. Having no regulation or guidelines is not good for the consumer. However, only doing research with IRBs and third-party researchers is also not good for the consumer, as innovation that could really help them can be slowed dramatically. Most people, whether healthcare practitioners or entrepreneurs, got into this space because they wanted to help people. If we remember this, and we consider the ethical implications of our actions, we should be able to balance the two worlds.
For more reading on this topic as it applies to the software industry, see:
http://en.wikipedia.org/wiki/A/B_testing
http://ai.stanford.edu/~ronnyk/2009controlledExperimentsOnTheWebSurvey.pdf