There has been a lot of dialogue lately regarding CMS’s potential expansion of a star rating system to providers, and, not surprisingly, it is being rather heavily opposed. The difficulty with such a system is ensuring that a star, or any other indicator, actually measures what it is intended to measure. Further, does the rating actually add value for the audience, or audiences, by which it is intended to be consumed?
I’ve always been a big proponent of sharing lessons learned and best practices across industries, and this may be another one of those times to learn from prior experience.
Teachers don’t love ratings either.
Healthcare and education are two industries where the impact of a “good” or “bad” professional can have rather significant consequences for the consumer – a patient and a student (let’s not debate the comparison). With teachers, it is now being argued that the standards must go through a few cycles of implementation and that, given the lack of coordination, there is not a direct relationship between student progression and teacher ratings.
There are very few industries where we have successfully implemented a consumer-facing rating system that is consistent and standard, yet appeases all parties involved.
Why is this the case?
One reason is that the implementation often occurs prematurely or without coordination with other efforts. There is unprecedented change occurring in healthcare, and we need to make sure there is a methodical approach to establishing these guidelines (which may require waiting on the maturity of other components first). But if we were to take a stab at an effective approach, I would suggest the following:
Step 1:
Solidify the data and associated inputs. A recent study published in the Journal of the American Medical Informatics Association noted that there appear to be discrepancies in the alignment of patient safety indicators with the transition to ICD-10. This could, in turn, lead providers to “adverse behaviors of selecting translation that minimize adverse behaviors”. These patient safety indicators are just one of a myriad of data points that may be considered when designing a rating system that is fair. If we can’t trust the inputs, how can we trust the outputs?
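To make this concrete, here is a minimal sketch of the kind of check that would surface such a discrepancy: it flags any ICD-9 code inside a patient safety indicator whose candidate ICD-10 translations include codes that fall outside the indicator’s definition. The code sets and mappings below are invented placeholders, not actual CMS or AHRQ value sets.

```python
# Hypothetical sketch: flag ICD-9 -> ICD-10 translations that could drop an
# event out of a patient safety indicator (PSI). All code sets below are
# illustrative placeholders, not actual CMS/AHRQ value sets.

# ICD-9 codes included in a hypothetical PSI definition
PSI_ICD9_CODES = {"998.2", "998.4"}

# ICD-10 codes included in the same hypothetical PSI after the transition
PSI_ICD10_CODES = {"T81.530A"}

# Candidate ICD-9 -> ICD-10 translations (one-to-many maps are common)
TRANSLATIONS = {
    "998.2": ["T81.530A", "T81.531A"],
    "998.4": ["T81.539A"],
}


def find_discrepancies():
    """Return, per ICD-9 code in the PSI, any translation choices that land
    outside the PSI definition, i.e., cases where the choice of translation
    changes whether the event is counted."""
    issues = {}
    for icd9 in sorted(PSI_ICD9_CODES):
        outside = [code for code in TRANSLATIONS.get(icd9, [])
                   if code not in PSI_ICD10_CODES]
        if outside:
            issues[icd9] = outside
    return issues


if __name__ == "__main__":
    for icd9, codes in find_discrepancies().items():
        print(f"{icd9}: translations outside the PSI definition -> {codes}")
```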
Step 2:
Standardize data capture. Various systems, applications, and products will need to be developed and assessed to ensure data is being acquired in a digestible format.
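As one hedged illustration of what “digestible” could mean in practice, here is a minimal record shape that every source system would have to emit before its data feeds a rating. The field names, allowed code systems, and validation rules are assumptions of mine, not an established standard.

```python
# Hypothetical sketch of a standardized capture record for a quality-measure
# observation. Field names, allowed code systems, and checks are assumptions.
from dataclasses import dataclass
from datetime import date

ALLOWED_CODE_SYSTEMS = {"ICD-10-CM", "CPT", "LOINC"}


@dataclass(frozen=True)
class MeasureObservation:
    provider_npi: str   # 10-digit National Provider Identifier
    measure_id: str     # identifier of the quality measure or PSI
    code_system: str    # terminology the captured code comes from
    code: str           # the captured code itself
    observed_on: date   # when the event was documented

    def __post_init__(self):
        # Reject records that downstream rating logic could not digest.
        if len(self.provider_npi) != 10 or not self.provider_npi.isdigit():
            raise ValueError(f"invalid NPI: {self.provider_npi}")
        if self.code_system not in ALLOWED_CODE_SYSTEMS:
            raise ValueError(f"unsupported code system: {self.code_system}")


# Every source system emits the same shape, so the rating layer never has to
# guess at formats or terminologies.
obs = MeasureObservation("1234567890", "PSI-90", "ICD-10-CM", "T81.530A",
                         date(2015, 10, 1))
print(obs)
```

The design choice worth noting is that validation happens at capture time, so bad inputs are rejected before they can quietly distort a downstream rating.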
Step 3:
Develop the scale, indicator, or format. This seems simple enough, but it is critically important. Once you have the data and a way to capture it, how do you present it to the various stakeholders who may use this information to drive different activities? A provider rating scale may be used by a consumer to select a doctor, while the same scale may be leveraged by a payer to drive reimbursement rates. How do you associate the right data with outcomes to ensure correlation?
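For illustration only, here is a sketch of how per-measure scores might be rolled up into a star value. The measures, weights, and cut points are assumptions I have made up; they are not any actual CMS methodology.

```python
# Hypothetical sketch of rolling measure scores up into a star rating.
# Measures, weights, and cut points are invented, not a CMS methodology.

# Per-measure scores on a 0-100 scale for one provider (assumed inputs)
measure_scores = {"patient_safety": 82.0,
                  "readmissions": 74.0,
                  "patient_experience": 90.0}

# Relative importance of each measure in the composite (sums to 1.0)
weights = {"patient_safety": 0.40,
           "readmissions": 0.35,
           "patient_experience": 0.25}


def composite_score(scores, weights):
    """Weighted average of the per-measure scores."""
    return sum(scores[measure] * weight for measure, weight in weights.items())


def to_stars(score):
    """Map a 0-100 composite onto 1-5 stars using fixed cut points."""
    for threshold, stars in [(90, 5), (80, 4), (70, 3), (60, 2)]:
        if score >= threshold:
            return stars
    return 1


score = composite_score(measure_scores, weights)
print(f"composite = {score:.1f}, stars = {to_stars(score)}")
```

The weights are where the stakeholder tension lives: a consumer choosing a doctor and a payer setting reimbursement rates would likely weight the very same data quite differently, which is why agreement on the intended goals has to come before the scale itself.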
I am no expert on rating scales for doctors, nor for teachers. However, from my experience in the industry, efforts that pull stakeholders toward diametrically opposed incentives tend to struggle with adoption. Instead, we must find a way to leverage the data that is available, turn it into a win for all stakeholders involved, and reach consistency on the intended goals.