Using Predictive Analytics to Improve Healthcare Outcomes (multiple authors), page 59

3 Cultivating a Better Data Process for More Relevant Operational Insight


Mary Ann Hozak

An organization's performance improvement plan has traditionally been based on data that measures a compilation of (a) demographic information, (b) prevalence of outcomes such as use of restraints or pressure injuries, and (c) percentage of policy compliance, answering questions such as: How many falls occurred? How many appointments were canceled? Were patients happy or unhappy? What percentage of the form was completed? Although helpful as a starting point for identifying and quantifying quality indicators, all these scores really show us is an abundance of data points moving up or down each month. They indicate whether a specific task is being performed well or poorly, but they do little to help us understand the big picture. Since "task performance" is an inadequate measure of professional practice, we need to rethink what we are measuring and how we are measuring it.


Data collection itself is challenging. How do you decide what to collect or where to collect it, and how can you be sure it is collected the same way for every audit or that every auditor is auditing the same way? Then, once it is collected, what is the best way to present the data to help communicate with others what new realities you have come to understand?

Most often, data is collected manually with only some information technology assistance, which is an arduous process. Once collected, however, the data is not always screened for collection errors and outliers before it is distributed to people in the organization. This not‐necessarily‐valid performance improvement data is often then reported and discussed in committee meetings and staff council meetings, and as you can imagine, the action plans created in response to it can cause some serious problems. Using data without screening for error first is like putting a ship to sail before checking if the ship has holes in it.
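The screening step described above can be sketched in plain Python. This is a minimal illustration, not a method from the chapter: the fall counts, plausible range, and 1.5 × IQR fence below are all illustrative assumptions.

```python
from statistics import quantiles

def screen(values, lo=0, hi=100):
    """Split raw audit values into clean data, statistical outliers,
    and out-of-range entry errors before the data is distributed."""
    # Step 1: values outside the physically possible range are entry errors.
    plausible = [v for v in values if lo <= v <= hi]
    errors = [v for v in values if not (lo <= v <= hi)]

    # Step 2: flag statistical outliers among the plausible values with a
    # standard 1.5 * IQR fence, so they can be reviewed before reporting.
    q1, _, q3 = quantiles(plausible, n=4)
    fence_lo = q1 - 1.5 * (q3 - q1)
    fence_hi = q3 + 1.5 * (q3 - q1)
    clean = [v for v in plausible if fence_lo <= v <= fence_hi]
    outliers = [v for v in plausible if not (fence_lo <= v <= fence_hi)]
    return clean, outliers, errors

# Hypothetical monthly fall counts from ten units (illustrative only):
# 30 is an in-range outlier worth investigating; -1 is an entry error.
clean, outliers, errors = screen([2, 3, 1, 4, 2, 30, 3, 2, -1, 3])
```

The point of the sketch is the order of operations: impossible values are corrected or removed first, then statistical outliers are set aside for review, and only the screened data moves on to committees and councils.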


If the data is not checked for accuracy, anxiety and frustration build for staff members and leaders alike as the organization's inconsistent data is reported to Leapfrog, Hospital Compare, and the Magnet® Recognition Program. The anxiety stems from the fact that these regulatory and accreditation organizations report hospital‐level data to the public, who then use it to choose which hospital they will go to. The frustration stems from the ongoing struggle to understand the fluctuation in scores, which suggests that other variables affecting the outcomes have not been measured, or were not measured correctly. Given this pattern of collecting, distributing, and acting on flawed data, with no sustainable improvement toward the goals of high reliability, patient safety, and full reimbursement opportunities, the question is: How can we get our hands on data that includes everyone on all units and points us toward the precise actions we can take to improve operations?
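One way to judge whether month-to-month fluctuation is ordinary noise or a genuine shift worth investigating is a simple control-limit check from statistical process control. This is a minimal sketch under stated assumptions: the baseline and new compliance percentages are hypothetical, and the 3-sigma limit is a conventional default, not a figure from the chapter.

```python
from statistics import mean, stdev

def flag_signals(baseline, new_scores, sigma=3):
    """Flag new monthly scores that fall outside control limits derived
    from a baseline period. Scores inside the limits are ordinary
    fluctuation; scores outside them suggest a real change (or an
    unmeasured variable) that deserves investigation."""
    center, spread = mean(baseline), stdev(baseline)
    lower, upper = center - sigma * spread, center + sigma * spread
    return [(month, score)
            for month, score in enumerate(new_scores, start=1)
            if not (lower <= score <= upper)]

# Hypothetical compliance percentages: eight baseline months, then three new months.
signals = flag_signals([88, 90, 91, 89, 90, 92, 88, 91], [90, 84, 93])
```

Reacting only to scores flagged this way, rather than to every monthly rise or dip, is one way to reduce the frustration of chasing fluctuation that is indistinguishable from noise.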

