In early March, Rebekkah Schear and I attended the Association of Community Cancer Centers’ Annual Meeting and Cancer Center Business Summit, which featured best practices in cancer care delivery and drew 700 attendees from across the country. There, we represented Dell Medical School’s Livestrong Cancer Institutes and shared our presentation, “How to Evaluate the Impact of Cancer Programs and Clinical Services.”
Our presentation was based on our work evaluating the Cancer Institutes’ Cancer Life reiMagined (CaLM) model over the last 18 months. Just as the CaLM model was designed with whole-person care in mind, we wanted to make sure that those with lived cancer experience were involved in evaluating it.
We found that much of the data needed to understand how our team-based clinic operates was not easily accessible. Our electronic health record system did not report on joint visits with multiple providers, so we could not gauge the number of touch points each member of the care team has with a given patient. The records also lacked discrete fields for cancer diagnoses, staging and treatment plans, leaving manual chart review as the only way to obtain this information. Because of this, we developed a data collection tool and a chart abstraction method (a process of extracting patient data) from scratch.
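For readers curious what such an abstraction tool might capture, here is a minimal sketch of one record per abstracted chart. The class name, field names and sample data are hypothetical illustrations, not the actual tool we built; the point is simply that discrete fields (diagnosis, stage, treatment plan) and care-team touch points, including joint visits, can be recorded explicitly once the EHR cannot report them.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class AbstractedChart:
    """One manually abstracted patient chart (illustrative fields only)."""
    patient_id: str
    diagnosis: str       # discrete field the EHR did not expose
    stage: str           # e.g., "II"
    treatment_plan: str
    # (provider_role, visit_date) pairs; a joint visit yields one
    # pair per provider with the same date
    touch_points: list = field(default_factory=list)

    def touch_points_by_role(self) -> Counter:
        """Count care-team touch points per provider role."""
        return Counter(role for role, _date in self.touch_points)

# Hypothetical example record
chart = AbstractedChart(
    patient_id="P001",
    diagnosis="breast cancer",
    stage="II",
    treatment_plan="chemotherapy",
    touch_points=[
        ("oncologist", "2020-01-10"),
        ("social worker", "2020-01-10"),  # joint visit: two roles, one date
        ("oncologist", "2020-02-14"),
    ],
)
print(chart.touch_points_by_role())
```

A structure like this makes the touch-point question answerable with a one-line tally per role, which the EHR’s visit reports could not provide.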
With this new and better understanding of patient data, we focused on patient experience to fully inform improvements in patient outcomes. We conducted surveys and two-hour interviews with patients who were willing to talk with our evaluation team. We also held focus groups with our care team to better understand their experience with the model of care.
In doing this work, we settled on a few key takeaways, which we shared at the summit:
- Use mixed methodologies. In our CaLM evaluation, we combined best practices from program evaluation, quality improvement and implementation science, collecting both qualitative and quantitative data.
- Frame evaluations as research questions. If you don’t know what you’re looking for, you will never be able to evaluate it. For example, in designing our methodology, we asked whether the CaLM model improves quality of life, improves access to care, offers enhanced supportive care (compared to traditional oncology care delivery models) and delivers optimal patient and provider experiences.
- Look to the experts for metrics. Don’t reinvent the wheel. A number of organizations specialize in outcome measurement. We considered metrics from cancer accreditation bodies, the Centers for Medicare and Medicaid Services, the Agency for Healthcare Research and Quality and the Quality Oncology Practice Initiative, as well as cancer navigation metrics. We compiled a massive list of everything we wanted to measure and whittled it down to the metrics that mattered most and those we could actually measure.
- Include evaluation strategy from the start. It is much easier to build an evaluation strategy as you are designing a new program or service than to retrofit one after the fact.
- Engage patients, survivors and loved ones throughout the process. In our evaluation, we engaged our patient and family advisory boards to brainstorm, draft, edit, mock-up and test our processes.
If you are looking to evaluate your care delivery model, Rebekkah and I are happy to talk more about the lessons we learned and share the tools that we have developed to help shape your strategy.
Special thanks to the Association of Community Cancer Centers; the Cancer Institutes’ advisory board and young adult advisory board; S. Gail Eckhardt; Barbara Jones; and the CaLM evaluation team.