Theresa Christner, M.A.
Manager, Policy/Program Development Section
Children’s Special Health Care Services Division
Michigan Department of Health and Human Services
Measuring what matters has been the topic of conversation within our Children’s Special Health Care Services (CSHCS) program at the Michigan Department of Health and Human Services for quite some time. Given the size and complexity of our program, we have been challenged by the following questions:
- What data do we review?
- Is the data we select meaningful?
- Does the data measure what we think it measures?
- Does the data paint a complete picture of the program, capturing its accomplishments and clearly indicating its strengths and weaknesses?
Needing answers to these questions, I attended the Measuring What Matters – Making Progress Through Program Evaluation workshop at the 2018 AMCHP Conference. The skills-building session provided hands-on examples of how to use the U.S. Centers for Disease Control and Prevention’s (CDC) Framework for Program Evaluation in Public Health to develop an evaluation approach that is integrated into routine program operations. The framework incorporates an action cycle that includes the following six steps:
- Engaging Stakeholders – getting input and participation from, and sharing power with, those who are invested in the program;
- Describing the Program using a Logic Model – identifying the relationships between program elements and expected changes;
- Using Validated Measures to Focus the Evaluation Design – planning the end goals of the evaluation and the steps needed to reach them;
- Using a Measurement Table to Gather Credible Evidence – compiling information that stakeholders perceive as trustworthy and relevant for answering their questions;
- Justifying Conclusions – making claims regarding the program that are warranted on the basis of data that has been compared against pertinent and defensible ideas of merit, value, or significance; and
- Using and Sharing Lessons Learned – ensuring that the findings are useful by applying them to make improvements.
Underpinning these six steps are four standards that are essential for good evaluation:
- Utility – ensuring that the evaluation will meet the information needs of the intended users;
- Feasibility – ensuring that the evaluation will be realistic, practical, sensitive, and economical;
- Propriety – ensuring that the evaluation will be legal, ethical, and appropriate; and
- Accuracy – ensuring that the evaluation will be correct and precise.
As part of the session, the attendees at each table worked together on developing a logic model. I found this extremely helpful. Having cut my public health teeth on the traditional work plan, I had always found constructing logic models counterintuitive and, to be honest, somewhat illogical. After all, counting outputs is so much easier than measuring outcomes.
But after working through the process in this group setting, I see the benefit of developing logic models as a necessary step for program evaluation. That step builds the bridge between what a program does and the change we hope will result.
Now our maternal and child health (MCH) group is working to create logic models as part of our fiscal year 2019 planning activities. I am excited to take our CSHCS logic model to the next step and to apply the CDC’s evaluation framework and its tools to build out a measurement table that can be used to track outcomes. The measurement table template helps to simplify outcome tracking by linking each outcome with specific indicators, data sources, data collection methods, monitoring frequency, and – most importantly – who is going to carry out these specific tasks. Once this framework and its tools are in place, they will help us answer the big evaluation questions about overall program effectiveness.