On our final Development Series conference call, Greg Naleski, Chief Development Officer for Washington Jesuit Academy, discussed changes made to the school's Annual Fund over the last few years. Before redesigning their approach across all donors, the development team tested some changes with a select group of donors who met certain criteria. This group comprised less than 10% of all their donors. The annual appeal letter sent to this group was different from the letter sent to all other donors. The hypothesis was that a more ambitious ask over several years would yield greater engagement and funding. Once the returns were in, the development team analyzed the data from this group to determine whether this was the case. The consensus was ‘Yes,’ and a new system for the annual fund was applied to all donors the following year in what is now known as the Impact Fund.
As this example illustrates, school improvement is better described as a science than as a process, since it is essentially the scientific method in action:
- Ask a Question
- Do Background Research
- Construct a Hypothesis
- Test Your Hypothesis by Doing an Experiment
- Analyze Your Data and Draw a Conclusion
- Communicate Your Results
While Washington Jesuit Academy applied this scientific method to its annual fund, similar testing of a change idea can be applied in all areas – board meetings, classroom instruction, school culture, graduate support, etc.
Many of our schools may already be investigating and implementing change through a PDSA cycle, an acronym for Plan, Do, Study, and Act. Tony Bryk summarizes the flow of this cycle in Learning to Improve:
- Plan: Define the change, make predictions about what will happen as a result, and design a way to test the change on an appropriate scale.
- Do: Carry out the change, collect data, and document how the change was implemented.
- Study: Analyze the data, compare what happened to the predictions, and glean insights for the next cycle.
- Act: Decide what to do next based on what you learned: Abandon the idea? Make adjustments? Expand the scale?
Like any experiment, a PDSA cycle becomes more reliable with more frequent testing and with the introduction of different variables across different contexts and settings, all of which should be documented in the analysis. For example, a change idea in the classroom might be assessed through a PDSA cycle in every period for a week, with attention to the teacher, the time of day, etc. This information is naturally more helpful than observing the change idea through a single PDSA cycle in one class period with one teacher. Of course, limitations on time, personnel, resources, and training will affect how many times a school can test a change idea through a PDSA cycle.
Regardless of how many tests are run, the idea is to start small before going to scale with a school-wide initiative: identify a specific variable that can be changed and see whether changing it leads to the intended results, supported by both qualitative and quantitative data. Importantly, this approach elevates the voices of teachers, support staff, students, and families who are committed to learning what works best and are eager to share what works and to train others to implement sound practices.