
Sponsored Content

Using Data Visualizations to Improve Strategic Decision Making

Making complex data more accessible and informative

In the Madison Metropolitan School District, the Research & Program Evaluation Office provides rigorous and high-quality research and analysis to support district priorities. By using data dashboards to create accessible, easy-to-understand visualizations of a wide variety of district information, the office has helped administrators understand what's working, what's not working and why, improving strategic decision making.

In this web seminar, two MMSD administrators discussed how the district has used data visualizations in a variety of ways, and outlined some best practices for using data dashboards and creating visualizations to improve strategic decision making in any district.

BO MCCREADY
Quantitative Research Supervisor
Madison Metropolitan School District

We’re going to walk through five myths that we have faced in our work doing research and analysis for the district, and how data dashboards helped us dispel those myths.

BETH VAADE
Qualitative Research Supervisor
Madison Metropolitan School District

The first myth we have encountered consistently is that Madison schools serve the same students they always have.

Why does this myth matter? Teachers across the district would talk anecdotally about the changes they’d observed in the students and families, saying they didn’t seem the same as 20 years ago, and that instructional approaches may not work as populations change. As our student base changes, we need to tailor our approach to meet students’ needs.

So we built a visualization to tell us how demographics have changed in MMSD. This information helped us reflect on our staffing, our instruction and our support services, as well as our family engagement efforts.
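The chart behind that reflection can be quite simple. As a minimal sketch (not the district's actual Tableau dashboard), assuming a hypothetical enrollment extract with columns school_year, group and students, a few lines of pandas and matplotlib show how demographic shares shift over time:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical enrollment extract; the numbers are placeholders, not MMSD figures.
enrollment = pd.DataFrame({
    "school_year": [2005, 2005, 2015, 2015],
    "group":       ["English learners", "All other students"] * 2,
    "students":    [3200, 21800, 5400, 19600],
})

# Convert counts into each group's share of total enrollment for its year.
shares = (enrollment
          .assign(share=lambda d: d["students"]
                  / d.groupby("school_year")["students"].transform("sum"))
          .pivot(index="school_year", columns="group", values="share"))

shares.plot(marker="o", ylabel="Share of enrollment",
            title="Enrollment mix over time")
plt.show()
```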

The second myth is that teachers and school leaders can’t work with large and complex data sets—that if you give them too much information, they are not going to be able to engage with it. Why does this myth matter to us in the research office? We annually administer a climate survey to students, staff and parents. With about 21,000 respondents each answering dozens of questions, you can imagine the thousands of rows of data.

The more immediate challenge for our office is that we’re given one hour with leaders of all 50 schools—about 350 people—and we ask them to engage with this data in that hour. We had to think of a way to give all of these school leaders, with their varying levels of data savviness, something simple, accessible and robust that would help them make school improvement decisions. So we built scorecards. All questions are listed, and you can filter by school, by level, by race/ethnicity, by disability—and you can combine those filters together.
In that hour in the institute, and beyond in the weeks that followed, we saw teachers and school leaders working with this data, digging into it. There is something about the interactive nature that allowed people to get excited and to think deeply about what it meant. It sparked an interest in something that had seemed unapproachable in the past.
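The combinable filters Vaade describes are easy to reproduce outside a dashboard tool. As a hedged sketch, assuming a hypothetical survey extract with columns school, level, race_ethnicity, disability, question and response (a 1-5 agreement scale), a scorecard is just a filtered group-by:

```python
import pandas as pd

# Hypothetical climate-survey extract; values are placeholders, not MMSD data.
responses = pd.DataFrame({
    "school":         ["East HS", "East HS", "West HS", "West HS"],
    "level":          ["High", "High", "High", "High"],
    "race_ethnicity": ["Black", "White", "Black", "White"],
    "disability":     ["No", "Yes", "No", "No"],
    "question":       ["I feel safe at school"] * 4,
    "response":       [4, 5, 3, 4],   # 1 = strongly disagree ... 5 = strongly agree
})

def scorecard(df, **filters):
    """Average response per question after applying any combination of filters."""
    for column, value in filters.items():
        df = df[df[column] == value]
    return df.groupby("question")["response"].mean()

# Filters combine, just as in the dashboard: one school and one student group.
print(scorecard(responses, school="East HS", race_ethnicity="Black"))
```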

The third myth concerned our facilities: either they were perfectly fine and didn’t need any upgrades, or they needed massive updates. Why do those narratives matter? In April 2015 we wanted to go to referendum for critical updates. We created a dashboard with an individual cell for each school showing key indicators of the relative need for construction projects—things like security, accessibility, capacity and five-year enrollment change—and an overall grade for that building.
It was a compelling tool. Our referendum passed with 82 percent support.
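That overall building grade can be read as a weighted roll-up of the individual condition measures. The sketch below assumes hypothetical measure names, weights and grade cutoffs; the district's actual rubric is not described in the seminar.

```python
# Hypothetical weights for the facility measures named above; not MMSD's actual rubric.
WEIGHTS = {"security": 0.3, "accessibility": 0.3,
           "capacity": 0.2, "enrollment_pressure": 0.2}

def building_grade(scores):
    """Collapse 0-100 condition scores into a single letter grade for a school."""
    composite = sum(scores[measure] * weight for measure, weight in WEIGHTS.items())
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if composite >= cutoff:
            return grade
    return "F"

print(building_grade({"security": 90, "accessibility": 85,
                      "capacity": 80, "enrollment_pressure": 70}))  # prints "B"
```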

McCready: The fourth myth is that our state accountability systems account for differences in student population. The reason that matters to us is that all of the schools and districts in Wisconsin receive a letter grade and a numeric rating out of 100, and a majority of that rating is based on standardized test scores.

We are legally obligated to share these scores publicly and they get a lot of media attention, and unfortunately our ratings were just not as favorable as those of the surrounding districts in our county. So it was important to put our scores into context and understand what these accountability measures, created by DPI (the Wisconsin Department of Public Instruction), were telling us about how we’re doing.

So we created two dashboards—one at the school level, one at the district level—that show every school in every district and how they were scored through this accountability system. It helped us put our scores in perspective in different ways. It also helped other districts do the same thing. We wanted to build this not just to help us but to help all the districts across the state that aren’t fortunate enough to have the ability to spend time on this the way we did.

The fifth myth is that test score distribution patterns vary across demographic groups, or that our test score distributions are irregular, not the pattern you would expect to see. The reason this matters is that these perceptions of strange score distributions were held by influential figures in our district. Our board of education was concerned about these irregular distributions, worried that maybe our schools were focusing on students very close to proficiency cutoffs and just pushing them over the edge so that their proficiency rates would look higher. And a lot of advocates within our community believed that the MAP assessment had a ceiling.

So we created a histogram showing those score distributions across all the grades that were tested, filterable by school year, subject, semester and demographic group. This is a tool we were able to make public, which is one of the nice functions of Tableau. Rather than having 50 pages of tendencies illustrated in table after table, we’re creating interactive tools that allow someone to click and dig into the data the way that they want to.
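The same view can be reproduced as a filtered histogram outside of Tableau. A minimal sketch, assuming a hypothetical score extract with columns school_year, subject, semester, group and rit_score (MAP reports RIT scale scores), and randomly generated placeholder values:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical MAP score extract with randomly generated placeholders, not MMSD data.
rng = np.random.default_rng(0)
scores = pd.DataFrame({
    "school_year": 2016,
    "subject": "Reading",
    "semester": "Fall",
    "group": rng.choice(["Group A", "Group B"], size=2000),
    "rit_score": rng.normal(loc=215, scale=15, size=2000).round(),
})

# Apply the same filters the dashboard exposes, then plot the distribution.
subset = scores.query("subject == 'Reading' and semester == 'Fall' and group == 'Group A'")
subset["rit_score"].plot(kind="hist", bins=30,
                         title="Reading RIT scores, Fall 2016, Group A")
plt.xlabel("RIT score")
plt.show()
```

A spike just above a proficiency cutoff, or a pile-up at the top of the scale, would be immediately visible in a plot like this, which is exactly what the district was checking for.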

When we did this, we observed that, contrary to popular belief, our test score distributions were roughly normal across all of our demographic groups. This was our very quick way of dealing with that misconception of spikes around proficiency cutoffs. What happened next is that we understood a little better that increasing proficiency rates doesn’t happen by just focusing on students near those cutoffs and trying to push them over the edge. We also saw no evidence of a ceiling on the MAP assessment, which means the assessment was differentiating student performance even at high levels.

Tableau is particularly effective for debunking myths like this. And we debunked our own myth that data has to be dry and boring. A nice visual like this is a whole different story.

To watch this web seminar in its entirety, please visit: www.districtadministration.com/ws102616