I want to share some interesting data that we recently discovered about charter schools in North Carolina.
-Over 40% of the schools that have opened since the 100-charter-school cap was lifted in 2012 have turned over their founding principal.
-44% of the schools that have opened since the cap was dropped in 2012 did not make growth in their first year.
-Over the last two school years, schools have opened with fewer than 80% of their projected enrollment, and one third of schools opened with more than a hundred fewer students than they projected. That's at least $700K less than what they budgeted for.
What does that mean to you if you are a newly opening school? It means that you will most likely be struggling with your budget, you will likely not meet growth on your first try, and sooner rather than later your founding principal will be moving on.
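To make the enrollment bullet concrete, here is a minimal sketch of that budget math. The $7,000 per-pupil allotment is a hypothetical round number chosen so the figures match the example above; your actual state and local rates will differ.

```python
# Rough sketch of an enrollment-driven budget shortfall.
# PER_PUPIL_ALLOTMENT is a hypothetical figure for illustration;
# substitute your own state and local funding rates.
PER_PUPIL_ALLOTMENT = 7_000

def budget_shortfall(projected_enrollment: int, actual_enrollment: int) -> int:
    """Dollars of revenue lost when fewer students enroll than were budgeted for."""
    missing_students = max(projected_enrollment - actual_enrollment, 0)
    return missing_students * PER_PUPIL_ALLOTMENT

# A school that budgeted for 400 students but opened with 300:
print(budget_shortfall(400, 300))  # 100 missing students -> 700000
```

A hundred missing students at roughly $7,000 each is how a school ends up $700K short of its budget before the first report card goes home.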
Whether you are in your first year or your twentieth, you can use a variety of school data, in a variety of ways, to help your school serve its families. By making sure you are familiar with the entire range of data sources and data tools, you can use data to support choices that reach all of your students. Since the dawn of No Child Left Behind 15 years ago, when funding became linked to standardized test scores, we in education have paid special attention to data, and schools have become flooded with it.
So who looks at data at your school, and what do they do with it? Right now, I am reading the book Data Dynamics by Edie Holcomb, and it breaks down the different data teams that quality schools use.
First is the leadership team – a team made up of the people who are most intimately aware of the school's needs and successes. It should include the principal and any assistants, depending on your staffing; it would also be appropriate to include coaches or curriculum specialists, and a few select teacher-leaders who are respected by their colleagues. Some schools invite critical friends into this group – a university adviser, or even a board member or parent in the right situation – if the school values transparency. It is important that this group has representation from special ed, electives/specials, general ed, and support staff. But the bottom line is that this is a team concerned with what is best for the whole – people who are capable of looking at the big picture of the school. What does this team do? It may seem contradictory, but this team does not make instructional decisions based on data. Its job is to look at trends, decide what it will take to make the big decisions that will impact the school, and determine how to engage the right people in the process. With many viewpoints at the table, it is easier not to miss anyone or anything during this planning stage. But most of all, this group will explore together what macro-level data means to the school, and how the school will organize its data culture.
I know what you’re thinking – who then makes the decisions about what to do with the data?
Well, the answer depends on what data you are looking at, because not all data is created equal. There are very important data figures that come out once a year – your EOG and EOC proficiency data. And honestly, it is my opinion that this data is not even for the school. It tells people who are far away from your school about the performance of its students at a quick glance. You can identify a group of poor-performing or super-high-performing students, but that's about it. That data gives us a summary of what happened – past tense – during the last 10 months. That data is not very mobile. It doesn't allow us to change the way we help kids while they are failing. But the other data – the data teachers collect on the fly during lessons, or weekly during progress monitoring, or quarterly during benchmarks – that data is fast, and highly effective schools use it to make changes in instruction, grouping, and intervention as needed, in a timely way, so that we can adjust the sails for kids throughout the year, not once a year.
The people who make the decisions about what to do with data should be teacher-led data teams. Now, schools can choose to have one designated data team, filled with your biggest data fans, or each grade level can form its own data team. But the fact remains that teachers must be given a special status in the actual gathering and analysis of data. They should be the first to explore, analyze, and prepare data to share with others. This should be a safe practice, because looking at data collected throughout the year is all about growth and mobility, not about evaluation and ranking.
This group decides what data is collected and reported, decides what data is relevant to helping kids, and is responsible for making changes to instruction, pacing, grouping, and interventions for individual students.
You see, only the classroom teacher has the full scope to explore and put into practice ways to help individual students. This is why the leadership team determines the structures for how the school looks at data, how often, and what to take data on, but the teacher-led teams decide what to do with it for EVERY kid. Every student deserves to have their school look at their individual data and plot a course for their success.
While working in education, I have sought out being a part of all things data – a teacher data team, a school-wide data team, an administrative team. Even when I was a PE teacher, I kept data on the number of laps each class ran, and mapped it for them as they ran across the country. The school where I worked that had the most urgent data culture had us – the teachers – taking all kinds of data. We met weekly and brought grouped data from an assessment or assignment, with next teaching steps for groups at mastery, approaching mastery, and well below mastery. It was a great practice for us as teachers. But all things considered, it was not an effective practice. See, at this school, the teacher data team and the leadership team were merged together. The atmosphere was more confrontational and less constructive, because teachers felt like they were answering to their bosses for poor data rather than exploring ways to respond to it or seeking ideas from colleagues. I think the downfall of these meetings was that the teachers had to determine their course of action ahead of time, rather than with the help of the team. There was no sharing of ideas or exchange of dialogue – it was accountability reporting. But when you think about it, that exchange is what makes any problem-solving team successful – the chance to build upon each other's ideas, to arrive at a place no one member of the team could reach alone.
In what ways has your school organized its approach to making decisions with data, and what role do teachers have versus your non-classroom staff? Who would like to start us off?
How have you found ways to engage non-mathy teachers in data use and analysis?
This week, let’s talk about what to do with the data you mine.
There are really 4 types of assessment that get us data, and each deserves to be looked at through a different lens, because the action or actions you take should be calibrated to the place the data came from.
The first kind of data you can get is diagnostic data. This information is taken before instruction takes place. What are the sources of diagnostic assessments? Think pre-tests, placement tests, kindergarten screenings – these tests establish a baseline for understanding. Think about diagnostics in medicine: the doctor examines the patient to see what to prescribe as treatment. Diagnostic testing should do the same for students. It provides specific information about a student's deficiencies. Along those lines, a diagnostic test should not have just one or two of several types of questions; it should have several examples of questions, of varying difficulty, about, let's say, multiplying double-digit numbers. Once you have this data – this pre-teaching, pre-school-year, or pre-unit data – it should be used to help teach students. If a diagnostic test identifies gaps in understanding, the teacher can use it to build an individualized plan for students. Diagnostic information also allows us to accelerate students if they can already demonstrate mastery of a subject we are about to cover.
Another data source is benchmark data. Benchmark tests are given quarterly, or at the end of each marking period, to indicate mastery of the material covered during that period. These are also called periodic or interim assessments because they are given to students at regular intervals. A benchmark assessment is standards-based and drawn from a standardized source, so teachers can use the data as a predictive indicator of future student performance. Teachers can use benchmark data to identify gaps in the retention of their teaching, or to track the progress of individual students. For example, if a student performs well on the first-quarter benchmark but poorly on the second-quarter benchmark, that might indicate that an environmental factor is not working in their favor. Benchmark data can also be used by the teacher to formulate remediation plans for students who are not up to speed on important grade-level standards that have already been taught.
Whereas benchmark assessments are taken 2 or 3 times a year, only one summative assessment is given to students, at the very end of the course. A summative assessment is important for assessing the overall health of the school; it measures the status of the school as a snapshot at one point in time. This regular snapshot provides a platform to measure schools comparatively and to mark the growth of students compared to their peers. So what can you do with the data you get from the summative evaluation? Decisions about curriculum choices can be brought to the table, as well as decisions about holes in your teacher pool. When one teacher, year after year, produces less growth in their students than their peers do, we as school administrators have to ask some difficult questions and dive deeper into their practice. What can't you do with summative assessment data? You can't treat it like it is the only piece of data that matters. Another way to qualify a summative assessment is that it comes at the end of a course, so by design it does not allow the possibility of improvement for the students who took it.
The opposite of this type of data is formative data. Formative assessments can happen every week, every day, and in fact, informally, every minute of class. Whereas summative data is inoperable, formative data is mobile and useful. Formative assessments can be quizzes, exit tickets, assignments, projects, or pointed questions. They can be answered verbally, nonverbally, with raised hands, or with smartphones as quiz clickers. Any question a teacher asks of their students can be formative – so long as – and here is the kicker – they do something with it. Formative assessment data is any data that informs future instruction. When a teacher asks, "Point left if you think the answer is a positive number and point right if you think the answer is negative," then re-teaches when she sees students pointing in every direction, that is an assessment that led to a change in instruction – formative assessment. When a teacher gives a spelling test, then provides cut-and-paste activities and additional workshop activities to students who get less than 80% correct, that is changing instruction based on assessment results – formative assessment. Our great teachers constantly assess and modify based on informal and formal formative assessment. Our best teachers use this data to provide individualized learning for their students based on the needs they draw out with their formative assessments.
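The spelling-test move – split the class at a mastery cutoff and reteach the rest – can be sketched in a few lines. The student names, scores, and the 80% cutoff here are made up for illustration.

```python
# Split a class into mastery and reteach groups based on one formative
# assessment. The 80% cutoff mirrors the spelling-test example; it is a
# hypothetical threshold, not a fixed rule.
MASTERY_CUTOFF = 0.80

def group_for_reteach(scores: dict[str, float]) -> tuple[list[str], list[str]]:
    """Return (mastered, needs_reteach) name lists from {student: fraction correct}."""
    mastered = sorted(s for s, pct in scores.items() if pct >= MASTERY_CUTOFF)
    reteach = sorted(s for s, pct in scores.items() if pct < MASTERY_CUTOFF)
    return mastered, reteach

scores = {"Ana": 0.95, "Ben": 0.70, "Cara": 0.85, "Dev": 0.60}
mastered, reteach = group_for_reteach(scores)
print(reteach)  # Ben and Dev get the cut-and-paste workshop activities
```

The point is not the code but the habit: the assessment only becomes formative once its result changes what each group does next.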
So once again, we have 4 types of assessment with 4 uses:
Diagnostic assessment – which we use for placement and major skill deficit identification
Benchmark assessment – which we use for tracking progress on standards and goals
Summative assessment – for taking a yearly snapshot comparing to our peers
Formative assessment – to modify instruction as frequently as possible.
Which of these types of data do we as administrators spend most of our time looking at?
Which gives us the best chance to improve learning?
Moving away from academic data: as school leaders, we are first and foremost the instructional leaders of our schools. We are responsible for assembling a team of capable educators who can adjust and adapt instruction to meet the needs of every level of student in our schools. But academics are not the only measure of a school's impact. Tom and I go around and around sometimes about standardized testing and the impact of No Child Left Behind and high-stakes, snapshot summative reporting. But I always go back to a Churchill quote – he said that democracy is the worst form of government, except for all the others. Well, in my opinion, end-of-year testing is the worst kind of school evaluation, besides all the others.
Internally, though, you can always compare yourself to yourself. And you can also, to a degree, compare yourself to other schools that are most like yours. The kind of data that will help you do that is non-academic data, and there are many important non-academic categories in which data can help us evaluate our schools.
First, let's consider demographic data. If you are trying to compare your school to any other, you should start by looking at demographic data. Think socioeconomic status, race, language, maybe gender. If you are seeking to compare academic data apples to apples, matching demographic data is a good place to start. One very relevant demographic trend to pay attention to right now is the growth of the Hispanic population. Right now, about 16-17% of the US population is Hispanic, but 25% of 5-year-olds are Hispanic. That indicates, depending on your local area, a major shift that will have to happen for the next generation of learners. Think about additional ESL or ELL services, or a change in content to appeal to a quarter of your class.
Another vitally important piece of non-academic data to investigate is attendance data at your school. It should not be surprising that there is a correlation between high performance and high attendance, but in charter schools, where many of us have yet to find the best way to offer busing, attendance is a real struggle – whether we are talking about tardiness because so many parents are dropping their kids off at the last minute, or about going over the state's 10-day absence limit. Some of the most difficult conversations I had as a school leader were about students who were chronically late or pulled out early. The convenience of bypassing the car line cannot take preference over a child's attendance in their last-period class! Some data I found recently says that 1 in 10 kindergartners misses at least a month of school! Another study showed that poor attendance in 8th grade is a better predictor of dropping out of high school than poor test scores. Attendance data is especially interesting because, like academic data, you have to evaluate the school as a whole body – finding the attendance rate and the tardy rate – but you also have to look at individuals: who has missed more than 10 days in the first quarter? How can we intervene now to right that ship?
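A first pass at the individual-level question, who has missed more than 10 days, can be sketched from a simple absence log. The student names and the log below are made up; in practice the records would come from your student information system.

```python
from collections import Counter

# Flag chronically absent students from a flat absence log, where each
# entry is one recorded absence. The 10-day threshold mirrors the state
# limit mentioned above; adjust it to fit your own policy.
ABSENCE_THRESHOLD = 10

def chronically_absent(absence_log: list[str],
                       threshold: int = ABSENCE_THRESHOLD) -> list[str]:
    """Return students whose absences exceed the threshold, most absences first."""
    counts = Counter(absence_log)
    flagged = [(student, n) for student, n in counts.items() if n > threshold]
    return [student for student, n in sorted(flagged, key=lambda pair: -pair[1])]

log = ["Jordan"] * 12 + ["Riley"] * 4 + ["Sam"] * 11
print(chronically_absent(log))  # ['Jordan', 'Sam']
```

Running something like this at the end of each quarter turns the whole-body rate into a short list of names you can actually intervene on.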
There are many other forms of non-academic data to collect –
For students: participation logs, journal entries, blog posts, teacher ratings, graduation rate, interest inventories, qualitative focus group data – and you can disaggregate any of those by demographic if you want to take it a step further.
For your teachers: any kind of staff survey – we supply a local survey if you want to use one, or there is the North Carolina Teacher Working Conditions Survey, which opens March 1st this year. How about teacher attendance and tardiness data? Super important for establishing professionalism and a professional culture. And staff observation logs – having a way to analyze whom you have observed formally and informally.
There is one more I want to talk about in detail: discipline data, because it can demonstrate both student action and teacher action or inaction. A keystone of a successful discipline system is the ability to compile data in one common place. Keeping data on every student's discipline history is vital for identifying students in need of additional behavior support, or even for having a concrete system to escalate consequences for repeat behaviors. On an individual level, having a conversation with a parent and being able to bring up data on the 23 office referrals a student has logged, seeing that 19 of them have been for aggressive behavior – that is a totally different conversation than saying "your son is always getting sent to the office." On a school-wide level, breaking down your discipline data by time, location, day of the week, or month of the year enables you to find holes in your monitoring system. I still remember the day I sat down with my leadership team with this data, after talking about how much of a mess our recess was and how it seemed like all of our office referrals were for fighting at recess. Then we looked at our data and found something surprising: there were far more incidents happening during our class transitions than at recess. With that data, we were clued into the fact that teachers were not standing at their doors and monitoring transitions. We brought this in front of our teachers, who had no choice but to acknowledge their role in the discipline gaps.
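That recess-versus-transitions discovery is exactly the kind of question a referral log with a location field can answer. Here is a minimal sketch with made-up records; real data would come from your discipline tracking system.

```python
from collections import Counter

# Tally office referrals by location to surface monitoring gaps.
# These referral records are invented for illustration only.
referrals = [
    {"student": "A", "location": "transition", "behavior": "fighting"},
    {"student": "B", "location": "recess",     "behavior": "fighting"},
    {"student": "C", "location": "transition", "behavior": "disruption"},
    {"student": "D", "location": "transition", "behavior": "fighting"},
    {"student": "E", "location": "classroom",  "behavior": "disruption"},
]

by_location = Counter(r["location"] for r in referrals)
print(by_location.most_common())
# [('transition', 3), ('recess', 1), ('classroom', 1)]
```

Swap `"location"` for `"behavior"`, the day of the week, or the period of the day, and the same one-line tally answers each of the school-wide questions above.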
This kind of environmental analysis, and the change that follows it, is why we have to look critically at our non-academic data. What patterns are out there? Who is more represented in these patterns? Why do we think these patterns have emerged? How can we confirm that this data is correct? These real, open, and honest conversations are what drive a school culture, and the more they are informed by data, the better.
So to wrap up this session on data, I would like to close with this: numbers can be deceiving. Data can be manipulated. I can tell you – and this is true – that a majority of charter schools, over 56%, meet or even exceed growth in their first year of operation. That's incredible, right? Their first year! They already have it figured out. Or we could look at the same data this way: an inconceivable 44%, almost half of charter schools, don't even meet the growth standards set by the state in their first year. What a disservice! It is almost as likely as not that a first-year charter school will be no better than your traditional district school. The point I'm trying to make is that data can be either the answer or the question. A score of 61% on a science unit assessment can be the answer – "Suzie knows 61% of the material we were tested on" – or it can be the question – "What is the 39% of material that Suzie did not master, and how did I, her teacher, formatively assess her as she moved through the material?" Data can be the answer: 58 out of 250 students have been late more than 10 times this year. Or it can be the question: what is happening during the first 30 minutes of our school day that nearly a quarter of our students are missing frequently?
I hope you see data as the question, not the answer.
This blog was written by Geoff Gorski, consultant at Leaders Building Leaders.