Mary Kadera

Get the lead out

10/20/2024

 
Imagine you live in a small, fictional country. Your government cares about health and the environment, and it's recently issued new regulations about lead contamination. 

Lead, as you likely know, can cause all kinds of human health problems--from headaches to high blood pressure, memory problems, and miscarriage. In young children, lead poisoning causes developmental delays, learning difficulties, seizures, and more.

So, your government is going to require every neighborhood and community to reduce the level of lead in the water, in the soil, in buildings and in the air. Lead levels will be measured through a variety of tests, starting this year, and in 12 months each neighborhood and community will be publicly rated on how much lead is present. Communities with the highest lead levels will have to enter into a special agreement with the government and cede local control of environmental services, waste management, water treatment, and parks.

What kind of tests will the government use? Well, they haven't developed all of them yet. But by this time next year, they'll be in force and your rating will be published. 

What kinds of funding or assistance will the government provide to help communities reduce lead pollution? Well, they haven't worked that part out yet.

If your community loses some of its local control, what exactly will that mean and how long will it last? Well, the special agreements haven't been drafted yet.

But you'd better get started with that clean up, because it's going into effect one year from now.

Maybe you live in a wealthy neighborhood. Individually and collectively, you can pay for companies to come in and work on the problem. You don't want your neighborhood to earn a failing grade, right? What would that do to home values?

But maybe you live in a neighborhood that's less well-to-do. The houses are older and smaller, and many likely still contain lead-based paint. You live closer to the municipal waste incinerator, which for many years contaminated the soil, air, and water nearby. You're going to start with more lead pollution, and your neighborhood has less money to fund its own cleanup effort.



Of course you don't live in a small, fictional country, and nobody is testing your yard or your blood for lead levels. But something very similar is happening in the state of Virginia to our public schools.

Recently, the Virginia Board of Education approved a new School Performance and Support Framework to evaluate how well schools are meeting academic expectations. The new Framework replaces the current accreditation system, which critics have charged is not rigorous or transparent enough. The Framework will rate each school as Distinguished, On Track, Off Track, or Needs Intensive Support. The Department of Education will publish these ratings next fall. School divisions with a certain percentage of schools rated "Needs Intensive Support" will be required to enter into a Memorandum of Understanding (MOU) with the state.

Whether or not you believe we need a more rigorous accountability system, you've got to admit (I hope) that we need one that's fully built. 

What assessments will the state use to gauge academic performance? Well, some of them exist today but some of them haven't been created yet. Even the most familiar, the Standards of Learning, will be changing this year as new standards for Math and English Language Arts roll out. (It's worth noting that Virginia is introducing the new math standards and the new math SOL test in the same school year; normally, teachers get a year to familiarize themselves with new standards, and the related test rolls out the following year.)

What funding and resources will the state provide to schools that are rated Off Track or Needs Intensive Support? We don't know yet.

What will the Department of Education put in the MOU it will require for the lowest performing schools and districts? No one has seen it yet. 

It's a little like not knowing if you're actually going to earn your college degree because the university hasn't figured out its degree requirements--even though you're starting your senior year and you have less than 12 months before you're supposed to graduate. 

Or like the lead testing in my imaginary country. Similar to the less affluent neighborhoods in my example, there are schools and school divisions in the Commonwealth that will be disproportionately affected by the new academic accountability system. 

Under the new system, students who are English Learners will "count" towards a school's overall accountability rating after just three semesters. Federal law requires that school divisions test all EL students after three semesters, but until now in Virginia we've used those tests to check for growth of content knowledge, not mastery of the content. After just three semesters, an EL student's science SOL score may say more about the student's understanding of English than their understanding of the science content being assessed. For this reason, many states that count EL scores after three semesters will test students in their native language; Virginia does not allow this.*

In APS, 27% of our students are English Learners. The state average is 9%. In several of our schools, EL students comprise 50% or more of the student body (Carlin Springs and Randolph are highest at 75% and 59%, respectively). What will those schools' ratings look like when compared to schools like Jamestown (2% EL) or Discovery (4% EL)? EL students and families are also often economically disadvantaged. The extra supports that more affluent families tap into, like private tutoring, disability screening, occupational and speech therapies, and more, are beyond the reach of many immigrant families, which exacerbates the gaps across and within school communities.

In Arlington, most of our school funding is locally generated, with the state kicking in just 14% of the school division's revenue. In other, less affluent counties in the Commonwealth, the state contributes two-thirds or more of the school division's operating funds. 

I imagine that if the state does not appropriate additional, dedicated funds to support schools labelled Off Track or Needs Intensive Support, school divisions like Arlington will figure out another way to finance the extra staff and resources we'd need for these schools. But what if you are Lee County in Southwest Virginia, and 70% of your education funding comes from the state? Or the City of Petersburg, which is 63% funded by the state? (It's worth noting that Petersburg schools have been operating under an MOU with the Virginia Department of Education since 2004. The state's involvement for 20 years has not yielded significant improvement.) 

I'm all for accountability (just like I'm all for lead-free neighborhoods). There are parts of the new school accountability framework that I believe could be beneficial. And in our country, state, and community, we do need to act with greater urgency to close chronic opportunity and achievement gaps that can have lifelong consequences for English Learners, students with disabilities, and our Black and Brown students. 

But "getting the lead out", if the solution is not crafted with care, is in the best case blundering and ineffective. In the worst case, it will do real damage to those we ultimately set out to serve.



*The issue of when and how to test English Learners on subject area content knowledge is complex, and (at least in my study of it) does not lend itself to blanket guidelines that dictate how "all" EL students should be assessed. Variables include the language in which instruction was delivered; whether a student is fully literate in their native language, which is not the case for many students whose formal education was limited or interrupted before coming to the US; a student's English language proficiency (e.g., are they at WIDA Level 1 or WIDA Level 3?); and more. It's also not clear to me that a blanket "number of semesters" rule makes sense. Three semesters may well be too soon for some EL students' scores to "count" as a measure of how well a school is equipping students with content knowledge. Eleven semesters (or five and a half years) feels to me like it would be too long in many cases.

I note in WestEd's 2019 independent evaluation of APS's English Learner program that a large percentage (44%) of middle school English Learners had been classified as English Learners for five or more years, and a large percentage (40%) had been English Learners since Kindergarten. Since that time, of course, APS has made significant changes and investments in English Learner instruction as part of its 2019 settlement agreement with the Department of Justice. 

We are all investors

5/10/2024

 
It was time for a new elementary math curriculum in Traverse City, Michigan, and the school district decided to take a pretty unconventional approach to making its selection.

This is no small matter, as purchasing curricula (including print or digital textbooks, workbooks, and other components) can cost millions of dollars, and districts typically only make this investment every five to ten years.

In Traverse City, the curriculum adoption committee had narrowed it down to three new curricula, each of which was backed by research. But here’s where things get interesting: the district then decided to run a year-long pilot study of all three curricula and include a control group of students who would continue to use the existing materials.  Principals, teachers, and district leaders ran the pilot together.

At the end of the year, they found that only two of the curricula produced statistically significant improvements. They could then compare the financial costs of the two products, and they had practical wisdom from teachers who had implemented each product in the classroom to inform the decision about which product to select, what components to purchase, and how to roll it out to the rest of the district.

The associate superintendent overseeing math instruction called it “the best experience of my career.” One school board member shared, “For the first time in my board tenure, I feel that decisions have been rooted in objective information.”

This is one example of Academic Return On Investment (A-ROI), a collection of practices that many school districts are adopting to make more strategic decisions about how to invest their funds and how to evaluate the impact of their programs.

The ABCs of A-ROI

I’ve been learning about A-ROI from sources including the Government Finance Officers Association, the District Management Group, and Education Resource Strategies.

The question at the heart of A-ROI is: What does the most good, for whom, and at what cost?

Districts are using A-ROI to adopt new programs and initiatives, like in the math curriculum example shared above. Often, they run limited pilots before they scale implementation across a whole district.

Districts also use A-ROI to evaluate the return on investments they’ve already made, ensuring that existing initiatives are worth the time, money, and effort being expended.

Because staffing comprises the largest part of any school district’s budget, it’s important to capture the amount of staff time a particular program or initiative requires, as part of its overall cost; this is challenging, but not impossible, to do. There are formulas, tools, and templates available from districts that have already begun this journey.

That said, because A-ROI analysis is labor-intensive, districts can’t analyze everything. Often, they choose to focus on the programs that consume the most resources, or on areas where they’ve identified that a number of programs overlap and there might be redundancy.

The “Ugly Christmas Tree” in Boulder Valley, Colorado

A few years ago, Boulder Valley School District was struggling with the same problem that a lot of school districts face: in a well-intentioned effort to support as many students as possible, it had layered one initiative on top of another, creating what one former district leader calls “the ugly Christmas tree” effect: “too many decorations that, while individually well-intended, don’t work well together and weigh down the very thing they were intended to support.”

An initiative inventory confirmed some suspicions: school staff were trying to implement 251 initiatives from 28 teams across nine departments in the central office. Over the next six months, the district worked to glean as much information as it could about
  • the students served by each program
  • its known outcomes
  • its fully loaded costs, including allocation of staff time
  • and its connection to other efforts.
In parallel, district leaders surveyed school principals to gauge the perceived value of each program, and for which students. They also asked principals about the implementation status of each program, and whether additional support was needed to implement it effectively.

This didn’t instantly fix the problem—but it gave Boulder Valley a good place to start. The district is using this inventory to create a roadmap for when and how it will conduct more thorough analyses of specific initiatives as a regular part of its ongoing operation.

Five Tips I’ve Learned From Districts Who’ve Done It

1. Be clear at the outset about what “success” looks like. When a new initiative is proposed, specify the outcomes that will be measured, by whom, and when. Make sure everyone knows what data would be considered proof of success later on.

2. Combine evidence-based decision making with cost-benefit analysis. Evidence-based decision making says “Wow! This program delivers great results!” Cost-benefit analysis says, “Yeah, but it costs sixty gazillion dollars per student. What if we could get 70% of that same benefit with a program that costs a little less, and allows us to work on this other instructional need, too?”

3. Don’t be afraid of pilot tests. I’ve said in a school board meeting, “We can’t run pilot tests” and here is where I eat my words. We can and probably should. It’s the best way to reduce the risk of a district spending too much of its money and students’ and teachers’ time on an intervention that doesn’t work.

4. Beware the sunk cost fallacy. Only the likely future benefits and costs of a program—not the sunk costs—should be considered when making a decision on whether to invest in a program going forward. I think of this as the “bad boyfriend” cognitive bias. Yeah, you’ve been with him for four years. You’ve invested a lot of effort. But girl, it’s still time to go.

5. Don’t make it just about cutting costs. A-ROI might yield budget savings, but it’s ultimately about making the best instructional decisions for the district’s students. As such, it’s a process a district should run completely separately from its budget development cycle. Some districts also adopt formal policies stating that no employee will lose their job as a result of the findings of an A-ROI analysis. That analysis can lead to any one of the following results:
  • Wow. Great value. Let’s expand this program.
  • Delivering really well for some student populations: let’s use it in more targeted ways.
  • Results aren’t clear: let’s continue to monitor for X period of time.
  • We’ve uncovered this flaw: let’s fix that flaw and reevaluate in X period of time.
  • Let’s abandon this program.
 
In my work as a school board member, I can appreciate how intensive A-ROI would be to implement. But I am triply certain that A-ROI or a discipline very much like it is absolutely essential. We have to exercise this discipline if we are to be good stewards of taxpayer dollars, if we want to avoid overburdening educators with low- or no-value initiatives, and—most important—if we are really committed to providing the best education to our communities’ youngest citizens.

Like. A. Boss.

6/30/2023

 
Jamestown, Randolph, Carlin Springs, and Claremont. At these four APS elementary schools, something really interesting is happening: for two years in a row, these schools have produced double-digit growth in reading.*

APS uses a tool called DIBELS to measure the development of early literacy and early reading skills at the beginning, middle and end of the school year. I got curious about this question: what percentage of K-5 students are in the DIBELS “green zone” of proficiency in September, and how many more (or fewer) are testing at that level by the end of the year?

In eight of our 25 elementary schools, the number of students at or above proficiency increased by at least 10%. And at Jamestown, Randolph, Carlin Springs and Claremont, that happened two years in a row.

These schools had quite different journeys: one of them started with just 28% of its students in the green zone in September 2021, while another was at 73%. One of them is a Spanish immersion school while another is a neighborhood school. One is part of the International Baccalaureate program and another operates with a community school model. At one school 72% of the students are Latino, and at another they comprise only 8% of the student population.

So what do these schools have in common—is there a “secret sauce” that’s driving this growth? In APS as in other districts, school and division leaders look at a number of contributing factors, including what kinds of curricula and instructional practices teachers are using; having a highly skilled and high-performing school staff; access to resources that meet the unique needs of that school’s population; and more. It’s not always easy to identify with absolute certainty what’s working, but it’s really important to try, so that those factors can be sustained and potentially spread to other schools.

Some other things I noticed in the DIBELS reading data:

1.  We have a few schools that experienced significantly more growth in 2022-23 than they did the previous year. What changed? (These schools are Ashlawn, Drew, and Escuela Key. Nice work!)

2.  We have a few schools that showed impressive gains for students with disabilities, while at other schools the percentage of that population in the green zone stayed flat or even decreased.**  This makes me wonder three things:
  • whether there are staffing challenges at certain schools (nationwide, school districts are having trouble recruiting and retaining special education teachers)
  • how to navigate the nuance in special education reading data (for example, some students receive special education services and are also English learners. One student could have an IEP, be an English learner and be identified as gifted.) and
  • how to factor in the fact that some families can afford supplemental private services.

3.  Ten schools showed double-digit growth in the number of Latino students reading proficiently last year, led by impressive 15% growth at Drew. The year before, the growth in proficiency for Latino students at Drew was just 4%. IMO, we should be congratulating the Drew community and asking, “How did you do that?”

4.  Ditto for English learners at Drew: an impressive 16% growth in proficiency over last school year.

5.  Ditto for the 12% growth in the number of Black students proficient in reading at Drew. Oakridge was a close second with 11% growth.


Some of you will take issue with my focus on growth measures. A hypothetical example: 20% growth in a single year may mean that just 10% of students in School XYZ were reading proficiently in September, and by June only 30% are doing so. 30% is still not where we want to be.

I appreciate this concern and wholeheartedly agree that our work isn’t done until 100% of our students at every school are reading proficiently. But for a variety of reasons, some of them not directly under a school’s control, that journey is longer for some school communities than others. (Consider, for example, that at some sites about 30% of kindergarteners start school with entry-level proficiency on DIBELS, while at other schools more than 70% do so.)

This is why school- and student-level growth measures are so important. They honor the hard work that is happening in certain schools even if they’re further from the finish line. Our commitment should be to support and accelerate that growth as much as possible.



*More precisely: APS uses a research-based tool called DIBELS to measure the development of early literacy and early reading skills. DIBELS yields really important data about individual students’ skills, but it can also show us what’s happening at the school level: for example, in this dashboard showing the percentage of students performing at different levels. For simplicity’s sake, the stoplight colors are useful: green generally equates to “proficient” and “above proficient.” Yellow and red are more concerning.


**Note: I am not a statistician but I do understand the general idea of statistical validity. My nod to this concept is: I did not analyze subpopulations that comprised less than 10% of the overall school population.

when graduates can't read

3/7/2023

 
It was late spring and [the new principal] was just getting settled into his office, when in walked a father and his son who had graduated the week before. The father took a newspaper off the desk and gave it to his son, asking him to read it. After a few minutes of silence, the young man looked up with tears in his eyes. “Dad, you know I don’t know how to read.”

The reality for many of our graduates is that they soon find out they didn’t get what they needed. Some of the kids fall into deep despair when they realize they have been betrayed. They were told that they are ready, but they’re not.
- Lindsay Unified School District Superintendent Tom Rooney

In America, nearly one in five graduating seniors (19%) leaves school with only marginal reading ability. Despite decades of investment in reading research, curricula, and teacher education catalyzed by the No Child Left Behind Act in 2001, we haven’t made much progress.

Here’s what test data from the National Assessment of Educational Progress (NAEP) show:

[Image: NAEP reading test data]
In Arlington, there are two ways we measure high school reading ability. First, juniors take a state-required Reading SOL test at the end of 11th grade. Second, APS just began using the HMH Growth Measure, which students in grades 3-12 will take three times each year.

In APS, 97% of white students passed the 11th grade Reading SOL test last year. The pass rate for Black and Hispanic students was about 20 percentage points lower, and nearly 30 percentage points lower for students with disabilities (69% pass rate). Less than half of our English learners (45%) passed this test.

The HMH Growth Measure categorizes performance as Far Below Level, Below Level, Approaching Level, On Level and Above Level. Looking just at “Far Below Level” and “Below Level,” here’s what the most recent testing reveals:

[Image: HMH Growth Measure results by level]

What does research tell us about the problem?

I spent some time earlier this year reading the research and talking with two local experts: Dr. Olivia Williams, a postdoctoral researcher at the University of Maryland and adjunct professor with the Goucher Prison Education Partnership Program; and Dr. Carrie Simkin, a UVA professor and the director of AdLit.org. Here’s what I’m learning from my reading and my conversations:

1. There’s not a lot of research on high school students who struggle with reading.

In the decades since the National Reading Panel released its report, researchers have published thousands of studies on reading. Yet when I looked online, I could find very few that looked specifically at students in grades 9-12.

I found only one literature review focused on high school reading. Olivia Williams, the author of the review, searched for studies that a) examined interventions conducted in or after 2002; b) measured reading performance both before and after the intervention; and c) studied native English-speaking, general education high school struggling readers. Only 26 studies met her criteria.

Williams notes that even this small group of studies lacked consistent terminology. What does “comprehension” mean? How are we defining “struggling reader”? In the studies she reviewed, “struggling reader” meant everything from being at least one grade level behind to failing an 8th grade state assessment to being at least five years below grade level. “There’s a difference between kids who are significantly behind and those who are just a couple of years behind,” comments Carrie Simkin. “The approaches have to be different.”


2. There’s a disconnect around phonics.
It’s commonly believed that students have mastered phonics by the time they get to high school unless they have specific diagnosed learning disabilities. There’s some research, however, that suggests this might not be true.

As Williams recounts in her review, a 2015 study of reading comprehension among 9th grade struggling readers showed no effect until the researchers looked separately at students with high- and low-level decoding skills. Doing so revealed that the students with higher-level decoding abilities did in fact make statistically significant gains in comprehension (Solis et al., 2015). This is complicated, however: Williams comments that because publishers usually design phonics materials for younger students, their use with teens can be “stigmatizing.”

I asked Carrie Simkin whether high school struggling readers are really students with learning disabilities that have gone undiagnosed. Simkin concedes this is a reasonable explanation for some students, but not all. “Maybe,” she counters, “they have an instructional disability. Our first impulse is always to look at the student, when maybe that student just didn’t get great instruction.”


3. Sustained support may be needed.
Some research suggests that short-term interventions may not be particularly effective. For example, in one study, researchers evaluated the effects of two different reading programs during an intervention year and the year that followed. There were gains in GPAs, grades and performance on state exams during the intervention year—but the benefits disappeared in the year that followed (Somers et al., 2010). Olivia Williams notes that we can’t be sure whether this points to flaws in the interventions themselves, or whether it says something about the need to work with struggling readers over multiple years; more research is needed.


4. Executive functioning plays a role.

Recent research (not specific to high school students) demonstrates how executive functioning skills contribute to success in reading. These skills include cognitive flexibility (shifting), maintaining attention, using working memory, planning ahead, controlling impulses, and more (for a recent literature review, see Duke & Cartwright, 2021).

In one study, researchers looked at students who had poor reading comprehension despite adequate word recognition ability. The study revealed that a third of the students (36.8%) showed weaknesses in executive function but not in their component reading skills, like receptive vocabulary. In other words: for a third of the students in the study, weaknesses in executive functioning appeared to be the primary cause for their reading difficulty (Cutting & Scarborough, 2012).  

This suggests that what some students may need are interventions focused on executive functioning and the root causes of executive functioning delays or impairment, which include things like trauma, autism spectrum disorder, ADHD and depression.


5. Identity is important.

Only one of the 26 studies Olivia Williams examined factored in the social and emotional needs of high school students who struggle with reading (Frankel et al., 2015). Williams writes: “The repeated experience of failure [by the time students reach high school] takes an emotional toll… Noncognitive aspects of academic development are important at all ages, but especially so in high school, where students with a history of ‘failure’ may struggle with self-efficacy, motivation and engagement.”

What’s more, only one study among the 26 presented findings within the context of race, gender and class (Vaughn et al., 2015).  Williams writes: “Since we know that struggling readers are disproportionately minority, male and poor, it is worth exploring whether different reading interventions are more or less effective with these groups and whether the origins of their struggles demand different remedial attention.”

This is not just about the need to develop culturally-responsive curricula and interventions: it’s about the student’s overall experience of school. Students who are on the receiving end of any kind of “ism” in their school environment (racism, ableism, misogyny, homophobia, religious intolerance, and/or others) are less likely to be comfortable taking risks academically and more likely to be focused on shielding themselves from bias and aggression.



What can we do?

Based on my understanding of the research, here are some of my takeaways:

1. Avoid “one-size-fits-all” fixes. Carrie Simkin counsels, “Struggling readers shouldn’t be lumped together in a single, catch-all remedial class. Instead, through assessment, we can discern what kinds of support students need and, to the extent possible, treat them as individuals.” This might involve a combination of approaches, including special reading classes, tutoring services, virtual programs that students could take advantage of at home or in advisory periods, after school programs and more. Olivia Williams adds, “You have to know what’s available and have the time to plan and differentiate across reading levels.”


2. Look for authentic opportunities in core content classes. With proper support, every high school teacher can help strengthen students’ reading skills. This is particularly true when we think about “disciplinary literacy” and “academic literacy.” Disciplinary literacy involves developing the ways of thinking and communicating that are specific to a particular discipline. Academic literacy involves acquiring the skills needed to read, comprehend and learn from texts dealing with particular subjects (e.g., medical information; financial analyses).

Beyond academic and disciplinary literacy, core content teachers can help strengthen students’ general reading skills, but it has to be done with care. Carrie Simkin shares that we need to focus on “authentic opportunities” to do this—for example, decoding unfamiliar words in a biology class and understanding them by breaking them down into component parts (e.g., omnivore, ectotherm). Simkin suggests, “A literacy coach in a school can help a math or science teacher do this.”

In core classes, we can also apply universal design principles, normalizing audio access to content (and required subject-area assessments) for all students. This accommodation would ensure that students who are struggling readers can access the core content they need to know in an age-appropriate way while they are working in other settings to build their reading proficiency.


3. Build a bigger library. In reading intervention classes, teachers should use a wide range of texts that reflect student interests: it’s “literally anything they care about,” says Olivia Williams. Carrie Simkin adds, “Trust teachers’ professional judgment to curate resources. Give them time to know students and make personal connections.”

This speaks to the essential ingredient of student motivation. Hailey Love, a professor at the University of Wisconsin-Madison, says, “Often when children are perceived as being behind, they’re subject to practices that are actually found to decrease motivation.”


4. Set goals with students. In reading classes, teachers should establish daily purposes for instruction that connect to week-long, month-long and year-long goals created collaboratively with students. Carrie Simkin advises, “We have to ask kids, ‘What’s your goal?’ You have to give a purpose to everything, and kids have to buy into that.”


5. Let students lead. Add in opportunities for peer-mediated group discussions of texts, invite students to generate their own questions, and create other opportunities for students to play meaningful roles in classroom activities. Research supports the positive impact these practices can have (Vaughn et al., 2015; Balfanz et al., 2004; Frey & Fisher, 2014).


6. Recruit and retain exceptional reading teachers. In one study, comprehension gains from the same intervention were twice as high in classes taught by the most effective teachers (Balfanz et al., 2004). Olivia Williams writes, “The success of a reading intervention may not lie exclusively in the strength of the intervention materials or process alone, but may also depend upon a number of outside, less-tangible factors like a teacher’s ability to maintain engagement.”


7. Design special programs that offer struggling readers unique opportunities.  “If placement in remedial reading classes is a tangible reminder of the label of deficiency and serves as an affront to identity,” Olivia Williams observes, “then students may understandably choose to disengage with remedial strategies.”

When we spoke, Williams described with enthusiasm one program in which ninth grade struggling readers tutored second- and third-grade readers who were experiencing their own reading challenges. The ninth graders, who initially reacted with “anger and outrage at being categorized as remedial,” grew to view the experience as a privilege. They practiced with the children’s books they used in the program—thus bolstering their own reading skills—so that they would be prepared to work well with their tutees. The ninth graders scored an average of two grade levels higher on the program post-test and reported higher levels of motivation and attachment to reading (Paterson & Elliott, 2006).

This example reminded me of my own million-years-ago time as a high school English teacher, working with one class of students who were reading below grade level (in some cases, far below grade level). At the time, I wondered what experience I could create for them that would really engage them and reinforce their sense of themselves as worthy and capable. I ended up asking them if they’d like to create an online magazine full of their own writing and artwork—something no other students in the school were doing. They wrote, read each other’s work, peer reviewed, rewrote, created companion artwork, and then hand-coded HTML to create the magazine’s website pages. We invited friends and family members to an after-school launch party (because it was 1998 and not many of them had computers and internet access at home). The experience was painful for me as a teacher (dial-up modems, only sporadic access to the school’s computer lab…) but highly motivating for my reluctant readers and writers.
Fast-forward 25 years and I’m now a school board member in Arlington. I wonder what we could design, today, that would feel like a privilege and not a punishment for our high school students who are struggling readers.

For example, could we pair reading instruction with corporate/community job shadowing and paid internship experiences? Students would be incentivized to improve their general, academic and disciplinary literacy if they had the opportunity to spend part of their time in “real world” settings where the relevance of their reading skills was immediately evident. Are there summer learning experiences we could offer that look radically different from traditional summer school, combining environmental study (involving reading) with outdoor activities? How could we engage students in the social justice work that so many of them care about and layer in reading instruction?

I’m interested in this subject because it’s creative work—but also because it’s a moral imperative. In one of the articles I read, former NPR reporter Claudio Sanchez recalled visiting a public high school in Tennessee. There, a vice principal told him, “Having a high school diploma does not mean that you can read and write.”

​The United Nations, together with most of the humans on the planet, considers literacy to be a fundamental human right. It’s the very heart of public education. We must do more—and do better.


Think you know how to be inclusive? I thought I did, too.

1/31/2023

 
Last Saturday I went to Baltimore to attend an education conference and hear a talk by Shelley Moore, a Canadian educator, researcher and storyteller.

Shelley asked us to define for ourselves each of these terms:
  • Exclusion
  • Segregation
  • Inclusion
  • Integration

​Next, she showed us this slide:
​
[Slide: four circle diagrams labeled A, B, C and D]
What do you think? Which one of A, B, C or D represents inclusion? Which one shows integration? How about exclusion and segregation?

(You think about it for a minute while I eat a quick snack. :) Then scroll down a little for the Big Reveal.)

[Slide: the same four diagrams, now matched to exclusion, segregation, integration and inclusion]
What’s the difference between “exclusion” and “segregation”? According to Shelley, exclusion is when the people inside the circle decide that individuals can’t be part of their community. Segregation happens when the people inside the circle decide that a particular group (or groups) don’t belong.

Shelley distinguishes between “integration” and “inclusion” in this way: integration happens when someone decides that it’s a good idea for those outside the circle to be brought in—but it’s often not by their own choice. She says it’s like a mandatory all-staff meeting: you know you have to attend, but when you get to the meeting you’ll likely sit next to your closest co-workers and you may not be all that interested in the updates from other teams or departments (particularly if you’re thinking, “This meeting could have been an email!”).

FYI, this tendency to prefer the company of your own group is perfectly natural, and at times necessary and comforting: Shelley calls it “congregation” when we are birds of a feather flocking together. (As a side question, Shelley asks: do our schools offer spaces and opportunities for congregation?)

Inclusion is different from integration because instead of thinking “I have to,” we think “I want to.”  That’s why the community in Shelley’s top circle looks different from the one on the lower right.


Except… after she’d shared this slide dozens if not hundreds of times, one of Shelley’s graduate students told her, “Shelley, I don’t think that this diagram [the top circle] is inclusion either.” And once her student pointed out a few things, Shelley realized the student was absolutely right.

Can you figure out why? There’s more than one change Shelley made; I’ll share them in Part Two next week.

The sorting hat successor

12/31/2022

 

Over the winter break, I’ve been thinking a lot about our APS students who are ready for advanced work. These students have been on my mind because of a recent report-out from the Gifted Services Advisory Committee and the recommendation currently under consideration by the School Board to expand intensified course offerings in middle school.

During my campaign and in my first year as a board member, I’ve talked a lot about every student getting the right level of support and challenge. This includes students who are testing and performing above grade level: they deserve their year's worth of academic growth, too, and to argue otherwise would mean accepting the idea that public education can only serve certain kinds of kids. I don’t believe any of us are well-served by a scarcity mindset.

So, how do we educate students who are ready for advanced work? (Note that I use the term “capable of advanced work” instead of “gifted” intentionally; these are separate but often related groups.) In broad strokes, the approaches have included separate magnet schools; acceleration by skipping grades or particular subjects; separate classes within a school; ability grouping within a general ed classroom; and personalized instruction. It’s a question with a complicated history and no perfect solution (yet).

I was one of these students and experienced all of the approaches mentioned above. I’m the parent of a student who craves more challenge and has on more than one occasion pleaded to be homeschooled or attend private school. And I’m a former teacher.

​
In 1992, I was a first year high school teacher and in my school system, like most across the country, tracking was accepted practice.

​“Tracking” was the pre-Harry Potter version of the Sorting Hat. Teachers and guidance counselors determined whether a student should be sorted into a vocational track, a college track, or honors-level coursework.

In my first year I taught two sections of “Tech Prep 10” and three sections of “College Prep 9” English. The Tech Prep English curriculum was very different and emphasized the kinds of real-world reading and writing tasks that students going straight into the workforce would be most likely to perform: interpreting lease agreements and employment contracts; filling out applications for jobs and bank accounts; writing resumes and cover letters.

Leaders in our school system launched Tech Prep with good intentions: the idea was to make the curriculum more relevant to students’ lives after high school. The problem, of course, was that the adults in charge got to determine each student’s life trajectory before they’d turned 14, and that often these decisions were colored by implicit (or sometimes explicit) bias.

In the 1980s and ‘90s, groups like the National Governors Association, the NAACP Legal Defense Fund and the Children’s Defense Fund rallied to end tracking, correctly arguing that it perpetuated racial and economic inequity by setting up segregated school experiences within single school buildings.

Mixed-ability classrooms then became the norm. Teachers were tasked with meeting a wider range of student interests, abilities and needs, as had been the case decades before in the days of one-room, mixed-age schoolhouses. In modern mixed-ability classrooms, “differentiated instruction” (which had always been a part of teaching, even in the days of tracking) became even more important.
​
In her book The Differentiated Classroom: Responding to the Needs of All Learners, Carol Ann Tomlinson writes that teachers who excel at differentiated instruction
“do not force-fit learners into a standard mold; these teachers are students of their students. They are diagnosticians, prescribing the best possible instruction based on both their content knowledge and their emerging understanding of students' progress in mastering critical content.

They do not aspire to standardized, mass-produced lessons because they recognize that students are individuals and require a personal fit. Their goal is student learning and satisfaction in learning, not curriculum coverage.”
​There’s a whole body of literature describing the strategies that teachers can use to do this well; three overriding considerations are training, class size and time.

I mention training because most often, teachers themselves weren’t taught this way. In their undergraduate schools of education, professors may have talked about differentiated instruction, but they weren’t modeling it in a large lecture hall. And once they’ve started teaching, educators’ ongoing professional learning is all too often a one-size-fits-all affair.

Class size is a factor because it’s harder to be a “student of your students” when your average high school class size is 29 (California) versus 15-16 (Maine, Vermont, New Hampshire).

And last but not least, teachers need the time to design differentiated learning experiences and continually assess student progress. But in reality, teachers’ time to plan and collaborate with colleagues on this most essential task is often insufficient, because there are too many other competing demands.


Perhaps due to the challenges cited above, or the top-down pressures created by federally-mandated school accountability and accreditation measures, ability grouping is again on the rise, though in different forms than 20th century tracking. These new forms of ability grouping are more flexible and (ideally) give students and families more say--but they still draw criticism. The debate about how to meet the full range of student needs continues.

Short of a serious overhaul of our Industrial-era public education system (which I’ve written about before, here and here), we need to continually question our assumptions and fine-tune our practices. Former school principal and author Peter DeWitt says it well:
“For some teachers [here I would say “schools” or “districts”], ability grouping is working, or at least they say it is. My suggestion is to prove it. Provide the evidence to show that students are making at least a year’s growth in a year’s time, and that they are actually engaged in learning that they want to get back to each and every day.

Prove that they are not being held back from learning even more than they could because they are in an ability group that may stifle learning. Provide evidence that ability grouping fosters the growth mindset that we so often talk about.

The same can be said for mixed ability grouping. Are we accelerating students through learning based on their own understanding, or are we merely creating a fixed situation even though the students are mixed? Do we have a 1-2 combination where we are making all of the students do the same thing?

As a former school leader I am less concerned by which method teachers are using, and more concerned with the evidence they have to prove that it’s working. If students are being challenged academically at the same time they are being supported socially-emotionally, then I would be happy with either method."

Are you ready for the end of average?

9/20/2022

 
You’re 21 years old, married to your high school girlfriend and already a father to two young boys. You dropped out of your high school in small-town Utah midway through your senior year because your principal told you and your parents there was no way you would graduate with a 0.9 GPA. You never really enjoyed or felt successful at school.

To support your family, you’ve worked nearly a dozen minimum-wage jobs and you rely on welfare checks to help keep your kids clothed, housed and fed. Your latest job? Administering enemas to residents in a nursing facility, a job you took because it pays $1 more per hour.

What’s going to happen to you, your wife, your kids?

If you’re Todd Rose, whose story this is, here’s what happens.

Your dad persuades you to get your GED. Your parents and in-laws scrape together money to help you enroll in night classes at the local college. Eventually, you graduate pre-med, earn your doctorate at Harvard, and become a Harvard professor.

At Harvard, Rose founded the Laboratory for the Science of Individuality. In 2016, he combined his personal story and his research in The End of Average--a book that rocked my world. (And no, that’s not hyperbole.) It’s changed the way I think about education.

​Rose opens the book with a problem that puzzled the U.S. Air Force in the 1950s: multiple, mysterious accidents that could not be explained by pilot error or mechanical malfunction in the aircraft.

They eventually discovered the cause: the cockpits had been designed using the average range of 10 body measurements from a population of approximately 4,000 pilots (e.g. height, thigh circumference, arm length, etc.). But zero pilots were “average” across all ten measurements. If a cockpit was designed for an average pilot, the cockpit fit no pilot. So the Air Force banned the average and forced jet manufacturers to design “to the edges,” meaning a cockpit that would be adjustable for even the tallest, shortest, thickest and thinnest.

What does this mean for education?

Think of a classroom or school designed for “the average.” It would likely feature
  • One style and size of student desks
  • Lots of whole-group instruction
  • One way for students to demonstrate what they know—e.g., a multiple-choice end-of-unit test that every student must take
  • Seat time: a standard number of hours all students must log to get a class credit.
  • One-dimensional, high-level reporting against an average: “I am a B student in math because I am above average.”
  • A binary view of giftedness: you’re gifted, or you’re not
  • Rigid tracking systems where students are sorted based on performance relative to an average (that is, you are “honors track” or “remedial track” in most or all of your classes)
  • Standard operating procedures: all students are expected to eat at an assigned table in the cafeteria, walk silently in a straight line, and take notes in a certain way.

​This was Todd Rose’s K-12 school experience (and maybe yours, too). It wasn’t until college, when he discovered an honors program built around inquiry and the Socratic Method, that he felt inspired and challenged. Rose says, “I gradually realized that if I could just figure out how to improve the fit between my environment and myself, I might be able to turn my life around.”

In The End of Average, Rose explores the ways that none of us is really “average.” Instead, he argues, each of us has “jaggedness”— a unique set of strengths and weaknesses that all too often get obscured when we use overly simplistic, one-dimensional measurements.

Here's an example. Which man is bigger?
[Image: Todd Rose’s “Bigger Man” graphic]

​Here's another example: Which 9th grade English student is smarter?
[Image: the Jagged Learning Profile graphic comparing two students]

​Rose says, “If we want to know your intelligence, we give you an IQ test that is supposed to tap a range of abilities, but then we merge that into a single score. Imagine two young students have the same IQ score of 110 — the exact same number. One has great spatial abilities but poor working memory, and the other has the exact opposite jaggedness. If we just want to rank them then we could say the students are more or less the same in intelligence because they have the same aggregate scores. But if we wanted to really understand who they are as individuals enough to nurture their potential, we can’t ignore the jaggedness.”

"Right now because we believe in the myth of average, we believe that opportunity means providing equal access to standardized educational experiences,” Rose says in a Harvard interview. “However, since we know that nobody is actually average, it is obvious that equal access to standardized experiences is not nearly enough… it requires equal fit between individuals and their educational environments.”

What would a school or classroom committed to equal fit include?
  • Flexible seating
  • Dynamic grouping of students based on the level of support they need to master a particular skill or topic during that day/week/month
  • Multiple ways for students to demonstrate what they know
  • Multiple styles of instruction: project-based learning; workplace apprenticeships; virtual learning; etc.
  • Self-assessment and reflection: helping students understand their own strengths and weaknesses
  • Multiple categories of giftedness
  • More nuanced assessment and reporting. Instead of “I am a B student in math because I am above average,” a student could say, “I worked on these six math standards this quarter and here’s information about how well I understand each one.”
  • Flexible pacing: students can take the time they need to master a particular concept or skill. As soon as they’re ready for something more challenging, they move on.
  • Advancement based on competency instead of seat time.


School doesn’t have to feel like a 1950s Air Force fighter jet cockpit. Indeed, it can’t. For Rose, this is a social justice issue, it’s an economic imperative, and it’s deeply personal. “I know what it feels like, at least in my context, when you don't fit into the current system. Like the kid who is always feeling … worthless. And I also know what it means to find your fit—to actually find your potential and your calling in life. It leaves me with this sense that from the so-called bottom to the top of our academic system, there's an enormous amount of talent and potential and contributions waiting to be tapped.”


​Images of the Rose family are from the Flip Your Script podcast website.
The "Bigger Man" graphic is from Todd Rose's TEDx talk.
The Jagged Learning Profile graphic is from Masters in Data Science.

“they don't pay me to like the kids”

8/9/2022

 
A few months before she died unexpectedly at age 61, Texas educator Rita Pierson gave a TED Talk and recalled a colleague telling her, "They don't pay me to like the kids." Her response: "Kids don't learn from people they don't like." 

We’ve known for quite some time that positive teacher-student relationships boost students’ academic achievement. We’ve always assumed that this is because students feel safe to take risks with someone they trust and are motivated to do their best work.

Research published earlier this month, however, explores a different explanation for the higher test scores and GPAs in classrooms where relationships are strongest: Are these students learning more because they are being taught more effectively? That is: do positive teacher-student relationships actually change the way that teachers teach?

It turns out the answer is “Yes.” This is some of the first research that really examines the effect of positive teacher-student relationships on teachers themselves.

The study recently published in the journal Learning and Instruction focused on evaluation data gathered over two school years for Missouri educators teaching grades 4-10. The researchers conclude:

Positive teacher-student relationships lead primary and secondary teachers to more effectively implement three complex teaching practices examined in this study: cognitive engagement in the content, problem solving and critical thinking, and instructional monitoring… teachers are more likely to check in, monitor, scaffold, provide more constructive feedback to students, have greater confidence in their students’ abilities and use better scaffolding strategies for critical thinking.

The researchers were also able to test “the direction of effect,” meaning they were able to show that the positive teacher-student relationships predict and precede higher-quality instruction. This was true regardless of the teacher’s years of experience, the percentage of economically disadvantaged students at the school, and the school-level proficiency rate on state tests.

Why do I bring this up right now? Because we’re heading into a new school year, and we would do well to spend some time in the first weeks attending to relationships. I don’t mean the traditional “fill out this questionnaire, Back To School Night” kinds of interactions: I mean prioritizing and investing the time it takes for teachers to deeply know their students, and vice versa. This investment will pay dividends all year long. Last August, I wrote about what this could look like. At the time I was thinking about its effect on students, but this recent research now has me considering its effect on teachers, too.

When I was a teacher a million years ago, conventional wisdom held that teachers should be especially stern the first few weeks of school. Lay down the law. Demonstrate that you are in control. This was especially true if you were a 23-year-old teaching high school students just seven or eight years younger than you.
​
There’s no question that teachers need classroom management skills. But they also need relationship skills, and the time to apply them, which I believe create the conditions for a well-functioning classroom.

Good relationships improve student learning. And it just may be that teachers have as much to gain as their students in the bargain.

    Author

    Mary Kadera is a school board member in Arlington, VA. Opinions expressed here are entirely her own and do not represent the position of any other individual or organization.

