Mary Kadera
  • Home
  • About Me
  • Blog
  • How I Voted
  • Contact

Away for the day

7/3/2024

 
In the spring of 2007, while I was counting down the weeks until the birth of my oldest child, millions of other people were counting down the weeks until they could get their hands on this new thing called an iPhone.

I’d had a Blackberry when I was employed full-time, but when I left that job, had a baby, and switched to freelance work, I wasn’t convinced the juice was worth the financial squeeze. My flip phone and I stayed together for seven more years.

But in 2014 I relented, and right away, I was hooked. Check my email from anywhere? Text with babysitters twice as fast? Take pictures, edit them, and post on social media? What took me so long?

Still, I was acutely conscious of my device habits: I was pretty careful about my kids’ screen time and I wanted to be sure I was modeling moderation. I remember keeping my phone out of reach during certain periods of the day, and intentionally stating why I was using my phone if I had to interrupt my time with them: “I need to look up directions to the restaurant where we’re meeting Grandma and Grandpa,” or “I need to check for one email about a work deadline.”

Ten years on, in 2024, things are different. When my kids stagger out of their rooms in the morning, I’m likely to be on my phone trying to finish the NYT Spelling Bee game or reading the news on the NPR app. When I cook, I’m listening to Apple Music and following a recipe on my phone. Family conversations and outings are punctuated by pulling out phones to fact-check each other, pull up trivia, or sink periodically into our own texts, games, videos, and other diversions. It’s not uncommon for all four of us to be in a room together, each on our own device, in companionable silence.

I find this unsettling. Is it different from when I grew up and we’d all be at home, but each doing our own thing? When I’m on my phone, do I seem less accessible to other people than if I were reading a magazine or writing in a notebook or watching TV on a TV set?

Research conducted last year suggests that on average, American adults spend four and a half hours each day on their phones, up from three hours just a year before. We check our phones an average of 144 times each day, and 75% of us check our phones within five minutes of receiving a notification. A third of all US adults report that they go online “almost constantly,” up ten percentage points from 2015.

This is the context for the “Away for the Day” phone policies that school districts are implementing. APS will introduce such a policy for students in all of its schools starting in August. (Previously, some administrators and staff members had established an “Away for the Day” rule in certain schools and classrooms, but it was not standard practice.)

Teachers are tired of competing with phones for their students’ attention: Mitchell Rutherford, a veteran Arizona science teacher, recently made headlines when he left the profession, citing frustration with smartphones in school. He said, “It’s kind of like the frog in the boiling water. I guess it’s always been increasing as an issue. And then finally, I was like: Oh, we’re boiling now.” During the last school board meeting here in Arlington, one teacher gave public comment that phones have become “a black hole for brain power in every classroom.”

It's hard to argue with the idea that phones can be a seductive distraction: after all, haven’t they seduced most of us at this point?

In an ideal world, we’d teach students how to exercise good judgment about when and why they’re on their phones. We’d acknowledge that digital platforms can provide important information and social support for individuals who may be marginalized in their physical communities. An outright ban in schools wouldn’t be the answer: rather, we’d help students self-regulate and reflect on their relationship with digital devices and content.

The arguments against this approach include the “David and Goliath” concern that immensely powerful companies have created platforms and algorithms that are purposely addictive and undermine our volition. It’s also been noted that the executive functioning skills necessary for effective self-regulation aren’t fully developed in the adolescent brain.

I have a third concern: Who is going to teach and model this sound judgment and self-regulation? What will we do about the cognitive dissonance that comes up when young people hear us talk about their phone dependency while remaining oblivious to our own?

If you agree that our relationships with our phones are becoming problematic (which I think is a fair assessment), then this is both a school problem and a social problem. I’m intrigued by communities that are taking a holistic view of the issue and designing comprehensive solutions. In New York City, for example, the Department of Health and Mental Hygiene considers unregulated social media to be a “digital toxin” and states: “We take a classic public health approach that emphasizes regulations to minimize the production of the toxin, guidance to the public to reduce exposure, and support to individuals to build skills that buffer the toxin’s effects.”
 
The City’s public health response to social media (which is a piece of, but not equivalent to, problematic phone use) includes encouraging families to delay the initiation of smartphone use until children are at least 14 years old and to set shared norms for reducing screen time, especially near bedtime; establishing tech-free zones in schools and other community facilities; and creating community programs that avoid smartphone use during certain times or in certain places to promote social connection.
 
I’m interested in this example and curious how other communities are promoting well-being, which surely must include encouraging healthy media and technology habits for people of all ages.
 
In 2007, Steve Jobs said of the iPhone, “Every once in a while, a revolutionary product comes along that changes everything.” Putting it Away for the Day feels to me like a necessary though vexingly limited response to a phenomenon that is profoundly altering how we think, behave, and build community.

AI, ready or not

7/16/2023

 
[Image: tweet]
​OpenAI co-founder Greg Brockman shared this tweet at a conference earlier this year, and it's a great example of how artificial intelligence is going to change the way we teach and learn. Here's another example, in a headline from Education Week:
[Image: Education Week headline]
Khan Academy founder Sal Khan has declared, “I think we're at the cusp of using AI for probably the biggest positive transformation that education has ever seen.”

I don't know if I (yet) feel the same degree of enthusiasm for AI as Khan, Brockman and others, but I do agree with them on two fundamental points. First, AI is no longer a sci-fi story--it's already with us, and evolving at breathtaking speed. Second, I agree with Brockman that we're in "an historic period where we as a world are going to define a technology that will be so important for our society going forward."

What should this mean for schooling? And can our education systems keep up? Here are some things I've been thinking about.

AI changing how we teach

As Raymond's tweet illustrates, AI can provide students with always-on, incredibly responsive tutoring. My former boss and mentor Cindy Johanson--one of the most curious people I know--recently encouraged me to keep a ChatGPT* browser tab open and experiment with its capabilities. Before I wrote my last post about DIBELS, I used it to educate myself about statistical validity. The ChatGPT experience is different from reading articles on the subject because it's conversational: I was immediately able to ask follow-up questions and confer about how the concept would apply to the actual data sets I was working with.

Students can use AI to hone their critical thinking and debate skills. A student at the Khan World School told Sal Khan, "This is amazing to be able to fine-tune my arguments without fearing judgment. It makes me that much more confident to go into the classroom and really participate."

You likely already know through media coverage (or maybe personal experience) that ChatGPT can churn out papers for students. It can also write collaboratively with them and provide feedback on students' writing (here's ChatGPT's critique of this post, for example). Khan shares, "The student will say, 'Does my evidence support my claim?' And then the AI not only is able to give feedback, but it's able to highlight certain parts of the passage and says, 'On this passage, this doesn't quite support your claim,' but [then] Socratically says, 'Can you tell us why?'"

Students can use ChatGPT to change the reading level of a passage and translate it into other languages (see examples here and here). AI can present reading passages with embedded conversational prompts that check comprehension and invite analysis: Why did the author use that word? What’s the evidence to back up that argument? What follow-up questions would you ask?

If that sounds like what human teachers do, you're not wrong: there is some overlap. There are things that AI can't do (more on that below), and if we're wise in our application of AI we'll use it to free up our human teachers' time for the work they are uniquely positioned to provide. Only a really good human teacher can intuit how a student's personal circumstances are affecting her learning, particularly if the student herself isn't able to verbalize those circumstances. Only human teachers can connect what a student is learning to her values, or draw unexpected connections across months of learning and multiple subjects as that student has experienced them. 

Teachers might have more bandwidth to do this important work when they deputize AI as a teaching assistant. AI can help teachers explore how they might want to present a particular concept, either generally or to specific individuals or subgroups. AI can help teachers differentiate and personalize instruction: for example, if a teacher wants to create personalized reading passages at the correct reading levels for all 25 students in his class, he can provide the parameters to ChatGPT, refine the stories it generates as he likes, and then use the stories in class later the same day. (Try it out. Pretty cool.)

AI can augment the work of human instructional coaches by providing feedback to educators about how they teach. A teacher can record video of themselves and ask the AI for a critique. For example: Did I call on some students (or types of students) more than others? What patterns did you notice in how I moved around the room? How long did it take students to settle in after a transition? (I'll add here that while this feedback is incredibly helpful to a teacher, the AI doesn't have all the context that the teacher does: for example, why I needed to spend more time with a particular student whose grandfather just died, or how IEP accommodations may be factoring in.)

AI can also help teachers interact with parents and caregivers, particularly those who don't speak English. Watch this TED Talk by tech visionary Imran Chaudhri: he's wearing an AI-enabled jacket that produces real-time translation of his words in his own voice (at 06:50)!


AI changing what we teach

If I'm wearing a jacket (or an earpiece, or watch or other wearable) that can do so many things, what is it that I myself need to know and be able to do, with my own brain and body?

We're going to have to work hard(er) to discern what's true. We know that AI doesn't perform perfectly: in today's manifestations, it's sometimes beset by "hallucinations" and produces false information. In part, this is because tools like ChatGPT are "large language models," which represent knowledge as statistical relationships among words. AI researcher Yejin Choi comments, "These language models do acquire a vast amount of knowledge, but they do so as a byproduct as opposed to direct learning objective, resulting in unwanted side effects such as hallucinated effects and lack of common sense. Now, in contrast, human learning is never about predicting which word comes next, but it's really about making sense of the world and learning how the world works."

If AI can hallucinate, you need to be able to fact check across multiple sources. But what if the source itself isn't real? If you want to be wowed and profoundly unsettled, watch this deepfake demonstration from AI pioneer Tom Graham. Graham rightly says, "We are going to have to get used to a world where we and our children will no longer be able to trust the evidence of our eyes." 

We're really going to have to understand bias. AI is already making decisions on our behalf: if you're a hiring manager, it screens resumes to determine whom you should interview; if you're a doctor, it reviews lab results to flag which patients need follow-up. In making these decisions, AI is using algorithms that humans created and that prioritize certain pieces of data over others. AI is trained on data sets supplied by humans that may or may not encompass everyone. For example, when tech researcher Joy Buolamwini was a grad student at MIT, she was working with facial analysis software when she discovered that the software didn't detect her face. The people who coded the software hadn't taught it to identify dark brown skin.

We're going to have to frame good questions. Imagine you get a lot of data in a spreadsheet and you need to make sense of it. You ask AI, "Can you make me some exploratory graphs?" and it gives you a starting point to begin engaging with the numbers. But to make the analysis more meaningful and relevant, you're going to have to ask the right follow-up questions: "What happens if we change this value?" "What would happen if we delay by two years?"

We're going to have to know and apply our human values.  Maybe you're familiar with philosopher Nick Bostrom's famous thought experiment in which AI, directed to maximize the production of paper clips, decides that humans should be killed and converted into raw material to create more paper clips. Yejin Choi comments, "Now, writing a better objective and equation that explicitly states: “Do not kill humans” will not work either because AI might go ahead and kill all the trees, thinking that's a perfectly OK thing to do. And in fact, there are endless other things that AI obviously shouldn’t do while maximizing paper clips, including: “Don’t spread the fake news,” “Don’t steal,” “Don’t lie,” which are all part of our common sense understanding about how the world works."

We're going to have to be long-range thinkers.  OK, maybe there's little danger that you'll be turned into a paper clip, but there are other long-term, significant changes we'll need to explore in an inclusive way (see bias and human values, above) and plan for.

Nita Farahany is a neurotech and AI ethicist; in her recent TED Talk, she outlined how our personal data is being used: "As companies from Meta to Microsoft, Snap and even Apple begin to embed brain sensors in our everyday devices like our earbuds, headphones, headbands, watches and even wearable tattoos, we're reaching an inflection point in brain transparency…Consumer brain wearables have already arrived, and the commodification of our brains has already begun. It's now just a question of scale." 

What could this lead to? Farahany worries about "governments developing brain biometrics to authenticate people at borders, to interrogate criminal suspects' brains and even weapons that are being crafted to disable and disorient the human brain. Brain wearables will have not only read but write capabilities, creating risks that our brains can be hacked, manipulated, and even subject to targeted attacks."

If this keeps you up at night, you're not alone. Author and researcher Gary Marcus shares, "In other times in history when we have faced uncertainty and powerful new things that may be both good and bad, that are dual use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, non profit and neutral." Which brings me to...

We're going to have to practice global citizenship, deploying all of the skills listed above: long-range thinking, asking good questions, applying our human values, spotting bias, and discerning what's true.

Coming back to education: we should be asking ourselves how these skills are prioritized in today's teaching and learning. Let's make sure we're teaching to the real tests--and opportunities--ahead. 


*Note: ChatGPT is only one manifestation of AI. I highlight it here because it's been in the news and because it's easily accessible for those who want a test drive.

**I am grateful to TED, my former employer, for creating opportunities to learn about AI via TED Talks. I relied heavily on the knowledge shared by TED speakers in creating this piece. I promise I did not ask ChatGPT "Hey, write an overly-long article about how AI will change education using only information in TED Talks" -- though I could have done so. :) 

    Author

    Mary Kadera is a school board member in Arlington, VA. Opinions expressed here are entirely her own and do not represent the position of any other individual or organization.

