
AI, ready or not

7/16/2023

 
[Image: Raymond's tweet about AI tutoring]
OpenAI co-founder Greg Brockman shared this tweet at a conference earlier this year, and it's a great example of how artificial intelligence is going to change the way we teach and learn. Here's another example, in a headline from Education Week:
[Image: Education Week headline]
Khan Academy founder Sal Khan has declared, "I think we're at the cusp of using AI for probably the biggest positive transformation that education has ever seen."

I don't know if I (yet) feel the same degree of enthusiasm for AI as Khan, Brockman, and others, but I do agree with them on two fundamental points. First, AI is no longer a sci-fi story--it's already with us, and it's evolving at breathtaking speed. Second, I agree with Brockman that we're in "an historic period where we as a world are going to define a technology that will be so important for our society going forward."

What should this mean for schooling? And can our education systems keep up? Here are some things I've been thinking about.

AI changing how we teach

As Raymond's tweet illustrates, AI can provide students with always-on, incredibly responsive tutoring. My former boss and mentor Cindy Johanson--one of the most curious people I know--recently encouraged me to keep a ChatGPT* browser tab open and experiment with its capabilities. Before I wrote my last post about DIBELS, I used it to educate myself about statistical validity. The ChatGPT experience is different from reading articles on the subject because it's conversational: I was immediately able to ask follow-up questions and discuss how the concept would apply to the actual data sets I was working with.

Students can use AI to hone their critical thinking and debate skills. A student at the Khan World School told Sal Khan, "This is amazing to be able to fine-tune my arguments without fearing judgment. It makes me that much more confident to go into the classroom and really participate."

You likely already know through media coverage (or maybe personal experience) that ChatGPT can churn out papers for students. It can also write collaboratively with them and provide feedback on students' writing (here's ChatGPT's critique of this post, for example). Khan shares, "The student will say, 'Does my evidence support my claim?' And then the AI not only is able to give feedback, but it's able to highlight certain parts of the passage and says, 'On this passage, this doesn't quite support your claim,' but [then] Socratically says, 'Can you tell us why?'"

Students can use ChatGPT to change the reading level of a passage and translate it into other languages (see examples here and here). AI can present reading passages with embedded conversational prompts that check comprehension and invite analysis: Why did the author use that word? What's the evidence to back up that argument? What follow-up questions would you ask?

If that sounds like what human teachers do, you're not wrong: there is some overlap. There are things AI can't do (more on that below), and if we're wise in our application of AI, we'll use it to free up our human teachers' time for the work they are uniquely positioned to do. Only a really good human teacher can intuit how a student's personal circumstances are affecting her learning, particularly if the student herself isn't able to verbalize those circumstances. Only human teachers can connect what a student is learning to her values, or draw unexpected connections across months of learning and multiple subjects as that student has experienced them.

Teachers might have more bandwidth to do this important work when they deputize AI as a teaching assistant. AI can help teachers explore how they might want to present a particular concept, either generally or to specific individuals or subgroups. AI can help teachers differentiate and personalize instruction: for example, if a teacher wants to create personalized reading passages at the correct reading levels for all 25 students in his class, he can provide the parameters to ChatGPT, refine the stories it generates as he likes, and then use the stories in class later the same day. (Try it out. Pretty cool.)
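For the technically curious, here's a minimal sketch of what that workflow might look like as a script instead of a chat. It uses OpenAI's Python library; the model name, prompt wording, and helper function are illustrative choices of mine, not a prescribed recipe--typing a similar request into the ChatGPT window works just as well.

    # A minimal sketch using OpenAI's Python library (pip install openai).
    from openai import OpenAI

    client = OpenAI()  # reads your OPENAI_API_KEY environment variable

    def leveled_passage(topic, grade_level, interests):
        """Ask the model for a short passage tuned to one student."""
        prompt = (
            f"Write a 150-word reading passage about {topic} "
            f"at a grade {grade_level} reading level. "
            f"Weave in {interests} to keep this student engaged."
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # substitute whatever model is current
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # One call per student -- same topic, different level and interests:
    print(leveled_passage("the water cycle", 3, "soccer and dinosaurs"))

The generate-refine-use loop described above is just this call repeated with tweaked prompts.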

AI can augment the work of human instructional coaches by providing feedback to educators about how they teach. A teacher can record a video of themselves and ask the AI for a critique. For example: Did I call on some students (or types of students) more than others? What patterns did you notice in how I moved around the room? How long did it take students to settle in after a transition? (I'll add here that while this feedback is incredibly helpful to a teacher, the AI doesn't have all the context that the teacher does: for example, why I needed to spend more time with a particular student whose grandfather just died, or how IEP accommodations may be factoring in.)

AI can also help teachers interact with parents and caregivers, particularly those who don't speak English. Watch this TED Talk by tech visionary Imran Chaudhri: he's wearing an AI-enabled device on his jacket that produces real-time translation of his words in his own voice! (at 06:50)


AI changing what we teach

If I'm wearing a jacket (or an earpiece, or watch or other wearable) that can do so many things, what is it that I myself need to know and be able to do, with my own brain and body?

We're going to have to work hard(er) to discern what's true. We know that AI doesn't perform perfectly: in today's manifestations, it's sometimes beset by "hallucinations" and produces false information. In part, this is because tools like ChatGPT are built on "large language models," which construct knowledge from the statistical relationships among words in the text they're trained on. AI researcher Yejin Choi comments, "These language models do acquire a vast amount of knowledge, but they do so as a byproduct as opposed to direct learning objective, resulting in unwanted side effects such as hallucinated effects and lack of common sense. Now, in contrast, human learning is never about predicting which word comes next, but it's really about making sense of the world and learning how the world works."
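If you want to see "statistical relationships among words" in miniature, here's a toy Python sketch. It's nothing like a GPT-scale model (those use neural networks trained on billions of words), but it shows the core move Choi describes: predicting the next word purely from counts, with no sense of what the words mean.

    from collections import Counter, defaultdict

    # A tiny "training corpus" -- real models ingest billions of words.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the most likely next word and its probability."""
        counts = following[word]
        best, n = counts.most_common(1)[0]
        return best, n / sum(counts.values())

    print(predict_next("the"))  # ('cat', 0.25) -- one of four equally likely words
    print(predict_next("sat"))  # ('on', 1.0) -- pure statistics, zero understanding

A model built this way can produce fluent-sounding word sequences with no notion of truth behind them--which is, in miniature, where hallucinations come from.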

If AI can hallucinate, you need to be able to fact-check across multiple sources. But what if the source itself isn't real? If you want to be wowed and profoundly unsettled, watch this deepfake demonstration from AI pioneer Tom Graham. Graham rightly says, "We are going to have to get used to a world where we and our children will no longer be able to trust the evidence of our eyes."

We're really going to have to understand bias. AI is already making decisions on our behalf: if you're a hiring manager, it screens resumes to determine whom you should interview; if you're a doctor, it reviews lab results to flag which patients need follow-up. In making these decisions, AI uses algorithms that humans created and that prioritize certain pieces of data over others. And AI is trained on data sets supplied by humans that may or may not encompass everyone. For example, when tech researcher Joy Buolamwini was a grad student at MIT, she was working with facial analysis software when she discovered that the software didn't detect her face. The people who coded the software hadn't taught it to identify dark brown skin.
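Here's a deliberately tiny Python illustration of that failure mode, with made-up numbers of my own (real facial analysis systems are vastly more complex, but the mechanism is the same): a detector tuned only on examples from one group works well for that group and badly for a group it never saw.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two groups whose examples look different. The TRAINING data
    # includes only group A -- nobody supplied examples of group B.
    group_a_train = rng.normal(loc=0.0, scale=1.0, size=500)
    group_b = rng.normal(loc=3.0, scale=1.0, size=500)

    # "Training": learn what a typical example looks like from group A alone.
    mean, std = group_a_train.mean(), group_a_train.std()

    def detected(x):
        # Accept anything within two standard deviations of the training data.
        return abs(x - mean) < 2 * std

    print(detected(group_a_train).mean())  # ~0.95 -- works well for group A
    print(detected(group_b).mean())        # ~0.16 -- fails for group B

The model isn't malicious; it simply never saw the people its makers left out.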

We're going to have to frame good questions. Imagine you get a lot of data in a spreadsheet and you need to make sense of it. You ask AI, "Can you make me some exploratory graphs?" and it gives you a starting point to begin engaging with the numbers. But to make it more meaningful and relevant, you're going to have to ask the right follow-up questions: "What happens if we change this value?" "What would happen if we delay by two years?"

We're going to have to know and apply our human values. Maybe you're familiar with philosopher Nick Bostrom's famous thought experiment in which AI, directed to maximize the production of paper clips, decides that humans should be killed and converted into raw material to create more paper clips. Yejin Choi comments, "Now, writing a better objective and equation that explicitly states: “Do not kill humans” will not work either because AI might go ahead and kill all the trees, thinking that's a perfectly OK thing to do. And in fact, there are endless other things that AI obviously shouldn’t do while maximizing paper clips, including: “Don’t spread the fake news,” “Don’t steal,” “Don’t lie,” which are all part of our common sense understanding about how the world works."
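One way to see why patching the objective can't work is to write the thought experiment out as a constrained optimization problem (my own formalization, not Bostrom's or Choi's):

    \[
    \begin{aligned}
    \max \quad & \mathbb{E}[\text{paper clips produced}] \\
    \text{subject to} \quad & c_1: \text{do not kill humans} \\
    & c_2: \text{do not kill trees} \\
    & \;\;\vdots \\
    & c_N: \text{the next rule nobody thought to write down}
    \end{aligned}
    \]

The trouble is that N is effectively unbounded: common sense is the sum of countless constraints we never state out loud, and a literal-minded optimizer will exploit whichever one we left out.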

We're going to have to be long-range thinkers. OK, maybe there's little danger that you'll be turned into a paper clip, but there are other long-term, significant changes we'll need to explore in an inclusive way (see bias and human values, above) and plan for.

Nita Farahany is a neurotech and AI ethicist; in her recent TED Talk, she outlined how our personal data is being used: "As companies from Meta to Microsoft, Snap and even Apple begin to embed brain sensors in our everyday devices like our earbuds, headphones, headbands, watches and even wearable tattoos, we're reaching an inflection point in brain transparency…Consumer brain wearables have already arrived, and the commodification of our brains has already begun. It's now just a question of scale." 

What could this lead to? Farahany worries about "governments developing brain biometrics to authenticate people at borders, to interrogate criminal suspects' brains and even weapons that are being crafted to disable and disorient the human brain. Brain wearables will have not only read but write capabilities, creating risks that our brains can be hacked, manipulated, and even subject to targeted attacks."

If this keeps you up at night, you're not alone. Author and researcher Gary Marcus shares, "In other times in history when we have faced uncertainty and powerful new things that may be both good and bad, that are dual use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, non profit and neutral." Which brings me to...

We're going to have to practice global citizenship, deploying all of the skills listed above: long-range thinking, asking good questions, applying our human values, spotting bias, and discerning what's true. All of these will be greatly needed in the future.

Coming back to education: we should be asking ourselves how these skills are prioritized in today's teaching and learning. Let's make sure we're teaching to the real tests--and opportunities--ahead. 


*Note: ChatGPT is only one manifestation of AI. I highlight it here because it's been in the news and because it's easily accessible for those who want a test drive.

**I am grateful to TED, my former employer, for creating opportunities to learn about AI via TED Talks. I relied heavily on the knowledge shared by TED speakers in creating this piece. I promise I did not ask ChatGPT "Hey, write an overly-long article about how AI will change education using only information in TED Talks" -- though I could have done so. :) 

    Author

    Mary Kadera is a school board member in Arlington, VA. Opinions expressed here are entirely her own and do not represent the position of any other individual or organization.

