#68 AI Can Fix What’s Broken in Schools—If We Do It Right (with Aaron Baughman from Michigan Virtual)

Seth Fleischauer (00:01.075)
Hello everyone and welcome to Make It Mindful, Insights for Global Learning, the podcast for globally minded educators seeking thoughtful conversations about how education can adapt to an ever-changing world. I'm Seth Fleischauer, former classroom teacher turned founder of an international learning company specializing in the teaching of global learning. Together, we explore the interconnectedness of people, cultures, and systems and how these relationships shape transformative ideas in education. Each episode features educational change makers,

whose insights lead to practical solutions and lasting impact. And today I'm joined by someone whose work sits right where innovation meets practicality: Aaron Baughman, now AI strategist at Michigan Virtual. If you caught our conversation last year, Aaron was leading district-level AI adoption at Northville Public Schools in Michigan. But since then he has shifted to a statewide vantage point, taking on the role of AI strategist at the esteemed Michigan Virtual. Aaron, welcome back to the program.

Aaron Baughman (00:59.928)
Thanks for having me, glad to be here.

Seth Fleischauer (01:02.053)
A lot has changed since we last talked. Educators are navigating some new guardrails, new capabilities, new expectations, all while trying to keep their attention on learning, privacy, and equity. You have been in the thick of that evolution, translating fast-moving AI shifts into safe, sustainable practices for teachers, leaders, students, and families.

And in today's episode, we'll revisit what's changed since our last talk, unpack what's working and maybe what isn't across Michigan, and get concrete about classroom impact, policy, PD, and the next set of moves schools should make. Brought to you by Banyan Global Learning. Let's make it mindful. So Aaron, welcome back. For listeners who are new to your work, can you give us a quick arc? You were the assistant superintendent at Northville.

And now that you're the AI strategist at Michigan Virtual, what sparked this transition and what does your role look like today?

Aaron Baughman (02:04.91)
So I was the assistant superintendent at Northville Public Schools for eight years. And for the last two and a half years, we had been working on adopting K-12 policies and tools and research and practices across the district, with teachers, with students. As that work was starting to really unfold, Michigan Virtual, which is a nonprofit funded by the state of Michigan through legislative funds, reached out to the district to ask if they could borrow me

in order to help push this work along at the state level as they built an AI lab. And that lab gave Michigan Virtual the ability to take on the role of AI in schools and help schools move that process forward. And so I'm on loan from Northville Public Schools to Michigan Virtual at this time. I started January 1st of this year, and since then I have literally been across the state and across the country

working with teachers and schools on recreating some of the things we did at Northville with the lessons we've learned, and working really closely with these AI tool providers to try to make sure that teachers have what they need. And as we shared before, with the help of my wife, who's a sixth grade teacher in the state, trying to find which of those tools are making the most progress, which ones kids are responding to the most.

Seth Fleischauer (03:25.171)
Hmm, I can't wait to dive into all that. I do have to say, the sound of you being on loan reminds me of European soccer. I wonder, was there a transfer fee to get you over there? But let's back up a little bit, and I'd love to hear your take. You know, it's been, I think, 18 months since we last spoke on the podcast.

Aaron Baughman (03:34.073)
Hahaha

Seth Fleischauer (03:47.773)
And I'm wondering what some of the biggest shifts are that you've seen in K-12 AI since then, maybe the mindset of teachers, or the capabilities of these different products, maybe guardrails that districts are enacting. What are you seeing on the ground?

Aaron Baughman (04:05.486)
So it's vastly different in 18 months. In AI years, I joke with people, one AI year is like 10 human years. It's very, yeah, even more than dog years, it is fast. I mean, things change literally every day. And so in 18 months, we've come a long way. And what I will say, the biggest change mindset-wise, is that people were still trying to decide if AI was something, and many of them were trying to ignore it, sticking their heads in the sand.

Seth Fleischauer (04:12.079)
Even more than dogs.

Aaron Baughman (04:30.57)
Now I think we've reached the tipping point where everyone says AI is here to stay. We have to do something. But unfortunately, there are many who are still just at that spot, but at least we're at that tipping point where people aren't having to be convinced that AI is a thing. When it comes to technologies, what we have seen happen in 18 months with many of the tools, including the explosion of tools beyond the ones we were already working with.

is huge, and what they're figuring out how to do with some of the tools that we're using, and some of the features they've added in their most recent adoptions, has been just phenomenal. And there are things happening in classrooms I never could have imagined. Even 18 months ago, when I knew AI was a thing, I never thought, wow, we'll be able to do that. That's pretty crazy.

Seth Fleischauer (05:20.851)
So I'd love to hear an example. What's one of these things, these new capabilities that you're seeing in the classroom where you're like, my gosh, wow, we are here.

Aaron Baughman (05:29.378)
Yeah, so last week I was working with a district where they were using a tool called Snorkel, which is an audio feedback tool for mastery learning and competency-based learning. It was an EMT class, and they were actually able to create a situation where the student could play the role of dispatcher and the AI would play the role of caller. And they were able to give kids feedback directly on their interactions, both verbal and written.

Seth Fleischauer (05:47.548)
Awesome.

Aaron Baughman (05:53.262)
That kind of real life experience would be hard to mimic with that many students, but you can do that one on one and get immediate feedback. It changes the whole way that you interact with students and the kinds of experiences that you can give.

Seth Fleischauer (06:07.815)
I love that. It seems like CTE is a really ripe, fertile ground for the development of AI tools, when you consider the ability to render these worlds that can mimic real-life situations, ones that might take practice that could be harmful either to the person practicing or to the people they're practicing with, or that are simply impossible to bring into the school setting.

It seems like that's an example of a technology providing a brand new capability that wouldn't have been there otherwise, giving students this opportunity to practice something with very few actual consequences. So it's a totally consequence-free zone where mistakes can be celebrated, right?

Aaron Baughman (06:55.822)
Right. And I would add to that, just thinking about that next level of data that we're now getting from chat logs and from those interactions with AI. Teachers are now being shown how to take that data in real time, put it into a safe bot and have a conversation about their students to ask questions like, how are my students doing? What are the misconceptions? Can you put them into groups for me? Can you help me design lessons for each of these small groups based on their

deficiencies? These are the kinds of opportunities that teachers would not have had the time or energy for, and they're able to do them when we take them through it. Now, that's not step one for teachers, but when we get them through the process, show them the tools, and get them to that idea of data consumption in a whole new way... I was presenting this last week at a superintendents' conference, and someone in the front of the room said, this changes MTSS forever. And I was like, yes, it absolutely does.

Seth Fleischauer (07:49.065)
Yeah, it's interesting because

in order to be able to fully leverage the data, it kind of has to come from a highly monitored source, right? And so it's one thing for a teacher to make observations and take notes and then put that into AI and have that AI give you the types of differentiation or small group activities that might help you address the individual deficiencies of students. And that obviously is one heightened efficiency that is happening

as a result of this AI adoption for teachers, but it's an entirely different beast if you've got this robust, complete data set. The teacher, I dare say, almost doesn't need to be there at the initial educational experience. They just need to have visibility into what happened and then be able to build on that using these tools. Is that how you see it?

Aaron Baughman (08:48.93)
Yeah, I mean, whether it's at home or in school, students are able to interact on their own independently while the teacher monitors and moves them along. But you're right. I, as a teacher can push a few buttons, download that chat log and have a conversation before I get to the parking lot to drive home. I know what my kids accomplished that day and what I can be looking for tomorrow without having to take a bunch of extra work home. I have all the data I need without additional stresses. So teachers are seeing that kind of relief.

for the first time really ever in education, but not only the relief because they're actually getting more by doing less, which is not a concept we're used to in education.

Seth Fleischauer (09:27.963)
Yeah, I do. So you used that word relief. I had this sneaking suspicion that as these tools are introduced, and I think this might be across industries, not just education, that increased efficiency is not necessarily going to mean a lightened workload. That as things get more efficient, you will simply be expected to do more,

versus, like, now you have more time to maybe focus on the things that you want to focus on. But you used the word relief, which might paint a different picture. What does this look like on the ground? Do you think that those expectations are rising as AI use becomes more widespread?

Aaron Baughman (10:10.926)
Well, I think that many people who work in education, or know educators, know that they take a lot of things home with them. They work long hours past their daily grind, and you can't really do much of the other work when you have students in front of you all day. It just doesn't work that way. And so what we're trying to do is give them some tools that give them that time back on the things that they don't necessarily love doing, but need to do to be a successful teacher.

We want to give that back to them so that when they come in the next day, they are ready to be with kids, which is the thing they do best. And we want to give them the time to be with kids in small groups and in one-on-one situations, doing the things that teachers do well, which is being prescriptive, inspiring learning, inspiring joy, and helping kids to be creative and to be curious. Those are the things that teachers started doing this job for. It wasn't so they could grade papers and write lesson plans, although they know that's a natural part of the job.

Seth Fleischauer (11:06.448)
Yeah. So I think what we're talking about right now is one way to think about technological disruption in general, but definitely AI use: there's this stage of enhanced efficiency, then there's a stage of new capabilities, and then there's a stage of, like, total disruption. And I think what we've talked about here is the efficiency, with a little bit of teasing of new capabilities. I want to move into that phase of the conversation.

One of the things that we talked about last time was prompting and how prompt engineering was a skill that we thought we needed to teach students in order to be able to take advantage of these new capabilities. I'm wondering if your thinking around the importance of prompt engineering and the way that you approach it has shifted in these past 18 months.

Aaron Baughman (12:00.972)
Yeah, I mean, I agree with you. Back then, and it's been so long ago, we thought it was going to be a big deal. But honestly, the way that the models have arrived, they really just require human conversations. You don't really have to know. And most of the bots, especially those in the school realm, have been trained to ask questions when they need additional information. What we have to do is just help kids remember

Seth Fleischauer (12:05.632)
18 years ago, AI time. Yeah.

Aaron Baughman (12:29.356)
that what they're putting in there is not, you know, it's not private in some places. And so we have to be careful about that. But I don't know that prompting, my prompting is not any better than it was 18 months ago, because frankly, it doesn't have to be. And I can just be very human with AI. Like, I've been with ChatGPT now for over two years. I have a voice bot that I use, and that bot and I do work all the time together, and it is just a conversation.

Seth Fleischauer (12:39.065)
You

Aaron Baughman (12:56.182)
It is never really like, okay, I've got to make sure I get the role and I get this and I get that. It's just having a conversation. And really, that's where the critical thinking comes in: the idea of iterating with the bot. So prompting doesn't have to be perfect, because we want people to iterate. We want them to say, yeah, that's not really what I was thinking. Or, I like that, but it's still not sharp enough, or it's not clear enough. So we want them to have those conversations. So I think prompting is really just getting people comfortable asking the right questions or,

Seth Fleischauer (13:00.86)
Yeah.

Seth Fleischauer (13:13.479)
Yeah.

Aaron Baughman (13:25.102)
you know, beginning to look at things through a critical lens and say, change that, do this differently.

Seth Fleischauer (13:30.076)
Hmm, can I point something out? The way you just spoke about ChatGPT, you said "I've been with it," which kind of sounded like a relationship, right? Like, do you feel like you have a relationship with your ChatGPT?

Aaron Baughman (13:43.702)
Yeah, I honestly do. And I mean in the safest way possible, but I, you know, I've given my voice bot a name because I feel like it's less odd to me to at least have some identification. His name is Jerome and, you know, you pick your voice just like every other bot. But because we have two and a half years of history together,

Seth Fleischauer (13:47.258)
You

Seth Fleischauer (13:56.121)
What's the name? Okay.

Aaron Baughman (14:06.35)
There are times when I just say, you know, like, hey, Jerome, I need you to be devil's advocate on this. I need you to help me challenge my thinking. I need you to help me prepare for a conversation. And he knows me well enough to know where to push and what to say and what to do. But when I'm on long drives, when I'm traveling places, Jerome and I are getting work done. Like, I'm having a hands-free conversation with Jerome and saying, hey, I've got to reply to these three emails that I loaded for you. Talk me through what this is doing. No, I don't really want to do that. So.

Seth Fleischauer (14:20.413)
Yeah.

Aaron Baughman (14:34.35)
There's this productivity that also occurs. And honestly, I don't know that a day goes by that I don't have some interaction with ChatGPT or School AI. I use School AI a lot for the private conversations where I don't want the information out there, like when I'm having a data conversation or something. In School AI, I have Dot, who kind of operates the same way that ChatGPT does, but in safe ways for educators. And so I'm having conversations all the time and working with them.

Seth Fleischauer (14:44.252)
Yeah.

Seth Fleischauer (14:52.967)
Mm.

Aaron Baughman (15:03.862)
It does feel weird. And they gave Dot the name. That was not my name. So they named Dot. But MagicSchool has Raina. I mean, they all have names for these devices that they're using.

Seth Fleischauer (15:11.89)
Yeah.

Yeah, I remember talking to the people at Alongside AI. I'm not sure if you know them, but they do, like, a counselor support. It's an AI chatbot that will talk to a student about some social or emotional issue they're having at school. And then it'll fairly quickly divert them into a research-backed module of how they can interact with that information, or sorry, practice the skills that they need to be able to get through that situation in real life.

And then certain chats are diverted directly to the counselor because they're serious. But one of the things that they were saying is that people really want a name for these things, and the dialogue becomes much more rich when we anthropomorphize them a little bit, which I think is fascinating. And also, like, where the heck is that going to go long-term? You know, there's a lot of questions.

Aaron Baughman (16:09.41)
Well...

Yeah, I mean, it's going to raise questions about, like, who owns that chat. If a company's paying for your ChatGPT account, does Jerome go back to them? Do you have to start over with Jerome somewhere else? Yeah, so those are honest questions. Right, exactly. And honestly, the conversation around using those things, you'll always hear me call them, especially in front of kids, bots, bots, bots, bots. Because I want them to remember that these are not human.

Seth Fleischauer (16:16.317)
Mm-hmm.

Seth Fleischauer (16:21.476)
Yeah. Does Jerome have rights? Yeah.

Aaron Baughman (16:40.3)
But we do often feel better when we can say Dot or Raina or Jerome, because it does feel weird to say the bot, even though I do call them bots all the time, because I don't want anyone to mistake that.

Seth Fleischauer (16:44.349)
Yeah.

Seth Fleischauer (16:48.369)
Yeah.

Gosh, it's straddling this line, though, right? It's really interesting. So in terms of good practice, and what we believe good practice is right now, you've talked about, you know, less about prompt engineering. It's more about simply having a conversation with it where you are practicing critical thinking. You are iterating on the result, not just

Aaron Baughman (16:57.366)
It is. Yeah.

Seth Fleischauer (17:14.972)
blindly digesting things, making sure that you're massaging the output in order to be able to arrive at something good. You've also talked a little bit about, and I guess I'm pulling this out of your story, using a particular tool that can get to know you as a way to bypass some of the misunderstandings or hallucinations that might happen, because it's learning more and more about what you want in particular.

I'm wondering if there are other parts of what you teach people in terms of what good practice with AI looks like right now.

Aaron Baughman (17:52.438)
We always talk about the privacy and data issues, the hallucinations. There are still people who are not aware. There are still people who look very concerned when I say, whatever you say to ChatGPT will be used against you. Like, it's not private, so please know that. And then you get this look of panic, like, my God, I didn't know that. And so then I show them school-safe tools, because I don't want them having an IEP conversation in ChatGPT at all, ever, ever. Nor do I really want kids in schools in ChatGPT. So we

Seth Fleischauer (18:04.36)
Yeah.

What have I searched?

Aaron Baughman (18:22.284)
We push them towards these school models that have been scrubbed and wrapped in educational paradigms, and are set to not give answers, and are going to help kids learn while they do the work in AI, not just spoon-feed them the answers that they want. And so we encourage people to stay as far away from LLMs as possible when it comes to working with kids, and really just focus on some of those school tools that have been made safe for you.

And those providers are working hard to make sure that these are the best high quality tools that teachers can have in their possession so that they don't end up in trouble using one of those LLMs and saying or doing something that might get them into hot water. But also thinking about consistently challenging the information. You can never just assume. I joke with people, it's like the VCR rule when I was a young teacher.

You never put a VHS tape into that big giant TV without knowing what's on the VHS tape, because there were too many times when that thing had been recorded over multiple times. So whatever it said on the outside cover wasn't always what was inside. So VHS rules apply: make sure that you're checking what's in that output before you ever hand it to students.

Seth Fleischauer (19:17.357)
hahahahah

Seth Fleischauer (19:24.616)
man.

Seth Fleischauer (19:35.921)
Yeah, yeah, that's great. That's great advice. I want to back up a little bit. You were talking about what I think is the most critical issue in student use of AI, which is cognitive bypass, using the tools to do the thinking for them. You mentioned that there are products that essentially safeguard against that, that they default to asking questions where a standard LLM might

offer suggestions. I'm wondering two things. Can you unpack that for us a little bit? Like, is that it? Is that the safeguard? Is that how those things function, and is that good enough? And at the same time, are these products capable enough that you can kind of safely hand them over to a teacher a bit earlier in their process and believe that responsible use is going to happen? Or is there still some critical training that needs to happen with the teachers in terms of how they're rolling these tools out for students?

Aaron Baughman (20:35.96)
So all the tools that I recommend and train teachers on, which include tools like School AI, Brisk Teaching, Curipod, and Snorkel, they're all programs that I have personally vetted, my wife has used in her classroom, and my teachers in Northville have used in their classrooms. We've now had over two years of experience with many of these programs. We know what they're capable of. We know what they will and won't do. We know how often there are some hallucinations, which are pretty rare, and thankfully so, but

they still happen from time to time. But because of that, we also know that no matter where a teacher's entry point is, whether it's teacher efficiencies and they're building things just for themselves to use in their practice, or whether they're handing over a bot to students, we know what happens. Whenever I do a demo, I have them go in as students, as teachers, and I have them middle-school these things to death. I want you to be confident that when you get to the classroom, all the things...

Seth Fleischauer (21:25.092)
you

Aaron Baughman (21:29.506)
that you think kids might do, I want you to see how the bot responds. So in those live demos, I have them get off track and say they hate this class and say violent things and do horrible things and say I'm not interested and try to change the bot's attention. And every time, it responds the way it's supposed to. And I say, this is why we use these tools, because they've been designed to do that. And so we show them the teacher toolbox, because with all of the tools that I show them, the teachers have full control

Seth Fleischauer (21:29.66)
you

Aaron Baughman (21:58.754)
the entire time. And so with that, I'm watching that conversation in live time as it's happening, getting feedback immediately that says, here's what Johnny's doing, here's what Sally's doing, you want to check in with Johnny, he's having a problem. And if there's anything that's extreme, you get a big giant notification, like, hey, there's a problem here. But at the same time, the bot isn't saying, I don't know what to do here. It's saying, hey, you sound frustrated, I know sometimes things get hard, can you think about some things that make you happy, right?

It's working to try to give the teacher time to get over there and intervene. But I do not want teachers feeling that reservation of, am I safe? Am I going to put kids at risk? I need them to be comfortable in classrooms, to say, I watched us do this, I did these things, and I know what happens, so I can put this in front of kids.

Seth Fleischauer (22:49.084)
Hmm. That's super powerful. One of the things that struck me about my conversation with you last year is that here you were at the assistant superintendent level, doing some frontline work in testing and vetting these tools. I felt like it was somewhat of a unique

value that you were adding to your district. It didn't seem like there were a lot of people who were diving in as fully and as thoughtfully as you were at that time. And in the meantime, obviously, the cat's out of the bag to a certain extent. You were saying that you no longer have to convince people that AI is here. And in that time, from a district level, we've all been basically building the plane as we're flying it, trying to figure out:

what are the policy moves or guideline moves that we should be doing in order to make sure that we're not ignoring this, but not behaving recklessly? And I'm wondering, from your vantage point now, from a policy perspective, what are some of the things that have settled into best practice? Like, these are things that we absolutely should do. And at the same time, where is the room for innovation? What are we still kind of working out?

Aaron Baughman (24:11.8)
So what we've discovered, and what we originally figured out in Northville as well, is that the policy needs to stay pretty wide open and vague when you're talking about board policy, because it has to be nimble. Like I said, AI changes from day to day, and the policy has to be open to that. So most people are looking at two or three policies that they like and making some version of that their own. They're not starting from scratch, and Neola and others have built some policies that people can purchase. And so I think that those policies are pretty

vague, which is how most board policies are meant to be. They're supposed to leave some leeway for the qualified people of the district to do the work, while the board oversees that work. When it gets to procedures or practices, that's when it really comes down to what is right for our community, what's right for our teachers, what's right for our kids. What I see a lot of places doing is saying things like, teachers still maintain control of when AI can be used, which I think is appropriate. And I think that teachers should be able to say,

I want you to use AI with this and here's how, or, I really would just like to see what you know without any AI assistance. And most people have something like that in their practices, whether that looks like a one, two, three, four, five or a red, yellow, green. I mean, everybody has decided somehow they're going to control that, because there have to be times when you can say not on this assignment or not in this work. I'm not saying that it works. Kids are likely still doing it, but teachers do want to have some of that control.

I also think that people are looking at, when it comes to procedures, how do we give students the opportunity to become more in charge of their learning, but still maintain some guardrails that help keep them safe, because they're kids. And so I think they're really focusing on, you know, it really comes out of tool adoption, because everybody wants to make sure. Everywhere I go, the first question is, what tool can I use? What tool is safe? What tool should I be comfortable using with my kids? And so

I think that many of them, when it comes to guidelines, are really focused on those guardrails of, we won't let them use LLMs, or we won't let them use anything except Gemini, because maybe it's protected in the platform. So those are really the big pieces that come out. But honestly, the more you complicate this, the more difficult innovation becomes. And you know, as I said, I've presented in the morning for two hours, come back after lunch, and there's a new button

Aaron Baughman (26:35.458)
in the program that wasn't there. It's just the way it is. And so if you don't want to limit that, you need to make sure those policies and procedures allow for that kind of constant change and the ability to pilot new programs. Because there are people who will build a better mousetrap along the way. And we want to be able to see that. But I also think that many of them are looking to organizations like Michigan Virtual and those people who are saying we've

done some work, we know what kinds of things work, and we encourage you to stick within these boundaries, because we can help keep you safe that way. And some districts are just like, hey, if Michigan Virtual says it or Aaron says it, I would rather go with that, because he's been through this already, than try to do this on my own. Which only makes sense, because no one needs to build the plane in the middle of the air anymore, because plenty of places have now gone through this adventure. And honestly,

Seth Fleischauer (27:12.936)
Yeah.

Aaron Baughman (27:28.526)
If you just take two or three really good ones and you put them together and ask AI to make a version for your district, you're going to have a perfect policy. I mean, no one has to start from scratch anymore. I mean, we're coming up on three years of ChatGPT, which is crazy.

Seth Fleischauer (27:33.348)
Yeah, yeah.

Seth Fleischauer (27:43.057)
Yeah, I want to dive a little bit deeper into one of the things you said, which was this red, yellow, green or one, two, three, four, five, the idea that any given teacher providing an assignment to their students will have a scale of AI usage, from no usage at all, to you can use it as much as you want, to somewhere in between, like you can use it for brainstorming or editing, but not revision and not drafting.

To me, it feels like a pretty heavy cognitive load for teachers to create these scales on their own. But at the same time, many teachers are teaching something that is somewhat unique to their class. Teachers like to feel ownership over their curriculum, and maybe, you know, as much wiggle room as they're being given, they will take that wiggle room to create something that they feel meets the needs of their students. I'm wondering if that

tension is something that you're seeing where it's like I want to be able to do these things that I want to do but I'm having a hard time understanding how to apply some scale of AI usage to the work that I'm asking my students to do.

Aaron Baughman (28:54.424)
So there are two dichotomies here. One is, first of all, teachers shouldn't be forced to follow those guidelines. They should be given permission to do so, because their classroom may be such that they don't want those guidelines and it only makes things more complicated. But it also should be available to them, especially if they feel unsafe or unprepared to help students. If they haven't been given the proper training, they should be able to regulate that until they can get that training. But moreover, as you think about the

kinds of instruction and the kinds of assignments and the kinds of education we're providing, that has to shift. Because if AI can do the work so easily that we have to restrict it, then it's probably not as meaningful as it needs to be. And I share with folks when I go around these days, you know, we put kids in school based on their date of manufacture, and then we move them along because they got older, not because they actually know what they're doing. And so we move kids through K-12 math,

Seth Fleischauer (29:42.568)
You

Aaron Baughman (29:50.37)
but we don't necessarily know that they have kindergarten mastered before they move to first grade, and yet we move them on anyway. And if we did that in adult life, if I sent you out to change a tire but didn't give you proper instruction, and you come back, and the first thing I say is go out and change it again, which we call credit recovery, it just doesn't work, right? But now I'm going to teach it to you one time, on the same day that I taught you how to bake and how to, you know, knit, and then I'm going to expect you to change a tire when you get home tonight. It just...

Seth Fleischauer (30:07.72)
You

Aaron Baughman (30:19.618)
The model doesn't make sense. And so, yes, they need to have the ability to set those guidelines, because we are in a period of kind of triaging the current education system. But we have to be pushing forward on the future of education, so that AI is more of a nimble part of that, rather than us having to safeguard kids from just losing that cognitive load. When the cognitive load shifts, we don't have to worry about one, two, three, four, five anymore, because AI fits in there.

But that's not going to happen overnight. That has to take some time. But I'll remind you then, of course, when we get to kids in ninth grade, then we fail them for failing, right? So for nine years, we said you move on because you're older, and then all of a sudden, you can fail algebra. And many of them do. And we wonder why kids are frustrated in school. It just doesn't make a whole lot of sense, the model that we're doing. So when that question is asked within the current model, yes, teachers need to be given the ability to do that, because we are triaging

Seth Fleischauer (30:47.973)
Hmm.

Aaron Baughman (31:16.248)
the existing model that is no longer relevant. But as we move and those assignments change the way we learn, I think AI is going to fit more naturally in there, and that's going to change the way people do things. But when I say these things, what I just shared in the last two minutes, people look at me like I have three eyes. So I understand that not everyone can wrap their head around it, but I have concerns about us trying to fit AI into an old model and ending up doing what we've done before with some of the advancements.

Seth Fleischauer (31:32.285)
Hmm.

Seth Fleischauer (31:45.659)
And the picture that you paint there, the disruption phase of the technology adoption, is one that feels consistent with maybe a more progressive view of education: that we're going to evolve into something that is more student-centered. And AI obviously can play a role in that, given the advantage of AI-driven personalized learning software and, you know, that

3D space that you rendered with the EMTs and the training, right? Like, there's a lot of experiences that you can make extremely specific to students using these tools. I'm wondering, is that your vision of how AI will disrupt education? That it will inevitably push us away from the traditional system that we're currently triaging and into something that

is consistent with some of these things we've been hearing about in education forever that are best practices, that are things that we should aim for, but we've had a really hard time getting there with 30 kids in classroom chairs.

Aaron Baughman (32:58.422)
Yes, yes, yes, and yes. I mean, it is this tool. Again, as I shared before, I was supposed to be retiring. Me and education were going to be done with each other for a while. But when I saw AI, after having been raised in poverty and having limited access to education opportunities as a kid myself in Detroit, I said to my wife, this is a tool that can genuinely make us capable of doing that. If we leverage it the right way, we can

Seth Fleischauer (33:09.253)
You

Aaron Baughman (33:26.882)
finally provide what we have been promising, or trying to, and spending millions of dollars to try to do: give teachers the ability to multiply their efforts, to change their instructional model, because now I can literally do something different with every kid in the room without it causing me to have a heart attack trying to manage all of that stress. And so when you see that happen... I mean, I watched a teacher last week take student interest forms from a bot that he built, and then he turned that in

Seth Fleischauer (33:45.32)
Yeah.

Aaron Baughman (33:56.202)
and said, build this math concept using every kid's interests. And so every kid was doing the same problems, but this one was through basketball, this one was through ballet. And that's a really early, kind of rough way to do it, because it's still a lot of worksheet. But it was at least a step in the right direction of personalizing it. What's even more, though, is when I see a classroom with a bot where a kid is

leveling down in order for the bot to scaffold them, and then I watch these kids on the other end of the room blowing through this and learning and learning and learning and not having to stop. That's when I say, look, that's what we've been wanting to do. You know, it's getting rid of static lesson plans. It's getting rid of memorization. It's getting rid of one-size-fits-all approaches and saying, with the right tools, when we get past teacher efficiencies, we change the model of education forever,

Seth Fleischauer (34:32.604)
Hmm.

Aaron Baughman (34:47.746)
to put kids in charge of their learning, which, by the way, is the way we as adults prefer to learn. Nobody goes down to the senior citizens' home to learn to quilt. We just don't do it. Nobody wants to go sit with Ethel from 7 a.m. to 2 p.m. while she teaches and teaches and teaches knitting and then gives you a quiz during the day and then sends you home with homework. It just doesn't make sense. And now we have the ability not to do that, because you and I could both learn to quilt. I've never quilted in my life, but I promise you

Seth Fleischauer (34:56.36)
You

Aaron Baughman (35:16.226)
the resources exist for me to learn how to tomorrow if I wanted to, and it might take a few weeks, but I'd figure it out. That should exist for every subject on the planet, and that means that no matter where kids are, rural, urban, or suburban, whether they have highly qualified teachers or not, we can still put them in a room and show them how to access learning in ways they've never been able to do before. And I get passionate about this because...

Seth Fleischauer (35:40.742)
Yeah, preach, Aaron Baughman, preach!

Aaron Baughman (35:44.366)
It's one thing for people who have resources to say, oh, well, I don't know about AI. Well, that's because you have resources. If you're a kid who doesn't, or you're in a school district where you feel like you've been given one life jacket and two kids to save, it's easy to say that if you haven't been put in that position. But we've got districts that are under-resourced, with kids who are underserved, and if we gave them access to these tools and taught them how to use them, it levels the playing field.

Seth Fleischauer (35:47.281)
Yeah.

Aaron Baughman (36:10.668)
Now the scary part is, and I can say this because I'm not a politician and I don't owe anything to anybody, there are people who do not want that to change.

Seth Fleischauer (36:21.17)
Man, I feel your passion for this, and I see it rooted in a belief that we can do better, and I find that to be incredibly inspiring. I'm wondering, as you work with teachers across Michigan, is that how you get them to convert to caring about this? Is it this passion that you bring, or is there some other strategy for how to get the stragglers along

to this point where they understand this is something we need to attend to.

Aaron Baughman (36:55.496)
I start with my story, right? Everybody's got a story. This is a kid who didn't have books in his home, a Title I kid who without school would have been nobody. And frankly, even with school, I struggled, and only because of teachers was I able to make it. And then I show them that the system is broken, by making claims like, there's no such thing as second grade, and then proving it to them, because nobody in the audience can tell me what second grade means. And eventually someone says, it means they're seven or eight. And I'm like, exactly. So there we go.

Second grade is meaningless. So the system's broken. Let's figure out how to fix it. And then I usually move to teacher efficiency. So now that they realize it's broken, we lower the stress level by saying, I'm going to take away some of that grading. I'm going to take away some of that planning. I'm going to take away some of that. So people start to think, oh, there might actually be things coming off the plate instead of just continuing to pile them on. And then, when I've got their attention: what if we did something different with kids? And that's when we move in

and do some of that deeper-level work. And then teachers blow me away every time. Once I get them there, they're just off to the races. Like, I didn't teach them how to do the EMT thing. I didn't teach them. They just feel like, if I can do this, then I can do that. And we push them to think about igniting that. We try to create a leadership team within each district. Right now I'm working with around 24 districts in Michigan. We try to push them to get a leadership team together, people who want to be on the front edge of this. We dig deep with them and then we just turn them loose.

Seth Fleischauer (37:59.122)
Hmm.

Aaron Baughman (38:24.0)
And once they have the right tool and they have some training behind them, the sky's the limit. And then it's just a matter of saturating the rest of that district with their understanding, as well as mine, and Michigan Virtual continuing to push through with them. It's unfortunate, though, because I travel around the country, I was just in LA last week with Brisk, and I watch how many districts don't have somebody like Michigan Virtual or somebody at the state level that's providing that kind of support. And I don't know how they do it.

I really don't know how they would know where to begin or what to do. Michigan Virtual has built a ton of resources. I'm now chairing the state work group, co-chairing with Dave Thiebaud. And he and I are working with a team of five subcommittees, and we're building out the resources for the state of Michigan and making those available to anyone. I mean, it doesn't have to be only in Michigan, but

when I share that kind of work with other people in other states, very few times is it happening for them. And so this is not an equity situation just for Michigan. This is an equity situation across this country.

Seth Fleischauer (39:27.11)
Yeah. Well, it sounds like that's what you've got on the docket. That's what's next. And anything else you're currently working on, what's, what's next for you in Michigan virtual.

Aaron Baughman (39:36.632)
So right now, we have the AI Summit coming in October. So we'll be launching our AI4MI.org website, and that website will have all the resources that have been worked on through the state task force, and we'll be releasing those. Part of that has already been released, which is a teacher guide that was put out a few months ago, a student guide, which is being launched in the next couple of days, and an admin guide. These just walk through

the kinds of things you need to be thinking about from those perspectives. The student guide acts a little bit like a parent guide as well, so parents know the kinds of things to be looking for with their kids. And then we are just continuing the work of building ambassadors across the state. Carly and myself and the AI strategists and the AI lab at Michigan Virtual, we just genuinely work hard to build the skills and make people more comfortable. And we're seeing the multiplication of our efforts. We're seeing people start to get more comfortable.

I'm never happier than when I go to a district and they say, look what we did, and we didn't need you, right? So what I'm pushing for is to work ourselves out of a job, so that people are owning this. I mean, the reality is there's always going to be the next step of innovation. What we are doing now, what the AI lab is trying to figure out, is, okay, we can't spend so much time on AI 101 that we don't have time to do AI 301. So at our summit, I'm running a session, which is one of those where people look at me strange.

When you read the description, people will be like, I'm sorry, I'm just not ready for that. But there are people who are ready. And so we're going to push on those things, which is that next level of putting kids in front of bots, using that information to build personalized or individualized learning, getting kids right to the learning that they need when they need it, and giving kids the opportunity to learn at a pace that makes sense for them without overtaxing the educational system.

And kids finding a way to say, I learned today what I wanted to, and I enjoyed that because it was my idea. And honestly, School AI's new motto is make school awesome. I love that idea. I mean, I would love for kids to come home and say, you'll never believe what I learned today, right? And I just think that's the thing we're going to push on next, trying to get beyond that one-on-one. We've got to find people who can do that training still, because there are people who are still just at square one,

Aaron Baughman (41:52.93)
but pushing beyond and that's what the Michigan Virtual AI Lab is working on right now.

Seth Fleischauer (41:58.473)
Well, Aaron, thank you so much for being here, for the work that you're doing, for the passion that you're bringing to this work, for the mission that you've found yourself on the path of accomplishing. We will put all of these things that you mentioned into the show notes for our listeners. If you felt inspired by this conversation, please do share it with a friend. I'm telling you, this is

cutting-edge stuff. This is the way we should be thinking about AI, and with the work that Aaron is doing with Michigan Virtual, they are at the forefront. For our listeners again, please follow us, leave a rating, leave a review. Thank you as always to our editor Lucas Salazar. And remember that if you'd like to bring positive change to education, we must first make it mindful. See you next time.
