Wednesday, August 21, 2024

AI is antithetical to what we are doing in education

In late June, I posted the following on Twitter:

[embedded tweet]

...and it blew up.

I wanted to flesh this out more, as far, far too much of what is circulating is of the "how to use AI in education" variety, rather than asking what we're doing here at all*.

From what I have seen in the field, this seems to stem from an ongoing sense of education as a race. It's easy to point fingers at where that comes from--we did, for example, have a federal education effort called "Race to the Top" not that long ago!--but it has seeped into even local district conversations. How many of us have heard district leadership talking about "becoming the best X district" of some kind?

If that is how we view education--not as doing our very best for kids but as doing "better than"--then naturally there is an ongoing fear that other districts are somehow going to get ahead of us. They, somewhere else, are going to do better, are going to prepare better, and so we must constantly ensure, not that we are doing our best, but that we're doing whatever the next thing is.

It's like an arms race, and, like an arms race, there are not actually any winners.

And so to artificial intelligence. 

Let me be specific: we're talking about "generative AI" here, like ChatGPT (with its ever-changing version endings).

So what are the problems here?

  1. Generative AI exists through plagiarism. 
    Sounds harsh, right? But generative AI produces its text by consuming text from across the internet and building statistical models of what is likely to follow. Once that output is generated, it is impossible to trace back where the ideas, the work, the images, and even the text itself came from. It thus cannot be properly cited, a very basic requirement of appropriate intellectual work, which we teach children from the time they start school.
    This scraping of the internet has already led to charges not only of plagiarism but of outright copyright violation: this past December, The New York Times filed suit against OpenAI and Microsoft; a number of major authors filed suit against OpenAI regarding the use of their own work; and Getty sued Stability AI for the use of its images.
    Some have argued that generative AI itself should be cited, as it did the work, but this misses the actual sourcing of the generative AI's "work," which by its very creation cannot be cited. For more on this discussion, see here.

    That should, all by itself, be enough to end any question of the use of generative AI for educational pursuits, but there are further reasons we should stay away.

  2. Generative AI is not infrequently wrong, and what's worse, it is agnostic to truth.
    Generative AI has no notion of "correct" or "incorrect," "right" or "wrong." It works in terms of what is "likely to come next," nothing more. This was widely noted with Google's AI Overview feature, which gave advice like eating rocks, adding glue to pizza cheese, and mixing vinegar and bleach to clean a washing machine.
    A study out of Purdue University, covered by Quartz in May, found that ChatGPT's answers to programming questions were wrong 52% of the time, and, what's more, programmers failed to catch the errors 39% of the time. In other words, "just check the work" isn't good enough, either.
    There have been arguments put forward that "it will learn," but the link at the top is to a further discussion of the idea of being "agnostic to truth." It is always worth remembering that this is a PRODUCT, and the companies, investing millions in this technology, plan to make money on it, even as that thus far has been panning out poorly.
    It cannot learn; it can only generate "what comes next." (A minimal sketch of that idea follows this list.)
    See this flowchart for deciding whether to use it with regard to truth.

  3. Generative AI, by virtue of what it draws from, is particularly prone to replicating bias and stereotypes.
    This won't surprise you if you have been following this discussion for any length of time, but generative AI pulls information from, yes, biased humans, and gives back biased results, even with guardrails.
    See a discussion of this here. You can find all sorts of articles written about attempts to work around the bias, but, believe it or not, what it comes back to is that people have to be writing policy and making decisions.
    Further, human beings may learn from and replicate the bias of the AI system! 

    We already have far too much bias in our educational system; the last thing we need is to use systems that make that worse.

  4. Generative AI is catastrophically environmentally destructive.
    In a reasonable world, this alone would be enough to stop this in its tracks: the energy consumption of generative AI is already vast, and it is getting bigger. Just making an image uses as much energy as charging your phone. In May, Ariel Cohen, writing in Forbes, outlined how its usage is pushing the world towards an energy crisis.

    As Just Security noted in a post:
    The computing power required to build generative AI demands quantities of energy and water on a scale that can cause severe and long-term damage to the environment. This makes the development of generative AI a human rights issue.

    It is, of course, our children who will bear the burden of the decisions we fail to make around climate and energy now.


  5. Generative AI does work that we actually should be teaching kids to do.
    We need kids to learn to do research.
    We need kids to learn to check their sources.
    We need kids to learn to write from scratch, and rewrite, and redraft, and edit. 
    We need kids to create, imagine, question. (h/t to reader Caoimhim for this add!)
    There is not a thing that generative AI is doing that we don't actually want kids to learn to do themselves; in fact, that is why they are in school.
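
To make the "likely to come next" point in item 2 concrete, here is a minimal sketch--purely illustrative, not any real system, using a made-up toy corpus--of the basic move behind next-word prediction: count which words follow which in training text, then generate by sampling from those counts. Notice what is absent: any notion of truth, and any record of where a word came from.

    import random
    from collections import defaultdict, Counter

    # A tiny, made-up "training" text (an assumption for illustration only).
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Count which word follows which word (a bigram table).
    follows = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        follows[word][nxt] += 1

    def generate(start, length=8):
        # Repeatedly sample a likely next word. There is no check for
        # correctness and no trace of sources, only "what comes next."
        out = [start]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            nxt = random.choices(list(options), weights=list(options.values()))[0]
            out.append(nxt)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the rug" -- fluent, but sourceless

Real systems are vastly larger and work over fragments of words rather than whole words, but the underlying move is the same: predict what plausibly comes next, with no concept of whether it is right or whose work it echoes.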

The best resource I have read on this issue specific to education is the recently released Education Hazards of Generative AI by Benjamin Riley of Cognitive Resonance and Paul Bruno of the University of Illinois Urbana-Champaign. It's very readable, very useful, and easy to share. Pass it on.


__________________________________________________________
*Chicago Public Schools being the most recent example, aptly answered by Benjamin Riley in a Twitter thread and Peter Greene on his blog.

General note on this post: I realize that posts like this frequently bring out the "you're just being a Luddite!" respondents, so let me take that one up front: the Luddites didn't hate technology for technology's sake. As Current Affairs notes:
...the Luddites didn’t hate machines either—they were gifted artisans resisting a capitalist takeover of the production process that would irreparably harm their communities, weaken their collective bargaining power, and reduce skilled workers to replaceable drones as mechanized as the machines themselves.
Hm. Sounds smart. In other words, as Ben Aaronovitch writes in one of his Rivers of London books: 
[quoted passage from the book]