I expected to be depressed by the work of the AI Task Force, and indeed I am!
Despite this sentence in the opening paragraph:
Educators across the Commonwealth are navigating how to harness these tools to support teaching and learning, while also addressing emerging concerns related to academic integrity, bias, data privacy, misinformation, and instructional quality.
(emphasis mine)
...and one subsequent mention of "risk" a bit farther on, this report (is it even a report?) makes no effort at all to grapple with, or even raise, the actual issues with generative AI (I presume that is what we are discussing) in education.
To review:
- It purports to do for students the work that they should be learning to do themselves: drafting and redrafting, outlining, researching, and more ARE THE WORK OF LEARNING.
- Generative AI does its work through plagiarism and theft of intellectual property. The work of others, uncited and without consent, is fed into the gaping maw of generative AI for reuse and regeneration. In other words, it does exactly what we teach kids not to do: take other people's work without citation. That is how generative AI functions.
- It adopts and magnifies bias. It consumes what we create, and it lacks the guardrails and other pushbacks that some of us have adopted to move away from that bias.
- It is massively environmentally irresponsible. We cannot pretend to care about children being our future while actively racing to destroy the planet they'll inherit.
Given the above, one cannot write policies (and here we go with policies again), write curriculum, create professional development, and so forth that are, at bottom, about ADOPTION.
The question should not be "How should we?" It should be "Should we at all?"
And the answer is no.