Many of you may be aware that a potentially revolutionary new technology called ChatGPT was recently released to the public. For those of you who haven’t seen this platform, it is a remarkable text generation tool that can perform a broad range of language-based tasks.
I’ve spent a fair amount of time on the platform and have been stunned by the responses to my queries:
– Write an essay about symbolism in the novel A Passage to India.
– Write a song in the style of Kenny Chesney about a first date.
– Describe a social gathering in the style of Jane Austen.
In short, the responses were good. Shockingly good.
As you might imagine, this has been a source of serious concern within many traditional education systems. And rightfully so. In just a few seconds, ChatGPT can summarize a reading or pen an essay better than the vast majority of students (and perhaps many teachers). Few reliable tools currently exist to distinguish AI-generated text from human writing, leaving English teachers everywhere clinging to their participles for dear life.
So what’s my take on this? It will probably come as no surprise that my response is different from that of our educational counterparts. If we can get beyond the initial amazement of what ChatGPT can do, we can begin to assess it for what it really is. And what we’ll find is that, yes, ChatGPT is a troubling development for many educational structures. At the same time, however, we’ll also find that the unique skills that Acton learners are developing every day in the studio become indispensable in this brave new world.
A few examples —
While AI will always surpass an individual human in its sheer accumulation of knowledge, it will never be able to make judgments about this knowledge in the way a human can. Every judgment, every decision about “What is best?” requires a moral/ethical decision. AI simply cannot do this. It is limited by the patterns it has been told to mimic and therefore cannot make such determinations. So while it can “write like Hemingway,” it cannot decide whether Hemingway’s writing is superior to Faulkner’s. You may argue that determining the superiority of deceased writers is inconsequential, but it is precisely this ability to discern and make keen judgments of complex, nuanced content that is used in higher stakes decisions of all kinds. AI can help us gather information in such situations, but it cannot make sense of it.
Likewise, AI cannot reason or evaluate. It can only reproduce patterns based on the inputs that it’s been trained on. It cannot synthesize them to create novel solutions (the definition of creativity and the source of all innovation), or evaluate them for moral quality.
Reproducing patterns without synthesis or judgment. A breadth of knowledge but with little depth in how to apply it. Inability to innovate. An utter void in areas of moral and ethical decision making. Does this sound familiar? This is textbook “learning to know,” the foundation of many education systems. In fact, I would argue that ChatGPT is the quintessential, most pure and idealized version of “learning to know” that the world has ever seen. And with many of our educational counterparts’ programs largely built upon this principle, it’s no wonder that ChatGPT is such a source of concern, for it poses an existential threat to that paradigm with a single query box.
But at Acton Academy Northwest Indianapolis, we’ve known for some time that “learning to know” alone is obsolete. When information was hard to access, it made some sense to memorize facts, but our world today is desperate not for more facts, but for meaning.
This is why our approach is built around our three pillars.
1. Learning to learn — by which we mean uncovering and practicing the recipes for accomplishing a task. While YouTube and the internet in general have made such recipes more transparent, it still takes judgment to know which tool to choose, when, and why. These are decisions that AI simply cannot make.
2. Learning to do — by which we mean putting these recipes into practice. This requires courage and judgment, things that AI will never be capable of.
3. Learning to be — which is categorically beyond the capacity of computers, and always will be. Taking on a hero’s journey; leading and inspiring others; contemplating beauty, truth, and goodness and then learning to use your gifts to serve others — these are things that bits and bytes simply can’t do. It’s a category error to believe otherwise.
So what do I think? I think an Acton Indy education is more critical now than ever before. While much of the educational world is terrified of a threat to its paradigm, Acton learners will be able to harness and leverage ChatGPT for what it is able to do…and think beyond it for what it can’t.