The risks of letting AI write for us

I recently met with an artist working on a project that would promote arts education in rural communities. He was gushing about how ChatGPT had helped him write a grant request, enabling him to secure three-year funding. “For someone like me who isn’t very good at structuring my thoughts, ChatGPT is heaven-sent.” This story stood in stark contrast to the rants of a fellow educator earlier that week. According to her, grading essays has become frustrating because students submit work that is largely generated by artificial intelligence (AI). She kept seeing the same fluffy phrases and poorly used qualifiers. While she wished her students had put in more effort to edit their work and make their essays more humanlike, we both concluded that their writing skills might not yet be developed enough to spot the generic AI voice in their outputs.
These stories highlight two differing views on the impact of AI software like ChatGPT on how people think and write. Generative AI tools thrive when you need regurgitated information that does not require much creativity. Since much of what people write and communicate in workplace settings can be largely accomplished with templated responses, generative AI tools deliver on their promise of boosting productivity. People who are not as skilled in wielding the complexities of language can now better convey their thoughts and ideas. In my organization, for example, staff members have used the program to write first drafts of letters and email responses. Although they recognize the generated work as mediocre, revising and improving it is still significantly less time-consuming than starting from scratch, freeing them to focus on other tasks.
This dependence on AI to craft an outline of what we want to say in exchange for a bit of efficiency is precisely what many educators dread about these programs. Part of the writing process is finding your voice by sitting through the discomfort of producing that first draft. When we constantly allow AI to do the heavy lifting, we rob ourselves of the opportunity and the motivation to sharpen the skills of an effective communicator, such as assessing how to frame an issue and choosing the best way to deliver a message. This becomes especially problematic for students, whose concern should not be productivity but mastery of analysis, synthesis, and original thinking. By constantly outsourcing their writing, they stunt their growth in these areas and potentially place a very low ceiling on the level of proficiency and creativity they can achieve.
As I discussed in my previous column, “An educator’s AI dilemma,” choosing to ban AI tools in schoolwork is both ineffective and counterproductive. Even if schools block these programs on school networks, there is no way to prevent students from using them at home. While teachers may try to use AI detection tools like GPTZero or Grammarly’s AI checker, the results are not always accurate. A savvier student can also simply run their text through AI humanizing software to evade these tools. Some educators have even opined that homework is no longer useful, knowing that students will have access to ready-made answers anyway.
This is not only a histrionic response; it is also quite lazy. Educators were never meant to teach using cookie-cutter approaches; our assignment has always been to teach according to our students’ context and circumstances. The first step is to accept that AI has rendered many traditional forms of homework and performance tasks less reliable and less effective at fostering essential student skills. The next is to plan creatively how lesson plans can be modified so that tools like ChatGPT are strategically integrated into coursework. The best description I have heard is to treat these tools the way we view calculators—allowed for some tasks but not for others. For example, some teachers let students use ChatGPT to research a particular topic and verify the sources it presents, then ask them to write about the topic in longhand inside the classroom. The teacher then provides constructive feedback on the students’ drafts and allocates another classroom session for revising them.
Teachers should also see these activities as opportunities to push students to critically examine the strengths and shortcomings of AI. By highlighting the limitations of the assistance it can provide—particularly in terms of accuracy and originality—teachers can also help students understand why they cannot cheat themselves out of an education by relying too heavily on AI for their thinking and writing. These discussions should instill a sense of responsibility to use AI with discernment and discipline rather than as a mindless shortcut. When used thoughtfully, these tools can enhance technological literacy without compromising critical and creative thinking—a lesson not just for students but also worth reflecting on in our own AI use.
—————–
eleanor@shetalksasia.com