Thoughts on thinking
Following the Truman maxim, “The only thing new is the history you don’t know,” I kept my mouth shut about AI prognostications until, as of this writing, a mere couple of weeks ago. In my undergraduate classes I was clear: “My opinions continue to evolve” on AI. Students, I knew, would not be good stewards of new technology for the sake of learning. They would, instead, seek efficiency and ease.
Early in the AI days it was easy for me to suggest, “You should avoid using AI” because it was pretty bad. Students would submit Photoshop-generated work in which someone had six fingers on three hands (no kidding) and not see any problem with it. It’s easy to say, “Maybe Grandma shouldn’t have three hands.”
But now it’s mostly good enough at avoiding these obvious failures that I’m inclined to tell students, “Go nuts.” For several reasons:
- Undergrads are going to use AI generators anyway
- There’s no going back
- They will be with us forever now, in some form
- They don’t care
But the biggest reason is I just can’t argue against it because history is repeating itself. Humans have always absolved themselves from thinking too hard about some things. Whether it’s an algorithm, a newspaper, a church, or a shaman, people have often offloaded the hard work of hard thinking to others.
Humans have always had some weird new tool to contend with. Whether it’s Wikipedia, a computer, a typewriter, or all the way back to the invention of paper itself, someone has made the argument it’s terrible and shouldn’t be used. Just in my generation we were told we needed to be able to use libraries, card catalogs, Wikipedia, calculators, or computers because … because.
AI, to me, is most similar to Wikipedia. None of my school teachers allowed anyone to use Wikipedia “Because anyone can edit it.” Which was precisely the point, and, frankly, Wikipedia is genuinely one of the best, most compelling outcomes of the Internet. It’s truly remarkable. And a great place to start researching — even if you just get an overview of a subject and check the sources. I always started with Wikipedia, Encarta, or similar and just worked backwards. I don’t think I’m a worse person because of that.
ChatGPT is just the next step in this long process.
As I’ve argued before, AI is more likely to just demand more of all of us. Consider how people once wrote letters with brevity and spent relatively little of their “work day” on them, because writing took a while and mail didn’t move that fast. An average office worker can send more emails, each three times longer, in one month than John Adams wrote to Abigail in his entire life.
Non-teacher tech-types are looking around and thinly pontificating that “education will just have to adapt.” So far, there’s not been a whole lot of recognition of how that works. There seems to be a reaction among educators to defend against AI at all levels by requiring pen-and-paper essays in class or other draconian measures.
My opinions continue to evolve, but I think educators are likely going to have to adapt in ways that reconfigure what it means “to learn” and “to think”.
Before I get to what that might look like, let me offer one personal note: I think all the hand-wringing over student “outcomes” and the success of human endeavor is not that important. Most people never do most of the things school (at any level) tries to prepare them for. The effort is mostly in exposing young people to a variety of skills and subjects so they can explore their own paths. And a great many of those people will never do anything related to any of it. That doesn’t make them losers. Just that your neighborhood welder or general contractor likely never gave a crap about literature anyway. Likewise, most people don’t read books, and books are the great stores of deep, intense thought. Frankly, I like that most people don’t read books. It just means that if you have a reading habit that includes meaty subjects (no smutty romance novels here), you get to be a comparative genius the rest of your life.
Grading and scoring based on what it means to think
Even before AI I’ve long sought to score students based on whether I see “a mind at work.” It should be sorta self-evident that having a mind at work on something — whether it’s neuroscience or cabinet making — is the key to a happy, fulfilling professional life. To know that you’re able to push the bar forward at some scale.
So, let’s see that mind at work by:
- Requiring students to submit the prompt. I know you’re using AI because you’re not that good of a writer. And you know you’re using AI because you don’t care that much. Can we just agree it’s always been this way and the act of practicing is useful in and of itself? So just submit the prompt to me.
Most students aren’t very good at prompts anyway. They just copy the text of the assignment page into ChatGPT and say “Help me write a paper for this”. Prompt engineers — a term that is absolutely not going away — know that you have to be clearer. Like:
You are a skilled website developer and are tasked with creating a series of webpages for a small legal firm in Indiana. As the developer, you are also required to help create a site structure that is optimized for search engines. Help me generate some ideas for pages that are compelling and competitive with other firms specializing in mergers and acquisitions. Furthermore, explain your thinking and cite competitors and other sources with clickable links. Ensure that each page contains ideas for ways to make it interactive and value-add for visitors and the agency.
Is this a great prompt? I don’t know. But I know it’s better than “This assignment asked for 10 webpage ideas for a law firm. Here’s ten ideas…”
- Consider more oral conversation to demand more on-the-spot thinking and silent reflection. For online classes like mine, this might mean more recorded videos or 1:1 interaction. But it gives a student’s inner voice a moment to shine. And when that voice is quiet, it’ll be obvious.
Thinking is an active skill, and I don’t think anyone’s arguing that people will somehow stop having conversations in the break room with colleagues or their boss. Being able to think quickly is worth evaluating. We can achieve this by being prepared to pepper students with an onslaught of rapid-fire questions.
- Randomize the request and eliminate rubrics.
I have a class rubric that I hate and never look at. I sorta ignore it because “Life does not come with a rubric.” Somehow in education we have to have neat little score cards to facilitate equity, I guess. As if none of us face an onslaught of, “Client said they wanted this written in blue 18pt Arial” and upon seeing it they say, “Actually, let’s just make it red 12 point Comic Sans.” Stuff changes based on whims all the time, at least in my design and knowledge work. Rubrics are too rigid. And so is AI (right now). For years, even prior to AI, I’ve given students prompts and then on their first draft said, “Oh, your client said they don’t like this, this, and this. Change that.”
Sometimes this gets weirdly specific. If I see a student used AI or some automated tool to draw a room, I’ll jump back with, “Oh, the client loves this. But the wallpaper doesn’t match their real wallpaper. They want it to match.” And when they try to hammer away at the AI engine to get it to revise something it generated days ago, things usually get worse. Or change so dramatically it’s easy to say, “No no, I liked everything else, just change the wallpaper in the image background.” A perfectly reasonable human request that AI, at least right now, sucks at.
- Work to help students define a style.
Orson Welles’ old chestnut, “Style is knowing who you are, what you want to say, and not giving a damn” matters a lot to artistic endeavors, and I think that includes writing, too. ChatGPT definitely has an image generator style and a writing style. This will likely change, but since most students have no style and no writing skill (by nature of youth), there’s no way for an AI tool to “match their style.” It’d be like fitting Adam and Eve with new clothes that match their style when that style is “naked”.
I find most undergraduates are at least a little bothered by the notion they have no style.
- Encourage original creation where possible through more niche topics. In other words, don’t expect much original thinking on yet another book report about To Kill a Mockingbird.
But you can get more original thinking out of lesser-known books, perhaps by local authors. These books tend not to exist in ChatGPT’s index and thus make it difficult, if not impossible, to use AI at all.
- Provide prompts for students to start from. This will reduce the constant bashing out of “answer all these questions” and “do this for me” prompts and, maybe, encourage a few to question a few things.
You could also provide bad prompts designed to fail, offer incorrect responses, etc. Alternatively, providing some prompts can be a way to encourage AI as a tool, like, “Imagine you are an editor for The New Yorker and you need to edit this piece. Show me areas where my arguments are weak, the language is poor, and identify specific areas for improvement…”
I’m sure my opinions on this will continue to evolve. For now, this seems like a start at evolving with things.