Socrates thought the invention of writing was bad because we’d become “too dependent on writing” and “stop using our memories.”
That was a bad take.
In more modern history, Grandpa thinks it’s terrible that you and I (statistically) don’t know how to change a tire on a car and don’t know “how to do anything without a computer.” My dad always lamented that. “That computer has to do everything.”
Well. Yeah, it does.
But something has changed.
Do you know what older people say about you, in aggregate, as a generation now? Gen-Z “Can’t focus”, “Can’t learn new things”, “Can’t reason.”
As one teacher put it, “They’re not there. They have a level of apathy I’ve never seen before. …they’re like addicts.”
This is a stark contrast to the old “new-fangled comics/radio/TV are weird and scary” or “kids these days can’t write in cursive!” complaints. This is straight up an attack on your humanity. It’s like they’re saying Gen-Z is somehow sub-human, void, and drug-addled. Boomers never called me (a Millennial) sub-human, and Boomers know a thing or two about addiction.
For years now, upon learning that I teach at IU, people ask me, “So … are the kids alright?” They ask with a tone that assumes everything is terrible.
I have consistently said, “I find my students to be shockingly polite, kind, generous, and nice people.” I still say that.
Then I’d add, “They face the hardest employment market since the Depression. Future knowledge workers, whom I interact with the most, face such a high bar.”
I don’t say that anymore.
Instead, now I add, “Something is shifting. I’m not sure I’m smart enough to understand what it is. My colleagues say it was COVID shutdowns. Some think it’s AI. Others blame smartphones. But whatever it is, I am a little worried.”
The older I get, the more I realize the answer to most problems is rarely black and white; it’s usually “a little of all of the above, and more.”
About two years ago a young woman told me she was “terrified” about AI. This was when ChatGPT first rolled out.
Recently, OpenAI announced their new image generation model. It’s only for paid users as of this writing, and I’ve been using it. It’s impressive, and I genuinely think it poses obvious challenges for a future where you can’t necessarily trust anything you see ever again. It’s also not great for my use cases yet, but it’s definitely getting better.
And there’s the rub. AI is arguably better than many novice design professionals and students. It’s also a better writer than most students today.
All this to say, you reading this right now have a tremendous amount of pressure to perform. You always have. Everyone from your elementary school custodian to the President has been telling you to go to college and get a “good” career, not just a job. But a career.
Now you’re here and you have even more pressure because literal goshforsaken robots can seemingly write, design, edit, and even think as well as you, or better, and certainly faster.
Even if you’re “pretty good” at one thing, there’s no substitute for time and experience — and that much is out of your control (for now). Heck, even Bill Gates thinks basically every job is replaceable, including doctors. This is a little absurd, because AI is not going to rewire my house, change my tire, or cook me food.
But AI can make images, design flyers, write stuff, and to some extent teach us things. And if you thought, “I’ll make video games”, “I’ll be a sound engineer”, “I’ll make videos”, “I’ll make webpages,” the bottom is about to fall out of the market.
As careers go, I’m not worried for me just yet, but I admit that for some low-level tasks the AI’s pretty good, and it has sucked a lot of the “highly profitable” tasks out of my day. What used to be easy (“Let me just copy/paste this into a webpage”) has become “How do I justify my skills and expertise?” Everything now needs to be an engaging experience full of interactive elements, custom graphics, and rich media.
In some ways, I think the market for creative professionals like us is about to look like groceries: a lot of big corporations are going to do all the work for you and package mass-market, mass-produced “food products” that most everyone likes. Meat will be dyed pink to look more appealing (like AI engines “showing their thought process”), food will be frozen and microwaveable (“instantly available”), cheap, and laden with the same salt, sugar, fat, and chemicals that were engineered to be addictive and tasty (like AI’s consistent, surreal visual style).
But on the other end of this are farmer’s markets. There is a big difference between a strawberry at Kroger in February vs. July and a world of difference compared to a farmer’s market strawberry in July. The farmer’s market is small, more expensive, and on paper worse in virtually every way. If it rains, you might literally get wet. But no one denies the farmer’s market food is just plain better.
Most people literally do not know or care anything about technology. Every local business website sucks, grandma’s still using a busted phone from 9 years ago, and most people are just cheap with no taste. It’s one thing to buy cheap lamps and shelves from Wal-Mart when you have little income. But even solidly middle-class people buy cheap stuff and “it’s fine.”
“It’s fine” and “It’s good enough” are how most people feel about most things most of the time. It’s why non-car people buy Hondas and non-tech people buy $600 laptops from Best Buy. It’s why someday Facebook is going to just start generating ads for people (the language, creative, artwork, everything) with no human intervention. And for a lot of businesses, this will be “just fine,” or even “great, ’cause it’s cheap!”
These are not our people. Our people are at the farmer’s market.
You and I — we talk to people with taste. We want to play games that are fun and breathtaking. We want to listen to music by people we love and respect. We want to watch movies that are groundbreaking works of art. People like you and me like to watch people perform at high levels, even on simple binge TV shows like The Great British Bake-Off or Hell’s Kitchen.
Sure, a candy bar or Velveeta cheese is good once in a while, but not all the time. Too much of that and it literally kills you. We all enjoy a bad movie for the sake of being bad once in a while.
Generationally, this is all changing very quickly into one homogenous stew. Grandma already couldn’t tell the difference between Google results and Google AdWords results. It’s about to get worse before it gets better.
But what increasingly separates generational trends isn’t just age (though that is part of it).
If it is true that you can’t be bothered to read more than a paragraph or two without getting fidgety, or you can’t watch a (good) movie without staring at your phone for half of it, or you can’t wrestle with big ideas, then … I don’t know what that means for you. It’d be like if you couldn’t bring yourself to eat vegetables.
I’m no Shigeru Miyamoto or Steve Jobs, but I can write emails without having to ask ChattyG for help. I can read long bits of text like this and form opinions. I can pay attention to someone at dinner and ask questions and be engaged with them without a phone glowing near me.
…can you? Because the assessment from most adults around you is, “No, not really. They can’t.”
If you look at the results of OpenAI’s new image generation technology and think, “Uh…what?” you can and should take this time right now to think beyond what an AI generator can do. Prepare your mind for the mental decathlon that awaits you for the next fifty or sixty years of your life.
I don’t know what the future holds, but if it’s worth anything to you, there are some things I think AI will always be terrible at, and they’re worth investing some of your time and attention toward in all your work:
- AI tools generate a “final product.” It’s not really possible to go back later and precisely change something yet, because it’s just one big JPG. This is not how reality works for most people. I’ve tried using it to generate vector files, and it’s basically not even possible. That’s not what people need; it’s just what AI does.
- AI tools can only remix what’s already been done. It’s not additive to the body of human work, and based on current industry thinking, it may not be for a long time, given how the models are trained. ChatGPT could never have dreamed up the iPod, nor the “1,000 songs in your pocket” ad to go with it.
- The humanity of a piece is rarely if ever reflected in AI. AI’s work is generic, broad, and inherently kinda dull. It works in Hoboken just as much as it does in Bloomington or Austin. Focus on local taste, regional aesthetics, and audience preferences.
- Infographics require deep research, information architecture, and good design. Maybe AI will get better at this, but for now it isn’t even trying.
- Harry Truman once remarked “The only thing new is the history you don’t know.” History is rich and locked behind museums and libraries. It requires someone on the ground sorting through their shelves to do anything with that information. AI can’t model what it can’t read. The same goes for photos of real scenes and real places with real things happening.
- There’s a difference between what stuff does and what people need. Be a thing people need, like solving a big problem, educating people, raising money, etc., even if they don’t know they need you yet. Be like that fresh July strawberry at the farmer’s market.
- Focus on experiences, and remember people only do two things online: they watch or read, either to learn or to be entertained. That’s all the Internet really is.
And finally: always create more than you consume.