Marginalia: AI, self-driving cars, and the delusion of perfection
Know how when you get a new car, you suddenly start noticing lots of cars of the same model or the same color everywhere? That’s how I’m feeling as I read Matthew Crawford’s book, Why We Drive: Toward a Philosophy of the Open Road. It might not be apparent from the title, but the two themes that keep jumping out at me are the dangers in our headlong rush toward applying AI to everything and the problem of perfectionism.
If you follow my work, you know I’m co-authoring a book, I Am Perfectly Flawsome: How Embracing Imperfection Makes Us Better. That explains my sensitivity to things that look like perfectionism.
[Shameless plug: We’re looking for beta readers as we get close to the galley stage, so if you’d like to help out let us know; hit reply if you’re seeing this in an email, or leave a comment if you’re reading on the blog.]
And you may have seen my posts on the legal and professional dangers of over-dependence on AI:
- Taming the AI Beast: How to make ChatGPT serve, not enslave you
- Are We Having Fun Yet? (with AI)
- [TfTi] Who is the Author of a book these days?
What does any of that have to do with driving?
Well, I’m only about halfway through the book, but Crawford’s main focus to this point has been on the development of self-driving or heavily AI-assisted cars, drawing upon lessons we seem to be having a hard time learning from recent experience with airplanes.
And embedded in his discussion of the rapid evolution of these AI-assisted machines and our ability to use them, I kept writing notes in the margins drawn from our work on perfectionism. So let’s start there.
Imperfect, and Always Will Be
Getting right to the heart of the matter, Crawford points to the messy period we’re in right now, with most of the technology inserted into cars aimed at “semiautonomous” AI-assisted driving. The car may have systems that take control of braking and stability in emergencies, while providing the driver with guidance and warnings about navigation, lane departure, and many other things during a trip.
Among the problems are many studies (of both drivers and commercial airline pilots) showing that such systems tend to lull us into complacency and make us less attentive to the tasks we’re supposed to remain responsible for. Worse, our actual skills for taking control when we’re needed atrophy when rarely used, which one study called the “deskilling effect” of antilock braking and stability control systems.
But the most disturbing observation, to me, came when the “human factors” experts Crawford quotes treated this as a temporary situation:
“The transition to driverless cars will be difficult, ‘especially during the period when automation is both incomplete and imperfect, requiring the human driver to maintain oversight and sometimes intervene and take closer control.'”
I wrote in the margin in big letters, underlined, with an exclamation point: Always!
The implication seems to be that if we can just get the car completely and perfectly controlled by the AI, everything will be fine. Ask Elon how that’s going with his self-driving cars and rocket ships.
The same experts seem to know that this complete and perfect control is nowhere on the horizon, even if they are among the delusional engineers and executives who apparently think someday it will be. Writing about navigation systems that mistake a path for a road and tell us to follow it into a lake, or driverless cars that are well programmed to stop for pedestrians in crosswalks, but not between intersections, Crawford quotes them again:
“Complex automation systems that rely on artificial intelligence solve ‘most problems with ease, until they encounter a difficult, unusual case, and then do not.’ When they do not, they may not know that they have failed.”
We’ve all heard about how ChatGPT and Gemini are prone to giving us “hallucinations” when we push too hard for concrete answers, preferring — we must assume based on their programming — to tell lies rather than appear not to know. A weakness of some humans, too, I know.
As we write in our book, perfection does not exist. Not now, or in some programmers’ dreamed-of future.
“Even the laws of physics are subject to an uncertainty principle, chaos theory, and randomness.”
So, if we’re not able to program perfection into the AI, where does that leave us as drivers, pilots, or their passengers?
“A Techno-Zoo for Defeated People”
As I noted, I’m only halfway through the book, but the chapter I’ve just started, called “Folk Engineering,” has me thinking Crawford will be offering some more hopeful ideas for avoiding the implications of the subheading I just typed above.
But I suspect his suggestions will require us — the users and recipients of these evolving technologies — to take an active role, both in demanding better design and engineering and in learning how to use them with skill and purpose.
In his chapter called “Automation As Moral Reeducation” he hints at what will be needed: a willingness and confidence in our ability to take charge of the tools when appropriate — a knowledgeable balance between the deference to the machine’s automated systems that comes from their reliability most of the time and an assertiveness to override them when that will yield a better result.
As he points out, achieving that balance will force both the techno-wizards who produce these tools and us who use them to communicate better than, say, Boeing did in withholding crucial details about its software upgrades to the 737 Max — details that prevented pilots from acting correctly when the new autopilot adjustments kicked in, causing two catastrophic crashes. We’ll need to gain enough understanding of how cars and other AI-controlled tools work — and why — so we can know when and how to intervene.
Back to the delusion of perfection problem, Crawford is urging us to resist the profit-driven “self-deception of messianic techies” and avoid the atrophy of our skills, particularly our creativity. He sums up the problem this way:
“Systems designed to minimize the role of human intelligence tend to be brittle, as they are not able to anticipate every contingency. When they fail, their failures tend to be systemic, in proportion to the comprehensive reach of their control. Essentially, we are being asked to place complete faith in a committee — at Boeing, at Airbus, and soon at Tesla and Waymo — that it was able to grasp every pertinent consideration in designing the system … On this design side … there is certainly some hubris. But for the user of the system, the pilot or driver, a very different mentality is encouraged [one he calls a dispiriting deference to the machine.]”
He acknowledges that gaining these new kinds of knowledge and skill will require serious effort, suggesting that:
“They often call on that narrow sliver of our competence that is based on language: for a pilot, perhaps reading a manual that describes various autothrottle modes.”
He’s clearly also worried about the much older way humans have acquired and passed on learning: by doing. And in airplanes and automobiles, the very existence of the automated controls interferes with our ability to learn and embody the skills we may need to intervene or override successfully.
That’s why he leaves that chapter with the warning that in the “dispirited state” he sees resulting from our present trajectory,
“… we do become incompetent. … the world becomes a techno-zoo for defeated people, like the glassy‑eyed creatures in WALL‑E …”
As I was writing this post, a similar warning came in on LinkedIn from AI expert Lori Mazor, entitled, Humans On, AI Off: How I got beaten at my own game, where she asked the AI tool Claude to help with the writing. And in a portion written entirely by Claude, came this dismal assessment:
“The rise of AI in the writing industry isn’t a collaboration; it’s a farce. We’re not leveraging the strengths of human creativity and machine intelligence; we’re surrendering our humanity to the algorithms.”
Making Space for Skilled Human Activity
To avoid leaving us completely … well, dispirited … I’ll close with some brighter thoughts from the beginning of the “Folk Engineering” chapter. Although the book is about cars and driving — taking me back to my teenage backyard mechanic days — the following statement by Crawford can be applied just as well, I believe, to using our fingers for typing and hands for manipulating a mouse or stylus when using the AI-driven writing and design tools available to creatives today:
“I think a fully free relationship to technology would be one that neither shuns it as alienating magic nor accepts uncritically the agenda that is sealed inside the black box.”
He’s writing about his current project of rebuilding from the ground up a 1975 VW Beetle and explaining that he’s not a Luddite, but is “giving it state-of-the-art digital engine management, using the do-it-yourself platform MegaSquirt.”
That hat-tip to a DIY tech platform provides the connection to a crucial point he made a few paragraphs earlier, referring to the Beetle itself as a physical object we can look at, touch, and in those ways understand. He writes:
“Because it is accessible to your imagination in this way, you may find that you want to do stuff to it. That is, you may be tempted to become a folk engineer.”
Once again, I totally get what he’s talking about. When I was much younger, I had that feeling of wanting to “do stuff to” bicycles, a 1953 Chevy field car, and the 1961 Triumph Spitfire I worked on nearly every weekend during college, both to keep it running and to doll it up. For the last two decades, I’ve directed most of my physical world “do stuff to” energies toward renovating whatever house Yvonne and I currently owned.
But I’ve also had that feeling about document design in the early versions of Word and PowerPoint, and, to use another phrase from Crawford, gotten “all worked up about the possibilities” when I was first introduced 20+ years ago to website design with the former Macromedia Dreamweaver, graphic design with Fireworks, and then page layout with Adobe InDesign.
In those cases, I found myself wanting to do stuff with the software.
And that’s the feeling I think we should look for when interacting with AI. It’s a tool. Take the time to learn enough about how it works so you can do stuff with it, too.
In the Introduction to Why We Drive, Crawford writes about the potential encroachment of technology into a shrinking “space for skilled human activity.” He makes a strong case for the danger, and for how it may be one underlying cause of the populist feeling and protest around the world.
I wrote in the margin there:
“If we want to keep ‘space for skilled human activity’ we may need to keep reskilling, upskilling, and opening new spaces for such activities.”
In the spirit of human-machine collaboration, I’ll add this bit of advice from Lori Mazor’s piece, but written (apparently) by Claude:
“So, dear readers, I urge you to be cautious in this brave new world of AI authorship. Don’t be fooled by the shiny allure of machine-generated content. Embrace the imperfections, the quirks, and the idiosyncrasies that make human writing so compelling.”
Even the AI itself knows that embracing imperfection makes us — and our work — better!