Let the machine do the dishes.

[Image: Yellow dishwashing gloves propped in a sink on stacked glasses and a red pot, one glove positioned to look like it's giving the middle finger. A dish rack and soap sit nearby. Photo credit: Theresa Thompson]

The least intelligent artificial intelligence recently clarified my feelings about where I can accept AI. Or, at least, where I understand its appeal.

Shortly after getting my AirPods, I was doing the dishes and listening to a podcast, which paused for Siri to announce that my wife “sent a photo: a black spider on a white surface.” I understood this as a call to action: my wife, afflicted by arachnophobia, was asking me to take care of the situation.

I was surprised by Siri’s description, though. My phone is a yellow iPhone 14 Plus, from early 2023. Apple Marketing would tell me that it’s old enough to be taken on Antiques Roadshow. Well, not in so many words—but they would tell me, less than two years after the iPhone 14 was introduced, that this vintage device was so woefully out of date that it would not support Apple Intelligence.

Of course, Apple’s applications of machine vision predate the current “generative” AI boom. I don’t know what kinds of models are involved in Siri announcing the content of incoming photo messages, but I imagine they are closer to the ones that analyze images in your photo library to identify pictures of “cars” or “dogs” or other search terms—which is to say, the kinds of models, of machine learning, of AI that hardly anybody objected to five years ago.

Regardless of the type of model, it’s also a type of AI application I am comfortable with. I wasn’t in a position to pick up my phone when I was up to my elbows in dirty pans and dishwater; I couldn’t read a message or look at a photo, so Siri did it for me.

Linguist, professor, and AI skeptic Emily Bender often says that she won’t read “synthetic text.” I don’t take that strong of a stance, but I think it has an admirable, bracing clarity. And I do understand where she is coming from. I don’t want to read a novel written by AI. Or a news article. Or a social media post.

When we read, part of what we do is connect to another mind. Whether the words on the page make sense, whether they are good, even whether they are true, doesn’t change the fact that contact has taken place. If I read something I know was written by a human, I can’t be sure if that human knew what they were talking about, or told the truth. One thing I do know for sure, though: another mind thought of these words.

And that, I think, is why I’m comfortable when Siri describes a picture to me. The mind I care about connecting with is the person who took the picture, or the person who sent it to me (in the case of the errant spider, it was the same person). If I can’t see the picture, I don’t really care who describes it for me, or whether there is a “who” involved in the first place.

A common sentiment among AI objectors seems relevant here: why would I bother to read something you couldn’t be bothered to write? As bad as Siri is at so, so many things, here it illustrated that there is a case where I do want to read (or hear, anyhow) what no one could be bothered to write. I want the information, even if it wasn’t worth anyone’s time to write down.

And that pattern is bigger than my trivial oh-my-God-look-at-this-spider example above. One of the recent headline features in Microsoft Teams, Zoom, and other corporate spyware meeting software is automated meeting transcripts and AI-powered summaries, including action items for follow-up.

People—I think rightly—often get hives thinking about Big Tech recording and transcribing their every word, even within the highly circumscribed circumstance of a work meeting. On the other hand, taking meeting notes is tedious. It’s hard to do well (at least to my standards), and harder still to do well if you are also trying to lead or even participate in the meeting. It has to get done, but it’s also not really anyone’s actual job, even if it is their responsibility. Think about it: if your entire company got to stop taking meeting notes tomorrow, how many people would the firm be able to lay off as a result? The answer is probably somewhere between zero and you’re a liar.

And meeting minutes pass the same test the narrated photo did: I care about the information in the notes. I’m not particularly interested in connecting with the mind that wrote them. I care about what was agreed upon by the people sitting in the meeting—those are the minds I want to hear from.

It has become a cliché that generative AI gets it exactly backwards: we want technology to automate the drudgery of folding the laundry, and free us up to make art; we don't want technology to make art to free us up to go fold the laundry. Meeting minutes are the information worker's version of folding the laundry. They're not poetry. They're a chore. I barely want to spend any time reading meeting minutes; I certainly don't care whether or not you spent your time writing them down.

None of which is to say the current LLM-based tools to do this are the right ones, that they do it right, or that I want you to feel good about using them. But there is such a thing as digital drudgery. I understand why someone would want a machine to wash their metaphorical dishes. I do, too.