Oh, neat
We are all desperate. New product categories are never enough. Technology must distend, blob-like, across all of civilization.
In our frantic desire to find the next thing, we turned to hollow, planet-destroying grifts, all of which failed to find a right relationship to broader society. And now we have machine learning.
Machine learning is both more and less than what we think it might be. It is both more and less sophisticated, always in a becoming. “Wait,” they say. “Wait a year.” Who cares? You are attached to a hive mind now. Texting alone is a miracle. It was never enough for technology to just be there, though, to vibe, because people are greedy and want more profit. The market rewards this.
Machine learning is what happens when we’ve given up on everything else and surrender to a half-remembered past. It is only as good as its inputs, so we made the inputs big. In doing so, we risk surrendering construction of the future to something only fractionally human.
For now, there is utility. I used to have lots of moments where I’d work with something new and say “oh, neat” — autocorrect, mapping, high-quality touchscreens. I ran out of “oh, neat” moments a few years ago. Few things light me up in this industry anymore. This seems fine; it’s ok for a thing to become diffuse and quotidian, since that means it’s effectively won. Now I would rather look within, and to each other, to determine what technology is doing both to & for us.
For the moments, increasingly rare, when technology really does help us, “oh, neat” is a good yardstick. We should always be suspicious of hype. Where does it come from? Who stands to benefit? What would durability look like, instead?
I’ve had one “oh, neat” moment with machine learning, and that was with transcription. I use a transcription service for podcast episodes, interviews, strategy calls, and usability tests. But transcription services, especially the big ones, are… questionably ethical. And they’re expensive! And so I would rather not have them as a line item on my proverbial balance sheet. Now there’s an open-source model for transcription that works well enough, is easily run on your own machine, generates subtitles for video(!), and costs zero dollars.
I ran it on a recent podcast, and it performed better than my existing transcription tool. “Oh, neat,” I thought.