“AI Slop”

Tech Lowdown with Dillon Roach

I’ll be brief. I value your time. I wrote this myself.

In a time when ‘slop’ wins the Merriam-Webster Word of the Year, a few notes feel important up front, and I’ll come back to them shortly. My name is Dillon Roach. I’m part of the engineering team at OpenTeams, and I’m that coworker who’s always sharing the latest paper, headline, or buzzy claim about anything ‘AI.’ When I’m not working on client or internal projects, I’m likely tinkering with the latest open-source AI lego blocks just to see what new things are possible. The field moves fast, and with that in mind I’ve been asked to write a series of posts sharing my own perspective.

To jump right in: slop. Spend more than a moment online and you’ll find articles bemoaning it, articles produced with it, and slop ‘content’ everywhere on social media. If you’re at all like me, you’re tired of reading it and reading about it; so why on earth am I writing about it here? Because while some have reached the point of calling anything ‘GenAI’ touches “slop,” other folks are using the same tools to solve novel mathematics (ex 1), write competition code outside their field of expertise (ex 2), and the list goes on. So where does AI use churn out slop, and where does it build genuine value? Two points stand out to me as crucial.

First, using these tools is a new skillset. Anyone capable of writing can pull up an LLM, ask for anything under the sun, and something will come back out. But it takes an understanding of the tools’ strengths and weaknesses to know how an LLM can fail at the most basic math question you pose, yet write high-level math proofs that surprise world-class mathematicians. Each of us is at a different point on this new journey, and while there are approaches that help and approaches that hurt, there’s no single ‘right way’ of doing things: get informed, read what others are doing, then go play jazz.

Second, and more importantly, respect others’ time. The models are perfectly capable of taking a prompt that took fifteen seconds to write and churning out fifteen pages of something. Nobody really wants to read that, though, nor spend their own time verifying it. Similarly, you can vibe-code an entire tech stack, but unless you read through and validate the output yourself, the most likely result is someone else spending the time to clean up the eventual mess. As they say in driver’s education: don’t overdrive your headlights. The models can dump thousands of lines of text in moments, but until you catch up and verify what’s come out, you’re just passing the problem along to the next person.

With those two simple points in mind, you can expect my forthcoming posts to focus on the latest tech and techniques to help you enhance your AI skillset, while keeping an eye on quality and process so you’re enhancing your work, rather than replacing it with an imitation.

