My issue with AI
I don't hate AI. I don't love AI. Frankly, before about three years ago I couldn't really be bothered to care about AI. Some people use it, others thought it was stupid. I assumed they were both right.
Then ChatGPT became popular in the zeitgeist. The term "AI" went from meaning "Artificial Intelligence" to meaning whatever someone had just posted to social media.
ChatGPT, Claude, Llama, and so many others aren't "AI". I mean, they are... but that term is too damn broad. The tools are Generative AI, and more specifically Large Language Models, which are a different flavor of AI from all the others.
Generative AI/LLMs are that super racist online tool that tries to convince people to hurt themselves.
Generative AI/LLMs can't draw hands, or fill eyes with substance, or remember where arms go.
Generative AI/LLMs think this is a parrot:
If you are ever looking for a laugh, visit duck.ai and try to find various versions of unicorns, parrots, Justin Bieber...
I never knew that Justin and Tux were so similar...
Generative AIs aren't terrible – but people act as if basic features are somehow revolutionary and that the tools are seconds away from making everything that isn't AI obsolete. The language is dangerous at best and the philosophy is minimizing and terrible.
We've normalized dangerous language
If we don't consider the implications of our technologies, we allow ourselves to walk willingly into ignorance. We know that the AIs themselves are modifying the very data sets they draw from. It isn't intentional or nefarious, but it is happening.
They sourced their data from public spaces. Public forums and places where social and non-social conversation happens. The digital spaces that we humans originally dominated are now being shared by tools that are consuming what we said and sometimes giving it back to us.
- Reddit shares jump after OpenAI ChatGPT deal
- Reddit sues Anthropic for scraping website content without consent
- LLM Training Data: The 8 Main Public Data Sources
- etc. etc. etc.
GenAI tools are creating things that they are then sometimes also consuming. When you look at the journey generated content makes, it is surprising and fascinating. The cycle itself sounds like a sci-fi horror monster, or maybe a side-joke in a B-movie? Maybe a Lannister? Warning: this link contains more detail about Game of Thrones than you likely want to know.
The problem isn't an easy one. How do you create tools that work well with us without making them sound like us? The language needs to be clear and easy to understand in order to be useful. But if it sounds human, how can you stop YOUR tool from consuming it as if it were human? The problem is complex...
The solution is really simple. We already figured it out, actually. Around 1979 in the training labs of IBM.
https://simonwillison.net/2025/Feb/3/a-computer-can-never-be-held-accountable/
A computer can never be held accountable
therefore a computer must never make a management decision
We knew back when the world was monochromatic that computers were only as good as the people wrangling them. That hasn't changed... they are still bad at decision making and still need to be told how to do things.
Business is going to forever try and find the cheapest way forward. We need to make sure they don't forget that we exist while they do that.
GenAI and other tools are never going to go away. Even if the bubble bursts, the technology isn't going to truly go away. It might pivot, sure, it might even take a major hit as global economies realize water is a finite resource. But it isn't going to just fade into obscurity.
Putting your head in the sand won't fix it.
Regular sanity checks will...
Designing systems that aren't completely autonomous, with shadowy third parties handling the minutiae, will help.
Asking where the sources come from.
And then confirming the sources aren't just regurgitating older AI models.
Listen to people who have experience beyond the scope of an MBA when they explain very real limitations. Sure, sometimes they are scared, sometimes they aren't fully aware or on board. But sometimes you're asking for something that has a cost you don't want to acknowledge.
AI isn't terrible. It isn't the monster under the bed or in the closet. It has slightly moved the cheese and the smart mouse will follow and find it.
My issue with AI is how it is being so terribly misused. Can it help you accomplish your goals? Maybe. Can it do all of the work for you while you sit back, smoke cigars and laugh at how smart you are compared to everyone else?
Helpful checklist
I'm only half-joking
[ ] Is this project beneficial to society, or just financially beneficial to you?
[ ] Could you accomplish this without the direct involvement of your AI-friend?
[ ] How well do you understand what is going on with your "project"?
[ ] No, seriously. If hacked - can you explain *what* happened to Grandma's retirement fund, or are you just as surprised as everyone else?
[ ] Is the AI asking you to do ...strange things?
(Weird programming logic? Strange or outdated data practices? Off-topic or seemingly *pushy* or agenda based reasoning?)
[ ] Is Crypto involved?
[ ] Be honest, is there a token or something that says *fungible* in the design?
[ ] Fine, we all know it's crypto but whatever.
[ ] Were you inspired by YouTube or a real-world need?