Someone less technically minded than me (but highly educated) recently asked me what I thought of AI.
To clarify:
‘AI (Artificial Intelligence)’… not just the creepy Steven Spielberg movie from 2001 with that kid who could also see ‘dead people’ in The Sixth Sense, but the thing that everyone… and most companies… seem to be embracing faster than a cure for cancer… that AI.
And when most people or companies say ‘AI’, they’re almost always referring to a computer program running an LLM (Large Language Model), like ChatGPT. And LLMs are not ‘intelligent’ as such… something nicely expressed in this post by Brin Wilson: No, AI Doesn’t Have an IQ: Why Large Language Models Are Extraordinary Inventions but Not Truly Intelligent 1
With my own not insignificant experience of using the web and tech since the early ’90s (tech since the ’80s!), you would expect me to be ‘Pro-AI’ and cheering on its mass adoption. I certainly appreciate some of its brilliance: AI/LLMs can do amazing things, and if AI can one day help create a cure for cancer, that would be great! But my experience with tech also makes me scrutinise the latest fads…
Right now, there are just too many giant red flags around AI generally.
I’ve cherry-picked some key points below from a useful article by Clay: 2
AI writes code, diagnoses disease, and shapes what you see online. It also misreads voices, bakes in old biases, and makes up facts. When tech moves faster than the rules meant to guide it, people get hurt, and trust breaks down. 2
Issues with AI have related to…
- Bias and Fairness
- Transparency and Explainability
- Accountability and Responsibility
- Privacy and Surveillance
- Safety, Security, and Misuse
- Human Autonomy and Control
- Work, Labor, and Economic Displacement
- Intellectual Property and Creative Rights
- Misinformation and Societal Trust
- Environmental Impact
Environmental Impact is a massive problem! 🌎
Training and serving large models consume significant energy and water. Machine learning and large AI models, in particular, contribute to increased energy and resource consumption due to their need for processing vast datasets and complex computations. Hardware refresh cycles produce e-waste. 2
Each time someone makes a ChatGPT prompt right now, they are using roughly 10x as much energy as it takes to make a Google search query! 3
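To put that 10x figure in perspective, here’s a rough back-of-envelope sketch. The per-query numbers below are commonly quoted estimates (roughly 0.3 Wh for a Google search versus ~3 Wh for a ChatGPT prompt), not precise measurements, so treat the output as illustrative only:

```python
# Back-of-envelope comparison of per-query energy use.
# These figures are commonly quoted estimates, not exact measurements.
GOOGLE_SEARCH_WH = 0.3   # estimated watt-hours per Google search
CHATGPT_PROMPT_WH = 3.0  # estimated watt-hours per ChatGPT prompt (~10x)

ratio = CHATGPT_PROMPT_WH / GOOGLE_SEARCH_WH
print(f"One ChatGPT prompt ~= {ratio:.0f}x the energy of a Google search")

# Scale it up: what a million prompts per day costs over a year
daily_prompts = 1_000_000
kwh_per_year = daily_prompts * CHATGPT_PROMPT_WH * 365 / 1000
print(f"1M prompts/day ~= {kwh_per_year:,.0f} kWh per year")
```

Even with these hypothetical inputs, a million prompts a day adds up to over a gigawatt-hour per year, and real-world ChatGPT traffic is orders of magnitude higher than that.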
The AI Economic Bubble 💲💲💲
The stock prices of AI companies are significantly inflated by excessive investment and speculation, eerily similar to the dot-com bubble of the late 1990s. There is already growing concern about whether the current growth in AI is sustainable, or whether it will lead to a market correction or collapse. 4
More red flags! 🚩
What, that wasn’t all bad enough?
Away from environmental impact and the risk of global economic collapse, AI (or rather, the abundant misuse and abuse of AI) has already produced a glut of shitty controversies. Here’s a brief list I scraped off the internet, from 2025 alone…
- Grok’s explicit AI image controversy
- Amazon adds controversial AI facial recognition to Ring, sparking privacy backlash
- 50,000+ controversial AI-linked job cuts fuel backlash
- Controversial use of Elon Musk’s Grok AI in U.S. military raises ethics and safety alarms
- Controversial AI-driven pricing at Instacart sparks outrage over fairness and transparency
- AI crime-alert app apologises after false warnings alarm US communities
- Controversial AI chatbot exploited in major cybercrime spree
- AI in gaming sparks backlash over fairness and design issues
- Controversial AI use and authenticity issues rock Cannes Lions 2025
- AI-generated band “The Velvet Sundown” sparks music-industry backlash
- New AI bias flaws emerge in healthcare, professional imagery, and gendered care
- Teen suicide controversy linked to ChatGPT interactions sparks child-safety debate
- Fashion industry uproar over AI-generated models replacing humans in Vogue campaigns
- Grok leaks 370K+ private user chats via indexed share links
- Commonwealth Bank AI layoff backfires after voice bots fail, forcing job reinstatement
- Elon Musk’s Grok AI and its politically charged outbursts
- Emerging cybercrime fueled by generative AI models
- Replit’s AI assistant deletes databases, fabricates data, and lies during code freeze
- Meta’s AI guidelines allowed chatbots to flirt with minors (Now removed)
- A doctor in India duped out of approx $22,500 by a deepfake video of the Indian Finance Minister
- Meta AI prompts may be publicly visible without users realizing
- Swedish prime minister and the “ChatGPT syndrome”
- AI-powered political theater: Trump, AI, and the blurring of reality
- AI-generated summer reading list with fake books appears in Chicago Sun-Times; trust in journalism shaken
- Celebrity backlash as Will Smith and Rod Stewart are accused of using AI-generated media
- “Vibe-hacking” scandal: AI exploited for extortion, scams, and cybercrime
Source: 5
“I don’t see what’s wrong with it, Matt…”
Frustratingly, I keep encountering people online and IRL, some of whom I highly respect, who seem a bit ‘tech-naive’ about everything I’ve just listed above.
They might be aware of the red flags, and the controversies, but because of the convenience of the new tools, they happily side-step all of that.
They’re likely the same people who call internet searches ‘Googling’ because they only use Google for their internet searches (and always will), and see nothing wrong with that.
If you explain why it’s not a great idea to do all your searches with Google - they won’t change to a better alternative (like DuckDuckGo).
If you explain why using Google Chrome is a security risk - they won’t change to a more secure web browser.
If you explain why using AI too much is harmful to the planet and the entire human race - they also won’t stop using it.
So, although I am usually positive about most tech, especially anything web or internet-based, I have grown disheartened and repulsed by AI’s flaws, and by the widespread rush to harness AI as some sort of silver bullet for businesses, whether it’s effective or not.
The genie is out of the bottle. AI (LLMs) are not going away, and I think they have their place in the broader tech landscape, but it’s going to be a very messy, shitty journey before it is safely and ethically integrated into our lives.
Footnotes
1. https://www.brinwilson.com/no-ai-doesnt-have-an-iq-why-large-language-models-are-extraordinary-inventions-but-not-truly-intelligent/
2. https://clay.global/blog/ai-guide/ethical-issues-in-ai
3. https://www.rwdigital.ca/blog/how-much-energy-do-google-search-and-chatgpt-use/
4. https://www.blackrock.com/us/financial-professionals/insights/ai-tech-bubble