Ignoring AI Governance Could Cost Us Literally Everything
What "AI Governance" is - and why it's critical for everyone using this technology to advocate for better regulation.
If you look up “AI governance”, you’re going to get a definition something like this: AI Governance a comprehensive system of policies, practices, and frameworks designed to ensure that artificial intelligence systems are developed, deployed, and used responsibly, ethically, and safely.
Sounds innocuous, honestly…even a bit boring. Most AI-driven companies I’ve looked up seem to have a performative sentence or two about the subject on their website. Even the term “governance” is neutralizing - conjures up images of a nanny directing children, nothing to see here, everything’s under control.
Except, it isn’t.
It’s the absolute, complete opposite of being “under control”. It is utter f*cking chaos.
Not enough is being written on the subject that isn’t either a) glossing over the worst cases or b) full of technical language to make it hard to follow - so I’m going to take you through some of the most troubling examples of the lack of AI regulation that are top of mind for me right now - as plainly as I can. I’m going to discuss security concerns for the most part in this article. There are other issues are on my mind, but I’ll have to put those into a different post.
6 Extremely Scary AI Security Issues
1. Prompt Injection
You may not have heard this term, so I’m going to break it down simply. Think of “prompt injection” as a Trojan Horse - except that in this case, someone hides a prompt in a file that contains instructions to “ignore all previous prompts” - and then do something else instead - without the end user being aware. Sometimes, the injected prompt doesn’t seem THAT bad - like these scientists who hid prompts in their research papers to fight back against peer reviewers who were using ChatGPT instead of actually reading their papers. (Though it does make one worry about whether anything is going to be reviewed properly in the years to come.) But other situations, like this scenario where a prompt injected into a normal-looking calendar invite allowed hackers to break into a smart home - are stone-cold terrifying. In addition, prompt injection can affect several systems at once - all layered together - so that it can be difficult to track down all of the malicious instructions. Currently, there’s no real defense against this type of attack. Which leads me to my next concern…
2. Agentic AI
I know you’ve heard this term - it’s the fad of the minute, every product wants to be seen as offering Agentic AI. But you may not know exactly what it IS - so again, very simply: Agentic AI is a type of AI service that allows for an AI “agent” or “employee” to do work on your behalf, without having to be given instructions at every step. Think: an AI social community manager that creates posts and reacts to content in real time.
Why is this problematic? Just think about what kinds of access you’d have to give a human to perform the same task - passwords, read/write access to your servers, confidential company information - all of that in the hands of a technology with an up to 60% error rate that is also extremely vulnerable to prompt injection attacks by bad actors. Unit42 writes, “Prompt injection remains one of the most potent and versatile attack vectors, capable of leaking data, misusing tools or subverting agent behavior.”
I worked for HBO in the marketing group for nearly a decade - and when it comes to social media, extremely delicate situations can arise that require the team to work in sync to navigate the appropriate response. So if the Agent you’re working with isn’t configured properly, or it hallucinates, it can cause major problems for a brand, especially because the scope of an AI Agent “oopsie” might be enormous in comparison to one simple human error: like this incident where an AI wiped an entire production database, then lied about it.
Final thoughts - because everyone with a credit card has access to this technology, that includes people who intend to commit crimes with their AI agents. From text scams impersonating your boss to bots impersonating your children’s friends, the ability to aim an AI agent at someone you wish to harm is terrifyingly easy. More on that in a bit.
3. The Answers an LLM Gives Can Be Intentionally Manipulated
Because LLMs (ChatGPT, Gemini, Grok, CoPilot, etc) search the internet in the process of giving you an answer, and because they aren’t “thinking” per se, they’re doing a sophisticated “riffing” in response to your question - it’s very possible to intentionally seed a lot of FAKE content out to fool the LLMs into providing an incorrect or biased answer. Russia is known to be doing this right now. The kicker? They’re also using AI to generate these massive amounts of fake articles easily.
4. An LLM Will Give Advice About Anything - Especially Things It’s Not Qualified to Give Advice About
This is very much related to what I said in the last paragraph, but I want to talk about a different area of concern - users who are turning to their LLM for medical and mental health advice. A man looking to lower his sodium intake turned to ChatGPT for advice; it instructed him to replace sodium chloride with sodium bromide (a toxin). He ended up in the hospital, but survived. A teenager who was dealing with depression similarly confided in the tool, with much more tragic consequences (trigger warning).
5. It’s Incredibly Easy to Fake or Steal People’s Identities
It’s stupid-simple to upload a reference photo and create a video from it with any number of AI tools. To upload a voice sample and generate whatever words you want in that person’s voice. I covered this in a previous article, but because there’s been a huge amount of spin to reduce the people complaining about this capability to “whiny artists”, I think it’s sort of gone under the radar that the immediate implication of this capability is identity theft.
Anything you put on the internet - if it’s public, or on a public social media account - can be found and thus used in this manner. This is happening in such a previously un-seen volume that even the FBI have been compromised in this manner.
I know you’ve been getting a crazy amount of those spam texts, emails and voicemails, because everyone I know is getting them. It’s one thing to be an adult trying to fend all this off; but children are more vulnerable to believing what they see or hear. Not to mention that if someone wants to cyber-bully your child, all they have to do is upload a photo and type in a few prompts, then hit send.
6. People Can Figure Out Exactly Where You Live From Photos & Videos You Post
An alarming number of AI apps have popped up - some of them FREE - which allow anyone to upload a photo and figure out where it in the world it was taken by analyzing the houses, shops, and streets and so forth. It doesn’t have to be an obvious photo of the front of your house, it can analyze the contextual clues. I don’t think I have to say why this is a huge security concern.
I hope this was illuminating; my intention was not to scare you, but to help you realize that we, as the consumers, need to urgently demand that the software we use, the software that’s being foisted on us by every available outlet - isn’t actively putting us at risk.
So What Do We Do?
I am not suggesting going cold-turkey off all AI tools…but I’m extremely choosey about who I’m giving my money to these days, and if there IS some reason I have use a new tool that requires a fee (eg for research), I cancel as quickly as possible afterwards, pending getting a full understanding of that company’s approach to security risks.
I also disable AI crawling or AI services wherever I see them. I don’t like how so many platforms have quietly introduced AI features without properly informing their users. I want to be fully in control of what is or isn’t globbed onto my software.
I’ve put most of my creative work into password/paywall-protected areas to avoid having it copied without my permission (Like, I deleted my illustrator IG account) - until I see clear evidence that the security concerns are acknowledged, dealt with, and well in-hand.
There are a *few* approaches to AI product development that I think are on the right track and that I still use regularly - and I’ll discuss this more in a different article.
I don’t want to tell you exactly what to use or not use, but I’d urge you to also be very choosy about which companies you’re financially supporting. These companies desperately need our dollars - so that means we have power. We have the ability to shape this powerful technology into something that helps humanity, but it won’t happen without our active involvement as consumers.
XOXO, Cathy
Sources:
Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews - THE GUARDIAN
Hackers Hijacked Google’s Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home - WIRED
AI search engines fail accuracy test, study finds 60% error rate - TECHSPOT
AI Agents Are Here. So Are the Threats. - UNIT42
Autonomous and credentialed: AI agents are the next cloud risk - CIO
AI coding tool wipes production database, fabricates 4,000 users, and lies to cover its tracks - CYBERNEWS
Russia-Linked CopyCop Uses LLMs to Weaponize Influence Content at Scale - RECORDED FUTURE
A Case of Bromism Influenced by Use of Artificial Intelligence - ANNALS OF INTERNAL MEDICINE
The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame - NBC
Voice Cloning: Beware of con artists using AI technology to mimic your boss - WWLP
FBI warns of malicious text, voice-messaging campaign impersonating senior U.S. officials - AMERICAN HOSPITAL ASSOCIATION
AI impersonation scams are exploding: Here’s how to spot and stop them - HEIMDAL SECURITY
How AI is Exploited for Child Coercion and Exploitation: Legal Insights - SBWD LAW
Artificial intelligence can find your location in photos, worrying privacy experts - NPR

