It’s been a while since I’ve written - and I’ve got so much to say! I’m going to have to divide it into a few posts.
For now, here’s a roundup of news that’s been on my mind lately :)
1. The Warner Bros Discovery drama
As an HBO expat, I am glued to the news about the sale of WBD. At first it seemed like David Ellison (noted Trump-supporting billionaire), owner of Paramount Skydance, would prevail, but then Netflix stepped onto the scene with a historic ~$83B bid for WB Studios, HBO and its large archive of shows films (leaving CNN, TNT Sports and Discovery out of the mix). Not to be spurned, Paramount Skydance came back directly to shareholders with a hostile bid for $108B that is backed by Saudi Arabia, Abu Dhabi, Qatar and a fund started by Jared Kushner, the president’s son-in-law. Meanwhile, the Ellisons’ family holdings took a huge hit on Dec 11 when Oracle’s stock prices fell, resulting in a $25 billion to $34 billion dollar loss for them. So I’m wondering if they actually still have the capital to make good on their offer. Regardless, it *seems* like the Netflix deal is still happening, though closing it would still take over a year. Plenty of time for more drama 😭. Here’s an article that seems to have most of the latest details.
2. Some *good* news about AI governance.
There’s a new law in NY which requires online retailers to disclose personalized pricing. If you don’t know what I mean by that, the latest trend has been the use of AI to take personal info from potential customers to display a customized price based on what it thinks they’d be willing to pay in order to maximize profit - rather than pricing items transparently.
WITNESS + C2PA have been working together to actualize technologies that can be used to detect whether images are real or AI. Now we just need to require social networks to display this information to users!
OpenAI is losing traction in the lawsuits leveled against it by authors for copyright infringement. A discovery process revealed that the company had deleted two huge datasets of pirated books. Meanwhile, executives seem to be exiting the company in droves. Tbh, the company’s flagrant abuse of copyright and its neglectful attitude towards safety has already landed it solidly on my sh*tlist.
3. Expert personas don’t improve factual accuracy (when prompting AI)
Last month, I took an AI Product Design class (a whole other tale) and the teacher told us every prompt should start with assigning the AI a “role”, eg “Assume you are a UX researcher.” This isn’t unusual - a lot of the advice out there tells you to do this. But as it turns out, telling the AI to “think like a world-class physicist” when it’s computing a math problem actually has no positive impact on the output. To prove this, researchers gave LLMs 198 multiple-choice PhD-level questions across biology, physics, and chemistry (the GPQA Diamond test). They didn’t observe meaningful positive differences in response accuracy from the baseline when the model was instructed to think like an expert. So you can maybe leave out that instruction from now on.
4. Apparently it’s now possible for fonts to be “too woke”
Mark Rubio, Secretary of State, recently ordered the state department’s font to be rolled back to Times New Roman from the current Calibri. Here’s a mini-history of Calibri: it’s a sans serif font designed in 2007 to be more readable on screens and for the visually impaired - and had been Microsoft’s default font from 2007 to 2024. Biden’s administration had originally ordered the State Department to switch to this font to begin with. I guess this all makes it too woke to live. I’m alternately laughing hysterically and crying.
5. It’s (still) pretty easy to trick an LLM into doing something it’s not supposed to do
Cryptographers were looking at how secure LLM filters are when a user asks for information that is dangerous (for example, how to build a bomb). Putting that prompt in as-is will result in the LLM’s filter refusing to carry out the instruction. However, they discovered that using your basic kracker-jack-style decoder ring (a simple puzzle called a substitution cipher, which replaces each letter in a message with another according to a certain code) does the trick. If you want to get fancier, you can use something called a time-lock puzzle (which will look like a long random number that you can instruct the LLM to do certain math operations to to eventually decode the bad prompt). If that doesn’t work, you can just put the prompt in the form of a poem. Add Agentic AI into the mix, and suddenly the person with the bad prompt intentions has access to your credentials and logins. Cryptographers assert that there will always be vulnerabilities in any filters LLMs use to make the experience more safe for users. Do you see why I’ve blocked the ChatGPT website on my 9-year-old’s computer?
6. The Slop Evader browser extension
Artist Tega Brain has created a browser extension that will only return results from before the public release of ChatGPT (so that you can surf the internet slop free). Side effect, you’ll only see results from 2022 or before. Ah, 2022. I never thought I’d look back on that as being a “good” time.
That’s all I’ve got for now - I’ll catch you in the next one!
Stay Calibri.
-Cathy

