It's time for a semi-annual check-in on a hyperfixation that we'd rather not have.
I was writing a longer article about large language models due to what we call "a negative hyperfixation" on them, but decided to strip it back a bit.
It might surprise some friends to learn, but we do not actually consider ourselves "AI skeptics" at this point. We fully acknowledge the transformative power it can have - for example, it would be difficult to deny the sheer number of people who have vibe coded something that fixes some minor annoyance in their time spent on the computer, even with no-to-limited knowledge of the language(s) used to accomplish that task. Some are just using it as "better search", digging through the pile of ads and SEO slop.
But a lot of that software has a limited audience; just the creator or an individual team. If it is shared more widely, security issues tend to be exposed pretty quickly [1] - and anyone reliant on pure vibe coding has limited ability to resolve them.
Some of the open source libraries that vibe coded apps depend on are dying, either because of an endless stream of junk pull requests or because their revenue model was based around sophonts going to their documentation and signing up for a premium service.
And of course, for search, something has heavily contributed to the number of websites that only exist to display ads and marginally correct information.
(And then there's the misinformation, the lack of ability to trust any viral cute animal video, the use of AI generated imagery for propaganda by certain governments...)
We are not AI skeptics. But I can't think of any other technological change in our lifetime where I've been so fascinated by a technology that we do not want to use.
And so, a thought experiment.
the thought experiment
If all societal, ethical, etc. issues with AI were somehow resolved - a clean source of infinite energy, fair compensation for all the artists and authors whose work was stolen for training data, RAM prices returning to normal levels - would you use AI more than you currently do? Would you want to?
Maybe in a Star Trek-style post-scarcity society, the answer would be "maybe a little" - after all, all holodeck applications appear to be vibe coded. But of course, the world we live in only appears to be moving further and further away from that - with the horrors of capitalism only made more horrible with companies having the excuse of AI to blame.
As things currently are:
- We will never intentionally click the ✨ AI button ✨ when browsing your shopping site rather than doing a traditional search. [2]
- If you've made a free app but the AI features need a subscription, we will never pay for them. [3]
- We work with artists frequently, and if they need clarification, we'd much rather give them our terrible MSPaint (pre-AI infestation)/Rian Johnson-quality doodles than slop.
- You'd genuinely have to pay us before we let Claude Code or OpenClaw with a cloud backend do literally anything with our emails/text messages/photos/other most personal and private data. We want as few companies as possible to have access to that stuff. If it's not an offline thing, fuck that (and given the potential for data loss, we don't really want to use that kind of thing offline either). [4]
We do occasionally fuck around with local models because we're curious and we want to learn about the capabilities. It's difficult to have a hyperfixation - even a negative one - with no outlet, and that's as ethical as we can manage, given that the energy use on our puny little MacBook is minimal compared to a datacentre GPU. We have plenty of ways to waste significantly more electricity for a more sustained period (for example, turning the Xbox on).
But I say fuck around because these things never stick; the second we need more SSD space, any model downloaded is the first thing to go.
Local models are what we think of when it comes to the 'genie is out of the bottle' phrase that the AI hype crowd is so fond of. Even if something happens to all the big companies, anyone can just have an open source voice cloner on their hard drive now. That just exists. It can never be fully removed from the internet in the same way the DMCA takedown never got rid of Another Metroid 2 Remake. And that scares the shit out of us, but it's an important, terrible fact about the world we now live in.
For us, the key example of this lack of desire is Apple's much-maligned Apple Intelligence. Tools designed for minimal energy use on modern processors, on device, "free" (with cost of device). We leave it turned on because some of the features are marginally useful - for example, the improved Photos search has been able to find things that previous versions of Photos search did not (though this is certainly more evolution than revolution). Notification summaries, while frequently incorrect or unhelpful, can at least be laughably shit - and sometimes they do help with a friend taking way too many paragraphs to say something.
But there are some tools where even if they were literally the best in the world (which they aren't), I just can't see us ever using them. I am never going to ask it to write for us. I do not want to generatively remove imperfections in photographs or replace faces or whatever Google is doing on the Android side with that. I am never going to create a "genmoji" on purpose. The image generator is the only one of these things that can be uninstalled and we did that basically immediately.
infinitely finite
Of course, we can only judge our own reactions. But I don't think we're alone; I can't say that I saw a positive response to Mozilla's announcement of Firefox becoming a "modern AI browser", as one example (particularly with one of Firefox's first AI moves being to max your CPU just to see what should be in a tab group).
Which presents a little bit of a Problem. Part of the ridiculous valuations of the current "don't call it a bubble" bubble is the assumption that everyone will want to use it, that everyone will want to see it. That you won't be filled with resentment every time Google Chrome prompts you to enter "AI mode" when you're just trying to enter a URL [5]. That scrolling shortform slop will be as desirable to see on Disney+ as the actual professionally produced media it's based on. That you won't get frustrated if you're reading a book and suddenly there's six different "WHY IT MATTERS" headers within the space of four pages. [6]
Our personal belief is that at some point there will be some kind of course correction given just how much of this feels... unsustainable. But like a lot of things right now, things feel like they're going to get worse before they have any chance of getting better.
In the meantime, all we can do is keep creating, keep learning [7], keep supporting each other, and keep bullying the shit out of any brand that's so cheap that they use it for marketing.
footnotes
(disclaimer: this was just one of the first results for "openclaw security") ↩︎
We certainly have accidentally clicked one or two when they've loaded in after the rest of the page to make it obvious that it was just bolted on. In some cases they loaded in late because the site was also trying to render a big tutorial message to say "LOOK AT OUR AI BUTTON". ↩︎
There is the harder question of what to do about an app that we do use every day but has started adding more AI features to the existing subscription. ↩︎
Given the sheer amount of "I told a model to clean up temporary files and it nuked photos/my whole database/months of research/my wife's user account on the computer/etc."... we had one data loss scare recently when our laptop looked like it had died; we don't need another. ↩︎
This example is mostly here because of how many enterprise installs at work I'm seeing it in - installs that are meant to be locked down. I'm going to have to look at some of the Group Policies we're deploying. ↩︎
More on this at some point. ↩︎
We're currently investigating if Astro would be a better fit for some personal projects we've stalled on than Eleventy - but Astro has more JavaScript focus when developing even if it doesn't make it clientside, so we've been learning more of the fundamentals at our own pace. ↩︎