I'm a web developer securing the world. I build things for the internet. HTML5,
24 stories · 3 followers

The AI water issue is fake


Andy Masley (previously):

All U.S. data centers (which mostly support the internet, not AI) used 200–250 million gallons of freshwater daily in 2023. The U.S. consumes approximately 132 billion gallons of freshwater daily. The U.S. circulates a lot more water day to day, but to be extra conservative I'll stick to this measure of its consumptive use; see here for a breakdown of how the U.S. uses water. So data centers in the U.S. consumed approximately 0.2% of the nation's freshwater in 2023. [...]

The average American’s consumptive lifestyle freshwater footprint is 422 gallons per day. This means that in 2023, AI data centers used as much water as the lifestyles of 25,000 Americans, 0.007% of the population. By 2030, they might use as much as the lifestyles of 250,000 Americans, 0.07% of the population.
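
Those quoted percentages hold up to back-of-the-envelope arithmetic. Here's a quick sanity check, assuming a US population of roughly 340 million (an assumption of mine; the quote doesn't state the population figure it used):

    // Back-of-envelope check of the quoted figures (a sketch, not Andy's actual maths).
    // Assumption not in the quote: US population of ~340 million.
    const usPopulation = 340_000_000;
    const allDataCentersGalPerDay = 225_000_000;    // midpoint of the quoted 200-250 million
    const usFreshwaterGalPerDay = 132_000_000_000;  // quoted US consumptive use
    const perPersonGalPerDay = 422;                 // quoted lifestyle footprint

    const share = allDataCentersGalPerDay / usFreshwaterGalPerDay;
    console.log(`All data centers: ${(share * 100).toFixed(2)}% of US freshwater`); // ~0.17%, i.e. ~0.2%

    const aiEquivalentPeople = 25_000; // the quoted 2023 figure for AI data centers
    console.log(`AI data centers ≈ ${(aiEquivalentPeople * perPersonGalPerDay).toLocaleString()} gallons/day`); // ~10.6 million
    console.log(`That's ${((aiEquivalentPeople / usPopulation) * 100).toFixed(4)}% of the population`); // ~0.007%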

Andy also points out that manufacturing a t-shirt uses the same amount of water as 1,300,000 prompts.

See also this TikTok by MyLifeIsAnRPG, who points out that the beef, fashion, and textile industries use orders of magnitude more water (roughly 90× and up) than the data centers used for AI.

Tags: ai, ai-ethics, ai-energy-usage

Shared by yayadrian, 2 days ago, Leicester, UK.

I like AI slop and I cannot lie


I looked in my home directory on my desktop Mac, which I don’t do very often (I run a tidy operation here), and I found a file I didn’t recognise called out.html.

Here is out.html.

For the benefit of the tape: it is a half-baked GeoCities-style homepage complete with favourite poems, broken characters, and a "This page is best viewed with Netscape Navigator 4.0 or higher!" message in the footer.

The creation date of the file is March of this year.

I don’t know how it got there.

Maybe my computer is haunted?


I have a vague memory of trying out local large language models for HTML generation, probably using the llm command-line tool.

out.html is pretty clearly made with AI (the HTML comments, if you View Source, are all very LLM-voice).

But it’s… bad. ChatGPT or Claude in 2025 would never make a fake GeoCities page this bad.

So what I suspect happened is that I downloaded a model to run on my desktop Mac and prompted it to save its output into my home directory (lazily). Because the model was local it was really slow… I got distracted and forgot about it while it whirred away in a window in the background, only finding the output six months down the line.


UPDATE. This is exactly what happened! I just realised I can search my command history and here is what I typed:

llm -m gemma3:27b 'Build a single page HTML+CSS+JavaScript UI which looks like an old school GeoCities page with poetry and fave books/celebs, and tons and tons of content. Use HTML+CSS really imaginatively because we do not have images. Respond with only the HTML so it can be run immediately' > out.html

And that will have taken a whole bunch of time so I must have tabbed elsewhere and not even looked at the result.


Because I had forgotten all about it, it was as if I had discovered a file made by someone else. Other footprints on the deserted beach.

I love it.

I try to remain sensitive to New Feelings.

e.g…

The sense of height and scale in VR is a New Feeling: "What do we do now the gamut of interaction can include vertigo and awe? It’s like suddenly being given an extra colour."

And voice: way back I was asked to nominate for Designs of the Year 2016 and one of my nominations was Amazon Echo – it was new! Here’s part of my nomination statement:

we’re now moving into a Post PC world: Our photos, social networks, and taxi services live not in physical devices but in the cloud. Computing surrounds us. But how will we interact with it?

So the New Feeling wasn’t voice per se, but that the location of computing/the internet had transitioned from being contained to containing us, and that felt new.

(That year I also nominated Moth Generator and Unmade, both detailed in dezeen.)

I got a New Feeling when I found out.html just now.


Stumbling across littered AI slop, randomly in my workspace!

I love it, I love it.

It’s like having a cat that leaves dead birds in the hall.

Going from living in a house in which nothing changes when nobody is home, to a house with a cat where you might walk back in on… anything… is going from 0 to 1 with “aliveness.” It’s not much but it’s different.

Suddenly my computer feels more… inhabited??… haunted maybe, but in a good way.


Three references about computers being inhabited:

  1. Every page on my blog has multiplayer cursors and cursor chat because every webpage deserves to be a place (2024) – and once you realise that a webpage can show passers-by then all other webpages feel obstinately lonely.
  2. Little Computer People (1985), the Commodore 64 game that revealed that your computer was really a tiny inhabited house, and I was obsessed at the time. LCP has been excellently written up by Jay Springett (2024).
  3. I wrote about Gordon Brander’s concept for Geists (2022). Geists are/were little bots that meander over your notes directory, "finding connections between notes, remixing notes, issuing oracular provocations and gnomic utterances."

And let’s not forget Steve Jobs’ unrealised vision for Mister Macintosh: "a mysterious little man who lives inside each Macintosh. He pops up every once in a while, when you least expect it, and then winks at you and disappears again."


After encountering out.html I realise that I have an Old Feeling which had gone totally unrecognised, and the old feeling (which has always been there, it turns out) is that being in my personal computer is lonely.

I would love a little geist that runs a local LLM and wanders around my filesystem at night, perpetually out of sight.

I would know its presence only by the slop it left behind, slop as ectoplasm from where the ghost has been,

a collage of smiles cut out of photos from 2013 and dropped in a mysterious jpg,

some doggerel inspired by a note left in a text file in a rarely-visited dusty folder,

if I hit Back one too many times in my web browser it should start hallucinating whole new internets that have never been.
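
If I ever build it, it could be as simple as a nightly cron job. A playful sketch (entirely hypothetical: the folder, the model, and the behaviour are made up; it reuses the same llm CLI as above), saved as geist.mjs:

    #!/usr/bin/env node
    // A filesystem "geist": pick a random note, ask a local model for doggerel,
    // and leave the result behind as a hidden file. Run nightly via cron.
    import { execFileSync } from "node:child_process";
    import { readdirSync, readFileSync, writeFileSync } from "node:fs";
    import { join } from "node:path";

    const notesDir = join(process.env.HOME, "notes"); // hypothetical folder to haunt
    const files = readdirSync(notesDir).filter((f) => f.endsWith(".txt"));
    const chosen = files[Math.floor(Math.random() * files.length)];
    const note = readFileSync(join(notesDir, chosen), "utf8").slice(0, 2000);

    const poem = execFileSync(
      "llm",
      ["-m", "gemma3:27b", `Write four lines of doggerel inspired by this note:\n\n${note}`],
      { encoding: "utf8" }
    );

    // Ectoplasm: a dated, dot-prefixed file left in the same dusty folder.
    writeFileSync(join(notesDir, `.geist-${Date.now()}.txt`), poem);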


More posts tagged: ghosts (7).


Shared by yayadrian, 23 days ago, Leicester, UK: “You had me at GeoCities.”

“Why would anybody start a website?”


Nilay Patel sat down for a Decoder interview with Microsoft CTO Kevin Scott to talk about NLWeb, an open source effort to enable LLM-style indexing of small websites and provide local search. Instead of relying on large centralized search indexes like Google’s or Bing’s, the indexing shifts to sites maintaining their own local indexes that large search platforms hook into via MCP, an API endpoint for LLMs (basically). Lots to unpack there, but at first glance I like the idea of local ownership of search indexes over the “scrape everything” model that’s killing the Open Web.
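
To make that architecture concrete, here's a hypothetical sketch of a site-local search endpoint. This is not NLWeb's actual schema or the MCP wire format, just the shape of the idea: the site owns its own index and answers structured queries from LLM-driven clients.

    // Hypothetical site-local search endpoint (not NLWeb's real API).
    import http from "node:http";

    // The site builds and owns its own tiny index of its own content.
    const index = [
      { url: "/posts/why-make-a-website", title: "Why make a website?" },
      { url: "/posts/rss-forever", title: "RSS forever" },
    ];

    http
      .createServer((req, res) => {
        const { pathname, searchParams } = new URL(req.url, "http://localhost");
        if (pathname !== "/search") {
          res.writeHead(404).end();
          return;
        }
        const q = (searchParams.get("q") || "").toLowerCase();
        const results = index.filter((doc) => doc.title.toLowerCase().includes(q));
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ query: q, results }));
      })
      .listen(8080);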

I was listening to the episode because it’s relevant to my work, but I also like Nilay Patel’s perspective and think he has a sober 10,000 ft view of the tech industry, without fawning over CEOs, new tech, and VC hype. That is rare in the “ride the wave” tech media space. One moment in the episode that hit me a little hard was Nilay asking “Why would anyone start a website (in 2025)?”

You know what’s interesting about that? I’ve asked a lot of people over the past several years, “Why would anybody start a website?” And the frame for me is when we started The Verge, the only thing we were ever going to start was a website. We were a bunch of people who wanted to talk about technology, so in 2011 we were going to start a website. We weren’t even going to start a YouTube channel. That came later after people started doing YouTube channels at scale. At the time that we started, it was “you’re going to start a big website.”

Now in 2025, I think, Okay, if I had 11 friends who wanted to start a technology product with me, we would start a TikTok. There’s no chance we would be like, we have to set up a giant website and have all these dependencies. We would start a YouTube channel, and I’ve asked people, “Why would anyone start a website now?” And the answer almost universally is to do e-commerce. It’s to do transactions outside of platform rules or platform taxes. It’s to send people somewhere else to validate that you are a commercial entity of some kind, and then do a transaction, and that is the point of the web.

The other point of the web, as far as I can tell, is that it has become the dominant application platform on desktop. And whether that’s expressed through Electron or whether it’s expressed through the actual web itself in a browser, it’s the application layer… [interview swerves back to AI tools]

As someone who loves websites, blogs in particular, this is a tougher question than I want it to be. Nilay’s conclusion comes down to two types of websites:

  • E-commerce
  • Application

I don’t think this is a wrong answer, but it does have a whiff of capitalism. These are certainly the most profitable forms of website. Nilay implies the new “way of doing things” is to build your platform in a silo and then branch out into the website game for an economic end. Is the new path to start a YouTube or TikTok and then figure out your web strategy? I certainly know of web dev streamers who jumped into the selling-coffee game after minting themselves on YouTube and Twitch. They still maintain their X accounts to capture those eyeballs. I suppose it’s hard to abandon those monetizable surfaces.

This conversation feels akin to the conversation around the role of a content creator in the age of AI. If these tools can produce content faster, cheaper, and at times better (see: good-fast-cheap triangle)… how do you make a living in the content creation space? Woof. That’s a tough question. And I’d point out that the invasion of low-effort content seems to disrupt the whole “content-to-Lamborghini” pipeline that Nilay suggested above.

I think my answer to “Why would anybody start a website (in 2025)?” is the same answer for the content creator in the age of AI problem: I don’t know, but you gotta want to. Money sweetens the deal when making content or websites, but we’ve shaken the money tree pretty hard over the last couple decades and it’s looking bare. Increasingly, you’ve got to find other sources of inspiration to make a website – which by the way are still the coolest fucking things ever.

To put a pin in the question about making a website, I guess I’d say… if you have ideas bigger than the 280–500 character limit? A website. If you make non-portrait videos longer than two minutes? A website. If you make images bigger than the 1280x720 summary card? A website. You’re throwing an event and need to communicate details, but not everyone has Facebook accounts? A website. You want to eschew the algorithmic popularity game? A website (with RSS). You want to take part in the rewilding-your-attention movement? A website (with RSS). You want to own your own content? A website. You want to be an anti-capitalist? A website (with RSS). If you want to be a capitalist too, I guess? A website (with a paywall). You want to be anonymous? A website. You want to “share what you know”? A website.

Still reasons to make a website, I think.

Shared by yayadrian, 42 days ago, Leicester, UK: “Always comes back to a website.”

The Invisibles


When I was talking about monitoring web performance yesterday, I linked to the CrUX data for The Session.

CrUX is a contraction of Chrome User Experience Report. CrUX just sounds better than CEAR.

It’s data gathered from actual Chrome users worldwide. It can be handy as part of a balanced performance-monitoring diet, but it’s always worth remembering that it only shows a subset of your users: those on Chrome.

The actual CrUX data is imprisoned in some hellish Google interface, so some kindly people have put more humane interfaces on it. I like Calibre’s CrUX tool as well as Treo’s.

What’s nice is that you can look at the numbers for any reasonably popular website, not just your own. Lest I get too smug about the performance metrics for The Session, I can compare them to the numbers for Wikipedia or the BBC. Both of those sites are made by people who prioritise speed, and it shows.

If you scroll down to the numbers on navigation types, you’ll see something interesting. Across the board, whether it’s The Session, Wikipedia, or the BBC, the BFcache—back/forward cache—is used around 16% to 17% of the time. This is when users use the back button (or forward button).

Unless you do something to stop them, browsers will make sure that those navigations are super speedy. You might inadvertently be sabotaging the BFcache if you’re sending a Cache-Control: no-store header or if you’re using an unload event handler in JavaScript.
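
For example, on the JavaScript side, swapping an unload handler for pagehide keeps the page eligible (a minimal sketch; saveState is a hypothetical function, and you’d also avoid sending Cache-Control: no-store on the HTML response):

    // Don't: an unload handler makes the page ineligible for the BFcache in most browsers.
    // window.addEventListener("unload", () => saveState());

    // Do: pagehide fires in the same situations and keeps the page eligible.
    // event.persisted is true when the page is about to enter the BFcache.
    window.addEventListener("pagehide", (event) => {
      saveState(); // hypothetical: persist whatever you were saving in unload
      if (event.persisted) {
        console.log("Page is going into the back/forward cache");
      }
    });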

I guess it’s unsurprising the BFcache numbers are relatively consistent across three different websites. People are people, whatever website they’re browsing.

Where it gets interesting is in the differences. Take a look at pre-rendering. It’s 4% for the BBC and just 0.4% for Wikipedia. But on The Session it’s a whopping 35%!

That’s because I’m using speculation rules. They’re quite straightforward to implement and they pair beautifully with full-page view transitions for a slick, speedy user experience.
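
For the record, a minimal speculation rules block looks something like this (a sketch, not necessarily the exact rules The Session uses): the browser prerenders same-site links as the user shows intent.

    <!-- Prerender same-site links when the user hovers or starts to click. -->
    <script type="speculationrules">
    {
      "prerender": [{
        "where": { "href_matches": "/*" },
        "eagerness": "moderate"
      }]
    }
    </script>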

It doesn’t look like Wikipedia or the BBC are using speculation rules at all, which kind of surprises me.

Then again, because they’re a hidden technology I can understand why they’d slip through the cracks.

On any web project, I think it’s worth having a checklist of The Invisibles—things that aren’t displayed directly in the browser, but that can make a big difference to the user experience.

Some examples:

If you’ve got a checklist like that in place, you can at least ask “Whose job is this?” All too often, these things are missing because there’s no clarity on who’s responsible for them. They’re sorta back-end and sorta front-end.

Shared by yayadrian, 50 days ago, Leicester, UK.

Piloting Claude for Chrome


Two days ago I said:

I strongly expect that the entire concept of an agentic browser extension is fatally flawed and cannot be built safely.

Today Anthropic announced their own take on this pattern, implemented as an invite-only preview Chrome extension.

To their credit, the majority of the blog post and accompanying support article is information about the security risks. From their post:

Just as people encounter phishing attempts in their inboxes, browser-using AIs face prompt injection attacks—where malicious actors hide instructions in websites, emails, or documents to trick AIs into harmful actions without users' knowledge (like hidden text saying "disregard previous instructions and do [malicious action] instead").

Prompt injection attacks can cause AIs to delete files, steal data, or make financial transactions. This isn't speculation: we’ve run “red-teaming” experiments to test Claude for Chrome and, without mitigations, we’ve found some concerning results.

Their 123 adversarial prompt injection test cases saw a 23.6% attack success rate when operating in "autonomous mode". They added mitigations:

When we added safety mitigations to autonomous mode, we reduced the attack success rate from 23.6% to 11.2%

I would argue that 11.2% is still a catastrophic failure rate. In the absence of 100% reliable protection I have trouble imagining a world in which it's a good idea to unleash this pattern.

Anthropic don't recommend autonomous mode, where the extension can act without human intervention. Their default configuration instead requires users to be much more hands-on:

  • Site-level permissions: Users can grant or revoke Claude's access to specific websites at any time in the Settings.
  • Action confirmations: Claude asks users before taking high-risk actions like publishing, purchasing, or sharing personal data.

I really hate being stop energy on this topic. The demand for browser automation driven by LLMs is significant, and I can see why. Anthropic's approach here is the most open-eyed I've seen yet but it still feels doomed to failure to me.

I don't think it's reasonable to expect end users to make good decisions about the security risks of this pattern.

Tags: browsers, chrome, security, ai, prompt-injection, generative-ai, llms, anthropic, claude, ai-agents

Shared by yayadrian, 54 days ago, Leicester, UK: “It’s probably the future but not sure if we are ready for it yet.”

three books dot net


As a moderate, semi-dangerous San Francisco liberal, The Ezra Klein Show is required listening. The Vox co-founder and New York Times opinion contributor brings on smart guests to talk about smart things. At the end of every episode Ezra asks his guests this question:

“What are three books you’d recommend to the audience?”

Every time I listen to the show I think to myself: “I should write these down! I’m always looking for good non-fiction books to read, and these seem interesting!” Now, the Times does the Right Thing by including all of those book recommendations in the RSS feed, and linking to them on their site. But I’m lazy, and I wanted them all in one place.

So I put them all in one place: 3books.net.

  • Built with Claude Code, hosted on Vercel & Neon, with book data from ISBNdb.
  • The system parses the RSS feed from the Times, uses GPT-3.5 to pull out the recommended books, and writes little bios of the guests. Stuffs all that into the database. (A rough sketch of this step follows the list.)
  • It looks up the books in ISBNdb, grabs some metadata about the books, stuffs all that into the database.
  • The system does the best it can to associate a single book across multiple episodes (The Origins of Totalitarianism has been recommended in seven episodes!).
  • The processing isn’t perfect! Sometimes it will include a book written by the guest! I’m OK with that.
  • The home page shows the books recommended in recent episodes (and skips any episodes that don’t have book recommendations), detail pages for books and episodes show, well, details.
  • I’ve started doing some basic “recommended with:” pivots on books, so you can see what other books have been recommended alongside the one you’re currently viewing.
  • Search sort of works! It’s not fancy.
  • I like the “random book” and “random episode” features.
  • The system currently has about 1300 books across nearly 500 episodes. I want to explore more ways to browse this corpus; it’s a tidy little dataset.
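
Here’s a rough sketch of that RSS-parsing and book-extraction step. It’s hypothetical: the site’s real code isn’t published here, and the rss-parser and openai npm packages, the feed URL, and the prompt are all stand-ins.

    // Sketch of the "parse the feed, have a model pull out the books" step.
    import Parser from "rss-parser";
    import OpenAI from "openai";

    const FEED_URL = "https://example.com/ezra-klein-show.rss"; // placeholder, not the real feed URL
    const parser = new Parser();
    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    const feed = await parser.parseURL(FEED_URL);

    for (const episode of feed.items) {
      const completion = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: [{
          role: "user",
          content:
            'List the books recommended in this episode description as a JSON array of ' +
            '{"title", "author"} objects. Return [] if there are none.\n\n' +
            episode.contentSnippet,
        }],
      });
      const books = JSON.parse(completion.choices[0].message.content);
      // ...look each book up in ISBNdb and upsert the episode, guest bio, and books.
      console.log(episode.title, books);
    }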

I like projects, I like books, I like podcasts, I like RSS feeds. I think I would like Ezra Klein! Seems like a nice guy. (Hmmm, is all of this just an extreme case of parasocial fan behavior? Yikes.)

And I love making software. Making software with Claude Code has been a very interesting experience. I’ve gone through all the usual ups and downs – the “holy shit it worked” moments, the “holy shit the robot is a f’ing idiot” moments, the “wow, you really are a stateless machine without any memory, aren’t you” moments. But it’s super fun and incredibly empowering to have what Josh Brake calls “an e-bike for the mind” at your beck and call.

The site isn’t perfect, and there’s still more that I want / need to do. But if it’s good enough to buy a domain name for, it’s good enough to share.

So, go. Browse. Find a good book to read! Let me know what you pick.

Shared by yayadrian, 54 days ago, Leicester, UK.