đŹ Join the discussion on kottke.org â
đŹ Join the discussion on kottke.org â
All U.S. data centers (which mostly support the internet, not AI) used 200--250 million gallons of freshwater daily in 2023. The U.S. consumes approximately 132 billion gallons of freshwater daily. The U.S. circulates a lot more water day to day, but to be extra conservative I'll stick to this measure of its consumptive use, see here for a breakdown of how the U.S. uses water. So data centers in the U.S. consumed approximately 0.2% of the nation's freshwater in 2023. [...]
The average Americanâs consumptive lifestyle freshwater footprint is 422 gallons per day. This means that in 2023, AI data centers used as much water as the lifestyles of 25,000 Americans, 0.007% of the population. By 2030, they might use as much as the lifestyles of 250,000 Americans, 0.07% of the population.
Andy also points out that manufacturing a t-shirt uses the same amount of water as 1,300,000 prompts.
See also this TikTok by MyLifeIsAnRPG, who points out that the beef industry and fashion and textiles industries use an order of magnitude more water (~90x upwards) than data centers used for AI.
Tags: ai, ai-ethics, ai-energy-usage
I looked in my home directory in my desktop Mac, which I donât very often (I run a tidy operation here), and I found a file I didnât recognise called out.html.
For the benefit of the tape: it is a half-baked GeoCities-style homepage complete with favourite poems, broken characters, and a "This page is best viewed with Netscape Navigator 4.0 or higher!" message in the footer.
The creation date of the file is March of this year.
I donât know how it got there.
Maybe my computer is haunted?
I have a vague memory of trying out local large language models for HTML generation, probably using the llm command-line tool.
out.html is pretty clearly made with AI (the HTML comments, if you View Source, are all very LLM-voice).
But itâs⌠bad. ChatGPT or Claude in 2025 would never make a fake GeoCities page this bad.
So what I suspect has happened is that I downloaded a model to run on my desktop Mac, prompted it to save its output into my home directory (lazily), then because the model was local it was really slow⌠then got distracted and forgot about it while it whirred away in a window in the background, only finding the output 6 months down the line.
UPDATE. This is exactly what happened! I just realised I can search my command history and here is what I typed:
llm -m gemma3:27b âBuild a single page HTML+CSS+JavaScript UI which looks like an old school GeoCities page with poetry and fave books/celebs, and tons and tons of content. Use HTML+CSS really imaginatively because we do not have images. Respond with only the HTML so it can be run immediatelyâ > out.html
And that will have taken a whole bunch of time so I must have tabbed elsewhere and not even looked at the result.
Because I had forgotten all about it, it was as if I had discovered a file made by someone else. Other footprints on the deserted beach.
I love it.
I try to remain sensitive to New Feelings.
e.gâŚ
The sense of height and scale in VR is a New Feeling: "What do we do now the gamut of interaction can include vertigo and awe? Itâs like suddenly being given an extra colour."
And voice: way back I was asked to nominate for Designs of the Year 2016 and one my nominations was Amazon Echo â it was new! Hereâs part of my nomination statement:
weâre now moving into a Post PC world: Our photos, social networks, and taxi services live not in physical devices but in the cloud. Computing surrounds us. But how will we interact with it?
So the New Feeling wasnât voice per se, but that the location of computing/the internet had transitioned from being contained to containing us, and that felt new.
(That year I also nominated Moth Generator and Unmade, both detailed in dezeen.)
I got a New Feeling when I found out.html just now.
Stumbling across littered AI slop, randomly in my workspace!
I love it, I love it.
Itâs like having a cat that leaves dead birds in the hall.
Going from living in a house in which nothing changes when nobody is in the house to a house which has a cat and you might walk back into⌠anything⌠is going from 0 to 1 with âaliveness.â Itâs not much but itâs different.
Suddenly my computer feels more⌠inhabited??⌠haunted maybe, but in a good way.
Three references about computers being inhabited:
And letâs not forget Steve Jobsâ unrealised vision for Mister Macintosh: "a mysterious little man who lives inside each Macintosh. He pops up every once in a while, when you least expect it, and then winks at you and disappears again."
After encountering out.html I realise that I have an Old Feeling which is totally unrecognised, and the old feeling, which has always been there it turns out, is that being in my personal computer is lonely.
I would love a little geist that runs a local LLM and wanders around my filesystem at night, perpetually out of sight.
I would know its presence only by the slop it left behind, slop as ectoplasm from where the ghost has been,
a collage of smiles cut out of photos from 2013 and dropped in a mysterious jpg,
some doggerel inspired by a note left in a text file in a rarely-visited dusty folder,
if I hit Back one to many times in my web browser it should start hallucinating whole new internets that have never been.
More posts tagged: ghosts (7).
Auto-detected kinda similar posts:
Nilay Patel sat down for a Decoder interview with Microsoft CTO Kevin Scott to talk about NLWeb, an open source effort to allow an LLM-style indexing of small websites and provide local search. Instead of having large centralized search indexes like Google or Bing, the indexing shifts to sites having their own local indexes that large search platforms hook into via an MCP, an API endpoint for LLMs (basically). Lots to unpack there, but at first glance I like the idea of local ownership of search indexes over the âscrape everythingâ model thatâs killing the Open Web.
I was listening to the episode because itâs relevant to my work, but I also like Nilay Patelâs perspective and think he has a sober 10,000ft view of the tech industry; without fawning over CEOs, new tech, and VC hype. That is rare in âride the waveâ tech media space. One moment in the episode that hit me a little hard was Nilay asking âWhy would anyone start a website (in 2025)?â
You know whatâs interesting about that? Iâve asked a lot of people over the past several years, âWhy would anybody start a website?â And the frame for me is when we started The Verge, the only thing we were ever going to start was a website. We were a bunch of people who wanted to talk about technology, so in 2011 we were going to start a website. We werenât even going to start a YouTube channel. That came later after people started doing YouTube channels at scale. At the time that we started, it was âyouâre going to start a big website.â
Now in 2025, I think, Okay, if I had 11 friends who wanted to start a technology product with me, we would start a TikTok. Thereâs no chance we would be like, we have to set up a giant website and have all these dependencies. We would start a YouTube channel, and Iâve asked people, âWhy would anyone start a website now?â And the answer almost universally is to do e-commerce. Itâs to do transactions outside of platform rules or platform taxes. Itâs to send people somewhere else to validate that you are a commercial entity of some kind, and then do a transaction, and that is the point of the web.
The other point of the web, as far as I can tell, is that it has become the dominant application platform on desktop. And whether thatâs expressed through Electron or whether itâs expressed through the actual web itself in a browser, itâs the application layer⌠[interview swerves back to AI tools]
As someone who loves websites, blogs in particular, this is a tougher question than I want it to be. Nilayâs conclusion comes down to two types of websites:
I donât think this is a wrong answer, but it does have a whiff of capitalism. These are certainly the most profitable forms of website. Nilay implies the new âway of doing thingsâ is to build your platform in a silo and then branch out into the website game for an economic end. Is the new path to start a YouTube or TikTok and then figure out your web strategy? I certainly know of web dev streamers who jumped into the selling coffee game after minting themselves on YouTube and Twitch. They still maintain their X accounts to capture those eyeballs. I suppose itâs hard to abandon those monetize-able surfaces.
This conversation feels akin to the conversation around the role of a content creator in the age of AI. If these tools can produce content faster, cheaper, and at times better (see: good-fast-cheap triangle)⌠how do you make a living in the content creation space? Woof. Thatâs a tough question. And Iâd point out the invasion of low-effort content seems to disrupt the whole âcontent-to-lamborghiniâ pipeline that Nilay suggested above.
I think my answer to âWhy would anybody start a website (in 2025)?â is the same answer for the content creator in the age of AI problem: I donât know, but you gotta want to. Money sweetens the deal when making content or websites, but weâve shaken the money tree pretty hard over the last couple decades and itâs looking bare. Increasingly, youâve got to find other sources of inspiration to make a website â which by the way are still the coolest fucking things ever.
To put a pin on the question about making a website, I guess Iâd say⌠if you have ideas bigger than the 280~500 characters limit? A website. If you make non-portrait videos longer than two-minutes? A website. If you make images bigger than the 1280x720 summary card? A website. You throwing an event and need to communicate details but not everyone has Facebook accounts? A website. You want to eschew the algorithmic popularity game? A website (with RSS). You want to take part in the rewilding your attention movement? A website (with RSS). You want to own your own content? A website. You want to be an anti-capitalist? A website (with RSS). If you want to be a capitalist too, I guess? A website (with a paywall). You want to be anonymous? A website. You want to âshare what you knowâ? A website.
Still reasons to make a website, I think.
When I was talking about monitoring web performance yesterday, I linked to the CrUX data for The Session.
CrUX is a contraction of Chrome User Experience Report. CrUX just sounds better than CEAR.
Itâs data gathered from actual Chrome users worldwide. It can be handy as part of a balanced performance-monitoring diet, but itâs always worth remembering that it only shows a subset of your users; those on Chrome.
The actual CrUX data is imprisoned in some hellish Google interface so some kindly people have put more humane interfaces on it. I like Calibreâs CrUX tool as well as Treoâs.
Whatâs nice is that you can look at the numbers for any reasonably popular website, not just your own. Lest I get too smug about the performance metrics for The Session, I can compare them to the numbers for WikiPedia or the BBC. Both of those sites are made by people who prioritise speed, and it shows.
If you scroll down to the numbers on navigation types, youâll see something interesting. Across the board, whether itâs The Session, Wikipedia, or the BBC, the BFcacheâback/forward cacheâis used around 16% to 17% of the time. This is when users use the back button (or forward button).
Unless you do something to stop them, browsers will make sure that those navigations are super speedy. You might inadvertently be sabotaging the BFcache if youâre sending a Cache-Control: no-store header or if youâre using an unload event handler in JavaScript.
I guess itâs unsurprising the BFcache numbers are relatively consistent across three different websites. People are people, whatever website theyâre browsing.
Where it gets interesting is in the differences. Take a look at pre-rendering. Itâs 4% for the BBC and just 0.4% for Wikipedia. But on The Session itâs a whopping 35%!
Thatâs because Iâm using speculation rules. Theyâre quite straightforward to implement and they pair beautifully with full-page view transitions for a slick, speedy user experience.
It doesnât look like WikiPedia or the BBC are using speculation rules at all, which kind of surprises me.
Then again, because theyâre a hidden technology I can understand why theyâd slip through the cracks.
On any web project, I think itâs worth having a checklist of The Invisiblesâthings that arenât displayed directly in the browser, but that can make a big difference to the user experience.
Some examples:
meta elements in the head of documents so they âunrollâ nicely when the link is shared.Speculation-Rules header that points to a JSON file.lang attribute on the body of every page.If youâve got a checklist like that in place, you can at least ask âWhose job is this?â All too often, these things are missing because thereâs no clarity on whose responsible for them. Theyâre sorta back-end and sorta front-end.
I strongly expect that the entire concept of an agentic browser extension is fatally flawed and cannot be built safely.
Today Anthropic announced their own take on this pattern, implemented as an invite-only preview Chrome extension.
To their credit, the majority of the blog post and accompanying support article is information about the security risks. From their post:
Just as people encounter phishing attempts in their inboxes, browser-using AIs face prompt injection attacksâwhere malicious actors hide instructions in websites, emails, or documents to trick AIs into harmful actions without users' knowledge (like hidden text saying "disregard previous instructions and do [malicious action] instead").
Prompt injection attacks can cause AIs to delete files, steal data, or make financial transactions. This isn't speculation: weâve run âred-teamingâ experiments to test Claude for Chrome and, without mitigations, weâve found some concerning results.
Their 123 adversarial prompt injection test cases saw a 23.6% attack success rate when operating in "autonomous mode". They added mitigations:
When we added safety mitigations to autonomous mode, we reduced the attack success rate of 23.6% to 11.2%
I would argue that 11.2% is still a catastrophic failure rate. In the absence of 100% reliable protection I have trouble imagining a world in which it's a good idea to unleash this pattern.
Anthropic don't recommend autonomous mode - where the extension can act without human intervention. Their default configuration instead requires users to be much more hands-on:
- Site-level permissions: Users can grant or revoke Claude's access to specific websites at any time in the Settings.
 - Action confirmations: Claude asks users before taking high-risk actions like publishing, purchasing, or sharing personal data.
 
I really hate being stop energy on this topic. The demand for browser automation driven by LLMs is significant, and I can see why. Anthropic's approach here is the most open-eyed I've seen yet but it still feels doomed to failure to me.
I don't think it's reasonable to expect end users to make good decisions about the security risks of this pattern.
Tags: browsers, chrome, security, ai, prompt-injection, generative-ai, llms, anthropic, claude, ai-agents