I'm a web developer securing the world. I build things for the internet. HTML5,
22 stories
·
3 followers

“Why would anybody start a website?”

1 Comment

Nilay Patel sat down for a Decoder interview with Microsoft CTO Kevin Scott to talk about NLWeb, an open source effort to allow an LLM-style indexing of small websites and provide local search. Instead of having large centralized search indexes like Google or Bing, the indexing shifts to sites having their own local indexes that large search platforms hook into via an MCP, an API endpoint for LLMs (basically). Lots to unpack there, but at first glance I like the idea of local ownership of search indexes over the “scrape everything” model that’s killing the Open Web.

I was listening to the episode because it’s relevant to my work, but I also like Nilay Patel’s perspective and think he has a sober 10,000ft view of the tech industry; without fawning over CEOs, new tech, and VC hype. That is rare in “ride the wave” tech media space. One moment in the episode that hit me a little hard was Nilay asking “Why would anyone start a website (in 2025)?”

You know what’s interesting about that? I’ve asked a lot of people over the past several years, “Why would anybody start a website?” And the frame for me is when we started The Verge, the only thing we were ever going to start was a website. We were a bunch of people who wanted to talk about technology, so in 2011 we were going to start a website. We weren’t even going to start a YouTube channel. That came later after people started doing YouTube channels at scale. At the time that we started, it was “you’re going to start a big website.”

Now in 2025, I think, Okay, if I had 11 friends who wanted to start a technology product with me, we would start a TikTok. There’s no chance we would be like, we have to set up a giant website and have all these dependencies. We would start a YouTube channel, and I’ve asked people, “Why would anyone start a website now?” And the answer almost universally is to do e-commerce. It’s to do transactions outside of platform rules or platform taxes. It’s to send people somewhere else to validate that you are a commercial entity of some kind, and then do a transaction, and that is the point of the web.

The other point of the web, as far as I can tell, is that it has become the dominant application platform on desktop. And whether that’s expressed through Electron or whether it’s expressed through the actual web itself in a browser, it’s the application layer… [interview swerves back to AI tools]

As someone who loves websites, blogs in particular, this is a tougher question than I want it to be. Nilay’s conclusion comes down to two types of websites:

  • E-commerce
  • Application

I don’t think this is a wrong answer, but it does have a whiff of capitalism. These are certainly the most profitable forms of website. Nilay implies the new “way of doing things” is to build your platform in a silo and then branch out into the website game for an economic end. Is the new path to start a YouTube or TikTok and then figure out your web strategy? I certainly know of web dev streamers who jumped into the selling coffee game after minting themselves on YouTube and Twitch. They still maintain their X accounts to capture those eyeballs. I suppose it’s hard to abandon those monetize-able surfaces.

This conversation feels akin to the conversation around the role of a content creator in the age of AI. If these tools can produce content faster, cheaper, and at times better (see: good-fast-cheap triangle)… how do you make a living in the content creation space? Woof. That’s a tough question. And I’d point out the invasion of low-effort content seems to disrupt the whole “content-to-lamborghini” pipeline that Nilay suggested above.

I think my answer to “Why would anybody start a website (in 2025)?” is the same answer for the content creator in the age of AI problem: I don’t know, but you gotta want to. Money sweetens the deal when making content or websites, but we’ve shaken the money tree pretty hard over the last couple decades and it’s looking bare. Increasingly, you’ve got to find other sources of inspiration to make a website – which by the way are still the coolest fucking things ever.

To put a pin on the question about making a website, I guess I’d say… if you have ideas bigger than the 280~500 characters limit? A website. If you make non-portrait videos longer than two-minutes? A website. If you make images bigger than the 1280x720 summary card? A website. You throwing an event and need to communicate details but not everyone has Facebook accounts? A website. You want to eschew the algorithmic popularity game? A website (with RSS). You want to take part in the rewilding your attention movement? A website (with RSS). You want to own your own content? A website. You want to be an anti-capitalist? A website (with RSS). If you want to be a capitalist too, I guess? A website (with a paywall). You want to be anonymous? A website. You want to “share what you know”? A website.

Still reasons to make a website, I think.

Read the whole story
yayadrian
8 days ago
reply
Always comes back to a website.
Leicester, UK
Share this story
Delete

The Invisibles

1 Share

When I was talking about monitoring web performance yesterday, I linked to the CrUX data for The Session.

CrUX is a contraction of Chrome User Experience Report. CrUX just sounds better than CEAR.

It’s data gathered from actual Chrome users worldwide. It can be handy as part of a balanced performance-monitoring diet, but it’s always worth remembering that it only shows a subset of your users; those on Chrome.

The actual CrUX data is imprisoned in some hellish Google interface so some kindly people have put more humane interfaces on it. I like Calibre’s CrUX tool as well as Treo’s.

What’s nice is that you can look at the numbers for any reasonably popular website, not just your own. Lest I get too smug about the performance metrics for The Session, I can compare them to the numbers for WikiPedia or the BBC. Both of those sites are made by people who prioritise speed, and it shows.

If you scroll down to the numbers on navigation types, you’ll see something interesting. Across the board, whether it’s The Session, Wikipedia, or the BBC, the BFcache—back/forward cache—is used around 16% to 17% of the time. This is when users use the back button (or forward button).

Unless you do something to stop them, browsers will make sure that those navigations are super speedy. You might inadvertently be sabotaging the BFcache if you’re sending a Cache-Control: no-store header or if you’re using an unload event handler in JavaScript.

I guess it’s unsurprising the BFcache numbers are relatively consistent across three different websites. People are people, whatever website they’re browsing.

Where it gets interesting is in the differences. Take a look at pre-rendering. It’s 4% for the BBC and just 0.4% for Wikipedia. But on The Session it’s a whopping 35%!

That’s because I’m using speculation rules. They’re quite straightforward to implement and they pair beautifully with full-page view transitions for a slick, speedy user experience.

It doesn’t look like WikiPedia or the BBC are using speculation rules at all, which kind of surprises me.

Then again, because they’re a hidden technology I can understand why they’d slip through the cracks.

On any web project, I think it’s worth having a checklist of The Invisibles—things that aren’t displayed directly in the browser, but that can make a big difference to the user experience.

Some examples:

If you’ve got a checklist like that in place, you can at least ask “Whose job is this?” All too often, these things are missing because there’s no clarity on whose responsible for them. They’re sorta back-end and sorta front-end.

Read the whole story
yayadrian
16 days ago
reply
Leicester, UK
Share this story
Delete

Piloting Claude for Chrome

1 Comment

Piloting Claude for Chrome

Two days ago I said:

I strongly expect that the entire concept of an agentic browser extension is fatally flawed and cannot be built safely.

Today Anthropic announced their own take on this pattern, implemented as an invite-only preview Chrome extension.

To their credit, the majority of the blog post and accompanying support article is information about the security risks. From their post:

Just as people encounter phishing attempts in their inboxes, browser-using AIs face prompt injection attacks—where malicious actors hide instructions in websites, emails, or documents to trick AIs into harmful actions without users' knowledge (like hidden text saying "disregard previous instructions and do [malicious action] instead").

Prompt injection attacks can cause AIs to delete files, steal data, or make financial transactions. This isn't speculation: we’ve run “red-teaming” experiments to test Claude for Chrome and, without mitigations, we’ve found some concerning results.

Their 123 adversarial prompt injection test cases saw a 23.6% attack success rate when operating in "autonomous mode". They added mitigations:

When we added safety mitigations to autonomous mode, we reduced the attack success rate of 23.6% to 11.2%

I would argue that 11.2% is still a catastrophic failure rate. In the absence of 100% reliable protection I have trouble imagining a world in which it's a good idea to unleash this pattern.

Anthropic don't recommend autonomous mode - where the extension can act without human intervention. Their default configuration instead requires users to be much more hands-on:

  • Site-level permissions: Users can grant or revoke Claude's access to specific websites at any time in the Settings.
  • Action confirmations: Claude asks users before taking high-risk actions like publishing, purchasing, or sharing personal data.

I really hate being stop energy on this topic. The demand for browser automation driven by LLMs is significant, and I can see why. Anthropic's approach here is the most open-eyed I've seen yet but it still feels doomed to failure to me.

I don't think it's reasonable to expect end users to make good decisions about the security risks of this pattern.

Tags: browsers, chrome, security, ai, prompt-injection, generative-ai, llms, anthropic, claude, ai-agents

Read the whole story
yayadrian
20 days ago
reply
It’s probably the future but not sure if we are ready for it yet.
Leicester, UK
Share this story
Delete

three books dot net

1 Share

As a moderate, semi-dangerous San Francisco liberal, The Ezra Klein Show is required listening. The Vox co-founder and New York Times opinion contributor brings on smart guests to talk about smart things. At the end of every episode Ezra asks his guests this question:

“What are three books you’d recommend to the audience?”

Every time I listen to the show I think to myself: “I should write these down! I’m always looking for good non-fiction books to read, and these seem interesting!” Now, the Times does the Right Thing by including all of those book recommendations in the RSS feed, and linking to them on their site. But I’m lazy, and I wanted them all in one place.

So I put them all in one place: 3books.net.

  • Built with Claude Code, hosted on Vercel & Neon, with book data from ISBNdb.
  • The system parses the RSS feed from the Times, and uses GPT 3.5 to pull out the recommended books, and write little bios of the guests. Stuffs all that into the database.
  • It looks up the books in ISBNdb, grabs some metadata about the books, stuffs all that into the database.
  • The system does the best it can to associate a single book across multiple episodes (The Origins of Totalitarianism has been recommended in seven episodes!).
  • The processing isn’t perfect! Sometimes it will include a book written by the guest! I’m OK with that.
  • The home page shows the books recommended in recent episodes (and skips any episodes that don’t have book recommendations), detail pages for books and episodes show, well, details.
  • I’ve started doing some basic “recommended with:” pivots on books, so you can see what other books have been recommended alongside the one you’re currently viewing.
  • Search sort of works! It’s not fancy.
  • I like the “random book” and “random episode” features.
  • The system currently has about 1300 books across nearly 500 episodes. I want to explore more ways to browse this corpus; it’s a tidy little dataset.

I like projects, I like books, I like podcasts, I like RSS feeds. I think I would like Ezra Klein! Seems like a nice guy. (Hmmm, is all of this just an extreme case of parasocial fan behavior? Yikes.)

And I love making software. Making software with Claude Code has been a very interesting experience. I’ve gone through all the usual ups and downs – the “holy shit it worked” moments, the “holy shit the robot is a f’ing idiot” moments, the “wow, you really are a stateless machine without any memory, aren’t you” moments. But it’s super fun and incredibly empowering to have what Josh Brake calls “an e-bike for the mind” at your beck and call.

The site isn’t perfect, and there’s still more that I want / need to do. But if it’s good enough to buy a domain name for, it’s good enough to share.

So, go. Browse. Find a good book to read! Let me know what you pick.

Read the whole story
yayadrian
20 days ago
reply
Leicester, UK
Share this story
Delete

Quoting Steve Krouse

1 Comment and 2 Shares

When you vibe code, you are incurring tech debt as fast as the LLM can spit it out. Which is why vibe coding is perfect for prototypes and throwaway projects: It's only legacy code if you have to maintain it! [...]

The worst possible situation is to have a non-programmer vibe code a large project that they intend to maintain. This would be the equivalent of giving a credit card to a child without first explaining the concept of debt. [...]

If you don't understand the code, your only recourse is to ask AI to fix it for you, which is like paying off credit card debt with another credit card.

Steve Krouse, Vibe code is legacy code

Tags: vibe-coding, ai-assisted-programming, generative-ai, steve-krouse, ai, llms

Read the whole story
yayadrian
47 days ago
reply
This makes a lot of sense
Leicester, UK
Share this story
Delete

‘Abandoned NYC’, Photos of New York City’s Abandoned Spaces by Photographer Will Ellis

1 Share

Abandoned NYC Photos of New York City

Since 2012 Brooklyn-based photographer Will Ellis has been documenting eerie abandoned locales in New York City in his ongoing photo series Abandoned NYC. The series has taken Ellis to all five boroughs, including a decaying mental hospital in Queens and an abandoned dormitory in Staten Island. Ellis has distilled 150 photos from the series into a photo book (available from the author and from Amazon). Ellis will be discussing New York City’s abandoned spaces in a lecture at the New York Public Library on May 7, 2015.

Abandoned NYC Photos of New York City

Abandoned NYC Photos of New York City

Abandoned NYC Photos of New York City

Abandoned NYC Photos of New York City

Abandoned NYC Photos of New York City

photos by Will Ellis

via Ufunk.net

Read the whole story
yayadrian
3790 days ago
reply
Leicester, UK
Share this story
Delete
Next Page of Stories