Mozilla's opposition to Chrome's Prompt API (github.com/mozilla)
551 points by jaffathecake 14 hours ago | 205 comments


The objections seem clear: tight-coupling of prompts to models, and model neutrality in the TOU.

From https://github.com/mozilla/standards-positions/issues/1213 :

"A personal example: I created a system prompt for creating announcements for a home automation system. The Gemini model I was using initially responded in a very US-American way, which didn't fit the British voice of my speaker. I told the model, via the system prompt, that the output was being spoken in a British voice, but the result was a bad US-American impersonation of British ("a'waight guv'nor apples and pears" etc etc), so I had to iterate further to 'tone it down' and speak actual British.

In this process, the system prompt becomes tailored to the model. Other models will have different quirks. Things added to the system prompt for one model may be an overcorrection for another."


> but the result was a bad US-American impersonation of British ("a'waight guv'nor apples and pears" etc etc)

sounds like adversarial mode mocking


If that was a good argument to not support an LLM feature, then it would be a reason to not add it to any platform API. And yet, it has been added to numerous platforms already.

Different models are just a core aspect of how the technology works.

It's like how a canvas can have a different width and height depending on the device or its orientation. Or the geolocation API giving more or less accuracy depending on the device. Or Speech Synthesis sounding different depending on the device.

This is really just anti-AI sentiment rather than being constructive.

For now, it needs a permissions UI if it doesn't already have one. And maybe at some point they will add an IQ level like low, medium, high or something. But developers are going to rely on the specific model 90% of the time anyway if they care about it.

What's going to change is really just that the AI hatred will die down some as people realize how much it helps them, and people will realize not having this feature in Firefox is a failure for personal data autonomy.

And the fact that the TOU tied to Chrome's implementation are problematic is an argument FOR Firefox to add this feature, without problematic model terms.


The important part was the following paragraph(s) that explained why this coupling is a compelling problem. It's not the same as just having a platform API.

We have different GPS reliability per device because there's actual hardware doing that.

Why exactly couldn't models, iq levels, tuning and system prompts be interchangeable in an API for this? Why not let users and devs pick which model to bring or point to one they're paying for, or what have you?

I don't see a world where 90 percent of users of this API pick the same underlying model. It doesn't seem like there's any kind of centralization with AI like that yet.


And I didn't suggest they would necessarily select the same model.

^ didn't realize who posted the opposition - this is Jake Archibald, a longtime Googler on the Chrome team, now joining Mozilla and posting opposition to the Chrome API. no wonder the criticism is so well argued. must be a relief to not have to toe the party line on this one.

Aww thanks! To be fair I didn't toe the party line when I was at Google (imo). Although, that caused me an increasing amount of grief internally, until I left. From what I hear, things have gotten exponentially worse in that regard for folks still on the team.

Hey, Jake, not related to your post, but I just want to say that HTTP203 was some of the best web dev content that I've ever consumed. An amazing mix of humour and tech discussion. Thank you!

Aww thanks for saying that! I've been doing little videos on https://www.youtube.com/@FirefoxWebDevs (and accounts of the same name, pretty much everywhere). Although they're designed to be short, so they're pretty different to HTTP203.

This channel should definitely get more visibility ;)

co sign, tuning in to you and Das riffing was one of the highlights of my webdev career. bring it back!!

(lmk if you'd like an ai.engineer stage to do it on)


I am against this.

1) This will be a new source of fingerprinting information, and it is difficult to fake to fool fingerprinting scripts, so it can be abused for "device verification". There should be no ability to "verify" a browser, and anyone should be able to emulate any browser. This is the most important point; I thought Google people were smart enough to see it.

2) LLMs use a lot of memory and CPU time; for many users they would slow down their system significantly, and given current RAM prices, upgrades are very expensive. If the website relies on a local model, it would be slow on cheap devices.

3) The API seems to be tailored to a specific LLM, like OpenAI's.

4) This can be used to push competitors who do not have an AI model out of the browser market - the sites would break because they will be made with the expectation of having Google's Gemini model and would not work with other models. For example, the sites would break in national browsers not having an AI model. There should be no "first-class" and "second-class" browsers.

The explainer claims that this would allow the user to process the data locally without sending it anywhere. But why does Google's local Gemini model have a "Prohibited Use Policy" then? Why should they bother about prompts and responses they never learn about?

While offline LLM access seems like a good idea, the website could use WebGPU for this without building an LLM into the browser (or they could improve WebGPU to better handle ML models). Or everyone should use the same, open source, LLM.


> This is the most important point, I thought Google people are smart enough to see it.

Google just points towards the money like any other bacterium and beats its flagella until it gets there. I don't know why or how anyone would EVER think Google is going to do something good for the web or humanity.


>I don't know why or how anyone would EVER think Google is going to do something good for the web or humanity.

i dislike google as much as the next guy, but sometimes it can be good to remember that actual humans work at google. some of them want to improve things for people. some of them even have a conscience.

one immediate "good" that comes to mind, from google, is the project zero team.


It doesn’t really matter what the people working there want. It matters what the higher ups say, as they control the cash flow and consequently where resources are spent.

And, surprise surprise, the higher ups are generally the ones fucking things up because they also need to see those numbers and lines go up, regardless of actual impact on people’s lives.

So yeah, there surely are good people working for Google, but Google itself is not a person nor is it a “good” company. It is evil, end of. And, unfortunately, when you work for Satan, you don’t get to go around doing charity work.


Seems like the only thing rational to do then is for the human beings working there to use their labor as leverage.

So is it reasonable and helpful to see the same comments over and over again any time Google/Microsoft/OpenAI/Meta is mentioned in a comment - "X is bad, money drives all their decisions, they are anti-user, etc. etc." or should we actually expect to see relevant comments discussing the topic at hand?

It's inane and annoying to have to wade through the same, predictable, might-as-well-be-copy-and-paste comments on every post.

What do you have to say about the Prompt API specifically?


Nothing myself, a great innovation but with wet teabags google/microsoft/apple et cetera running the show. How is Digital ID going?

This same point should have been made to the grandparent as well... claiming some good people are working inside the system at a bad company is also a tired trope.

Sure actual humans work at Google. These actual humans are actively choosing to continue doing a job that makes the web worse. I don't see how "but they're human!" means automatic forgiveness of their actions.

>I don't see how "but they're human!" means automatic forgiveness of their actions.

it doesnt, if the actions are bad.

but if your blind hatred makes you think that google will not "EVER" produce something of value to the web or humanity, then you are just being obtuse.

i have already provided one example of something good that is directly attributable to google. there are several more examples, i am sure.


That some trees in a mudslide veer to the left does not mean that your house isn't going to be plowed down the hillside.

The momentum of the mass-entity that is Google simply cannot be overridden by some outliers trying to change direction.


Maybe it's also helpful to point out that all evil is done by actual humans, and that google will actually fire humans who don't do what google wants them to do.

Working for Enterprise 101: you are a pawn. Unless it's for the company, you're just an engineer for their machine.

You probably meant "conscience" instead of "conscious"

i sure did. thanks.

> but sometimes it can be good to remember that actual humans work at google.

Actual humans worked at Auschwitz too. What is your point? That I might hurt some Google employees feelings?


i thought my point was pretty clear.

google can do (and has done) good things for the web and humanity. there are people working there that actively try to do things that are a net positive to society.

they do a lot of shit, too. and i have no qualms with calling that out. but categorical statements of google being incapable of anything good, at all, ever, are not well thought out positions. only people who have let their hatred blind them to reality would believe that in earnest.

comparing google to auschwitz is ridiculously insulting and insensitive to the families who suffered there.


Double edged sword. They have & they have not. They're fueling technology for war, yet they've enabled us to communicate further and wider than before. The "don't be evil" & "good things" end up tainted, or thrown to the graveyard. You can't apply those morals to a corporation like Google.

Anything that had a positive effect on the internet ended up in the graveyard years ago. Maybe in the early years, yes, they expanded the capabilities of the internet, but in recent years? nah. It's all about the money.


>"Anything that had an positive effect to the internet ended up in the graveyard years ago."

the example in my first comment, project zero, is still active today.


I think the issue might be that some people don’t actually mean “every” when they say “every”, and don’t recognize when they are speaking hyperbolically?

Or, something like that?


> the example in my first comment, project zero, is still active today.

So? Many smaller players actually contribute more.

It's not about a single contribution but about what is better - a lot of power in the hands of a large corp which can afford to obstruct with impunity and do the opposite of "do no evil" versus several smaller players who have to actually compete and are concerned about their image.


>So? Many smaller players actually contribute more.

the claim was that no one should expect google to do anything good for the web or humanity "EVER". the existence of even one good thing is enough to refute that point.

but your sibling comment is probably correct. people say "EVER" but don't mean it literally, or something. it's very confusing to me.


The sheer number of OSS projects that have come out of Google would suggest otherwise...

Stuff like Go, Bazel, Ninja, V8, Dart, MLIR, Tensorflow, Chromium, Android, and countless others I can't remember off the top, plus their contributions to Linux, LLVM, Python, and so on... I can't think of any company that has given as much sheer volume of open source code as Google.


On the fingerprinting concerns: I have to imagine there will be an option in Chrome (certainly in Firefox) to "never download an LLM, turn off all LLM functionality". I suppose I can see an angle where a website could issue a small LLM request to try and fingerprint the model itself, which is another fingerprinting parameter. But as long as it can be turned off I don't see why this is a problem.

There's a broader class of concern here that reduces to the form: "The web platform should not be able to do this." For people who believe this, I think they'll invent any reason they can to push this narrative. E.g.: Well, sure, the user could turn it off, but then websites would say 'your browser isn't supported because it has no LLM' and now the web just got worse for me because I wanted to turn off LLMs.

But this reduces to "the web platform should not be able to do this" because at the end of the day it was the website operator's decision to turn off their website if an LLM is unavailable. It's not really the platform's fault, or the fault of its maintainers, that they built this capability and JP Morgan or whoever decided to screw over people who don't want to enable this feature. Similar to turning off Firefox support even though it would work fine, because they can't be assed to test their site in Firefox.

I don't know how to counter that take tbh. The web is the world's most successful application platform. It is not competing with PDF; it competes with SwiftUI. Of the options presented in front of you, you are hallucinating an option that reads like "we'll just keep the web nice and static and the way it is and nothing will ever change about it, the web is done". In reality your two options are: "We adapt the web to the evolving needs of its users" or "The web fails to serve the evolving needs of its users, and SwiftUI or WinUI steps in to fill that gap". This second option is far worse!
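
(For what it's worth, nothing in the proposal forces that failure mode: a page can feature-detect and fall back. A minimal sketch, assuming the LanguageModel global and availability() names from the current explainer, which have changed before and may change again, plus a hypothetical /api/summarise server fallback:)

    async function summarise(articleText) {
      // "LanguageModel" and availability() are names taken from the explainer;
      // what any browser actually ships may differ.
      if ("LanguageModel" in self && (await LanguageModel.availability()) === "available") {
        const session = await LanguageModel.create();
        return session.prompt("Summarise in one paragraph:\n\n" + articleText);
      }
      // Hypothetical server-side fallback: exactly the round trip the local API is meant to avoid.
      const res = await fetch("/api/summarise", { method: "POST", body: articleText });
      return res.text();
    }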


> But as long as it can be turned off I don't see why this is a problem.

That immediately makes you stand out, and sites will start breaking, like now some sites (that do not do any 3D graphics) break without WebGL.

> web is the world's most successful application platform.

Also one of the ugliest and poorly designed in my opinion.


> There should be no ability to "verify" a browser, and anyone should be able to emulate any browser.

Hard disagree. The AI industry has absolutely shredded the various anti-scraping and anti-botting social contracts that were in place prior to the covid pandemic. Like it's now common knowledge that robots.txt isn't a hard requirement and can be avoided entirely, for example. They have absolutely turned the open web into a dark forest.

Having a browser session able to be verified as untampered and/or "trusted" is probably going to be a thing going forward. Sucks a ton, but we all did this to ourselves.


> it's now common knowledge that robots.txt isn't a hard requirement and can be avoided entirely, for example

Was it ever not? It's a text file, not law.

> They have absolutely turned the open web into a dark forest.

Only if you have an ideological problem with people you don't like using the things you publish on the open web.

I'd say the web can be very open even without being copyleft. It makes some business models non-viable, but it doesn't prevent anyone from publishing what they want.

On the other hand, I don't think I would call something that preserves copyright at the cost of only admitting "approved/certified non-LLM scrapers" via attestation or similar "the open web".

> Having a browser session able to be verified as untampered and/or "trusted" is probably going to be a thing going forward. Sucks a ton, but we all did this to ourselves.

Who did what to whom?


Protocols like HTTP or formats like HTML were initially made to be machine-readable. You humans make your site machine-readable, publish on the internet and then get unhappy when machines start actually reading it.

Anyway, just put a captcha or require a cryptocurrency payment if you are unhappy with bots, but a few people unhappy about scraping are less important than a billion people unhappy about tracking their activity.


> we all did this to ourselves

We meant who?


Browser verification doesn't stop bots, that will just funnel even more money towards click farms which are using unmodified devices on racks.

we already live in that world; Google and Apple cooperate with vendors like Cloudflare to make, essentially, the PAT / WEI implementation that they wanted.

I was just reading the replies to my comment in this thread when it dawned on me: They are going to do it anyway, and the least capable people will praise it because they are already reliant on LLMs and/or they lack the ability to reason one way or the other.

https://news.ycombinator.com/item?id=47960596

The conclusion then is that it's time to move on. It is time to think about an online format of information exchange and media playback that is better than web browsers. If we are the product then the tools we use should directly reflect that, instead of insidiously acting as proxies to funnel ad revenue to untrusted overlords without our consent.


>> least capable people

>> lack the ability to reason

Oh, COME ON. What do you have to say about the Prompt API specifically?


The more I think about it, the more I think I align with Google's API design on this one.

The tight coupling between prompts and models is a real concern. I deal with that every day. However: if your solution to that is to support an API that enables tighter coupling between the model the user's browser has and the prompt that gets evaluated, you will inevitably and quickly enter the domain of "You need to use Chrome to use this site (because our prompts were only tested on Gemini)" or even worse "We don't recognize the AI model you're using (because the website was written in 2026 and the current year is 2030 and they never updated it)".

This is related to the terms of use concerns the Mozilla engineer raises later; real concerns. But, if we want browsers to exist that don't require users to opt in to the terms of use of a specific AI model (e.g. using a nice open source model), it's beneficial to these browsers that sites can't fingerprint them for the Big Models.

Of course many sites will just do an isChrome()-like call anyway. Nothing to be done about that. But yeah I am generally non-supportive of changes that introduce more ways to fingerprint browsers. The upside of keeping the model anonymous outweighs the slight downside of (rarely) encountering weird prompt evaluation output because of a small difference in behavior between Gemini and, idk, Qwen.


Why is it that Google is fixated on bolting on ever more junk and turning browsers into Homermobiles[0] instead of putting those vast resources towards fixing the numerous structural weaknesses in everything that browsers are already capable of? Why not focus on foundational things that will improve quality of life for everything on the web platform ranging from static blogs to e-commerce to cutting edge web apps?

Really, I just can’t understand it.

[0]: https://simpsons.fandom.com/wiki/The_Homer


Google doesn't build Chrome to make a better web. Building a good browser for the sake of building a good browser is throwing billions towards goodwill, while Google's goal with Chrome is to further replace the user's OS as the platform on which users do things on their devices.

Google has Android & ChromeOS to directly try to do that but Chrome makes it so the average user using e.g. Windows still ends up in a Google world most of the time.


How would not implementing a prompt API make them dedicate their resources to something else they didn't consider important before? This seems like a false dichotomy.

If you want to go for promo at Google, you gotta launch a prompt API

> Browsers and operating systems are increasingly expected to gain access to language models.[0]

Are they?

[0] https://github.com/webmachinelearning/prompt-api/blob/main/R...


I think this is the wrong way. I don’t want my OS or browser to have access to an LLM, but I do want my LLM to have access to a browser or OS (and they already have).

So they should provide an interface to LLMs, disabled by default, enabled when users want it, and that’s it imho.

That also gives me the choice of which LLM provider to use, rather than being locked into whatever LLM Apple decided to put in their OS.

I want to give Claude access to the stuff Apple Intelligence has access to, for example.


(I wrote those words originally.)

Wow. I had no idea that people would misinterpret what I was saying in this way. I was not meaning to imply it was an expectation of users or developers. I was meaning it as a statement of what was currently a growing industry trend by OS and browser vendors, of shipping or preparing to ship LMs.

By now the statement could probably be amended from "expected to gain access to" to "shipping with".

I hope the team maintaining the project now makes such an update, since apparently it's confusing so many people!


I thought it was clear and am also surprised by the reaction (en-US speaker). "Is/are expected" is generally used as a passive-voiced form of "we/they predict" (obviously without having to specify a specific pronoun). E.g. "It's expected to rain tomorrow" means a weather forecast says it will rain tomorrow and usually not that people want it to rain tomorrow.

I wonder if this phrase has different connotations among other English readers? A lot of these comments are fairly early for US timezones.


I don't think US vs. non-US has anything to do with it. It's an ambiguous phrase, whose meaning is usually resolved by context.

"It's expected to rain tomorrow" is a prediction, whereas "students are expected to behave themselves" is an expectation (with consequences, presumably).

In the former case we clearly aren't saying we want it to rain, just that we believe it's likely, whereas in the latter example we are clearly expressing that we do want students to behave.

It's ambiguous because "expect" has two different meanings:

> to consider probable or certain

> to consider reasonable, due, or necessary


Sure. macOS, iOS and Windows have local model APIs for third-party devs. Chrome is trialing it. Firefox uses models to generate alt-text, but no API.

In theory it's useful. If devs can rely on local models, it's more private and decentralized, they don't need to funnel money to AWS or Anthropic. There are low-stakes use cases that only make sense if they're local (available offline) and free.

But in practice I've seen zero adoption of Apple Foundation Models in native apps. I wonder if any Mac/iOS devs have anything to share on this.


In practice it's useful too. The local translation in Firefox is quite good, and I love that I can translate pages entirely on my machine, without the contents going to another server.

As for Apple's Foundation Models, I think the issue is more that they're just not very intelligent or good; maybe WWDC will change that. But if you want to implement LLM functionality, you're better off either calling an API or shipping a better small on-device model.


Yeah I looked into the Apple Foundation models and was surprised at their limited scope. On reflection it made sense though. They’re giving you the small part of the LLM capability surface that (1) can run with good performance on all their hardware and (2) works reliably.

It’s not enough for a chat-first research agent, but it’s definitely enough to unlock features that rely on natural language understanding. Seems like a small thing compared to Claude/ChatGPT and the general hype, but still magic in its own context.


I don't think this is what was meant. I don't think they were questioning whether OS and browser makers were embedding LLM features, but rather whether people want them.

I find many frustrating. I had an iPhone previously and the LLM summaries of text messages are what drove me to finally drop iOS. I have a family member who is undergoing cancer treatment. I can't explain to you the frustration of seeing wrong text summaries when an LLM goes wild hallucinating test results when the actual text simply said they were taking a test. OS basics and communication should be trustable. Not the possible hallucinations of a small, shitty model.


AI massively empowers people who are incapable of anything except bikeshedding. It itself is very likely to be a bikeshed (but there are legitimate uses), and it also gives them the power to drone on until they overpower any opposition to their useless ideas.

Everything is increasingly expected to gain bikesheds.

Can't wait for the CVEs.


>> people who are incapable of anything except bikeshedding

The amount of insulting language directed at people who actually have an open mind about AI and AI tooling is frustrating. Can you all just please address the merits of the topic of the post instead of making every AI-related post on HN an excuse to vent about your own particular worldview and insult people who don't necessarily agree?


Platform support for AI has as much place in a browser as it does in Notepad. This isn't about being open-minded at all. I have written multiple MCPs, I use it daily, I am not in the crowd who "don't have an open mind." This outright non-feature is a significant source of issues, the least of which is fingerprinting.

Make an AI browser extension. Done.

Shoving AI into anything where it can go is not having an open mind about things; it's nothing more than shoving AI into anything where it can go.

On the inverse, can you provide a single reason why this API should exist which isn't something that obviously erupted from an LLM? Again:

> Browsers and operating systems are increasingly expected to gain access to language models.

God help people if they have to copy their prompt from ChatGPT to Claude.


Apparently the browser API surface is not obscenely wide enough.

It's the typical "cart before the horse" kind of corporate tech talk. It's pretty standard if Silicon Valley wants to sell shit that nobody actually wants; they just assume that people will want it, regardless of whether or not they actually do. Most of the tech press is too obsessed with retaining their "access" to actually be critical of this sort of thing, and most of the regular press doesn't care enough to actually investigate.

We've seen this sort of song and dance before, crypto jumps to mind. Remember when social media sites suddenly were all about those hexagonal avatars? Most of this stuff is really in that same vein.

(Which to be clear, users don't want this. AI pushes by pretty much all recent user feedback metrics are largely tiring out users and reek of corporate desperation to sell shit. It's only a very specific subsection of Silicon Valley that wants to stuff AI in everything like this.)


I think the resentment for Copilot is pretty much universal. People like AI, when it’s not forced upon them.

A lot of these products feel driven by an "everything must become AI" FOMO movement, rather than actual thoughtful integration.


Those exact words are the positioning statement (they start the second paragraph) of the document you linked.

What are you trying to say?


Their whole argument is based on this sentence. So I'd expect some rationale. Instead, they provide as "example" links to Google, Microsoft and Apple. The funny thing is that the one by MS is probably the most criticized one, with the company partly backpedaling on it. And Apple is often criticized by LLM aficionados for being quite conservative. Google is the one proposing it.

So my question is: are browsers and operating systems really expected to gain access to language models? If so - by whom: the users or LLM vendors like Google?


That “are expected” is a euphemism for “are shoehorning AI in and trying to shove it down users’ throats”. Whereas the truth is nobody (actual end users, that is) wants it.

I hate having to “dodge” all the AI-enabled controls my phone (iOS) is sprouting - I don’t need that shit, but there’s also no alternative.


> What are you trying to say?

GP is clearly asking ”Are they?”


Browsers: Chrome (proposed this Prompt API)

Operating Systems: Windows (built-in Copilot), MacOS, iOS (Apple Intelligence)

So it's >90% desktop browser and OS, plus >30% mobile OS.

Yes, I think it's very safe to say "browsers and operating systems are increasingly expected to gain access to language models."


These features are enabled by default, and in the case of iOS/macOS, desktop Chrome, probably also Copilot+ PCs, download 4 - 7 GB local models without properly explaining this to users. This doesn’t confirm any demand because if you just don’t use the features and don’t fill up your device, you may never notice.

I think this API is probably fine, but only if the user already has a model downloaded and wants these features. Naturally, case in point, Chrome quietly downloads Gemini Nano without any opt-out except through group policy. Things like this and Microsoft’s recent admission that they’ve overindexed on Copilot features in Windows make it increasingly difficult to trust that users actually want more than a few killer AI features, most of which are just ChatGPT.

Anecdotally, non-technical friends and family members know about ChatGPT and increasingly Gemini, get frustrated by Copilot, and don’t know Apple Intelligence exists.

https://superuser.com/questions/1930445/can-i-delete-the-chr...


> So it's >90% desktop browser and OS, plus >30% mobile OS.

> Yes, I think it's very safe to say "browsers and operating systems are increasingly expected to gain access to language models."

Doesn't follow. Every case you listed justifies LLM inclusion with a similar "everything is expected to be defiled by LLMs" argument; mine is a better wording, but it's still evasively passive and the "expected" part is still nonsense.

Just don't tell me LLM inclusion is justified by "expected" all the way down, like the bottomless money pit it is.


The word "expected" is a weasel word in this context, especially given how muck backlash MS has received. I'd expect a link to a study where users say: "I'd like to have an LLM integrated with my operating system and my browser" and how it changes over time. Then you can seriously argue for "increasingly expected".

You omitted the clause "by shareholders" after "expected".

What this proves is that browsers and operating systems are increasingly integrating language models, not that they are expected to do so.

The only people who expect them to do so are big tech executives. The average user does not expect nor want Copilot shoved into every possible corner of Windows, and Microsoft themselves have acknowledged this.


The nice thing about open protocols is that we don't have to endorse or use one implementation over another, yet, somehow, the browser monopoly continues to be a standing dilemma.

There are nice projects, like ungoogled chromium, tor, and many more, but I find the biggest issue is that there isn't a voice out there for the average person and a project that connects with the masses.

I think another issue is that a lot of the uninformed users have a strong apathy for the causes and the ways the message is delivered; they'd rather engage and connect with things that are "fun" and want less friction rather than freedom and control.

How do we solve this? How do we make the browser ours, by the people, and for the people?

Sorry, I'm just sad whenever I think of this.


It's somehow even worse when you compile your own browser. Want Spotify or Netflix? You need Widevine with attestation. Go pay Google.

Your User-Agent string isn't Chrome or Firefox? Enjoy endless Cloudflare captchas or just a 403 error.


> Your User-Agent string isn't Chrome or Firefox?

nowadays, you could update this to just "your User-Agent string isn't Chrome"


Yes, and how can sovereign national browsers (not depending on US companies and not sending data to the US) be developed in this situation?

We start by not shipping Chrome with "native" applications instead of learning the platform APIs.

Followed by creating Web applications based on Web standards, instead of building for whatever Chrome does and then complaining about Firefox and Safari not being up to the game.


I really don't see how Electron is connected here. When you're an Electron app, you really don't have to care about which web APIs Chrome implements, you can just use the native NodeJS equivalents, which will usually give you a better UX anyways.

But absolutely on the second point. A standard with one implementation is not a standard. Regardless of market share, in a market with three providers, if two out of three don't support something, you have no business using it. It's unhealthy for everyone involved.


Electron is Chrome packaged with the application.

If those devs cared about Web standards, it would be a pure Web application, or a headless executable, system service/daemon connecting to the system's browser.


I'm not saying the Electron UX is better than a native app. I'm saying Electron apps using NodeJS libs have better UX than Electron apps using Web APIs. At best there's no difference for the user, but at worst, they get permission popups and limited access just like they would in a browser.

This is why Electron app devs prefer NodeJS libs to Web APIs and consequently have no impact on the adoption of a large chunk of the new Web APIs (not counting DOM and CSS things because those are rarely controversial and usually broadly implemented).

So yes, those devs don't care about these kinds of new web "standards", because they don't work with them. The people who use them are the ones who are dangerous and that's almost exclusively web app authors, because they can't just pull in a native library to do the same things.


Which browser engine uses V8?

> How do we solve this? How do we make the browser ours, by the people, and for the people?

Simple. Break up all the big tech corporations via anti-trust legislation. They are the robber barons of our time.


> How do we solve this? How do we make the browser ours, by the people, and for the people?

Unfortunately, the answer is pretty much always "real public funding"


You have a decent browser. The average person has Chrome. Those who do care switch to the former. What needs to be solved?

> voice out there for the average person and a project that connects with the masses

> they rather engage and connect with things that are "fun" and want less friction rather than freedom and control

Do you see the contradiction? The average person "connects with" less friction rather than control.


I understand what you’re saying, though there’s a quote that hurts me whenever I try and reason about it this way, which is:

"We must all fear evil men, but there is another kind of evil, which we must fear most, and that is, the indifference of good men”


You don't have to be indifferent. I think making GNU etc. more accessible for the person who is average except that they prefer control is noble.


The problem is that if there were one, it would be subverted by powerful people with enormous amounts of cash to throw around. Firefox was the people's browser, then it suddenly wasn't.

If you were some paragon of integrity with a ton of money, developed everything yourself, and refused all corruption, you would be called the Russian Chinese terrorist child-porn browser, denounced in Congress, and eventually arrested (then released) during a layover in Germany.

Google would send an opinion to the court vaguely supporting the prosecution but disguised as technical advice; Firefox would pretend they never heard of you or what is happening, and delete all mention of you when posted in comments or on their social media. Ubuntu and Fedora would remove you from their repositories, Apple and Android never allowed you in their stores in the first place. The NYT would do a story about your "shadowy origins" and ask whether a reasonable country should allow a company so unwilling to work with the government or selected nonprofits to be an intermediary between their children and a dangerous internet. Fox would call you an Islamo-Communist anti-Semite, and somehow also associate you with the "alt-right," Dr. Fauci, and "environmental whackos."

After two years, and the banning of your project by most companies and websites, and the contrived failure of other companies simply associated with you but unrelated to the browser, the charges will be dropped. The bans will still be there, and where they are gone, people will informally stick to them. People will not feel like they can put your company on their resume. Any casual mention of you on the social internet will inspire at least a half-dozen hate comments, and FOSS projects will be attacked for ever having mentioned you positively.

If you aren't a paragon, you sell out after the NYT story.

The reason there are monopolies is because they are enforced.


I guess one real life example is maybe Bitcoin? Would you say it managed to do that in finance successfully, to some extent?

What's the usecase for this API?

My experience with running LLMs locally is spinning up llama-server (possibly on a separate machine) and then configuring other applications to point to that OpenAI-compatible web server instead of OpenAI or similar.

I don't want a web browser creating/running an LLM instance as that machine may not have the capability or capacity to run an LLM instance.


I wonder if this is a generational thing of fresh young people that already cannot live without LLMs versus crusty old people that don’t want to require a super computer just to run a web browser that violates all their privacy.

To me this sounds like the point where people start looking at and developing alternatives to the browser/web.


This isn’t Mozilla taking a stance against AI.

It’s them articulating clear and logical reasons why the proposed API, in its current state, is bad for web interoperability.


Did they propose a specific alternative (non-extension) API?

Why would they? This is an issue put up on the "standards-position" repo. They requested a position on a proposed standard, and Mozilla gave it.

There’s one obvious alternative:

   fetch("https://api.openai.com/v1/chat/completions", { ... });

Right and that means people have to send their data to an external service.

Give it X months (or years??) and people will realize this is actually a privacy/data autonomy issue.

It's just dominated right now by the anti-AI/anti-technology sentiment in the west. That will gradually go away as more people use AI and robotics and realize how wrong they were about it.


>Right and that means people have to send their data to an external service.

Nothing in this proposal claims it has to be a local AI. That just happens to be the implementation by Chrome and Edge (for now at least, I'd imagine Google will eventually start moving this API towards hosted Gemini).


That's an important aspect of this that should really be part of the discussion on GitHub. But I've been told I'm not qualified to interject so I am not going to bother.

I will use WebLLM if I want something like this (with local AI guaranteed).


No, that’s not how this process usually happens.

Why would they need to?

So I guess the question would be, "What makes this acceptable Tech". I don't know how you get there without offering some type of "Search" like choice for open models. We all know how that turned out.

Maybe Mozilla can save itself by getting paid to serve Google's model as the default rather than another provider's. Would replace the revenue stream they lost.


I think the objection here is unrelated to the love or hate of LLMs. It's about the viability of this particular proposed open web API.

I personally use LLMs for coding assistance, and some home automation stuff, but I do not think this particular API is good for the web.


Meaning you do not want text generation in the web API at all, or you think the prompt API needs to be different? And if so can you give one sentence on how it should change?

https://github.com/runvnc/tersenet

If you glance at that then you may see that I am for the idea of leaner alternatives to the current web platform.

But in the context of the existing web API, which has just about everything and the whole kitchen sink in it (hundreds of sub-APIs), I do not think it will really help anyone at this point to just stop adding features, especially major ones.

The web is basically an overlay operating system and has been for many years.


> Meaning you do not want text generation in the web API at all, or you think the prompt API needs to be different?

Not OP but I think you are misunderstanding the interaction as a whole here. The Chromium team made a proposal, then the Chromium team asked the Firefox team for a position on the proposal. Whether or not the Firefox team or anyone on the Firefox team has any goals around AI or whatever, this response was simply "We do not like this proposal for these reasons..."

How to fix those issues really isn't the Firefox team's job and also wasn't part of the question asked by the Chromium team.


You didn't read my comment carefully enough. It was not about AI in general. It was about the text generation API. And it is perfectly reasonable to ask if he wants to reject the feature entirely or if he can give a one sentence overview of how it might be fixed.

There are a lot of people reading his position. One or two additional clarifying sentences to spell it out for people skimming is not such an unreasonable ask.


> There are a lot of people reading his position. One or two additional clarifying sentences to spell it out for people skimming is not such an unreasonable ask.

I do think it is a bit unwarranted, actually. This isn't a press release, it's a technical discussion somewhat deep into a technical process that's open for archival purposes. His audience is not people skimming through, it's the Chromium team and other members of the standards body.

You're sort of overhearing a conversation and injecting yourself into it.


And so are you injecting yourself and objecting to me even discussing on HN.

And this is not really a technical issue. It's a worldview issue no matter how much you or others try to pretend it's a technical problem or that I am violating etiquette or something.


> And this is not really a technical issue. It's a worldview issue no matter how much you or others try to pretend it's a technical problem or that I am violating etiquette or something.

I'm actually so curious what you think is going on here


I do not want text generation in the web API at all.

IME young people mostly hate AI.

Young people love AI when it helps them cheat on homework, or when used for roleplay and memes. Generating "content" with AI is generally more hated, especially art and video.

Sounds hypocritical.

I hate knives cause they kill people, but I love my kitchen knife when I make dinner.

That is a bad counter-example, because it's just a poorly conceived statement. You apparently don't hate knives. You hate killing people, which isn't remotely similar.

Using AI to cheat at academics and then hating on people who use AI to cheat on media creation is absolutely hypocritical. It's hypocritical stupidity like this that results in shoving a single vendor's LLM into the browser.

If that's still too complicated then just call it complexity - https://en.wikipedia.org/wiki/Cognitive_complexity


The young kids I know who are into tech love AI. Albeit this is from a small sample size.

Funnily enough, most of the young people I know fall somewhere between those two sides of the spectrum.

I know some actual luddite-tier AI haters that believe it's ontologically evil, and another majoring in Data Science that went to the most recent career fair and told a recruiter "AI will replace you" (I uh don't think he's getting that internship)

And of course many, many, others that fall between the two extremes.

The one thing we can all agree on, is it makes homework a hell of a lot easier :) (well, except the luddite-types, they refuse to use it in any capacity)


I'm a member of a political action committee, where I was brought in as an expert in professional media applications of AI. I've got extensive experience using AI tools in the production of well known entertainment properties (think VFX for film and animation). Anyway, within the political action committee there is a diverse mixture of people, with about 1/5th of them under age 30. The entire under-30 set are so AI-negative, to such an irrational degree, that I have been asked to do nothing and offer no advice that incorporates any technology at all. They are so paranoid. In a not really emotional discussion, a bunch of them erupted in tears; they are so irrational about it.

The biggest irony with telling a recruiter they'll be replaced is how much easier a data scientist is to replace with LLMs. With their sycophantic nature, execs will eat up whatever "data" the LLMs make up, too.

No, you don't understand. LLMs will never be capable of knowing what questions to ask, only how to ask the questions. /s

What does "into tech" even mean at this point?

Watching LTT all day? Playing on their iPhones constantly? Buying wireless earbuds?


Do they really? Hating on AI slop is a common sentiment on social media, but remember that the opinions you see on social media are often not representative of what the general population thinks at all.

I keep hearing stories about how homework is now useless because every student just gets ChatGPT to do it for them, and from personal experience, I'm inclined to believe them.


> every student just gets ChatGPT to do it

I don't believe every student uses a calculator to solve their math homework, so what makes ChatGPT unique here? For certain subjects the ability to cheat has been trivial for a long time, yet there was no crisis.


A little off-topic, but I honestly don't think it's so much the browser interface that needs to be reworked as the idea of operating systems in general.

I don't know what the right answer is, but having used Niri/Wayland vs. GNOME vs. Windows vs. Mac... I will never go back to a non-tiling desktop and a non-keyboard-driven workflow for desktop window management.


> that don’t want to require a super computer just to run a web browser that violates all their privacy.

That ship sailed in 2008.


I feel that a LLM that runs locally has its place in a modern browser. The alternative is sending your page contents to a server in the cloud with the associated loss of privacy. Of course issues like fingerprintability and vendor model lockin have to be taken into account. It seems to be too early to carve things in stone, so I agree with Brian Grinstead and the others.

The alternative is that web pages just don't run inference? Why is that something a web page should expect to have a right to? If you want to burn a bunch of GPU heat, spend it on your own servers, not my computer.

Either way, if this does happen I definitely hope it gets put behind a browser permission.


Google on their proposal:

> Browsers and operating systems are increasingly expected to gain access to language models.

I think this is only true amongst “AI all the things” folks. Both tech and non-tech people around me are more focused on turning these features off. Some even avoid sensitive actions like banking from LLM infused browsers.

So I think Mozilla is right to object. This API is not in the interest of the user/agent.


Extremely glad to see Mozilla taking a stance here.

28th of April 2025: isn't this before Mozilla added lots of AI features to their browser?

This is the specific position posted today/yesterday: https://github.com/mozilla/standards-positions/issues/1213#i...

The objection is not anti-AI. It’s anti this specific API, for nuanced web compatibility reasons.

Sigh, when I posted this, I linked to https://github.com/mozilla/standards-positions/issues/1213#i... (which was posted 11 hours ago). Unfortunately someone changed the link.

features that are opt-in are ok. anti-features that are opt-out are not ok

Archibald is anti-AI. 70+% of his public statements have demonstrated that.

He is more or less aligned with the current most common sentiment in the west which is largely publicly against AI.

But realistically it's just slow adaptation, network effects, etc.

To give an example, before the MLB rolled out the Automated Ball Strike system this year, last year maybe 65+% of the sentiment in discussions about it was negative or in some cases just neutral.

Now that it has rolled out, 95% of the sentiment online about ABS is positive. The main comment by far is, why didn't they do this before, and why don't they do it automatically on all pitches now.

There are certain cognitive and informational flow limitations in society that will cause this to be delayed, just like all major technological advancements.

But once it rolls out, the perspective you hear online will be about digital sovereignty/personal data autonomy, now we aren't required to send our data to an external provider for AI, why wasn't this available before. People will probably assume it was blocked because it reduced a major source of data for advertising or something.

And overall AI and robotics in the future will be seen as the greatest enabling factor for increased equality in society.

It's really just this underlying dislike of and disrespect for technology that much of the western public has. Which may turn out to be one of the reasons that we lose our de facto leadership position in the world.


>To give an example, before the MLB rolled out the Automated Ball Strike system this year, last year maybe 65+% of the sentiment in discussions about it was negative or in some cases just neutral.

MLB's ABS does not use AI for its ball tracking. And it has specific payoffs particular to its context, from four years of testing and well-defined limits on use cases, that don't necessarily generalize to issues surrounding AI and its tradeoffs.


It's fun that I get to be called both "anti-AI" and an "AI shill" by people on the internet depending on the day of the week.

You're a politician. The sentiment leans anti in this cultural context at this time and so do your statements overall, such as if we look at this one and the rest and tally each one as positive or negative. Underlying you are more anti-AI than neutral. So your reply may have been technically true but it was deliberately misleading.

But you haven't really made a technical argument because your objection is not really technical. It's a type of politics.

It's obviously extremely, extremely useful to have a simple API for accessing an LLM. It needs permissions like most things, and the ability to limit download sizes, or maybe block use of external services if desired.

But anyway people will just fall back to a slightly worse alternative like a wrapper around WebLLM (that wraps WebGPU).

It's probably not politically feasible for you to take a different stance anyway.


> According to Chrome's documentation, to use the prompt API you must 'acknowledge' Google's Generative AI Prohibited Uses Policy. Elements of this policy go beyond law. For example:

>> Do not engage … generating or distributing content that facilitates … Sexually explicit content Do not engage in misinformation, misrepresentation, or misleading activities. This includes … Facilitating misleading claims related to governmental or democratic processes

> This seems like a bad direction for an API on the web platform, and sets a worrying precedent for more APIs that have UA-specific rules around usage.

I will say this more strongly—I think it is completely insane, and a violation of free expression principles, for a browser API to have content restrictions.


Agreed. Maybe Google will propose a CSS text formatting property that cannot be used on paragraphs that are critical of the US administration.

Like, that sounds daft, but it's not really far from what they're doing here.


Why is Google doing this? They would need to moderate the use of the API, right? What could they gain from having to moderate use of a browser's API?

A blank cheque to restrict access to any website they want.

Possibly truth is a higher societal good than unfettered free expression? Reasonable people may debate that concept. Ref: X

Chrome seems to use a custom inference runtime also (in addition to Gemini Nano). It would be better if this were all interoperable. The WebGPU alternatives like WebLLM do not have the same access.

I've been trying these models out for the last year, and it seems to me that we want them to work in a 5-10W "laptop" power envelope, but they really work best with a 50-500W GPU instead - i.e. they eat batteries. This means things work better in a "plugged in" gaming laptop/desktop rather than a typical web client. At least for now.


This seems like that infamous <marquee> tag [0] to me that felt good and amazing at the time but later turned out not to be a good idea.

[0]. https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...


did the marquee prove to be a bad idea?

I think it was subsumed by later developments (JavaScript), but the issue with it AFAIR was just that it wasn't usable in all browsers, not that the tag per se was a bad idea (as much as scrolling text can be).

The situation with the model api seems different, more like the AMP spec.


To paraphrase Mean Girls

Stop trying to make browser llms happen, they're not going to happen.


The Prompt API has some advantages like being a little simpler for some things and some potential to standardize a little bit more in some way, but it looks like from this that it will be delayed unfortunately.

However, WebLLM (a library, not actual Web API) https://github.com/mlc-ai/web-llm is more capable and will already work using WebGPU.
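
For reference, a minimal WebLLM sketch (the model ID is an example from their catalogue; check the project's docs for current names):

    import { CreateMLCEngine } from "@mlc-ai/web-llm";

    // First call downloads and caches the model weights, then compiles WebGPU kernels.
    const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");

    // OpenAI-style chat completion, running entirely in the browser.
    const reply = await engine.chat.completions.create({
      messages: [{ role: "user", content: "Write a one-line announcement for the living room lights." }],
    });
    console.log(reply.choices[0].message.content);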


I find this a weird discussion at the current point.

Shouldn't there be a basic process for allowing such an API as an alpha that people can play around with, and then there will be adjustments?

No one will start using this in production if they don't have a very good and specific use case. I mean you don't just run 2gb ML models in your browser today on a massive scale.


(Former Chrome team member who worked on this API, now retired.)

There was such a process! It shipped first as a Dev Trial around 2025-04, then as an Origin Trial in 2025-05. Since then a number of people have tried it and given lots of feedback, leading to model quality improvements, language support expansion, API additions like structured responses and tool use, etc. You can find a lot of feedback and case studies if you search around.


> This will result in Mozilla and Apple having to licence Google's model, or ship a model that's quirks-compatible with the Google model in order to be interoperable. It may also become difficult for Chrome to update its own model for the same reasons.

Google is again doing Evil.

I am very annoyed that Google kind of de-facto controls the www (through chrome, let's be honest here).

We really need to change this. I don't have a good solution here, but it can not continue that way.


> We really need to change this. I don't have a good solution here, but it can not continue that way.

Advocacy (against chromium and its forks) is one way.


Chrome is not that good anymore compared to other browsers. I switched a long time ago, and if a site doesn't work with basic features I just leave it, instead of letting it use Chrome to control me.

Lina Khan's FTC sought to break Google into multiple companies, leaving Chrome alone. Alas, Google escaped unscathed.

I am curious, if such a thing happened, how Chrome would sustain itself as a company. I imagine Google would pay it a hefty contract and keep their control, or some other actor would do so, changing the actors in the problem but keeping it.

Fortunately, they chickened out when they realized that forcing Google to divest Chrome would result in Chrome being owned by Perplexity (an Indian AI company). Or perhaps somebody even worse, like Elon Musk.

Only have yourselves to blame. Chrome made the internet better but everyone put their fingers in their ears about it getting worse at the same time.

It was hard to stomach the "I looove Chrome. It can do no wrong" but these "Why did we let google control everything" comments are even worse

Which Internet did it make better?

You remember the IE days right?

Being a web developer was not fun; and the web was absolutely being held back. Chrome did a lot of things right: per-origin sandboxing, properly implementing web standards, V8, developer tools, and back then Chromium was super close to Chrome.

Do I think Chrome is a net-negative for the web over the past ~3-5 years? Yes, especially with manifest v3, “privacy sandbox”, and them basically forcing through web APIs because they have the dominant marketshare.

But early Chrome was a technologically impressive and user-friendly browser that really did make the web massively better.

I remember happily putting Firefox and Chrome mini-banners (what are they called? those little rectangular images) on my website, for free, because I recommended them.


Developer tools, at least, came through Firefox with Firebug, years before Chrome/Chromium existed.

Anyone working in the web area during the old IE days will know: not having to maintain dedicated CSS and JS for each browser was a game changer.

Chrome's introduction, albeit through a smoother, lighter browser experience at the time, pushed other browsers to standardize around Google.

In one way it's bad to have a homogeneous approach to all things web-based, but in another way it did make the internet a better experience overall.


In the horror days of IE, I remember having to look up some DirectX filter to properly display PNG images with transparency. It was that bad, and that’s one example of 1000.

Some libraries/scripts helped normalise things a little, but never enough. Yuck.


The one you're using every day, filled with web apps that run securely without you downloading sketchy binaries or being locked into walled-garden app stores.

It's also the one where I find sites where I can't even log in if I'm using Firefox. E.g., my bank just redesigned their website and now you can log in only with Chrome. Because of some weird bug, Firefox isn't allowed.

The same exact issues we had with IE.


Both, actually. It did make some parts of the Internet better, and some other worse.

That discussion has a quote about querying the LLM for version information. If models hallucinate/make up court citations, works, and facts, what makes them believe that the model provided a genuine version number as opposed to a generatively constructed string?

Yes! It might lie or hallucinate. But also, all browsers claim to be "Mozilla/5.0" in their user agent string. It's a very similar problem.
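For context, every engine still starts its UA string the same way (the version numbers below are illustrative):

console.log(navigator.userAgent);
// Chrome:  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
// Firefox: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:133.0) Gecko/20100101 Firefox/133.0"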

Would it even be possible for a model to know its own version number? I guess maybe if they decide to put it in the system prompt or something

This reminds me of the speech-to-text API, which already uses AI and is available in almost all browsers. So there's already precedent.
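For reference, that precedent looks roughly like this (a minimal sketch; the constructor is still prefixed in some browsers and support varies, so treat it as illustrative):

const SpeechRec = window.SpeechRecognition || window.webkitSpeechRecognition;
const rec = new SpeechRec();
rec.lang = "en-US";
rec.onresult = (e) => console.log(e.results[0][0].transcript);
rec.start(); // recognition quality depends on the browser/OS engine, much like a prompt API would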

But most importantly this would enable us to finally write JavaScript like this:

const a = prompt("how much is 31c in Fahrenheit")

The future looks bright!


I know you're probably joking, but I was curious how hard it would be.

const cToF = c => c * 9/5 + 32;
const a = cToF(31); // 87.8


Alas we’re in a lovely near monoculture once again.

My personal opinion is that if we are going to have any amount of AI capability in the browser, it should be something very low-level, akin to WebGPU. Ideally, it would work similarly to Apple's Accelerate framework, where your requests are just routed to whatever AI accelerator the device thinks makes sense, so that we can polyfill using WebGPU compute shaders.

If a web developer wants to use a cloud model, with the associated legal requirements and business relationships of that model, we already have a way to do that: use the Fetch API against a CORS-enabled endpoint. There's no need to have the browser do cloud model brokering to a model you haven't tested with, run by a company you might not want to actually do business with.
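That existing path is a short function today; a minimal sketch (the endpoint, payload shape, response field, and API_KEY are all hypothetical placeholders, not any particular provider's API):

async function askCloudModel(promptText) {
  const res = await fetch("https://api.example-model.invalid/v1/generate", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json", "Authorization": `Bearer ${API_KEY}` }, // API_KEY assumed to be defined elsewhere
    body: JSON.stringify({ prompt: promptText }),
  });
  if (!res.ok) throw new Error(`Model request failed: ${res.status}`);
  return (await res.json()).text; // response shape is a placeholder assumption
}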


So, the next antitrust case for the EU. Chrome clearly dominates the browser market, and now they're trying to abuse that (again).

It's exhausting having such reflexive thoughtless ragging anytime Chrome is mentioned.

Oh no! Chrome is trying to enhance user agency again! Oh no! Chrome is trying to make the web better for end users!

Mozilla's concerns aren't totally bogus, I'm not going to try to laugh them out of the room. But their pearl clutching & belly-aching about "oh no what if not all implementations of ai prompts work exactly the same" feels fucking tired and weak sauce to me.

This post really doesn't deserve our attention, in my view. But I'd challenge the haters to at least try to connect their reflexive hate meaningfully to the topic at hand, to provide something worth considering. But that, I think, asks too much, given what posts like this seek: merely to inflame the world.


It is non-obvious that adding an LLM to a web browser makes anything better for the web browser user.

> But I'd challenge the haters to at least try to connect their reflexive hate meaningfully to what the topic at hand actually is, to provide something worth considering in some way. But that I think asks too much, for what posts like this seek: merely to inflame the world.

Your fixated bias shows that you're not willing to shift. Think outside the box for once.

No one is ruling out that Google is trying to enhance the internet, but at the same time, rather than making it positive and open, it both enables and restricts the next generational wave of the internet.

How am I supposed to develop a uniform application when I have to follow Google's rules? Why should I have to? Because I would be forced to.


What are you even talking about?

It seems pretty basic to me that yes I should be able to have some agentic experiences working for me.

The Mozilla line here is that this is bad because different people might have different agents, and that this is an intolerable risk.

It's hiding-in-a-cave-to-avoid-seeing-your-own-shadow idiotic.


It's not pearl clutching to suggest that websites will build around quirks of a specific model and then we'll be stuck with it forever. This is an issue for future Google as much as it is for Mozilla and Apple.

We had WebSQL, which de facto relied on a specific DB implementation, SQLite, and I suspect it also essentially couldn't be updated because people relied on the quirks of a specific version of SQLite.
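For anyone who missed that era, WebSQL handed raw SQL strings straight to the browser's bundled SQLite, so dialect quirks became part of the de facto spec (a sketch of the now-removed API):

const db = openDatabase("notes", "1.0", "Notes", 2 * 1024 * 1024);
db.transaction((tx) => {
  // SQLite-specific behaviour (AUTOINCREMENT, dynamic typing, etc.) worked here, and sites came to depend on it.
  tx.executeSql("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)");
  tx.executeSql("INSERT INTO notes (body) VALUES (?)", ["hello"]);
});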


Oh no, Chrome is adding something that shouldn't be in the browser in the first place. Oh no, Chrome is adding Google's own AI as the only possibility, which surely doesn't hinder competition.

Maybe you shouldn't reflexively defend Chrome when they clearly abuse their market-leading position to push their own AI.


Can you please explain how the hell AI slop is going to "enhance user agency" or "make the web better"?

If every browser vendor already had their own experimental APIs that could work with different models, it might be a good idea to standardize this in the WHATWG living standards (which would still be a bad user experience on today's consumer hardware).

But if no browser other than Chrome supports this, and only Google's (proprietary) model (edit: plus Microsoft's Phi-4 mini in Edge), it should be clear it's Google abusing its position. There is nothing worth standardizing.

And we have seen that too many times -- FLoC/Privacy Sandbox/Topics API, Web Environment Integrity just to name a few. Google has been relentless in using its dominant position to push terrible ideas that harm both users and other browser vendors but help only Google's business.

Surprised this did not really come up in previous discussion in https://news.ycombinator.com/item?id=47917026

PS: looks like Google's fanboys have arrived. Someone had better find good counterarguments, especially technical ones, instead of just downvoting.


I was formerly the design lead / spec editor for this API while I worked at Google. I retired in 2025-09, before it got shipped. The following contains no inside knowledge.

I am sympathetic to all of Mozilla's concerns here, even though on balance I believe Chromium's decision to ship was the right one.

---

On interoperability, I agree that this is a tough case. But I am more optimistic than Mozilla that developers will use this API in a way that can work across different models.

First, they will be somewhat forced to, because Chrome will change the model over time. (It already changed from Gemini Nano 2 to 3, and I suspect it'll change to 4 soon if it hasn't already.) Edge is already shipping a Phi-based version. A small number of users are using other models via extensions like https://aibrow.ai/. And it's very possible Safari might join the party, exposing the Apple Foundation Models that ship with iOS via this API. (When the Foundation Models API came out, we were struck by how similar it was to the prompt API designs that preceded it, and were hopeful that Apple was going to do a surprise announcement of shipping the prompt API. It hasn't happened yet, but I still think it might soon.)

Second, we designed the API to steer developers in that direction as much as possible, e.g. encouraging the use of structured output constraints. There are also lots of clear error paths that almost force developers to use this as a progressive enhancement (e.g., the existence of low-memory/low-disk-space devices). So it's very unlikely we'll see developers build sites that are gated on this API existing. It'll mostly be used to sprinkle some AI magic, or let users do cool things without entering some cloud API keys.
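To illustrate, a minimal sketch of that shape (the names follow the explainer's LanguageModel.availability()/create() and prompt() with a responseConstraint JSON schema, and may not match every shipped version exactly):

async function suggestTags(postText) {
  if (!("LanguageModel" in self)) return null; // API not present: caller falls back to manual tagging
  if ((await LanguageModel.availability()) === "unavailable") return null; // e.g. low-memory/low-disk device

  const session = await LanguageModel.create();
  const schema = { type: "array", items: { type: "string" }, maxItems: 5 };
  const result = await session.prompt(
    "Suggest up to five short tags for this post:\n\n" + postText,
    { responseConstraint: schema }, // structured output keeps callers less tied to one model's phrasing
  );
  return JSON.parse(result);
}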

I made similar arguments for the writing assistance APIs at [1]. As I said there, the prompt API is trickier than the writing assistance APIs. But I believe it's a difference of degree, not kind. The web has many nondeterministic APIs that access some underlying part of the system, from geolocation to speech recognition/synthesis, all the way up to these AI-based ones. The question is where you draw the line. Mozilla seems to be giving some signals (not yet definite) that translation is on the OK side of the line, but summarization/writing/rewriting/prompting is not. That's a very reasonable position for them to take on behalf of their users. I imagine the Chromium project is hoping that over time, in-the-wild experience with these APIs shows that the benefits outweigh the risks and costs, and so Mozilla (and Apple) follow in shipping them as well. That's definitely happened in other cases, e.g., Mozilla recently indicating interest [2] in implementing WebBluetooth, WebHID, WebNFC, WebSerial, and WebUSB after years of taking a wait-and-see attitude.

You can learn more about my general thoughts on this question of shipping APIs first, and how the Chromium project takes on first-mover risks, at [3], which I wrote during my time on the Chrome team.

---

On the prohibited use policy, I agree that this is just absurd on Chrome's part. This is not how web APIs should work. It smacks of lawyers trying to throw something out there to cover themselves, or of corporate policy being set at the top level for "all AI uses" and then applied even for web APIs where that makes less sense.

The only saving grace is that I suspect it won't actually trigger. Because, as Mozilla points out, it's quite impractical to enforce. But it's still wrong.

I hope Chrome changes this, although I'm not holding my breath.

I did find it interesting that Gemma seems to have similar terms of use [4]. (Open-weights, not open-source!) As do the Apple Foundation Models in iOS [5]. So unfortunately if the Chrome team were to push for a no-TOS API, they might be forging new ground, which is always difficult in a large company.

---

On the issue of insubstantial developer signals, I think this is just a failure of the current Chrome team in terms of collecting and collating signals. If you poke around and know where to look in various threads, you can find a lot more positive signals than the outdated ones in [6]. I wouldn't have let that Intent to Ship get out the door without properly updating that section of the explainer, for sure. (But hey, not my job anymore!!)

[1]: https://github.com/mozilla/standards-positions/issues/1067#i...
[2]: https://github.com/whatwg/sg/pull/264
[3]: https://www.chromium.org/blink/guidelines/web-platform-chang...
[4]: https://ai.google.dev/gemma/terms
[5]: https://developer.apple.com/apple-intelligence/acceptable-us...
[6]: https://github.com/webmachinelearning/prompt-api/blob/main/R...


Hey Domenic,

Sucks to be corresponding via The Second Worst Website In The World (TM), but here we are. Hope all's well on your end.

A minor correction from Edge's perspective: we've participated in OT using Phi models, but have not shipped to Stable, and are unlikely to given the current shape of things. Developers have not given us feedback that they're relaxed about compatibility, but I would obviously welcome that sort of data in case anyone has it to hand.

Best,

Alex


Thank you for posting this.

On interoperability, time will tell I guess. I've only been working on Firefox for a few months, but general interop issues are way worse than I realised when we worked together at Chrome. Firefox frequently gets bug reports for not behaving like Chrome, even when Firefox is complying with the spec, and Chrome is not. We end up having to just behave like Chrome.

On developer signals… I'm sure there's better evidence of positive sentiment than Chrome provided, but there's a lot of negative sentiment too. I think it would be fair to call the developer signal "mixed", or maybe even "polarised".


I just wonder how a highly non-deterministic API like the Prompt API can work in a system that heavily focuses on interop between new and old websites.

What's going to happen is that people will build stuff against the current iteration, and a few years later a model update will behave entirely differently and break the existing implementations. I understand that every once in a while OpenAI also shuts off older models in its API, but that's a centralized process.

What if I have Firefox 150 users that haven't updated yet and Firefox 155 users that have different models, while Chrome 160 and Chrome 170 users also exist and have different models? Is it expected that I build entirely different implementations for every browser version out there? Don't the working groups try to prevent exactly that within HTML & CSS through feature gating?


I’m kinda terrified by the security implications of the Prompt API.

This is a way for web services to make your computer perform large amounts of computation at their behest. Tokens have value. There will be an incentive for bad actors to use your local LLM for their own purposes, much like hostile crypto-mining payloads.

This is an obvious target for prompt injection attacks and other malicious remote code execution. In many ways, model prompts ARE programs. The browser / local device would need to provide an LLM with the same sandbox guarantees as the rest of the browser. Can they be trusted to do that? Does anyone understand this well enough to do that with confidence?

I’m a big fan of local models, but I would be very cautious about letting random websites call the model I’m hosting on my local machine with open source software.


Yeah, I wonder: who says I can't build a cryptominer-like script that gets injected into many websites and just uses this local LLM API, pulls a request from a queue, and sends the response to a server, practically creating my very own LLM botnet?

> Does anyone understand this well enough to do that with confidence?

Pretty sure Chrome wouldn't ship it if they weren't confident. And Firefox would object on security grounds if they saw such an issue.


Web API features should be things that are necessary to enable features in Web applications. We don't need the browser to have a Prompt API to enable web applications to have goofy chatbots lurking in the corner. WebDevs are perfectly capable of ruining their websites on their own.

So we can't have XSLT's fast and efficient templating syntax, but Prompt APIs with potential injection attack vectors are cool as long as they're generic enough for all the megacorps to drop in? No security risks here, huh? Not trying to increase the attack surface, huh?

Don't forget MathML and all the other features they gave up on

When I posted this, I linked to the latest statement https://github.com/mozilla/standards-positions/issues/1213#i..., which is the content relevant to the title (the details of our opposition to the API). Unfortunately someone removed the link to the specific post.

The "someone" was HN's software but I agree it was a mistake in this case. Sorry! Fixed above now.

Is this going to be another situation, like WebSQL, where Firefox torpedoes a broadly useful feature?

I think every aspect of their opposition is sound and generally aimed at keeping the web open and predictable (unlike some of their other oppositions, like the one to the Filesystem API).

I wonder if it makes sense for browser vendors to agree upon and ship various ‘standard models’ that are released into the public domain or something, and the API lets you pick between them.

The models themselves would be standardized and the weights and everything should be identical between browsers. They’d be standard and ‘web-safe’ like CSS colors or fonts. Probably would help to give them really boring/unbranded names too. These would work identically across browsers and web developers can rely on them existing on modern setups.

If you want more models, you could install them as a user, or your browser could ship them, or web developers could bundle them through a CDN (and another standard for sharing big files across domains would probably be needed).


It doesn't make sense at all. So as a user how do you choose which model to use? There could be 3824 models to choose from. The browser might as well set one as default, and we all know how that goes (see: search engine).

Not to mention the many other UX questions that come with this, most importantly how unusable these local models are on regular 3-year-old laptops that are constrained in RAM, GPU/CPU capability, and likely disk space, despite what enthusiasts say here. (They have a MacBook Pro with 32+ GB of RAM, report that it works great with xyz model -- fine -- but somehow think it works for everyone and that local models are the future.)


The Chrome model requires either "16 GB of RAM or more and 4 CPU cores or more" or "Strictly more than 4 GB of VRAM", and "22 GB of free space" (the model itself uses around 4.4 GB, but it won't install without the larger amount of free space).

The model is pretty slow on my M4 Pro mac.

The API allows the browser to use a cloud service instead, but then privacy is lower. So, more privacy for the rich.


> It doesn't make sense at all. So as a user how do you choose which model to use? There could be 3824 models to choose from. The browser might as well set one as default, and we all know how that goes (see: search engine).

...what's the exact problem here? Believe it or not, most non-tech-savvy users use the search engine just fine.


With regards to search engines, Google paid billions of dollars [0] to become the default on major browsers. I guess GP's implying that something similar might happen with LLMs.

[0] https://www.reuters.com/technology/google-paid-26-bln-be-def...


The rate of model development is an issue here. Once there are many cross-origin models, it becomes a fingerprinting vector. Also even the small models are many GBs.

Browsers do not need to force LLMs on their users.