I like this at first glance. The idea of a random website probing arbitrary local IPs (or any IPs for that matter) with HTTP requests is insane. I wouldn't care if it breaks some enterprise apps or integrations - enterprises could re-enable this "feature" via management tools, normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".
This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.
The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
Doesn't CORS just restrict whether the webpage JS context gets to see the response of the target request? The request itself happens anyway, right?
So the attack vector that I can imagine is that JS on the browser can issue a specially crafted request to a vulnerable printer or whatever that triggers arbitrary code execution on that other device. That code might be sufficient to cause the printer to carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.
I think CORS is so hard for us to hold in our heads in large part due to how much is stuffed into the algorithm.
It may send an OPTIONS request, or not.
It may block a request being sent (in response to OPTIONS) or block a response from being read.
It may restrict which headers can be set, or read.
It may downgrade the request you were sending silently, or consider your request valid but the response off limits.
It is a matrix of independent gates essentially.
Even the language we use is imprecise: CORS itself is not really doing any of this or blocking things. As others pointed out, it’s the Same-Origin Policy that is the strict one, and CORS is really an exception engine to allow us to punch through that security layer.
No, a preflight (OPTIONS) request is sent by the browser first prior to sending the request initiated by the application. I would be surprised if it is possible for the client browser to control this OPTIONS request more than just the URL. I am curious if anyone else has any input on this topic though.
Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.
Haha I remember that. The solution at the time for many forum admins was to simply state that anyone found to be doing that would be permabanned. Which was enough to make it stop completely, at least for the forums that I moderated. Different times indeed.
The expectation is that this should not work - well-behaved network devices shouldn't accept a blind GET like this for destructive operations. There are plenty of other good reasons for that. No real alternative unless you're also going to block page redirects & links to these URLs as well, which also trigger a similar GET. That would make it impossible to access any local network page without typing it manually.
While it clearly isn't a hard guarantee, in practice it does seem to generally work, as these have been known issues without apparent massive exploits for decades. That CORS restrictions block probing (no response provided) does help make this all significantly more difficult.
"No true Scotsman allows GETs with side effects" is not a strong argument
It's not just HTTP where this is a problem. There are enough http-ish protocols where protocol smuggling confusion is a risk. It's possible to send chimeric HTTP requests at devices which then interpret them as a protocol other than http.
Exactly. You can also trigger POSTs via auto-submitted forms (HTML forms only support GET and POST); this is called CSRF if the endpoint doesn't validate some token in the request. CORS only protects against unauthorized xhr requests. All decades-old OWASP basics, really.
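As a sketch of what that looks like in practice (the target URL and field names below are invented for illustration), a classic CSRF page is just a hidden, self-submitting form; CORS never blocks the *sending* of a form submission, only the reading of responses:

```javascript
// Build a self-submitting CSRF form as an HTML string.
// Hypothetical router endpoint and field names, for illustration only.
function buildCsrfForm(action, fields) {
  const inputs = Object.entries(fields)
    .map(([name, value]) => `<input type="hidden" name="${name}" value="${value}">`)
    .join("\n  ");
  return `<form method="POST" action="${action}">\n  ${inputs}\n</form>\n` +
    `<script>document.forms[0].submit()</script>`;
}

// A victim visiting a page containing this markup would fire a blind
// cross-origin POST at the device the moment the page loads.
const html = buildCsrfForm("http://192.168.1.1/apply.cgi", {
  admin_password: "owned",
});
```

The page never sees the response, but if the device accepts the POST without a CSRF token or Origin check, the damage is already done.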
That highly ranked comments on HN (an audience with way above average-engineer interest in software and security) get this wrong kinda explains why these things keep being an issue.
I don't know why you are getting downvoted; you do have a point. Some of the commenters appear to know what CORS headers are, but neither their purpose nor how they relate to CSRF, it seems, which is worrying. It's not meant as disparaging. My university taught a course on OWASP, thankfully; otherwise I'd probably also be oblivious.
This misses the point a bit. CSRF usually applies to people who want only same-domain requests and don't realize that cross-domain is an option for the attacker.
In the modern web it's much less of an issue due to SameSite cookies being the default.
The idea is, the malicious actor would use a 'simple request' that doesn't need a preflight (basically, a GET or POST request with form data or plain text), and manage to construct a payload that exploits the target device. But I have yet to see a realistic example of such a payload (the paper I read about the idea only vaguely pointed at the existence of polyglot payloads).
There doesn't need to be any kind of "polyglot payload". Local network services and devices that accept only simple HTTP requests are extremely common. The request will go through and alter state, etc.; you just won't be able to read the response from the browser.
I can give an example of this; I found such a vulnerability a few years ago now in an application I use regularly.
The target application in this case was trying to validate incoming POST requests by checking that the incoming MIME type was "application/json". Normally, you can't make unauthorized XHR requests with this MIME type as CORS will send a preflight.
However, because of the way it was checking for this (checking whether the Content-Type header contained the text "application/json"), it was relatively easy to construct a Content-Type header that bypasses the CORS preflight.
It's worth bearing in mind in this case that the payload doesn't actually have to be form data - the application was expecting JSON, after all! As long as the web server doesn't do its own data validation (which it didn't in this case), we can just pass JSON as normal.
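The exact header from the original report isn't reproduced here, but the general trick (sketched below with made-up values) is to keep the MIME *essence* of the header a CORS-safelisted type, so no preflight is sent, while smuggling in the substring the naive server-side check looks for:

```javascript
// Naive server-side check as described above: does the Content-Type header
// merely *contain* the string "application/json"?
const naiveIsJson = (ct) => ct.includes("application/json");

// What matters for preflight purposes is the MIME essence: the part of the
// header value before any ";" parameters.
const mimeEssence = (ct) => ct.split(";")[0].trim().toLowerCase();

// Hypothetical crafted header: the essence stays "text/plain" (a safelisted
// type, so the browser sends the request without a preflight), yet the
// naive "contains" check on the server still matches.
const crafted = "text/plain; application/json";

naiveIsJson(crafted);   // true - the server accepts the body as JSON
mimeEssence(crafted);   // "text/plain" - treated as a simple request
```

The fix on the server side is to compare the parsed essence for equality with "application/json" rather than doing a substring match.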
This was particularly bad because the application allowed arbitrary code execution via this endpoint! It was fixed, but in my opinion, something like that should never have been exposed to the network in the first place.
Some devices don't bother to limit the size of the GET, which can enable a DoS attack at least, a buffer overflow at worst. But I think the most typical vector is a form-data POST, which isn't CSRF-protected because "it's on localhost so it's safe, right?"
I've been that sloppy with dev servers too. Usually not listening on port 80 but that's hardly Ft Knox.
> No, a preflight (OPTIONS) request is sent by the browser first prior to sending the request initiated by the application.
Note: preflight is not required for any type of request that browser js was capable of making prior to CORS being introduced. (Except for local network)
So a simple GET or POST does not require OPTIONS, but if you set a header it might require OPTIONS (unless it's a header you could set in the pre-CORS world)
You’re forgetting { mode: 'no-cors' }, which makes the response opaque (no way to read the data) but completely bypasses the CORS preflight request and header checks.
This is missing important context. You are correct that preflight will be skipped, but there are further restrictions when operating in this mode. They don't guarantee your server is safe, but it does force operation under a “safer” subset of verbs and header fields.
The browser will restrict the headers and methods of requests that can be sent in no-cors mode. (silent censoring in the case of headers, more specifically)
Anything besides GET, HEAD, POST will result in an error in browser, and not be sent.
All headers will be dropped besides the CORS safelisted headers [0]
And Content-Type must be one of application/x-www-form-urlencoded, multipart/form-data, or text/plain. Attempting to use anything else will see the header replaced by text/plain.
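Putting those rules together, a rough classifier for whether a request counts as "simple" (sendable without a preflight) might look like this; the safelists are abbreviated from my reading of the Fetch spec, so treat it as a sketch rather than an exhaustive implementation:

```javascript
// Abbreviated safelists from the Fetch spec's "CORS-safelisted" definitions.
const SIMPLE_METHODS = new Set(["GET", "HEAD", "POST"]);
const SAFE_HEADERS = new Set([
  "accept", "accept-language", "content-language", "content-type",
]);
const SAFE_CONTENT_TYPES = new Set([
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
]);

// Returns true when a request with this method and header set would be
// sent without a preflight OPTIONS request.
function isSimpleRequest(method, headers) {
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return false;
  for (const [name, value] of Object.entries(headers)) {
    const n = name.toLowerCase();
    if (!SAFE_HEADERS.has(n)) return false;
    // Content-Type is only safe if its MIME essence is on the safelist.
    if (n === "content-type" &&
        !SAFE_CONTENT_TYPES.has(value.split(";")[0].trim().toLowerCase()))
      return false;
  }
  return true;
}
```

For example, `isSimpleRequest("POST", { "Content-Type": "text/plain" })` is true, while the same POST with `application/json`, a custom header, or a DELETE method would trigger a preflight.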
That’s just not that big of a restriction. Anecdotally, very few JSON APIs I’ve worked with have bothered to check the request Content-Type. (“Minimal” web frameworks without built-in security middleware have been very harmful in this respect.) People don’t know about this attack vector and don’t design their backends to prevent it.
I agree that it is not a robust safety net. But in the instance you’re citing, that's a misconfigured server.
What framework allows you to setup a misconfigured parser out of the box?
I don't mean that as a challenge, but as a server framework maintainer I'm genuinely curious. In express we would definitely allow people to opt into this, but you have to explicitly make the choice to go and configure body-parser.json to accept all content types via a noop function for type checking.
Meaning, it's hard to get into this state!
Edit to add: there are myriad ways to misconfigure a webserver to make it insecure without realizing. But IMO that is the point of using a server framework! To make it less likely devs will footgun via sane defaults that prevent these scenarios unless someone really wants to make a different choice.
SvelteKit for sure, and any other JS framework that uses the built-in Request class (which doesn’t check the Content-Type when you call json()).
I don’t know the exact frameworks, but I consume a lot of random undocumented backend APIs (web scraper work) and 95% of the time they’re fine with JSON requests with Content-Type: text/plain.
I think you’re making those restrictions out to be bigger than they are.
Does no-cors allow a nefarious company to send a POST request to a local server, running in an app, containing whatever arbitrary data they’d like? Yes, it does. When you control the server side the inability to set custom headers etc doesn’t really matter.
Thankfully no-cors also restricts most headers, including setting content-type to anything but the built-in form types. So while CSRF doesn't even need a click because of no-cors, it's still not possible to do CSRF with a JSON-only API. Just be sure the server is actually set up to restrict the content type -- most frameworks will "helpfully" accept and convert form-data by default.
It depends. GET requests are assumed not to have side effects, so often don't have a preflight request (although there are cases where it does). But of course, not all sites follow those semantics, and it wouldn't surprise me if printer or router firmware used GETs to do something dangerous.
Also, form submission famously doesn't require CORS.
I can confirm that local websites that don't implement CORS via the OPTIONS request cannot be browsed with mainstream browsers. Does nothing to prevent non-browser applications running on the local network from accessing your website.
As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.
If somebody would fix the "no https for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-https websites also have no access to browser crypto APIs, so roll-your-own auth (the horror) isn't an option either. Frustrating!
I don't believe this is true? As others have pointed out, preflight OPTIONS requests only happen for non-simple requests. CORS response headers are still required to read a cross-domain response, but that is still a huge window for a malicious site to try to send side-effectful requests to your local network devices that have some badly implemented web server running.
[edit]: I was wrong. Just tested that a moment ago. It turns out NOT to be true. My web server during normal operation is currently NOT getting OPTIONS requests at all.
Wondering whether I triggered CORS requests when I was struggling with IPv6 problems. Or maybe it triggers when I redirect index.html requests from IPv6 to IPv4 addresses. Or maybe I got caught by the earlier roll out of version one of this proposal? There was definitely a time while I was developing pipedal when none of my images displayed because my web server wasn't doing CORS. But. Whatever my excuse might be, I was wrong. :-/
Or simply perform a timing attack as a way of exploring the local network, though I'm not sure if the browser implementation immediately returns after the request is made (e.g. when the fetch API is called) but before the response is received. Presumably it doesn't, which would expose it to timing attacks as a way of exploring the network.
Almost every js API for making requests is asynchronous so they do return after the request is made. The exception though is synchronous XHR calls, but I'm not sure if those are still supported
... Anyhow I think it doesn't matter because you can listen for the error/failure of most async requests. CORS errors are equivalent to network errors - the browser tells the JS it got status code 0 with no further information - but the timing of that could lead to some sort of inference? Hard to say what that would be though. Maybe if you knew the target webserver was slow but would respond to certain requests, a slower failed local request could mean it actually reached a target device.
That said, why not just fire off simple http requests with your intended payload? Abusing the csrf vulnerabilities of local network devices seems far easier than trying to make something out of a timing attack here.
CORS doesn’t protect you from anything. Quite the opposite: it _allows_ cross origin communication (provided you follow the spec). The same origin policy is what protects you.
I made a CTF challenge 3 years ago that proves why local devices are not so protected. exploitv99 bypasses PNA with timing, as the other commenter points out.
> The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
False. CORS only gates non-simple requests (via OPTIONS); simple requests are sent regardless of CORS config, there is no gating whatsoever.
How would Facebook do that? They scan all likely local ranges for what could be your phone, and have a web server running on the phone? That seems more like a problem of allowing the phone app to start something like that and keep it running in the background.
I think this can be circumvented by DNS rebinding, though your requests won't have the authentication cookies for the target, so you would still need some kind of exploit (or a completely unprotected target).
CORS prevents the site from accessing the response body. In some scenarios, a website could, for example, blindly attempt to authenticate to your router and modify settings by guessing your router brand/model and password
> but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
This isn't going to help for that. The locally installed app, and the website, can both, independently, open a connection to a 3rd party. There's probably enough fingerprinting available for the 3rd party to be able to match them.
This sounds crazy to me. Why should websites ever have access to the local network? That presents an entirely new threat model for which we don’t have a solution. Is there even a use case for this for which there isn’t already a better solution?
I've used https://pairdrop.net/ before to share files between devices on the same LAN. It obviously wouldn't have to be a website, but it's pretty convenient since all my devices I wanted to share files on already have a browser.
Same use case, but I remember getting approval prompts ( though come to think of it, those were not mandated, but application specific prompts to ensure you consciously choose to share/receive items ). To your point, there are valid use cases for it, but some tightening would likely be beneficial.
Not a local network, but localhost example: due to the lousy private certificate capability APIs in web browsers, this is commonly used for signing with electronic IDs for countries issuing smartcard certificates for their citizens (common in Europe). Basically, a web page would contact a web server hosted on localhost which was integrated with PKCS library locally, providing a signing and encryption API.
One of the solutions in the market was open source up to a point (Nowina NexU), but it seems it's gone from GitHub
For local network, you can imagine similar use cases — keep something inside the local network (eg. an API to an input device; imagine it being a scanner), but enable server-side function (eg. OCR) from their web page. With ZeroConf and DHCP domain name extensions, it can be a pretty seamless option for developers to consider.
>Why should websites ever have access to the local network?
It's just the default. So far, browsers haven't really given different IP ranges different security.
evil.com is allowed to make requests to bank.com. Similarly, evil.com is allowed to make requests to foo.com even if foo.com DNS resolves to 127.0.0.1.
> It's just the default. So far, browsers haven't really given different IP ranges different security.
I remember having "zone" settings in Internet Explorer 20 years ago, and ISTR it did IP ranges as well as domains. Don't think it did anything about cross-zone security though.
> Is there even a use case for this for which there isn’t already a better solution?
I deal with a third-party hosted webapp that enables extra functionality when a webserver hosted on localhost is present. The local webserver exposes an API allowing the application to interact more closely with the host OS (think locally-attached devices and servers on the local network). If the locally-installed webserver isn't present the hosted app hides the extra functionality.
Limiting browser access to the localhost subnet (127.0.0.1/8) would be fine to me, as a sysadmin, so long as I have the option to enable it for applications where it's desired.
>That presents an entirely new threat model for which we don’t have a solution.
What attack do you think doesn't have a solution? CSRF attacks? The solution is CSRF tokens, or checking the Origin header, same as how non-local-network sites protect against CSRF. DNS rebinding attacks? The solution is checking the Host header.
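As a sketch of both checks mentioned above (hostnames and allowlists are invented for illustration), a local-network server can defeat cross-origin CSRF by validating the Origin header, and DNS rebinding by validating the Host header:

```javascript
// Hypothetical allowlists for a device's embedded web server.
const ALLOWED_ORIGINS = new Set(["http://device.local"]);
const SERVED_HOSTS = new Set(["device.local", "192.168.1.50"]);

// Returns true only when the request both targets a hostname we actually
// serve (defeats DNS rebinding) and, if an Origin header is present,
// comes from an allowed origin (defeats cross-origin CSRF).
function requestAllowed(originHeader, hostHeader) {
  const host = (hostHeader || "").split(":")[0]; // strip any port
  if (!SERVED_HOSTS.has(host)) return false;     // rebinding defence
  // Requests without an Origin header (e.g. direct navigation or
  // non-browser clients) are let through in this sketch.
  if (originHeader && !ALLOWED_ORIGINS.has(originHeader)) return false;
  return true;
}
```

A request from evil.com carries `Origin: http://evil.com` and fails the second check; a rebinding attack carries `Host: attacker-controlled.example` and fails the first.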
> normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".
macOS currently does this (per app, not per site) & most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.
Do we have any evidence that most users just click yes?
My parents who are non-technical click no by default to everything, sometimes they ask for my assistance when something doesn't work and often it's because they denied some permission that is essential for an app to work e.g. maybe they denied access to the microphone to an audio call app.
Unless we have statistics, I don't think we can make assumptions.
The amount of "malware" infections I've responded to over the years that involved browser push notifications to Windows desktops is completely absurd. Chrome and Edge clearly ask for permissions to enable a browser push.
The moment a user gets this permissions request, as far as I can tell they will hit approve 100% of the time. We have one office where the staff have complained that it's impossible to look at astrology websites without committing to desktop popups selling McAfee. Which implies those staff, despite having been trained to hit "no", believe it's impossible to refuse.
(yes, we can disable with a GPO, which I heavily promote, but that org has political problems).
As a counter example, I think all these dialogs are annoying as hell and click yes to almost everything. If I’m installing the app I have pre-vetted it to ensure it’s marginally trustworthy.
I have no statistics but I wouldn't consider older parents the typical case here. My parents never click yes on anything, but my young colleagues in non-engineering roles in my office do. And I'd say even a decent % of the engineering colleagues do too - especially the vibe coders. And they all spend a lot more time on their computers than my parents do.
Interesting parallel between the older parents who (may have finally learned to) deny, and young folks, supposed digital natives, a majority of whom don't really understand how computers work.
People accept permission prompts from apps because they consciously downloaded the app and generally have an idea about the developer and what the app does. If a social media app asks for access to your photos it's easy to understand why, same with a music streamer wanting to connect to your smart speaker.
A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.
I have seen it posed as 'This site has bot protection. Confirm that you are not a bot by clicking yes', trying to mimic the modern Cloudflare / Google captchas.
To be clear: implementing this in browser on a per site basis would be a massive improvement over in-OS/per-app granularity. I want this popup in my browser.
But I was just pointing out that, while I'll make good use of it, it still probably won't offer sufficient protection (from themselves) for most.
And annoyingly, for some reason it does not remember this decision properly. Chrome asks me about local access every few weeks, it seems.
Yes, as a Chromecast user, please do give me a break from the prompts, macOS – or maybe just show them for Airplay with equal frequency and see how your users like that.
Problem is: without allowing it, web UIs like Synology's won't work, since they require your browser to connect to the local network... as it is, it's not great
I can't believe that anyone still thinks a popup permission modal offers any type of security. Windows UAC has shown quite definitively that users will always click through any modal in their way without thought or comprehension.
Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.
I don't think anyone's under the impression that this is a perfect solution. But it's better than nothing, and the options are this, nothing, or a security barrier that can't be bypassed with a permission prompt. And it was determined that the latter would break too many existing sites that have legitimate (i.e., doing something the end user actively wants) reason to talk to local devices.
I wonder how much of that is on the modal itself. If we instead popped up an alert that said "blocked an attempt to talk to your local devices, since this is generally a dangerous thing for websites to do. <dismiss>. to change this for this site, go to settings/site-security", approval would become a more annoying, multi-click, deliberate affair, and the knee-jerk single-click dismissal would default to the safer option of refusal.
Maybe. But eventually they will learn. In the meantime, other users, who at least try to stay somewhat safe ( if it is even possible these days ), can make appropriate adjustments.
I think it does, in many (but definitely not all) contexts.
For example, it's pretty straightforward what camera, push notification, or location access means. Contact sharing is already a stretch ("to connect you with your friends, please grant...").
This is so true. The modern Mac is a sea of Allow/Don't Allow prompts, mixed with the slightly more infantilizing alternative of the "Block" / "Open System Preferences" where you have to prove you know what you're doing by manually browsing for the app to grant the permission to, to add it to the list of ones with whatever permission.
They're just two different approaches with the same flaw: People with no clue how tech works cannot completely protect themselves from any possible attacker, while also having sophisticated networked features. Nobody has provided a decent alternative other than some kind of fully bubble-wrapped limited account using Group Policies, to ban all those perms from even being asked for.
Asking you if you trust a device before opening a data connection to it is simply not the same thing as asking the person who just created a shortcut if they should be allowed to do that.
I once encountered malware on my roommate’s Windows 98 system. It was a worm designed to rewrite every image file as a VBS script that would replicate and re-infect every possible file whenever it was clicked or executed. It hid the VBS extensions and masqueraded as the original images.
Creation of a shortcut on Windows is not necessarily innocuous. It was a common first vector to drop malware as users were accustomed to installing software that did the same thing. A Windows shortcut can hide an arbitrary pathname, arbitrary command-line arguments, a custom icon, and more; these can be modified at any time.
So whether it was a mistake for UAC to be overzealous or obstructionist, or Microsoft was already being mocked for poor security, perhaps they weren’t wrong to raise awareness about such maneuvers.
This Web Security lecture by Feross Aboukhadijeh has a great example of Zoom's zero-day from 2019 that allowed anyone to force you to join a zoom meeting (and even cause arbitrary code execution), using a local server:
It's not clear to me from Google's proposal if it also restricts access to localhost, or just your local network - it'd be great if it were both, as we clearly can't rely on third parties to lock down their local servers sufficiently!
edit: localhost won't be restricted:
"Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)"
It will be restricted. This proposal isn't completely blocking all localhost and local IPs. Rather, it's preventing public sites from communicating with localhost and local IPs. E.g:
* If evil.com makes a request to a local address it'll get blocked.
* If evil.com makes a request to a localhost address it'll get blocked.
* If a local address makes a request to a localhost address it'll get blocked.
* If a local address makes a request to a local address, it'll be allowed.
* If a local address makes a request to evil.com it'll be allowed.
* If localhost makes a request to a localhost address it'll be allowed.
* If localhost makes a request to a local address, it'll be allowed.
* If localhost makes a request to evil.com it'll be allowed.
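The matrix above can be sketched as a toy classifier (IPv4 literals only, simplified; the real proposal works on address spaces resolved by the browser, and the exact range definitions are still being debated):

```javascript
// Classify an IPv4 address into the three spaces used in the matrix above:
// loopback (127/8), local (RFC 1918 private ranges), or public.
function addressSpace(ip) {
  const [a, b] = ip.split(".").map(Number);
  if (a === 127) return "loopback";
  if (a === 10 || (a === 172 && b >= 16 && b <= 31) || (a === 192 && b === 168))
    return "local";
  return "public";
}

// Rank the spaces from most public to most private; a request is gated
// (i.e. needs permission) only when it crosses into a *more* private space.
const RANK = { public: 0, local: 1, loopback: 2 };
const isLocalNetworkRequest = (initiatorIp, targetIp) =>
  RANK[addressSpace(targetIp)] > RANK[addressSpace(initiatorIp)];
```

So a public site hitting 192.168.1.1 or 127.0.0.1 is gated, a local page hitting loopback is gated, but local-to-local, loopback-to-anything, and anything-to-public all pass, matching the eight cases listed above.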
I agree fully with him. I don’t care what part of your job gets harder, or what software breaks if you can’t make it work without unnecessarily invading my privacy. You could tell me it’s going to shut down the internet for 6 months and I still wouldn’t care.
You’ll have to come up with a really strong defense for why this shouldn’t happen in order to convince most users.
It just means I run a persistent client on your device that is permanently connected to the mothership, instead of only when you have your browser open.
I’m so glad most people don’t truly consider software devs to be real engineers, because this is a perfect example of why that word deserves so much more respect than this field gives it.
I'm sure it will require some work, but this is the price of security. The idea that any website I visit can start pinging/exploiting some random unsecured testing web server I have running on localhost:8080 is a massive security risk.
Can you define "local network"? Probably not. Most large enterprises own publicly-routable IP space for internal use. Internal doesn't mean 192.168.0.0/24. foo.corp.example.com could resolve to 9.10.11.12 and still be local. What about IPv6? It's a nonsense argument fraught with corner cases.
Sure - a destination is "local" if your machine has a route to that IP which isn't via a gateway.
If your network is large enough that it consists of multiple routed network segments, and you don't have any ACLs between those segments, then yeah, you won't be fully protected by this browser feature. But you aren't protected right now either, so nothing's getting worse, it's just not getting better for your specific use case.
> Sure - a destination is "local" if your machine has a route to that IP which isn't via a gateway.
Fantastic. Well, Google doesn't agree.
The proposal defines it along RFC1918 address space boundaries. The spitballing back and forth in the GitHub issues about which imaginary TLDs they will or won't also consider "local" is absolutely horrifying.
Not to be snarky, but that's a good example of "perfect being the enemy of good". You are totally right that there are corner cases, sure. But that doesn't stop us from tackling the low hanging fruit first. Which is, as you say, localhost and LAN (if present).
It should not even be able to communicate with the local network at all, it’s a goddamn web page. It should be restricted to just communicate with the server that hosts it and that’s it.
The whole browser is a massive security leak. What genius thought it was a good idea for the web page I visit in the morning to get the weather forecast to be able to run arbitrary code and to communicate with arbitrary hosts on my local network?
I do understand this sentiment, but isn't the tension here that security improvements by their very nature are designed to break things? Specifically the things we might consider "bad", but really that definition gets a bit squishy at the edges.
I wish they (Apple/Microsoft/Google/...) would do similar things for USB and Bluetooth.
Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (short list) that the app is allowed to connect to, and have the OS limit its connections to only those devices. For example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.
I might even prefer if it had to register the device ids and then the user would be prompted, the same way camera access/gps access is prompted. Via the OS, it might see a device that the CVS.app registered for in its manifest. The OS would popup "CVS app would like to connect to device ABC? Just this once, only when the app is running, always" (similar to the way iOS handles location)
By id, I mean some prefix that a company registers for its devices. bose.xxx, app's manifest says it wants to connect to "bose.*" and OS filters.
Similarly for USB and maybe local network devices. Come up with an ID scheme, have the OS prevent apps from connecting to anything not matching that ID. Effectively, don't let apps browse the network, USB, or Bluetooth.
I am still holding out hope that eventually at least Apple will offer fake permission grants to applications. Oh, app XYZ "needs" to see my contact list to proceed? Well it gets a randomized fake list, indistinguishable from the real one. Similar with GPS.
I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.
Yeah, I'd like it too. I can't use my bank's app, because it wants some weird permissions like access to contacts; I refuse to give them, because I see no use in it for me, and it refuses to work.
On Android, Telegram works with denied access to contacts and maintains its own, completely separate contact list (shared with desktop Telegram and other copies logged in to the same account). I've been using Telegram longer than I've been using a smartphone, and it has a completely separate contact list (as it should be).
And WhatsApp cannot be used without access to contacts: it doesn't let you create a WhatsApp-only contact, and complains that it has no place to store it until you grant access to the phone's contact list.
To be honest, I prefer to have separate contact lists on all my communication channel, and even sharing contacts between phone app and e-mail app (GMail) bothers me.
Telegram is good in this aspect, it can use its own contact list, not synchronized or shared with anything else, and WhatsApp is not.
Looks to me like it was a bug. Not giving access to any contacts broke the app completely but limited access works fine except for an annoying persistent in app notification.
iOS generally solves this through App Store submission reviews so I’m surprised this isn’t a rule and that telegram got away with it. “Apps must not gate functionality behind receiving access to all contacts vs a subset” or something. They definitely do so for location access, for example.
WhatsApp specifically needs phone numbers, and you can filter out which contacts you share, but not which fields. So if your family uses WhatsApp, you'd share those contacts, but you can't share ONLY their phone numbers; WhatsApp also gets their birthdays, addresses, personal notes, and any other personal information which you might have.
I think this feature is pretty meaningless in the way that it’s implemented.
It’s also pretty annoying that applications know they have partial permission, so they keep prompting for full permission all the time anyway.
Apps are not allowed to force you to share your contacts on iOS, report any apps that are asking you to do so as it’s a violation of the App Store TOS.
> Lately, every app I install, wants bluetooth access to scan all my bluetooth devices.
Blame Apple and Google and their horrid BLE APIs.
An app generally has to request "ALL THE PERMISSIONS!" to get RSSI, which most apps use as a (really stupid, bug-prone, broken) proxy for distance.
What everybody wants is "time of flight"--but for some reason that continues to be mostly unsupported.
It's crazy to me that this has always been the default behavior for web browsers. A public website being able to silently access your entire filesystem would be an absurd security hole. Yet all local network services are considered fair game for XHR, and security is left to the server itself. If you are a developer and run your company's webapp on your dev machine for testing (with loose or non-existent security defaults), facebook.com or google.com or literally anyone else could be accessing it right now. Heck, think of everything people deploy unauthed on their home network because they trust their router's firewall. Does every one of them have the correct CORS configuration?
I majored in CS and I had no idea that was possible: public websites you access have access to your local network. I have to take time to process this. Beside what is suggested in the post, are there any ways to limit this abusive access?
No, simple requests [1] - such as a GET request, or a POST request with a text/plain Content-Type - don't trigger a CORS preflight. The request is made, and the browser may block the requesting JS code from seeing the response if the necessary CORS response header is missing. But by that point the request has already been made. So if your local service has a GET endpoint like http://localhost:8080/launch_rockets, or a POST endpoint that doesn't strictly validate the body's Content-Type, then any website can trigger it.
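The simple-request rule can be sketched as a predicate. This is a simplified sketch of the Fetch spec's CORS-safelisted checks, not the full algorithm (it ignores headers other than Content-Type, and Content-Type parameters like charset):

```javascript
// Simplified: does a request get sent WITHOUT a CORS preflight?
// Per the Fetch spec, "simple" methods plus a safelisted Content-Type skip
// the OPTIONS round-trip entirely, so the target server sees the request.
const SAFE_METHODS = ["GET", "HEAD", "POST"];
const SAFE_CONTENT_TYPES = [
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
];

function skipsPreflight(method, contentType) {
  if (!SAFE_METHODS.includes(method.toUpperCase())) return false;
  // GET/HEAD carry no body; a missing Content-Type is also safe
  if (contentType == null) return true;
  return SAFE_CONTENT_TYPES.includes(contentType.toLowerCase());
}

console.log(skipsPreflight("GET", null));                // → true
console.log(skipsPreflight("POST", "text/plain"));       // → true  (sent, response unreadable)
console.log(skipsPreflight("POST", "application/json")); // → false (OPTIONS preflight first)
console.log(skipsPreflight("DELETE", null));             // → false
```

This is why the launch_rockets example works: the GET is "simple", so the browser sends it first and only blocks the attacker's JS from reading the response afterwards.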
Although those were typically used to give ActiveX controls on the intranet unfettered access to your machine because IT put it in the group policy. Fun days.
Honestly I just assumed a modern equivalent existed. That it doesn’t is ridiculous. Local network should be a special permission like the camera or microphone.
I guess this would help block Meta's sneaky identification-code sharing between native apps and websites carrying their SDK, which communicate surreptitiously through localhost, particularly on Android.
While this will help to block many websites that have no business making local connections at all, it's still very coarse-grained.
Most websites that need this permission only need to access one local server. Granting them access to everything violates the principle of least privilege. Most users don't know what's running on localhost or on their local network, so they won't understand the risk.
> Most users don't know what's running on localhost or on their local network, so they won't understand the risk.
Yes, which is why they also won't understand when the browser asks if you'd like to allow the site to visit http://localhost:3146 vs http://localhost:8089. A sensible permission message ("allow this site to access resources on your local network") is better than technical mumbo jumbo which will make them just click "yes" in confusion.
Either way they'll click "yes" as long as the attacker site properly primes them for it.
For instance, on the phishing site they clicked on from an email, they'll first be prompted like:
"Chase needs to verify your Local Network identity to keep your account details safe. Please ensure that you click "Yes" on the following screen to confirm your identity and access your account."
Yes, that's meaningless gibberish, but most people would go along with it.
This is true, but you can only protect people from themselves so far. At some point you gotta let them do what they want to do. I don't want to live in a world where Google decides what we are and aren't allowed to do.
In an ideal world, the browser could act as an mDNS client, discovering local services, so that it could then show the pretty name of the relevant service in the security prompt.
In the world we live in, of course, almost nothing on your average LAN has an associated mDNS service advertisement.
A comprehensive implementation would be a firewall. Which CIDRs, which ports, etc.
I wish there were an API to build such a firewall, e.g. as a part of a browser extension, but also a simple default UI allowing to give access to a particular machine (e.g. router), to the LAN, to a VPN, based on the routing table, or to "private networks" in general, in the sense Windows ascribes to that. Also access to localhost separately. The site could ask one of these categories explicitly.
I worry that there are problems with Ipv6. Can anyone explain to me if there actually is a way to determine whether an IPv6 is site local? If not, the proposal is going to have problems on IPv6-only networks.
I have struggled with this issue in the past. I have an IoT application whose web server wants to reject any requests from a non-local address. After failing to find a way to distinguish IPv6 local addresses, I ended up redirecting IPv6 requests to the local IPv4 address. And that was the end of that.
I feel like I would be in a better position to raise concerns if I could confirm that my understanding is correct: that there is no practical way for an application to determine whether an IPv6 address is link- or site-local.
I did experiment with IPv6 "link local" addresses, but these seem to be something else altogether (for use by routers rather than general applications), and don't seem to work for regular application use.
There is some wiggle room provided by including .local addresses as local servers. But implementation of .local domains seems to be inconsistent across various OSes at present. Raspberry Pi OS, for example, will do mDNS resolution of "someaddress" but not of "someaddress.local"; Ubuntu 24.04 will resolve "someaddress.local", but not "someaddress". And neither will resolve "someaddress.local." (which I think was recommended at one point, but is now deprecated and non-functional). Which does seem like an issue worth raising.
And it frustrates the HECK out of me that nobody will allow use of privately issued certs for local network addresses. The "no https for local addresses" thing needs to be fixed.
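As far as I know, the closest thing to an answer here is that IPv6 only standardizes a few recognizable ranges: link-local (fe80::/10), unique-local (fc00::/7), and loopback. A best-effort classifier is easy to write, and its gap is exactly the problem described above: a globally-routable address on the same LAN is indistinguishable by inspection. A sketch (the function name is made up; this only parses the first group, which is sufficient for these prefixes):

```javascript
// Best-effort classification of an IPv6 address string by its standardized
// prefix. Note the limitation: a global address that happens to be on-link
// cannot be detected this way at all.
function classifyIPv6(addr) {
  const a = addr.toLowerCase();
  if (a === "::1") return "loopback";
  const firstGroup = parseInt(a.split(":")[0] || "0", 16);
  if ((firstGroup & 0xffc0) === 0xfe80) return "link-local";   // fe80::/10
  if ((firstGroup & 0xfe00) === 0xfc00) return "unique-local"; // fc00::/7 (ULA)
  return "global-or-unknown";
}

console.log(classifyIPv6("fe80::1"));                // → "link-local"
console.log(classifyIPv6("fd12:3456:789a::1"));      // → "unique-local"
console.log(classifyIPv6("2600:1700:63c9:a421::1")); // → "global-or-unknown"
```

So an application that wants "reject non-local" on an IPv6-only network really does need routing-table or on-link information from the OS, not just the address.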
IPv6 still has the concept of "routable". You just have to decide what site-local means in terms of the routing table.
In old school IPv4 you would normally assign octet two to a site and octet three to a VLAN. Oh and you start with 10.
With IPv6 you have a lot more options.
All IPv6 devices have link local addresses - that's the LAN or local VLAN - a bit like APIPA.
Then you start on .local - that's Apple and DNS and the like and nothing to do with IP addresses. That's name to address.
You can do Lets Encrypt (ACME) for "local network addresses" (I assume you mean RFC 1918 addresses: 10/8, 172.16/12, 192.168/16) - you need to look into DNS-01 and perhaps DNS CNAME. It does require quite some effort.
There is a very good set of reasons why TLS certs are a bit of a bugger to get working effectively these days. There are solutions freely available but they are also quite hard to implement. At least they are free. I remember the days when even packet capture required opening your wallet.
You might look into acme.sh if Certbot fails to work for you. You also might need to bolt down IP addressing in general, IPv4 vs IPv6 and DNS and mDNS (and Bonjour) as concepts - you seem a little hazy on that lot.
HTTPS doesn't care about IP addresses. It's all based on domain names. You can get a certificate for any domain you own. You can also set said domain to resolve to any address you like, including a "local" one.
NAT has rotted people's brains unfortunately. RFC 1918 is not really the way to tell if something is "local" or not. 25 years ago I had 4 publicly routable IPv4 addresses. All 4 of these were "local" to me despite also being publicly routable.
An IP address is local if you can resolve it and don't have to communicate via a router.
It seems too far gone, though. People seem unable to separate RFC 1918 from the concept of "local network".
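The conflation is understandable because the RFC 1918 check is so easy to write, and its limitation is equally easy to state: it says nothing about whether a router sits between you and the address. A sketch (function name hypothetical):

```javascript
// Is an IPv4 dotted-quad in the RFC 1918 private ranges?
// This is NOT the same as "local": a publicly-routable address can sit on
// your LAN, and an RFC 1918 address can be several router hops away.
function isRfc1918(ip) {
  const [a, b] = ip.split(".").map(Number);
  if (a === 10) return true;                        // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12
  if (a === 192 && b === 168) return true;          // 192.168.0.0/16
  return false;
}

console.log(isRfc1918("192.168.1.10")); // → true
console.log(isRfc1918("172.32.0.1"));   // → false (just outside 172.16/12)
console.log(isRfc1918("8.8.8.8"));      // → false
```

Which is why "local" really has to be defined against the routing table (no next-hop router), as the parent says, not against address ranges.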
> Can anyone explain to me if there is any way to determine whether an inbound IPv6 address is "local"?
No, because it's the antithesis of IPv6 which is supposed to be globally routable. The concept isn't supposed to exist.
Not to mention Google can't even agree on the meaning of "local" - the article states they completely changed the meaning of "local" to be a redefinition of "private" halfway through brainstorming this garbage.
Creating a nonstandard, arbitrary security boundary based on CIDR subnets as an HTTP extension is completely bonkers.
As for your application, you're going about it all wrong. Just assume your application is public-facing and design your security with that in mind. Too many applications make this mistake and design saloon-door security into their "local only" application which results in overreaction such as the insanity that is the topic of discussion here.
".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
It's very useful to have this additional information in something like a network address. I agree, you shouldn't rely on it, but IPv6 hasn't clicked with me yet, and the whole "globally routable" concept is one of the reasons. I hear that, and think, no, I don't agree.
Globally routable doesn't mean you don't have firewalls in between filtering and blocking traffic. You can be globally routable but drop all incoming traffic at what you define as a perimeter. E.g. the WAN interface of a typical home network.
The concept is frequently misunderstood in that IPv4 consumer SOHO "routers" often combine a NAT and routing function with a firewall, but the functions are separate.
It is widely understood that my SOHO router provides NAT for IPv4, and routing+firewall (but no NAT) for IPv6. And it provides absolutely no configurability for the IPv6 firewall (which would be extremely difficult anyway) because all of the IPv6 addresses allocated to devices on my home network are impermanent and short-lived.
You can make those IPv6 IP addresses permanent and long-lived. They don't need to be short-lived addresses.
Also, I've seen lots of home firewalls which will identify a device based on MAC address for match criteria and let you set firewall rules based on those, so even if their IPv6 address does change often it still matches the traffic.
There's something about IPv6 addresses being as big as a GUID that makes them hard to remember. They seem like random gibberish, like a hash. But I can look at an IPv4 address like a phone number, and by looking tell approximately its rules.
Maybe there's a standard primer on how to grok IPv6 addresses and set up your network, but I missed it.
Also, devices typically take 2 or 4 IPv6 addresses for some reason, so keeping on top of them is even harder.
When just looking at hosts in your network with their routable IPv6 address, ignore the prefix. This is the first few segments, probably the first four in most cases for a home network (a /64 network). When thinking about firewall rules or having things talk to each other, ignore things like "temporary" IP addresses.
Ignore all those temporary ones. Ignore the longer one. You can ignore 2600:1700:63c9:a421, as that's going to be the same for all the hosts on your network, so you'll see it pretty much everywhere. So, all you really need to remember if you're really trying to configure things by IP address is this is whatever-is-my-prefix::2000.
But honestly, just start using DNS. Ignore IP addresses for most things. We already pretty much ignore MAC addresses and rely on other technologies to automatically map IP to MAC for us. It's pretty simple to get a halfway competent DNS setup going, many home routers will have things going by default, and it's just way easier to do things in general. I don't want to have to remember my printer is at 192.168.20.132 or 2600:1700:63c9:a421::a210, I just want to go to http://brother or ipp://brother.home.arpa and have it work.
But as you can see this is still an explosion of complexity for the home user. More than 4x (32 --> 128), feels like x⁴ (though might not be accurate).
I like your idea of "whatever..." There should be a "lan" variable and status could be shown factored, like "$lan::2000" to the end user perhaps.
I do use DNS all the time, like "printer.lan", "gateway.lan", etc. But don't think I'm using in the router firewall config. I use openwrt on my router but my knowledge of ipv6 is somewhat shallow.
The device is an IoT guitar pedal that runs on a Raspberry Pi. In performance, on stage, a Web UI runs on a phone or tablet over a hotspot connection on the Pi, which is NOT internet-connected (since there's no expectation that there's a Wi-Fi router or internet access at a public venue). OR the Pi runs on a home Wi-Fi network, using a browser-hosted UI on a laptop or desktop. OR, I suppose, over an away-from-home Wi-Fi connection at a studio or rehearsal space.
It is not reasonable to expect my users to purchase domain names and certs for their $60 guitar pedal, which are not going to work anyway, if they are playing away from their home network. Nor is ACME provisioning an option because the device may be in use but unconnected to the internet for months at a time if users are using the Pi Hotspot at home.
I can't use password authentication to get access to the Pi web server, because I can't use HTTPS to conceal the password, and browsers disable access to JavaScript crypto APIs on non-HTTPS pages (not that I'd really trust myself to write JavaScript code to obtain auth tokens from the Pi server anyway), so doing auth over an HTTP connection doesn't really strike me as a serious option either.
Nor is it reasonable to expect my non-technical users to spend hours configuring their networks. It's an IoT device that should be just drop and play (maybe with a one-time device setup that takes place on the Pi).
There is absolutely NO way I am going to expose the server to the open internet without HTTPS and password authentication. The server provides a complex API to the client, over which effects are configured and controlled. Way too much surface area to allow anyone on the internet to poke around in. So it uses IPv4 isolation, which is the best I can figure out given the circumstances. It's not like I haven't given the problem serious consideration. I just don't see a solution.
The use case is not hugely different from an IoT toothbrush. But standards organizations have chosen to leave both my (hypothetical) toothbrush and my application utterly defenseless when it comes to security. Is it any surprise that IoT toothbrushes have security problems?
How would YOU see https working on a device like that?
> ".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
Yes. That was my point. It is currently widely ignored.
Well, who can agree on this? Local network, private network, intranet, Tailscale and VPN, Tor? IPv6 ULA, NAT/CGNAT, SOCKS, transparent proxy? What resources are "local" to me and what resources are "remote"?
This is quite a thorny and sometimes philosophical question. Web developers are working at the OSI Layer 6-7 / TCP/IP Application Layer.
Now even cookies and things like CSRF were trying to differentiate "servers" and "origins" and "resources" along the lines of the DNS hierarchy. But this has been fraught with complication, because DNS was not intended to delineate such things, and can't do so cleanly 100% of the time.
Now these proposals are trying to reach even lower in the OSI model - Layer 3, Layer 2. If you're asking "what is on my LAN" or "what is a private network", that is not something that HTTPS or web services are supposed to know. Are you going to ask them to delve into your routing table or test the network interfaces? HTTPS was never supposed to know about your netmask or your next-hop router.
So this is only one reason that there is no elegant solution for the problem. And it has been foundational to the way the web was designed: "given a uniform locator, find this resource wherever it may be, whenever I request it." That was a simpler proposition when the Web was used to publish interesting and encyclopedic information, rather than deliver applications and access sensitive systems.
CORS doesn't stop POST requests, nor a fetch with 'no-cors' supplied in JavaScript; it's just that you can't read the response. That doesn't mean the request isn't sent by the browser.
Then again, a local app can run a server with a proxy that adds CORS headers to the proxied response, and then you can access any site via the JS fetch/XMLHttpRequest interface; even an extension is able to modify headers to bypass CORS.
Bypassing CORS is just a matter of editing headers. What's really hard or impossible to bypass is CSP rules.
Now, the Facebook app itself was running such a CORS proxy server, and even without it a normal HTTP or WebSocket server is enough to send metrics.
Chrome already has a flag to prevent localhost access, but as said, WebSockets can still be used.
Completely banning localhost is detrimental.
Many users rely on self-hosted bookmarking, note-taking, and password-manager-like solutions that depend on a local server.
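The header-rewriting trick described above is worth seeing concretely. A local proxy forwards the request, then overwrites the CORS headers on the way back so the browser lets page JS read the response. This is a sketch of just the rewrite step (function and header values are illustrative; the server wiring around it is omitted):

```javascript
// The rewrite a local CORS-bypassing proxy performs: whatever the upstream
// returned, stamp on permissive CORS headers so the requesting page's JS is
// allowed to read the response. This is why CORS only protects servers that
// aren't cooperating with (or proxied for) the requesting site.
function addCorsHeaders(upstreamHeaders, requestOrigin) {
  return {
    ...upstreamHeaders,
    "access-control-allow-origin": requestOrigin || "*",
    "access-control-allow-methods": "GET, POST, PUT, DELETE, OPTIONS",
    "access-control-allow-headers": "Content-Type, Authorization",
  };
}

const rewritten = addCorsHeaders(
  { "content-type": "application/json" }, // upstream sent no CORS headers
  "http://localhost:3000"
);
console.log(rewritten["access-control-allow-origin"]); // → "http://localhost:3000"
```

Note this only defeats CORS for servers the proxy can reach; as the comment says, CSP is enforced on the page itself and can't be edited away from outside.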
Is the so-called "modern" web browser too large and complex?
I never asked for stuff like "websockets"; I have to disable it. Why?
I still prefer a text-only browser for reading HTML; it does not run Javascript, it does not do websockets, CSS, images or a gazillion other things; it does not even autoload resources
It is relatively small, fast and reliable; very useful
It can read larger HTML files that make so-called "modern" web browsers choke
It does not support online ad services
The companies like Google that force ads on www users are known for creating problems for www users and then proposing solutions to them; why not just stop creating the problems
Do note that since the removal of NPAPI plugins years ago, locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost.
It would be really annoying if this use case was made into an unreasonable hassle or killed entirely. (Alternatively, browser developers could've offered a real alternative, but it's a bit late for that now.)
Doesn't most software just register a protocol handler with the OS? Then a website can hand the browser a zoommtg:// link, which the browser opens with zoom ?
Things like Jupyter Notebooks will presumably be unaffected by this, as they're not doing any cross-origin requests.
And likewise, when a command line tool wants you to log in with oauth2 and returns you to a localhost URL, it's a simple redirect not a cross-origin request, so should likewise be allowed?
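The loopback-redirect flow those CLIs use boils down to: listen on a localhost port, receive one GET from the browser, and pull the authorization code out of the query string. A sketch of the parsing step (the helper name is made up; the URL shape follows the usual loopback pattern, with the port being whatever the tool bound):

```javascript
// Extract the OAuth authorization code from the loopback redirect a CLI
// receives, e.g. http://127.0.0.1:53682/callback?code=...&state=...
// This is a plain top-level navigation to localhost, not a cross-origin
// fetch, which is why proposals like this shouldn't affect it.
function extractAuthCode(redirectUrl) {
  const url = new URL(redirectUrl);
  const err = url.searchParams.get("error");
  if (err) throw new Error(`authorization failed: ${err}`);
  const code = url.searchParams.get("code");
  if (!code) throw new Error("redirect carried no authorization code");
  return code;
}

console.log(extractAuthCode("http://127.0.0.1:53682/callback?code=abc123&state=xyz"));
// → "abc123"
```

The tool then exchanges the code for tokens over its own outbound HTTPS connection; the browser's only role is delivering that one redirect.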
A common use case, whether for 3D printers, switches, routers, or NAS devices is that you've got a centrally hosted management UI that then sends requests directly to your local devices.
This allows you to use a single centrally hosted website as user interface, without the control traffic leaving your network. e.g. Plex uses this.
I don't think this proposal will stop you visiting the management UI for devices like switches and NASes on the local network. You'll be able to visit http://192.168.0.1 and it'll work just fine?
This is just about blocking cross-origin requests from other websites. I probably don't want every ad network iframe being able to talk to my router's admin UI.
1. I open ui.manufacturer.tld in my browser.
2. I click "add device" and enter 192.168.0.230, repeating this for my other local devices.
3. The website ui.manufacturer.tld now shows me a dashboard with aggregate metrics from all my switches and routers, which it collects by fetch(...) ing data from all of them.
The manufacturer's site is just a static page. It stores the list of devices, and the credentials to connect to them, in localStorage.
None of the data ever leaves my network, but I can just bookmark ui.manufacturer.tld and control all of my devices at once.
This is a relatively neat approach providing the same comfort as cloud control, without the privacy nightmare.
If it is truly a static site/page, download it and open it from local disk. And nudge the vendor to release it as an archive which can be downloaded and unpacked locally.
It has a multitude of benefits compared to opening it from the vendor's site each time:
1) It works offline.
2) It works if the vendor's site is down.
3) It works if the vendor restricts access to it due to an acquisition, making it subscription-based, or discontinuing the feature "because fuck you".
4) It works if the vendor goes out of business or pivots to something else.
5) It still works with YOUR devices if the vendor decides to drop support for old ones.
6) It still works with YOUR versions of the firmware if the vendor decides to push new ones with user-hostile features (I'm looking at you, BambuLab).
7) It cannot be compromised, as the copy on the vendor's site can be. If your system is compromised, you have bigger problems than a forged UI for your devices. Even the best of vendors have data breaches these days.
8) It cannot upload your data if vendor goes rogue.
Downsides? If you really need to update it, you need to re-download it manually. Not a big hassle, IMHO.
> If it is truly static site/page, download it and open from local disk. And nudge vendor to release it as archive which can be downloaded and unpacked locally.
Depending on the browser, file:/// is severely limited in what CORS requests are allowed.
And then there's products like Plex, where it's not a static site, but you still want a central dashboard that connects to your local Plex server directly via CORS.
Why can't the local Plex server, which you already need to install and run, provide its own UI to the browser, without third-party sites?
It is an absurd design, IMHO.
I'll never allow this on my network. It looks like a security nightmare. Today it shows me a dashboard (of what? Several of my Plex servers?), tomorrow it is forced to report pirated movies to the police. No, thanx.
HTTPS, basically. I've gone around and around in circles on this for a device I work on. You'd like to present an HTTPS web UI, because a) you'd like encryption between the UI and the device, and b) browsers lock down a lot of APIs, sometimes arbitrarily, behind being in a 'secure context' (ironically, including the cryptography APIs!). But your device doesn't control its IP address or hostname, and may not even have access to the internet, so there's no way for it to have a proper HTTPS certificate, and a self-signed certificate will create all kinds of scary warnings in the browser (which HTTP will not, ironically).
So manufacturers create all kinds of crazy workarounds, like plex's, to be able to present an HTTPS web page that is easily accessible and can just talk to the device. (Except it's still not that simple, because you can't easily make an HTTP request from an HTTPS context, so plex also jumps through a bunch of hoops to co-ordinate some HTTPS certificate for the local device, which requires an internet connection).
It's a complete mess, and browsers really seem to be keen on blocking any 'let HTTPS work for local devices' solution, even if it were just a simple upgrade to the status quo that would otherwise just be treated like HTTP. Nor will they stop putting useful APIs behind a 'secure context' like an HTTPS certificate implies any level of trust except that a page is associated with a given domain name.
(Someone at Plex seems to have finally gotten through to some of the devs at Chrome, and AFAIK there is now a somewhat reasonable flow that would allow e.g. a progressive web app to request access to a local device and communicate with it without an HTTPS certificate, which is something, but still no way to just host the damn UI on the device without limiting the functionality! And it's Chrome-only, maybe still in preview? Haven't gotten around to trying to implement it yet.)
> a) you'd like encryption between the UI and the device
No, I don't. It is on my local network. If the device has a public IP and I want to browse my collection when I'm outside my local network, then I do, but Let's Encrypt solved this problem many years ago (10 years!). If the device doesn't have a public IP but I punch a hole in my NAT or install a reverse proxy on the gateway, then I'm tech-savvy enough to obtain a Let's Encrypt cert for it, too.
> b) browsers lock down a lot of APIs, sometimes arbitrarily
Why does a GUI served from a server co-hosted with the media server need any special APIs at all? It can generate all content on the server side, and basic JS is enough to add visual effects for smooth scrolling, drop-down menus, etc.
It all looks over-engineered for the sake of what? Of imitating a desktop app in the browser? It seems to create more problems than writing a damn native desktop app. In Qt, for example, which will be not-so-native (but more native than any site or Electron) but work on all 3 major OSes and *BSD from a single source tree.
Even on a local network, you should probably not be sending e.g. passwords around in plaintext. Let's Encrypt is a solution for someone who's tech-savvy enough to set it up, not the average user.
> Its all look over-engineered in the sake of what? Of imitating desktop app in browser?
Pretty much, yeah. And not just desktop apps, but mobile apps as well. The overhead of supporting multiple platforms, especially across a broad range of devices, is substantial. Web applications sidestep a lot of that and can give you a polished UX across basically every device, especially e.g. around the installation process (because there doesn't need to be one).
> Depending on the browser, file:/// is severely limited in what CORS requests are allowed.
And it is strange to me too. A local (on-disk) site is like a local Electron app without bundling Chrome inside. Why should it be restricted when an Electron app can do everything? It looks illogical.
How so? It's certainly better than sending all that traffic through the cloud.
While I certainly prefer stuff I can just self-host, compared to the modern cloud-only reality with WebUSB and stuff, this is a relatively clean solution.
That works if you want to launch an application from a website, but it doesn't work if you want to actively communicate with an application from a website.
#1 use case would be a password manager. It would be best if the browser plugin part can ping say, the 1password native app, which runs locally on your pc, and say "Yo I need a password for google.com" - then the native app springs into action, prompts for biometrics, locates the password or offers the user to choose, then returns it directly to the browser for filling.
Sure you can make a fully cloud-reliant PW manager, which has to have your key stored in the browser and fetch your vault from the server, but a lot of us like having that information never have to leave our computers.
Browser extensions play by very different rules than websites already. The proposal is for the latter and I doubt it is going to affect the former, other than MAYBE an extra permanent permission.
you missed the point. password managers are one of the many use cases for this feature; that they just so happen to be mostly implemented as extensions does not mean that the feature is only useful for extensions
It would be amazing if that method of communicating with a local app was killed entirely, because it's been a very very common source of security vulnerabilities.
It's harder to run HTML and XML files with XSLT by just opening them in a web browser (things like NUnit test-run output). To view these properly now -- to get the CSS, XSLT, images, etc. to load -- you typically have to run a web server at that file path.
Note: this is why the viewers for these tools will spin up a local web server.
With local LLMs and AI it is now common to have different servers for different tasks (LLM, TTS, ASR, etc.) running together, where they need to communicate to be able to create services like local assistants. I don't want to have to jump through the hoops of running these through SSL (including getting a self-signed cert trusted), etc. just to be able to run a local web service.
I'm not sure any of that is necessary for what we're talking about: locally-installed software that intends to be used by one or more public websites.
For instance, my interaction with local LLMs involves 0 web browsers, and there's no reason facebook.com needs to make calls to my locally-running LLM.
Running HTML/XML files in the browser should be easier, but at the moment it already has the issues you speak of. It might make sense, IMO, for browsers to allow requests to localhost from websites also running on localhost.
One of the very few security inspired restrictions I can wholeheartedly agree with. I don't want random websites be able to read my localhost. I hope it gets accepted and implemented sooner than later.
OTOH it would be cool if random websites were able to open up and use ports on my computer's network, or even on my LAN, when granted permission of course. Browser-based file- and media sharing between my devices, or games if multi-person.
It appears to not have been enabled by default on my instance of uBlock; it seems a specific filter list is used to implement this [0]; that filter was un-checked; I have no idea why. The contents of that filter list are here [1]; notice that there are exceptions for certain services, so be sure to read through the exceptions before enabling it.
[0] Filter Lists -> Privacy -> Block Outsider Intrusion into Lan
This has the potential to break rclone's OAuth mechanism, as it relies on setting the redirect URL to localhost so that when the OAuth flow completes, rclone (which is running on your computer) receives the callback.
I guess if the permissions dialog is sensibly worded then the user will allow it.
I think this is probably a sensible proposal but I'm sure it will break stuff people are relying on.
IIUC this should not break redirects. This only affects: (1) fetch/xmlhttprequests (2) resources linked to AND loaded on a page (e.g. images, js, css, etc.)
As noted in another comment, this doesn't work unless the responding server provides proper CORS headers allowing the content to be loaded by the browser in that context: for any request to work, the server is either wide open (cors: *) or is cooperating with the requesting code (cors: website.co). The change prevents communication without user authorization.
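The "wide open or cooperating" decision the comment describes can be sketched as a small function. This is a rough approximation of the browser-side check, not the full Fetch/CORS algorithm (it ignores credentials, preflights, and `Vary`):

```python
def cors_allows(response_headers, origin):
    """Rough sketch: may the page at `origin` read this cross-origin response?"""
    allow = response_headers.get("Access-Control-Allow-Origin")
    return allow == "*" or allow == origin

# A wide-open local server:
print(cors_allows({"Access-Control-Allow-Origin": "*"}, "https://website.co"))  # True
# A cooperating server that names the site explicitly:
print(cors_allows({"Access-Control-Allow-Origin": "https://website.co"},
                  "https://website.co"))  # True
# No CORS headers at all -> the page can't read the response:
print(cors_allows({}, "https://website.co"))  # False
```

Note that in the last case the request may still have been *sent*; only reading the response is blocked, which is exactly the gap discussed below.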
Is it possible to do this today with browser extensions? I ran noscript 10 years ago and it was really tough. Kinda felt like being gaslit constantly. I could go back, only enabling sites selectively, but it's not going to work for family. Wondering if just blocking cross origin requests would be more feasible.
I do not understand. Doesn't same-origin prevent all of these issues? Why on earth would you extend some protection to resources based on IP address ranges? It seems like the most dubious criteria of all.
I think the problem is that some local servers are not really designed to be as secure as a public server. For example, a local server may have a stupid unauthenticated endpoint like "GET /exec?cmd=rm+-rf+/*", which is obviously exploitable, and the same-origin policy does not prevent the request from being sent.
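To make the danger concrete, here is a hypothetical sketch of what such a naive endpoint's URL handling amounts to; the `/exec` path and `cmd` parameter are invented for illustration:

```python
from urllib.parse import urlparse, parse_qs

def extract_cmd(url):
    """Return the command a naive unauthenticated /exec handler would run, or None."""
    parts = urlparse(url)
    if parts.path != "/exec":
        return None
    return parse_qs(parts.query).get("cmd", [None])[0]

# A cross-site <img src="http://192.168.0.10/exec?cmd=reboot"> still delivers
# exactly this URL to the device; the same-origin policy only hides the
# response from the attacking page, not the request from the device.
print(extract_cmd("http://192.168.0.10/exec?cmd=reboot"))  # reboot
```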
Browsers allow launching HTTP requests to localhost in the same way they allow my-malicious-website.com to launch HTTP requests to, say, mail.google.com. They can _request_ a resource, but that's about it -- everything else, even many things you would expect to be able to do with the downloaded resource, is blocked by the same-origin policy. [1] Heck, we have a million problems already where file:/// pages cannot access resources from http://localhost, and vice versa.
So what's the attack vector exactly? Why would it be able to attack a local device but not your Gmail account (with your browser happily sending your auth cookies) or file:///etc/passwd?
The only attack I can imagine is that _the mere fact_ of a webserver existing on your local IP is a disclosure of information for someone, but ... what's the attack scenario here again? The only thing they know is you run a webserver, and maybe they can check if you serve something at a specified location.
Does this even allow identifying the router model you use? Because I can think of a bazillion better ways to do it -- including the simple "just assume is the default router of the specific ISP from that address".
> [Same-origin policy] prevents a malicious website on the Internet from running JS in a browser to read data from [...] a company intranet (which is protected from direct access by the attacker by not having a public IP address) and relaying that data to the attacker.
This is specifically in response to the recent Facebook chicanery where their app was listening on localhost and spitting out a unique tracking ID to anything that connects, allowing arbitrary web pages to get the tracking ID and correspondingly identify the user visiting the page.
But this is trying to solve the problem in the wrong place. The problem isn't that the browser is making the connection, it's that the app betraying the user is running on the user's device. The Facebook app is malware. The premise of app store curation is that they get banned for this, right? Make everyone who wants to use Facebook use the web page now.
The existing PNA is easily defeated for bugs that can be triggered with standard cross origin requests. For example PNA does nothing to stop a website from exploiting some EOL devices I have with POST requests and img tags.
The alternative proposal sounds much nicer, but unfortunately was paused due to concerns about devices not being able to support it.
I guess once this is added maybe the proposed device opt in mechanism could be used for applications to cooperatively support access without a permission prompt?
Browsers should just allow per-site settings, or a global allow/deny-all, for permission to reach localhost.
That way the user will be in control.
Can't you just write an extension that blocks access to domains based on origin?
Then a user could add facebook.com as an origin to block all facebook* sites from sending any request to any registered URL, in this case the localhost/127.0.0.1 domains.
Assuming that RFC1918 addresses mean "local" network is wrong. It means "private". Many large enterprises use RFC1918 for private, internal web sites.
One internal site I spend hours a day using has a 10.x.x.x IP address. The servers for that site are on the other side of the country and are many network hops away. It's a big company, our corporate network is very very large.
A better definition of "local IP" would be whether the IP is in the same subnet as the client, i.e. look up the client's own IP and subnet mask and determine if a packet to a given IP would need to be routed through the default gateway.
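The same-subnet definition proposed here is straightforward to express with the stdlib `ipaddress` module; a minimal sketch, assuming you already know the client's own IP and prefix length:

```python
import ipaddress

def is_same_subnet(client_ip, prefix_len, target_ip):
    """True if a packet to target_ip would NOT need the default gateway."""
    net = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(target_ip) in net

print(is_same_subnet("192.168.1.23", 24, "192.168.1.1"))  # True: on-link
print(is_same_subnet("192.168.1.23", 24, "10.0.0.1"))     # False: routed
```

One caveat with this definition: a browser would need access to the interface configuration to know the client's subnet mask, which web content deliberately doesn't have today.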
The computer I use at work (and not only mine, many many of them) has a public IP address. Many internal services are on 10.0.0.0/8. How is this being taken into account?
10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 are all private addresses per RFC 1918 and documents superseding it (5735?). If it's like 66.249.73.128/27 or 164.13.12.34/12, those are "global" IPs.
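Python's `ipaddress` module encodes this classification (note `is_private` is slightly broader than RFC 1918 alone -- it also covers loopback, link-local, and documentation ranges):

```python
import ipaddress

for ip in ["10.1.2.3", "172.16.0.1", "172.32.0.1", "192.168.1.1", "66.249.73.129"]:
    print(ip, ipaddress.ip_address(ip).is_private)
# 10.1.2.3 and 172.16.0.1 and 192.168.1.1 are private;
# 172.32.0.1 falls just outside 172.16.0.0/12, so it is global,
# as is 66.249.73.129.
```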
Yes that's the point: many of our work PCs have global public IPs from something like 128.130.0.0/15 (not this actual block, but something similar), and many internal services are on 10.0.0.0/8. I'm not sure I get exactly how the proposal is addressing this. How does it know that 128.130.0.0/15 is actually internal and should be considered for content loaded from an external site?
The proposal doesn't need to address this because it doesn't even consider the global public IP of 128.130.0.0/15 in your example. If you visit a site on 10.0.0.0/8 that accesses resources on 10.0.0.0/8 it's allowed. But if you visit a random other site on the internet it will be (by default) forbidden to access the internal resource at 10.0.0.0/8.
My reading is this just adds a dialog box before browser loads RFC1918 ranges. At IP layer, a laptop with 128.130.0.123 on wlan0 should not be able to access 10.0.10.123:80, but I doubt they bother to sanity check that. Just blindly assuming all RFC1918 and only RFC1918 are local should do the job for quite a while.
btw, I've seen that kind of network. I was young, and it took me a while to realize that they DHCP assign global IPs and double NAT it. That was weird.
People believe that "my computer" or "my smartphone" has an Internet address, but this is a simplification of how it's really working.
The reality is that each network interface has at least one Internet address, and these should usually all be different.
An ordinary computer at home could be plugged into Ethernet and active on WiFi at the same time. The Ethernet interface may have an IPv4 address and a set of IPv6 addresses, and belong to their home LAN. The WiFi adapter and interface may have a different IPv4 address, and belong to the same network, or some other network. The latter is called "multi-homing".
If you visit a site that reveals your "public" IP address(es), you may find that your public, routable IPv4 and/or IPv6 addresses differ from the ones actually assigned to your interfaces.
In order to be compliant with TCP/IP standards, your device always needs to respond on a "loopback" address in 127.0.0.0/8, and typically this is assigned to a "loopback" interface.
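The loopback point is easy to verify: the whole /8 is loopback, not just 127.0.0.1. A quick check with the stdlib:

```python
import ipaddress

# Any address in 127.0.0.0/8 is loopback, not only 127.0.0.1:
print(ipaddress.ip_address("127.45.6.7").is_loopback)        # True
print(ipaddress.ip_address("126.255.255.255").is_loopback)   # False
```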
A network router does not identify with a singular IP address, but could answer to dozens, when many interface cards are installed. Linux will gladly add "alias" IPv4 addresses to most interface devices, and you'll see SLAAC or DHCPv6 working when there's a link-local and perhaps multiple routable IPv6 addresses on each interface.
The GP says that their work computer has a [public] routable IP address. But the same computer could have another interface, or even the same interface has additional addresses assigned to it, making it a member of that private 10.0.0.0/8 intranet. This detail may or may not be relevant to the services they're connecting to, in terms of authorization or presentation. It may be relevant to the network operators, but not to the end-user.
So as a rule of thumb: your device needs at least one IP address to connect to the Internet, but that address is associated with an interface rather than your device itself, and in a functional system, there are multiple addresses being used for different purposes, or held in reserve, and multiple interfaces that grant the device membership on at least one network.
Many years ago, before it was dropped, IP version 6 had a concept of "site local" addresses, which (if it had applied to version 4) would have encompassed the corporate intranet addresses that you are talking about. Routed within the corporate intranet; but not routed over corporate borders.
Think of this proposal's definition of "local" (always a tricky adjective in networking, and reportedly the proposers here have bikeshedded it extensively) as encompassing both Local Area Network addresses and non-LAN "site local" addresses.
fd00::/8 (within fc00::/7) is still reserved for this purpose (site-local IPv6 addressing).
fc00::/8 (a network block for a registry of organisation-specific assignments for site-local use) is the idea that was abandoned.
Roughly speaking, the following are analogs:
169.254/16 -> fe80::/64 (within fe80::/10)
10/8, 172.16/12, 192.168/16 -> a randomly-generated network (within fd00::/8)
For example, a service I maintain that consists of several machines in a partial WireGuard mesh uses fda2:daf7:a7d4:c4fb::/64 for its peers. The recommendation is no larger than a /48, so a /64 is fine (and I only need the one network, anyway).
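The ULA scheme described here (fd00::/8 plus a 40-bit random global ID giving a /48) can be sketched in a few lines; this is a rough illustration of the RFC 4193 idea, not the RFC's exact pseudo-random algorithm:

```python
import ipaddress
import secrets

def random_ula_prefix():
    """Random /48 ULA: 0xfd prefix byte plus a 40-bit random global ID."""
    global_id = secrets.randbits(40)
    return ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))

# The comment's WireGuard network sits inside the ULA block:
print(ipaddress.ip_network("fda2:daf7:a7d4:c4fb::/64").subnet_of(
    ipaddress.ip_network("fd00::/8")))  # True
print(random_ula_prefix())  # e.g. fdxx:xxxx:xxxx::/48
```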
So in my case, I guess I need to blame the unconfigurable cable router my ISP provided me with? Since there's no way to provide reservations for IPv6 addresses. :-/
Right. OpenWRT, for example, will automatically generate a random /48 within fd00::/8 to use as a ULA (unique local addressing) prefix for its LAN interfaces, and will advertise those prefixes to its clients. You can also manually configure a specific prefix instead.
e.g. Imagine the following OpenWRT setup:
ULA: fd9e:c023:bb5f::/48
(V)LAN 1: IPv6 assignment hint 1, suffix 1
(V)LAN 2: IPv6 assignment hint 2, suffix ffff
Clients on LAN 1 would be advertised the prefix fd9e:c023:bb5f:1::/64 and automatically configure addresses for themselves within it. The router itself would be reachable at fd9e:c023:bb5f:1::1.
Clients on LAN 2 would be advertised the prefix fd9e:c023:bb5f:2::/64 and automatically configure addresses for themselves within it. The router itself would be reachable at fd9e:c023:bb5f:2::ffff.
Clients on LAN 1 could communicate with clients on LAN 2 (firewall permitting) and vice versa by using these ULA addresses, without any IPv6 WAN connectivity or global-scope addresses.
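The prefix arithmetic in this example can be reproduced with `ipaddress` -- carving the /48 ULA into per-LAN /64s and picking the router suffix within each:

```python
import ipaddress

ula = ipaddress.ip_network("fd9e:c023:bb5f::/48")
lans = list(ula.subnets(new_prefix=64))  # all 65536 possible /64s, in order

lan1, lan2 = lans[1], lans[2]        # assignment hints 1 and 2 from the example
print(lan1)          # fd9e:c023:bb5f:1::/64
print(lan2)          # fd9e:c023:bb5f:2::/64
print(lan1[1])       # router on LAN 1 (suffix 1):    fd9e:c023:bb5f:1::1
print(lan2[0xFFFF])  # router on LAN 2 (suffix ffff): fd9e:c023:bb5f:2::ffff
```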
Is it a gross generalization to say that if you're visiting a site whose name resolves to a private IP address, it's a part of the same organizational entity as your computer is?
The proposal here would consider that site local and thus allowed to talk to local. What are the implications? Your employer whose VPN you're on, or whose physical facility you're located in, can get some access to the LAN where you are.
In the case where you're a remote worker and the LAN is your private home, I bet that the employer already has the ability to scan your LAN anyway, since most employers who are allowing you onto their VPN do so only from computers they own, manage, and control completely.
> Is it a gross generalization to say that if you're visiting a site whose name resolves to a private IP address, it's a part of the same organizational entity as your computer is?
Yes. That's a gross generalization.
I support applications delivered via site-to-site VPN tunnels hosted by third parties. In the Customer site the application is accessed via an RFC 1918 address. It is not part of the Customer's local network, however.
Likewise, I support applications that are locally-hosted but Internet facing and appear on a non-RFC1918 IP address even though the server is local and part of the Customer's network.
Access control policy really should be orthogonal to network address. Coupling those two will inevitably lead to mismatches to work around. I would prefer some type of user-exposed (and sysadmin-exposed, centrally controllable) method for declaring the network-level access permitted by scripts (as identified by the source domain, probably).
Don't some internet providers do large-scale NAT (CGNAT), so customers each get a 10.x address instead of a public one? I'm not sure if this is a problem or not. It sounds like it could be.
It wouldn’t be important in this scenario, because what your own IP address is doesn’t matter (and most of us are sitting behind a NAT router too, after all).
It would block a site from scanning your other 10.x peers on the same network segment, thinking they’re “on your LAN” but that’s not a problem in my humble opinion.
I get that this could happen on any OS, and the proposal is from browser maker's perspective. But what about the other side of things, an app (not necessarily browser) talking to arbitrary localhost address?
Basically any inter-process communication (IPC). https://en.wikipedia.org/wiki/Inter-process_communication . There are fancier IPC mechanisms, but none as widely supported as just sending arbitrary data over a socket. It wouldn't surprise me if e.g. this is how Chrome processes communicate with each other.
The split-horizon DNS model mentioned in that article is, to me, insane. Your DNS responses should not change based on what network you are connected to. It breaks so many things. For one, caching breaks, because DNS caching is simplistic and responses are only cached with a TTL: there's no way to tell your OS to associate a cached DNS response with a particular network.
I understand why some companies want this, but doing it on the DNS level is a massive hack.
If I were the decision maker I would break that use case. (Chrome probably wouldn't though.)
> Your DNS responses should not change based on what network you are connected to.
GeoDNS and similar are very broadly used by services you definitely use every day. Your DNS responses change all the time depending on what network you're connecting from.
Further: why would I want my private hosts to be resolvable outside my networks?
Of course DNS responses should change depending on what network you're on.
> but if you're inside our network perimeter and you look up their name, you get a private IP address and you have to use this IP address to talk to them
In the linked article using the wrong DNS results in inaccessibility. GeoDNS is merely a performance concern. Big difference.
> why would I want my private hosts
Inaccessibility is different. We are talking about accessible hosts requiring different IP addresses to be accessed in different networks.
If you have two interfaces connected to two separate networks, you can absolutely have another host connected to the same two networks. That host will have a different IP for each of their interfaces, you could reach it on either, and DNS on each network should resolve to the IP it's reachable on on that network.
Correct, and this is by design. Keeping in mind "hairpin"-style connections often don't work, also by design (leaving a network then hairpinning back into the same network).
Let's say you have an internal employee portal. Accessing it from somewhere internal goes to an address in private space, while accessing it from home gives you the globally routable address. The external route might have more firewalls / WAFs / IPSes etc in the way. There's no other way you could possibly achieve this than by serving a different IP for each of the two networks, and you can do that through DNS, by having an internal resolver and an external resolver.
> but you could just have two different fqdns
Good luck training your employees to use two different URLs depending on what network they originate from.
Especially for universities it's very common to have the same hostname resolve to different servers, and provide different results, depending on whether you're inside the university network or not.
Some sites may require login if you're accessing them from the internet, but are freely accessible from the intranet.
Others may provide read-write access from inside, but limited read-only access from the outside.
Similar situations with split-horizon DNS are also common in corporate intranets or for people hosting Plex servers.
Ultimately all these issues are caused by NAT and would disappear if we switched to IPv6, but that would also circumvent the OP proposal.
No, I haven't seen this before. I have, however, seen the behavior where login is required from the Internet but not on the university network; I had assumed this is based on checking the source IP of the request.
Similarly the use case of read-write access from inside, but limited read-only access from the outside is also achievable by checking the source IP.
But how do you check the source IP if everyone is behind NAT?
Take the following example (all IPs are examples):
1. University uses 10./8 internally, with 10.1./16 and 10.2./16 being students, 10.3./16 being admin, 10.4. being natsci institute, 10.5. being tech institute, etc.
2. You use radius to assign users to IP ranges depending on their group membership
3. If you access the website from one of these IP ranges, group membership is implied, otherwise you'll have to log in.
4. The website is accessible at 10.200.1.123 internally, and 205.123.123.123 externally with a CDN.
Without NAT, this would just work, and many universities still don't use NAT.
But with NAT, the website won't see my internal IP, just the gateway's IP, so it can't verify group membership.
In some situations I can push routes to end devices so they know 205.123.123.123 is available locally, but that's not always an option.
In this example the site is available externally through Cloudflare, with many other sites on the same IP.
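The group-from-source-IP step in this example amounts to a longest-match over a range table; a minimal sketch, with all ranges invented to mirror the example above:

```python
import ipaddress

# Hypothetical campus ranges and the group membership each one implies.
GROUPS = {
    "10.1.0.0/16": "students",
    "10.2.0.0/16": "students",
    "10.3.0.0/16": "admin",
    "10.4.0.0/16": "natsci",
    "10.5.0.0/16": "tech",
}

def implied_group(ip):
    """Group implied by source IP; None means the user has to log in."""
    addr = ipaddress.ip_address(ip)
    for net, group in GROUPS.items():
        if addr in ipaddress.ip_network(net):
            return group
    return None

print(implied_group("10.3.4.5"))     # admin
print(implied_group("203.0.113.9"))  # None: external, or hidden behind NAT
```

With NAT in the path, every internal user shows up as the gateway's IP, so this lookup degenerates to a single answer; that's exactly the breakage described above.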
I usually try to write comments with proper notation and proper example values, but if — like in this instance — I'm interrupted IRL and lose my draft, I'll focus on getting my idea across at all rather than writing the perfect comment. Even if that leads to excessive abbreviations, slightly off example values, inconsistency between you/I/passive voice or past/present/future tense.
In this case the comment you see is the third attempt, ultimately written on a phone (urgh), but I hope the idea came across nonetheless.
The web is currently just “controlled code execution” on your device. This will never work if not done properly. We need a real “web 3.0” where web apps can run natively and containerized, but done correctly, where they are properly sandboxed. This will bring performance and security.
Disagree. Untrusted code was thought to be a meaningful term 20-30 years ago when you ran desktop OSs with big name software like Microsoft Word and Adobe, and games. What happened in reality is that this fence had false positives (ie Meta being one of your main adversaries) and an enormous amount of false negatives (all indie or small devs that would have their apps classified as viruses).
The model we need isn’t a boolean form of trust, but rather capabilities and permissions on a per-app, per-site or per-vendor basis. We already know this, but it’s incredibly tricky to design, retrofit and explain. Mobile OSs did a lot here, even if they are nowhere near perfect. For instance, they allow apps (by default even) to have private data that isn’t accessible from other apps on the same device.
Whether the code runs in an app or on a website isn’t actually important. There is no fundamental reason for the web to be constrained except user expectations and the design of permission systems.
Even IPv6 has local devices. Determining whether that's a /64 or a /56 network may need some work, but the concept isn't all that different. Plus, you have ::1 and fe80::, of course.
Whatever happened to IPv6 site-local and link-local address ranges -- ranges that were specifically defined not to cross router or WAN boundaries? They were in the original IPv6 standards, but don't seem to be implemented or supported. Or at least they aren't implemented or supported by my completely unconfigurable home cable router provided by my ISP.
IPv6, in normal Ethernet/WLAN-style use, requires link-local addresses for functioning neighbour discovery (the equivalent of v4's ARP), so it's very likely working. It's not meant for normal application usage, though. Site-local was phased out in favour of ULA etc.
But if you're not using global addresses you're probably doing it wrong. Global addressing doesn't mean you're globally reachable, confusing addressing vs reachability is the source of a lot of misunderstandings. You can think of it as "everyone gets their own piece of unique address space, not routed unless you want it to be".
It’s insane to me that random internet sites can try to poke at my network or local system for any purpose without me knowing and approving it.
With all we do for security these days this is such a massive hole it defies belief. Ever since I first saw an enterprise thing that just expected end users to run a local utility (really embedded web server) for their website to talk to I’ve been amazed this hasn’t been shut down.
Even in this case, it could be useful to limit the access websites have to local servers within your subnet (/64, etc), which might be a better way to define the “local” network.
(And then corporate/enterprise managed Chrome installs could have specific subnets added to the allow list)
this thing’s leaking. localhost ain’t private if random sites can hit it and get responses. devices still exposing ports like it’s 2003. prompts don’t help, people just click through till it goes away. cors not doing much, it’s just noise now. issue’s been sitting there forever, everyone patches on top but none of these local services even check who’s knocking. just answers. every time.
I don’t see this mentioned anywhere but Safari on iOS already does this. If you try to access a local network endpoint you’ll be asked to allow it by Safari, and the permission is per-site.
A browser can't tell if a site is on the local network. Ambiguous addresses may not be on the local network and conversely a local network may use global addresses especially with v6.
The same thing that makes blocking ports on iOS and macOS so hard: there's barely any firewall on these devices by default, and the ones users may find cause more problems than users will ever think they solve.
Listening on a specific port is one of the most basic things software can possibly do. What's next, blocking apps from reading files?
Plus, this is also about blocking your phone's browser from accessing your printer, your router, or that docker container you're running without a password.
That doesn't seem right. Can't speak to macOS, but on Android every application is sandboxed. Restricting its capabilities is trivial. Android apps certainly ARE blocked from reading files, except for some files in its storage and files the user grants it access to.
Adding two Android permissions would fix this entire class of exploits: "run local network service", and "access local network services" (maybe with a whitelist).
This should not be possible in the first place. There is no legitimate reason for it. Having users grant "consent" is just a way to make it more OK, not to stop it.
Is it just me or am I not seeing any example that isn't pure theory?
And if it is just me, fine I'll jump in - they should also make it so that users have to approve local network access three times. I worry about the theoretical security implications that come after they only approve local network access once.
Personally I had completely forgotten that anyone and anything can do this right now.
TLDR, IIUC: right now, random websites can try accessing content on local IPs. You can blind-load e.g. http://192.168.0.1/cgi-bin/login.cgi from JavaScript, iterating through a gigantic malicious list of such known useful URLs, then grep and send back whatever you want to share with advertisers, or try POSTing backdoors to a printer's update page. No, we don't need that.
Of course, OTOH, many webapps today use localhost access to pass tokens and to talk to cooperating apps, but you only need access to 127.0.0.0/8 for that, which is harder to abuse, so that range could be exempted by default.
I disagree. I know it’s done, but I don’t think that makes it safe or smart.
Require the user to OK it and require the server to send a header with the one _exact_ port it will access. Require that the local server _must_ use CORS and allow that server.
No website not loaded from localhost should ever be allowed to just hit random local/private IPs and ports without explicit permission.
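The server-side half of this gating -- a localhost helper that refuses anything whose Origin header isn't on an explicit allow-list -- could look roughly like this. The origin value is a made-up example:

```python
# Hypothetical cooperating site; in practice this would be configured.
ALLOWED_ORIGINS = {"https://app.example.com"}

def origin_allowed(headers):
    """Only serve requests whose Origin header is explicitly allow-listed."""
    return headers.get("Origin") in ALLOWED_ORIGINS

print(origin_allowed({"Origin": "https://app.example.com"}))  # True
print(origin_allowed({"Origin": "https://evil.example"}))     # False
print(origin_allowed({}))                                     # False: no Origin at all
```

This is the "server must use CORS and name the site" half; the browser permission prompt proposed here would supply the "user must OK it" half.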
Cross-site requests have been built in to the design of the WWW since the beginning. The whole idea of hyperlinking from one place to another, and amalgamating media from multiple sites into a single page, is the essence of the World Wide Web that Tim Berners-Lee conceived at CERN, based on the HyperCard stacks and Gopher and Wais services that had preceded it.
Of course it was only later that cookies and scripting and low-trust networks were introduced.
The WWW was conceived as more of a "desktop publishing" metaphor, where pages could be formatted and multimedia presentations could be made and served to the public. It was later that the browser was harnessed as a cross-platform application delivery front-end.
Also, many sites do carefully try to guard against "linking out" or letting the user escape their walled gardens without a warning or disclaimer. As much as they may rely on third-party analytics and ad servers, most web masters want the users to remain on their site, interacting with the same site, without following an external link that would end their engagement or web session.
CORS != hyperlinks. CORS is about random websites in your browser accessing other domains without your say-so. Websites doing stuff behind your back does feel antithetical to Tim Berners Lee's ideals...
I'm aware of that but obviously there's a huge difference between the user clicking a link and navigating to a page on another domain and the site making that request on the user's behalf for a blob of JS.
Android apps need UDP port binding to function. You can't do QUIC without UDP. Of course you can (should) restrict localhost bound ports to the namespaces of individual apps, but there is no easy solution to this problem at the moment.
If you rely on users having to click "yes", then you're just making phones harder to use because everyone still using Facebook or Instagram will just click whatever buttons make the app work.
On the other hand, I have yet to come up with a good reason why arbitrary websites need to set up direct connections to devices within the local network.
There's the IPv6 argument against the proposed measures, which requires work to determine if an address is local or global, but that's also much more difficult to enumerate than the IPv4 space that some websites try to scan. That doesn't mean IPv4 address shouldn't be protected at all, either. Even with an IPv6-shaped hole, blocking local networks (both IPv4 and local IPv6) by default makes sense for websites originating from outside.
IE did something very similar to this decades ago. They also had a system for displaying details about websites' privacy policies and data sharing. It's almost disheartening to see we're trying to come up with solutions to these problems again.
With WebUSB, you can program a microcontroller without needing to install local software. With Web Bluetooth, you can wirelessly capture data from + send commands to that microcontroller.
As a developer, these standards prevent you from needing to maintain separate implementations for Windows/macOS/Linux/Android.
As a user, they let you grant and revoke sandbox permissions in a granular way, including fully removing the web app from your computer.
Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.
WebUSB and Web Bluetooth are opt-in when the site requests a connection/permission, as opposed to unlimited access by default for native apps. And if you don't want to use them, you can choose a browser that doesn't implement those standards.
What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?
I’m ok with needing non-browser software for those things.
> Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.
Sure, until advertising companies find ways around and through those sandboxes because browser authors want the browsers be capable of more, in the name of a cross platform solution. The more a browser can do, the more surface area the sandbox has. (An advertising company makes the most popular browser, by the way.)
> What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?
There isn’t one, other than maybe video game engines, but it doesn’t matter. OS vendors need to work to make cross-platform software possible; it’s their fault we need a cross-platform solution at all. Every OS is a construct, and they were constructed to be different for arbitrary reasons.
A good app-permission model in the browser is much more likely to happen, but I don’t see that really happening, either. “Too inconvenient for users [and our own in-house advertisers/malware authors]” will be the reason.
MacOS handles permissions pretty well, but it could do better. If something wants local network permission, the user gets prompted. If the user says no, those network requests fail. Same with filesystem access. Linux will never have anything like this, nor will Windows, but it’s what security looks like, probably.
Users will say yes to those prompts ultimately, because as soon as users have the ability to say “no” on all platforms, sites will simply gate site functionality behind the granting of those permissions because the authors of those sites want that data so badly.
The only thing that is really going to stop behavior like this is law, and that is NEVER going to happen in the US.
So, short of laws, browsers themselves must stop doing stupid crap like allowing local network access from sites that aren’t on the local network, and nonsense stuff like WebUSB. We need to give up on the idea that anyone can be safe on a platform when we want that platform to be able to do anything. Browsers must have boundaries.
Operating systems should be the police, probably, and not browsers. Web stuff is already slow as hell, and browsers should be less capable, not more capable for both security reasons and speed reasons.
just the fact that this comes from google is a hard pass for me. they sell so many adwords scams that they clearly do not give a damn about security. “security” from google is just another one of their trojan horses.
project zero is ZERO compared to the millions of little old ladies around the world getting scammed through adwords. only security big g cares about is its own. they have the tools to laser-in on and punish the subtlest of wrongthink on youtube, yet it’s just too tall of an order to focus the same laser on tech support scammers…
Google loves wreaking havoc on web standards. Is there really anything anyone can do about it at this point? The number of us using alternative browsers is a drop in the bucket compared to Chrome's market share.
I understand the idea behind it and am still kinda chewing on the scope of it all. It will probably break some enterprise applications and cause some help desk or group policy/profile headaches for some.
It would be nice to know when a site is probing the local network. But by the same token, here is Google once again putting barriers on self sufficiency and using them to promote their PaaS goals.
They'll gladly narc on your self-hosted application doing what it's supposed to do, but what about the 23 separate calls to Google CDN, ads, fonts, etc. that every website has your browser make?
I tend to believe this particular functionality is no longer of any use to Google, which is why they want to deprecate it to raise the barrier to entry for others.
Idk, I like the idea of my browser warning me when a random website I visit tries to talk to my network. if there's a legitimate reason I can still click yes. This is orthogonal to any ads and data collection.
I agree that any newly proposed standards for the web coming from Google should be met with a skeptical eye — they aren’t good stewards IMO and are usually self-serving.
I’d be interested in hearing what the folks at Ladybird think of this proposal.
On a quick look, isn't this a bit antithetical to the concept of the internet as a decentralized and hierarchical system? You have to route through the public internet to interoperate with the rest of the public internet?
I like this on the first glance. The idea of a random website probing arbitrary local IPs (or any IPs for that matter) with HTTP requests is insane. I wouldn't care if it breaks some enterprise apps or integrations - enterprises could reenable this "feature" via management tools, normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".
This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.
The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
Doesn't CORS just restrict whether the webpage JS context gets to see the response of the target request? The request itself happens anyway, right?
So the attack vector that I can imagine is that JS on the browser can issue a specially crafted request to a vulnerable printer or whatever that triggers arbitrary code execution on that other device. That code might be sufficient to cause the printer to carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.
I think CORS is so hard for us to hold in our heads in large part due to how much is stuffed into the algorithm.
It may send an OPTIONS request, or not.
It may block a request being sent (in response to OPTIONS) or block a response from being read.
It may restrict which headers can be set, or read.
It may downgrade the request you were sending silently, or consider your request valid but the response off limits.
It is a matrix of independent gates essentially.
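The "matrix of gates" above starts with the question of whether a request is CORS-safelisted ("simple") and therefore skips the OPTIONS preflight entirely. A rough sketch of that gate, approximating MDN's description (the real rules also constrain `Accept`/`Range` values and header lengths, omitted here):

```javascript
// Approximate "does this request need a CORS preflight?" check.
// Not exhaustive -- a sketch of the safelist rules, not the full Fetch spec.
const SAFE_METHODS = ['GET', 'HEAD', 'POST'];
const SAFE_HEADERS = ['accept', 'accept-language', 'content-language', 'content-type'];
const SAFE_CONTENT_TYPES = [
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
];

function needsPreflight(method, headers = {}) {
  // Any method outside GET/HEAD/POST always preflights.
  if (!SAFE_METHODS.includes(method.toUpperCase())) return true;
  for (const [name, value] of Object.entries(headers)) {
    const key = name.toLowerCase();
    // Any non-safelisted header triggers a preflight.
    if (!SAFE_HEADERS.includes(key)) return true;
    // Content-Type is safelisted only for three MIME "essences".
    if (key === 'content-type' &&
        !SAFE_CONTENT_TYPES.includes(value.split(';')[0].trim().toLowerCase())) {
      return true;
    }
  }
  return false;
}
```

So a bare GET, or a POST with `text/plain`, goes straight out with no preflight, while `Content-Type: application/json` or any custom header flips the request into the preflighted path.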
Even the language we use is imprecise; CORS itself is not really doing any of this or blocking things. As others pointed out, it's the Same-Origin Policy that is the strict one, and CORS is really an exception engine that lets us punch through that security layer.
No, a preflight (OPTIONS) request is sent by the browser before the request initiated by the application. I would be surprised if the client browser can control this OPTIONS request beyond the URL. I am curious if anyone else has any input on this topic though.
Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.
An <img> tag with a local-network src triggers a local network GET request without any CORS involvement.

I remember back in the day you could embed <img src="http://someothersite.com/forum/ucp.php?mode=logout"> in your forum signature and screw with everyone's sessions across the web
Haha I remember that. The solution at the time for many forum admins was to simply state that anyone found to be doing that would be permabanned. Which was enough to make it stop completely, at least for the forums that I moderated. Different times indeed.
Or you could just make the logout route POST-only. Problem solved.
<img src="C:\con\con"></img>
It's essentially the same, as many apps use HTTP server + html client instead of something native or with another IPC.
The expectation is that this should not work: well-behaved network devices shouldn't accept a blind GET like this for destructive operations. There are plenty of other good reasons for that. There's no real alternative unless you're also going to block page redirects and links to these URLs, which trigger a similar GET. That would make it impossible to access any local network page without typing it manually.
While it clearly isn't a hard guarantee, in practice it does seem to generally work, as these have been known issues for decades without apparent massive exploits. That CORS restrictions block probing (no response provided) does help make this all significantly more difficult.
"No true Scotsman allows GETs with side effects" is not a strong argument
It's not just HTTP where this is a problem. There are enough http-ish protocols where protocol smuggling confusion is a risk. It's possible to send chimeric HTTP requests at devices which then interpret them as a protocol other than http.
Yes, which is why web browsers, way back even in the Netscape Navigator era, had a blacklist of ports that are disallowed.
Exactly you can also trigger forms for POST or DELETE etc. this is called CSRF if the endpoint doesn't validate some token in the request. CORS only protects against unauthorized xhr requests. All decades old OWASP basics really.
> Exactly you can also trigger forms for POST or DELETE etc
You can't do a DELETE from a form. You have to use ajax, and a cross-origin DELETE needs a preflight.
To nitpick, CSRF is not the ability to use forms per se, but relying solely on the existence of a cookie to authorize actions with side effects.
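The nitpick above is the core of CSRF: cookies ride along on cross-site requests, so a cookie check alone authorizes the attacker's forged request too. A minimal sketch of the fix, with illustrative field names (not any particular framework's API):

```javascript
// Cookie-only authorization is forgeable cross-site; requiring a CSRF token
// that only same-origin pages could have embedded closes the hole.
// `req` and `session` shapes here are illustrative.
function authorize(req, session) {
  // Cookie check alone: a cross-site form post passes this too,
  // because the browser attaches cookies automatically.
  if (req.cookies.session !== session.id) return false;
  // The token lives in the page body, which a cross-origin attacker
  // cannot read, so a forged request can't supply it.
  return req.body.csrfToken === session.csrfToken;
}
```

SameSite cookies (mentioned elsewhere in the thread) attack the same problem from the browser side, by not attaching the cookie to cross-site requests in the first place.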
That highly ranked comments on HN (an audience with way above average-engineer interest in software and security) get this wrong kinda explains why these things keep being an issue.
I don't know why you are getting downvoted; you do have a point. Some of the comments appear to know what CORS headers are, but neither their purpose nor how they relate to CSRF, which is worrying. It's not meant as disparaging. My university taught a course on OWASP thankfully, otherwise I'd probably also be oblivious.
If you're going cross-domain with XHR, I'd hope you're mostly sending json request bodies and not forms.
Though to be fair, a lot of web frameworks have methods to bind named inputs that allow either.
This misses the point a bit. CSRF usually applies to people who want only same-domain requests and don't realize that cross-domain is an option for the attacker.
On the modern web it's much less of an issue, since SameSite cookies are the default.
The idea is, the malicious actor would use a 'simple request' that doesn't need a preflight (basically, a GET or POST request with form data or plain text), and manage to construct a payload that exploits the target device. But I have yet to see a realistic example of such a payload (the paper I read about the idea only vaguely pointed at the existence of polyglot payloads).
There doesn't need to be any kind of "polyglot payload". Local network services and devices that accept only simple HTTP requests are extremely common. The request will go through and alter state, etc.; you just won't be able to read the response from the browser.
Exactly. People who are answering must not have been aware of “simple” requests not requiring preflight.
I can give an example of this; I found such a vulnerability a few years ago now in an application I use regularly.
The target application in this case was trying to validate incoming POST requests by checking that the incoming MIME type was "application/json". Normally, you can't make unauthorized XHR requests with this MIME type as CORS will send a preflight.
However, because of the way it was checking for this (checking if the Content-Type header contained the text "application/json"), it was relatively easy to construct a new Content-Type header that bypasses CORS:
Content-Type: multipart/form-data; boundary=application/json
It's worth bearing in mind in this case that the payload doesn't actually have to be form data - the application was expecting JSON, after all! As long as the web server doesn't do its own data validation (which it didn't in this case), we can just pass JSON as normal.
This was particularly bad because the application allowed arbitrary code execution via this endpoint! It was fixed, but in my opinion, something like that should never have been exposed to the network in the first place.
This is a great example; thanks.
Oh, you can only send arbitrary text or form submissions. That’s SO MUCH.
Correct.
Here's a formal definition of such simple requests, which may be more expansive than one might expect: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
You don't even need to be exploiting the target device, you might just be leaking data over that connection.
https://news.ycombinator.com/item?id=44169115
Yeah, I think this is the reason this proposal is getting more traction again.
I think that is because it is so old that it's basically old news and mostly mitigated.
https://www.kb.cert.org/vuls/id/476267 is an article from 2001 on it.
Some devices don't bother to limit the size of the GET, which can enable a DoS attack at least, a buffer overflow at worst. But I think the most typical vector is a form-data POST, which isn't CSRF-protected because "it's on localhost so it's safe, right?"
I've been that sloppy with dev servers too. Usually not listening on port 80 but that's hardly Ft Knox.
It can send a json-rpc request to your bitcoin node and empty your wallet
Do you know of any such node that doesn't check the Content-Type of requests and also has no authentication?
> No, a preflight (OPTIONS) request is sent by the browser first prior to sending the request initiated by the application.
Note: preflight is not required for any type of request that browser js was capable of making prior to CORS being introduced. (Except for local network)
So a simple GET or POST does not require OPTIONS, but if you set a header it might require OPTIONS (unless it's a header you could set in the pre-CORS world).
You’re forgetting { mode: 'no-cors' }, which makes the response opaque (no way to read the data) but completely bypasses the CORS preflight request and header checks.
This is missing important context. You are correct that preflight will be skipped, but there are further restrictions when operating in this mode. They don't guarantee your server is safe, but they do force operation under a "safer" subset of verbs and header fields.
The browser will restrict the headers and methods of requests that can be sent in no-cors mode. (silent censoring in the case of headers, more specifically)
Anything besides GET, HEAD, POST will result in an error in browser, and not be sent.
All headers will be dropped besides the CORS safelisted headers [0]
And Content-Type must be one of urlencoded, form-data, or text-plain. Attempting to use anything else will see the header replaced by text-plain.
[0] https://developer.mozilla.org/en-US/docs/Glossary/CORS-safel...
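The three restrictions listed above can be sketched as code (an approximation built from the description here, not the full Fetch spec, which also constrains header values and lengths):

```javascript
// Approximate no-cors request filtering: disallowed methods error out,
// non-safelisted headers are silently dropped, and a non-simple
// Content-Type is replaced with text/plain.
const NO_CORS_METHODS = ['GET', 'HEAD', 'POST'];
const SAFELISTED = ['accept', 'accept-language', 'content-language', 'content-type'];
const SIMPLE_CONTENT_TYPES = [
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
];

function filterNoCorsRequest(method, headers) {
  if (!NO_CORS_METHODS.includes(method.toUpperCase())) {
    // Anything besides GET/HEAD/POST is an error in no-cors mode.
    throw new TypeError(`'${method}' is not allowed in no-cors mode`);
  }
  const kept = {};
  for (const [name, value] of Object.entries(headers)) {
    const key = name.toLowerCase();
    if (!SAFELISTED.includes(key)) continue; // silently censored
    if (key === 'content-type' &&
        !SIMPLE_CONTENT_TYPES.includes(value.split(';')[0].trim().toLowerCase())) {
      kept[key] = 'text/plain'; // non-simple Content-Type gets downgraded
      continue;
    }
    kept[key] = value;
  }
  return kept;
}
```

Note the silent parts: a dropped `X-Custom` header or a downgraded `Content-Type` produces no error at all, which is exactly why this mode surprises people.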
That’s just not that big of a restriction. Anecdotally, very few JSON APIs I’ve worked with have bothered to check the request Content-Type. (“Minimal” web frameworks without built-in security middleware have been very harmful in this respect.) People don’t know about this attack vector and don’t design their backends to prevent it.
I agree that it is not a robust safety net. But in the instance you’re citing, thats a misconfigured server.
What framework allows you to setup a misconfigured parser out of the box?
I don't mean that as a challenge, but as a server framework maintainer I'm genuinely curious. In Express we would definitely allow people to opt into this, but you have to explicitly make the choice to go and configure body-parser.json to accept all content types via a noop function for type checking.
Meaning, it's hard to get into this state!
Edit to add: there are myriad ways to misconfigure a webserver to make it insecure without realizing. But IMO that is the point of using a server framework! To make it less likely devs will footgun via sane defaults that prevent these scenarios unless someone really wants to make a different choice.
SvelteKit for sure, and any other JS framework that uses the built-in Request class (which doesn’t check the Content-Type when you call json()).
I don’t know the exact frameworks, but I consume a lot of random undocumented backend APIs (web scraper work) and 95% of the time they’re fine with JSON requests with Content-Type: text/plain.
I think you’re making those restrictions out to be bigger than they are.
Does no-cors allow a nefarious company to send a POST request to a local server, running in an app, containing whatever arbitrary data they’d like? Yes, it does. When you control the server side the inability to set custom headers etc doesn’t really matter.
My intent isn't to convince people this is a safe mode, but to share knowledge in the hope someone learns something new today.
I didn't mean it to come across that way. The spec does what the spec does; we should all be aware of it so we can make informed decisions.
Thankfully no-cors also restricts most headers, including setting content-type to anything but the built-in form types. So while CSRF doesn't even need a click because of no-cors, it's still not possible to do csrf with a json-only api. Just be sure the server is actually set up to restrict the content type -- most frameworks will "helpfully" accept and convert form-data by default.
It depends. GET requests are assumed not to have side effects, so often don't have a preflight request (although there are cases where it does). But of course, not all sites follow those semantics, and it wouldn't surprise me if printer or router firmware used GETs to do something dangerous.
Also, form submission famously doesn't require CORS.
There is a limited, but potentially effective, attack surface via URL parameters.
I can confirm that local websites that don't implement CORS via the OPTIONS request cannot be browsed with mainstream browsers. Does nothing to prevent non-browser applications running on the local network from accessing your website.
As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.
If somebody would fix the "no HTTPS for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-HTTPS websites also have no access to browser crypto APIs, so roll-your-own auth (the horror) isn't an option either. Frustrating!
I don't believe this is true? As others have pointed out, preflight OPTIONS requests only happen for non-simple requests. CORS response headers are still required to read a cross-domain response, but that still leaves a huge window for a malicious site to send side-effectful requests to local network devices running some badly implemented web server.
[edit]: I was wrong. Just tested that a moment ago. It turns out NOT to be true. My web server during normal operation is currently NOT getting OPTIONS requests at all.
Wondering whether I triggered CORS requests when I was struggling with IPv6 problems. Or maybe it triggers when I redirect index.html requests from IPv6 to IPv4 addresses. Or maybe I got caught by the earlier rollout of version one of this proposal? There was definitely a time while I was developing pipedal when none of my images displayed because my web server wasn't doing CORS. But whatever my excuse might be, I was wrong. :-/
Or simply perform a timing attack as a way of exploring the local network, though I'm not sure if the browser implementation immediately returns after the request is made (e.g. when the fetch API is called) but before the response is received. Presumably it doesn't, which would expose it to timing attacks as a way of exploring the network.
eBay, for one, has been (was?) fingerprinting users like this for years.
https://security.stackexchange.com/questions/232345/ebay-web...
Almost every JS API for making requests is asynchronous, so they do return after the request is made. The exception is synchronous XHR calls, but I'm not sure if those are still supported.
... Anyhow, I think it doesn't matter, because you can listen for the error/failure of most async requests. CORS errors are equivalent to network errors (the browser tells the JS it got status code 0 with no further information), but the timing of that could lead to some sort of inference? Hard to say what that would be though. Maybe if you knew the target webserver was slow but would respond to certain requests, a slower failed local request could mean it actually reached a target device.
That said, why not just fire off simple http requests with your intended payload? Abusing the csrf vulnerabilities of local network devices seems far easier than trying to make something out of a timing attack here.
This is also a misunderstanding. CORS only applies to the Layer 7 communication. The rest you can figure out from the timing of that.
Significant components of the browser, such as Websockets have no such restrictions at all
Won't the browser still append the "Origin" field to WebSocket requests, allowing servers to reject them?
yes, and that's exactly how discord's websocket communication checks work (allowing them to offer a non-scheme "open in app" from the website).
they also had some kind of RPC websocket system for game developers, but that appears to have been abandoned: https://discord.com/developers/docs/topics/rpc
A WebSocket starts as a normal HTTP request, so it would be subject to CORS if the initial request were the kind that requires it (e.g. if it were a POST).
websockets aren't subject to CORS, they send the initiating webpage in the Origin header but the server has to decide whether that's allowed.
Unfortunately, the initial WebSocket HTTP request is defined to always be a GET request.
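Since the handshake is always a GET and CORS never applies, the only gate is the server-side Origin check the comments above describe. A minimal sketch (hostnames are illustrative):

```javascript
// Server-side Origin allowlist for a WebSocket handshake: the browser
// always sends the initiating page's origin, but it's entirely up to
// the server to reject unwanted ones. Hostnames here are made up.
const ALLOWED_ORIGINS = ['https://app.example.com'];

function originAllowed(originHeader) {
  // Missing Origin (e.g. a non-browser client) is a policy decision;
  // here we reject it to keep the sketch strict.
  if (!originHeader) return false;
  return ALLOWED_ORIGINS.includes(originHeader.toLowerCase());
}
```

A local device that skips this check will happily accept WebSocket connections initiated by any web page the user happens to visit.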
CORS doesn’t protect you from anything. Quite the opposite: it _allows_ cross origin communication (provided you follow the spec). The same origin policy is what protects you.
I made a CTF challenge 3 years ago that shows why local devices are not so protected. exploitv99 bypasses PNA with timing, as the other commenter points out.
https://github.com/adc/ctf-midnightsun2022quals-writeups/tre...
> The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
False. CORS only gates non-simple requests (via OPTIONS); simple requests are sent regardless of CORS config, there is no gating whatsoever.
How would Facebook do that? They scan all likely local ranges for what could be your phone, and have a web server running on the phone? That seems more like a problem of allowing the phone app to start something like that and keep it running in the background.
Webrtc allows you to find the local ranges.
Typically there are only 256 IPs, so a scan of them all is almost instant.
Do you have a link talking about those Facebook's recent tricks? I think I missed that story, and would love to read an analysis about it
https://news.ycombinator.com/item?id=44169115
I think this can be circumvented by DNS rebinding, though your requests won't have the authentication cookies for the target, so you would still need some kind of exploit (or a completely unprotected target).
How? The browser would still have to resolve it to a final IP right?
I'm not sure what you mean but this explains it: https://github.blog/security/application-security/localhost-...
CORS prevents the site from accessing the response body. In some scenarios, a website could, for example, blindly attempt to authenticate to your router and modify settings by guessing your router brand/model and password.
Is this kind of attack actually in scope for this proposal? The explainer doesn't mention it.
> Local network devices are protected from random websites by CORS
C'mon. We all know that 99% of the time, Access-Control-Allow-Origin is set to * and not to the specific IP of the web service.
Also, CORS is not in the control of the user while the proposal is. And that's a huge difference.
> but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
This isn't going to help for that. The locally installed app, and the website, can both, independently, open a connection to a 3rd party. There's probably enough fingerprinting available for the 3rd party to be able to match them.
THE MYTH OF "CONSENSUAL" REQUESTS
Client: I consent
Server: I consent
User: I DON'T!
ISN'T THERE SOMEBODY YOU FORGOT TO ASK?
Does anyone remember when the user-agent was an agent of the user?
This sounds crazy to me. Why should websites ever have access to the local network? That presents an entirely new threat model for which we don’t have a solution. Is there even a use case for this for which there isn’t already a better solution?
I've used https://pairdrop.net/ before to share files between devices on the same LAN. It obviously wouldn't have to be a website, but it's pretty convenient since all my devices I wanted to share files on already have a browser.
Same use case, but I remember getting approval prompts ( though come to think of it, those were not mandated, but application specific prompts to ensure you consciously choose to share/receive items ). To your point, there are valid use cases for it, but some tightening would likely be beneficial.
Not a local network, but localhost example: due to the lousy private certificate capability APIs in web browsers, this is commonly used for signing with electronic IDs for countries issuing smartcard certificates for their citizens (common in Europe). Basically, a web page would contact a web server hosted on localhost which was integrated with PKCS library locally, providing a signing and encryption API.
One of the solutions in the market was open source up to a point (Nowina NexU), but it seems it's gone from GitHub
For local network, you can imagine similar use cases — keep something inside the local network (eg. an API to an input device; imagine it being a scanner), but enable server-side function (eg. OCR) from their web page. With ZeroConf and DHCP domain name extensions, it can be a pretty seamless option for developers to consider.
>Why should websites ever have access to the local network?
It's just the default. So far, browsers haven't really given different IP ranges different security.
evil.com is allowed to make requests to bank.com. Similarly, evil.com is allowed to make requests to foo.com even if foo.com DNS resolves to 127.0.0.1.
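The distinction browsers haven't drawn, and which this proposal introduces, is bucketing the resolved address by range. A rough IPv4-only sketch of that bucketing (loopback vs. RFC 1918 private vs. public; IPv6 and link-local ranges omitted for brevity):

```javascript
// Rough IPv4 address-space classification, approximating the loopback /
// private / public split the proposal draws. Not a full implementation:
// link-local (169.254/16), CGNAT, and IPv6 are ignored.
function addressSpace(ip) {
  const [a, b] = ip.split('.').map(Number);
  if (a === 127) return 'loopback';                 // 127.0.0.0/8
  if (a === 10) return 'private';                   // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return 'private'; // 172.16.0.0/12
  if (a === 192 && b === 168) return 'private';     // 192.168.0.0/16
  return 'public';
}
```

Under the proposal, a request from a 'public' page to a 'private' or 'loopback' address is what would trigger the new permission gate, regardless of what hostname it was reached through.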
> It's just the default. So far, browsers haven't really given different IP ranges different security.
I remember having "zone" settings in Internet Explorer 20 years ago, and ISTR it did IP ranges as well as domains. Don't think it did anything about cross-zone security though.
> Is there even a use case for this for which there isn’t already a better solution?
I deal with a third-party hosted webapp that enables extra functionality when a webserver hosted on localhost is present. The local webserver exposes an API allowing the application to interact more closely with the host OS (think locally-attached devices and servers on the local network). If the locally-installed webserver isn't present, the hosted app hides the extra functionality.
Limiting browser access to the localhost subnet (127.0.0.0/8) would be fine with me, as a sysadmin, so long as I have the option to enable it for applications where it's desired.
>That presents an entirely new threat model for which we don’t have a solution.
What attack do you think doesn't have a solution? CSRF attacks? The solution is CSRF tokens, or checking the Origin header, same as how non-local-network sites protect against CSRF. DNS rebinding attacks? The solution is checking the Host header.
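The Host-header defense against DNS rebinding mentioned above can be sketched in a few lines. With rebinding, the attacker's hostname resolves to your device's IP, but the browser still sends that attacker hostname in the Host header, so the device can reject it (names below are illustrative):

```javascript
// A local device only serves requests whose Host header matches a name
// it actually answers to. A DNS-rebound request arrives with the
// attacker's hostname in Host, so it's rejected. Names are made up.
const SERVED_HOSTS = ['printer.local', '192.168.1.50', 'localhost'];

function hostHeaderOk(hostHeader) {
  if (!hostHeader) return false;
  const host = hostHeader.split(':')[0].toLowerCase(); // strip any :port
  return SERVED_HOSTS.includes(host);
}
```

Combined with CSRF tokens or Origin checks for state-changing routes, this covers the two attack classes the comment lists.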
>for which we don’t have a solution
It's called ZTA, Zero Trust Architecture. Devices shouldn't assume the LAN is secure.
Exactly, LAN is not a "secure" network field. Authenticate everything from everywhere all the time
You got grandma running ZTA now?
This is a problem impacting mass users, not just technical ones.
> normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".
MacOS currently does this (per app, not per site) & most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.
Do we have any evidence that most users just click yes?
My parents who are non-technical click no by default to everything, sometimes they ask for my assistance when something doesn't work and often it's because they denied some permission that is essential for an app to work e.g. maybe they denied access to the microphone to an audio call app.
Unless we have statistics, I don't think we can make assumptions.
The amount of "malware" infections I've responded to over the years that involved browser push notifications to Windows desktops is completely absurd. Chrome and Edge clearly ask for permissions to enable a browser push.
The moment a user gets this permissions request, as far as I can tell they will hit approve 100% of the time. We have one office where the staff have complained that it's impossible to look at astrology websites without committing to desktop popups selling McAfee. Which implies those staff, having been trained to hit "no", believe it's impossible to do.
(yes, we can disable with a GPO, which I heavily promote, but that org has political problems).
As a counter example, I think all these dialogs are annoying as hell and click yes to almost everything. If I’m installing the app I have pre-vetted it to ensure it’s marginally trustworthy.
I have no statistics, but I wouldn't consider older parents the typical case here. My parents never click yes on anything, but my young colleagues in non-engineering roles in my office do. And I'd say even a decent % of the engineering colleagues do too, especially the vibe coders. And they all spend a lot more time on the computer than my parents.
Interesting parallel between the older parents who (may have finally learned to) deny, and young folks, supposed digital natives, a majority of whom don't really understand how computers work.
People accept permission prompts from apps because they consciously downloaded the app and generally have an idea about the developer and what the app does. If a social media app asks for permission to your photos it's easy to understand why; same with a music streamer wanting to connect to your smart speaker.
A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.
"Please accept the [tech word salad] popup to verify your identity"
Maybe this won't fool you, but it would trick 90% of internet users. (And even if it was 20% instead of 90%, that's still way too much.)
I have seen it posed as 'This site has bot protection. Confirm that you are not a bot by clicking yes', trying to mimic the modern Cloudflare / Google captchas.
To be clear: implementing this in browser on a per site basis would be a massive improvement over in-OS/per-app granularity. I want this popup in my browser.
But I was just pointing out that, while I'll make good use of it, it still probably won't offer sufficient protection (from themselves) for most.
And annoyingly, for some reason it does not remember this decision properly. Chrome asks me about local access every few weeks, it seems.
Yes, as a Chromecast user, please do give me a break from the prompts, macOS – or maybe just show them for Airplay with equal frequency and see how your users like that.
Problem is: without allowing it, web UIs like Synology's won't work, since they require your browser to connect to the local network... as it is, it's not great.
Why? I’d guess requests from a local network site to itself (maybe even to others on the same network) will be allowed.
With the proposal in the OP, I would think so yes. But the MacOS setting mentioned directly above is blanket per-app at the OS level.
yes, but I'm answering to the comment that explains how currently macos works
This proposal is for websites outside your network contacting inside your network. I assume local IPs will still work.
I'm answering to the comment that explains how currently macos works
Note that the proposal also covers loopbacks, so domain names for local access would also still work.
I can't believe that anyone still thinks a popup permission modal offers any type of security. Windows UAC has shown quite definitively that users will always click through any modal in their way without thought or comprehension.
Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.
I don't think anyone's under the impression that this is a perfect solution. But it's better than nothing, and the options are this, nothing, or a security barrier that can't be bypassed with a permission prompt. And it was determined that the latter would break too many existing sites that have legitimate (i.e., doing something the end user actively wants) reason to talk to local devices.
I wonder how much of that is on the modal itself. Imagine instead an alert that said "Blocked an attempt to talk to your local devices, since this is generally a dangerous thing for websites to do. <dismiss>. To change this for this site, go to settings/site-security", making approval a more annoying multi-click deliberate affair and defaulting the knee-jerk single-click dismissal to the safer option of refusal.
Maybe. But eventually they will learn. In the meantime, other users, who at least try to stay somewhat safe ( if it is even possible these days ), can make appropriate adjustments.
I think it does, in many (but definitely not all) contexts.
For example, it's pretty straightforward what camera, push notification, or location access means. Contact sharing is already a stretch ("to connect you with your friends, please grant...").
"Local network access"? Probably not.
This is so true. The modern Mac is a sea of Allow/Don't Allow prompts, mixed with the slightly more infantilizing alternative of the "Block" / "Open System Preferences" where you have to prove you know what you're doing by manually browsing for the app to grant the permission to, to add it to the list of ones with whatever permission.
They're just two different approaches with the same flaw: People with no clue how tech works cannot completely protect themselves from any possible attacker, while also having sophisticated networked features. Nobody has provided a decent alternative other than some kind of fully bubble-wrapped limited account using Group Policies, to ban all those perms from even being asked for.
> The modern Mac is a sea of Allow/Don't Allow prompts
Remember when they used to mock this as part of their marketing?
https://www.youtube.com/watch?v=DUPxkzV1RTc
Windows Vista would spawn a permissions prompt when users did something as innocuous as creating a shortcut on their desktop.
Microsoft deserved to be mocked for that implementation.
macOS shows a permission dialog when I plug my AirPods in to charge. I have no idea what I'm even giving permission for, but it pops up every time.
Asking you if you trust a device before opening a data connection to it is simply not the same thing as asking the person who just created a shortcut if they should be allowed to do that.
How do you know the person created the shortcut and not some malware trying to get a user to click on an executable and elevate permissions?
I once encountered malware on my roommate’s Windows 98 system. It was a worm designed to rewrite every image file as a VBS script that would replicate and re-infect every possible file whenever it was clicked or executed. It hid the VBS extensions and masqueraded as the original images.
Creation of a shortcut on Windows is not necessarily innocuous. It was a common first vector to drop malware as users were accustomed to installing software that did the same thing. A Windows shortcut can hide an arbitrary pathname, arbitrary command-line arguments, a custom icon, and more; these can be modified at any time.
So whether it was a mistake for UAC to be overzealous or obstructionist, or Microsoft was already being mocked for poor security, perhaps they weren’t wrong to raise awareness about such maneuvers.
A user creating a shortcut manually is not something that requires a permissions prompt.
If you want to teach users to ignore security prompts, then completely pointless nagging is how you do it.
A better option would be to put Mark Zuckerberg in prison for deploying malware to a massive number of people.
This Web Security lecture by Feross Aboukhadijeh has a great example of Zoom's zero-day from 2019 that allowed anyone to force you to join a zoom meeting (and even cause arbitrary code execution), using a local server:
https://www.youtube.com/watch?v=wLgcb4jZwGM&list=PL1y1iaEtjS...
It's not clear to me from Google's proposal if it also restricts access to localhost, or just your local network - it'd be great if it were both, as we clearly can't rely on third parties to lock down their local servers sufficiently!
edit: localhost won't be restricted:
"Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)"
>edit: localhost won't be restricted:
It will be restricted. This proposal isn't completely blocking all localhost and local IPs. Rather, it's preventing public sites from communicating with localhost and local IPs. E.g.:
* If evil.com makes a request to a local address it'll get blocked.
* If evil.com makes a request to a localhost address it'll get blocked.
* If a local address makes a request to a localhost address it'll get blocked.
* If a local address makes a request to a local address, it'll be allowed.
* If a local address makes a request to evil.com it'll be allowed.
* If localhost makes a request to a localhost address it'll be allowed.
* If localhost makes a request to a local address, it'll be allowed.
* If localhost makes a request to evil.com it'll be allowed.
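The pattern in the list above is that a request is gated only when it crosses from a less-private address space into a more-private one. That rule can be sketched as a small classifier — a rough model for illustration, not the actual Chrome implementation (which works on the proposal's own address-space definitions):

```python
import ipaddress

def address_space(host: str) -> str:
    """Rough classification into the proposal's three address spaces."""
    addr = ipaddress.ip_address(host)
    if addr.is_loopback:
        return "loopback"
    if addr.is_private:  # includes the RFC 1918 ranges, among others
        return "local"
    return "public"

def is_local_network_request(initiator: str, target: str) -> bool:
    """Gated iff the target is in a more-private space than the initiator."""
    order = {"public": 0, "local": 1, "loopback": 2}
    return order[address_space(target)] > order[address_space(initiator)]

# evil.com (public) -> LAN device: gated
print(is_local_network_request("8.8.8.8", "192.168.1.10"))   # True
# loopback -> anything: never gated
print(is_local_network_request("127.0.0.1", "192.168.1.10")) # False
```

Note that Python's `is_private` is broader than RFC 1918 (it also covers link-local and some reserved ranges), so this is only an approximation of the spec's classification.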
Ahh, thanks for clarifying! It's the origin being compared, not the context - of course.
[flagged]
I agree fully with him. I don’t care what part of your job gets harder, or what software breaks if you can’t make it work without unnecessarily invading my privacy. You could tell me it’s going to shut down the internet for 6 months and I still wouldn’t care.
You’ll have to come up with a really strong defense for why this shouldn’t happen in order to convince most users.
It just means I run a persistent client on your device that is permanently connected to the mothership, instead of only when you have your browser open.
I’m so glad most people don’t truly consider software devs to be real engineers, because this is a perfect example of why that word deserves so much more respect than this field gives it.
[flagged]
I like your "you've been *** my ass for 35 years, please feel free to keep doing it for all eternity" attitude.
I'm sure it will require some work, but this is the price of security. The idea that any website I visit can start pinging/exploiting some random unsecured testing web server I have running on localhost:8080 is a massive security risk.
Or probing your local network for vulnerable HTTP servers, like insecure routers or web cameras. localhost is just the tip of the iceberg.
Can you define "local network"? Probably not. Most large enterprises own publicly-routable IP space for internal use. Internal doesn't mean 192.168.0.0/24. foo.corp.example.com could resolve to 9.10.11.12 and still be local. What about IPv6? It's a nonsense argument fraught with corner cases.
> Can you define "local network"?
Sure - a destination is "local" if your machine has a route to that IP which isn't via a gateway.
If your network is large enough that it consists of multiple routed network segments, and you don't have any ACLs between those segments, then yeah, you won't be fully protected by this browser feature. But you aren't protected right now either, so nothing's getting worse, it's just not getting better for your specific use case.
> Sure - a destination is "local" if your machine has a route to that IP which isn't via a gateway.
Fantastic. Well, Google doesn't agree.
The proposal defines it along RFC1918 address space boundaries. The spitballing back and forth in the GitHub issues about which imaginary TLDs they will or won't also consider "local" is absolutely horrifying.
Cool so it will protect 99.999% of home networks. Compared to 0% which are protected now. Sounds great!
Not to be snarky, but that's a good example of "perfect being the enemy of good". You are totally right that there are corner cases, sure. But that doesn't stop us from tackling the low hanging fruit first. Which is, as you say, localhost and LAN (if present).
It should not even be able to communicate with the local network at all, it’s a goddamn web page. It should be restricted to just communicate with the server that hosts it and that’s it.
They define it in the explainer this was originally based on: (https://github.com/WICG/private-network-access/blob/main/exp...)
Quote: We extend the RFC 1918 concept of private IP addresses to build a model of network privacy.
Concretely, there are 3 kinds of private network requests:
[flagged]
The whole browser is a massive security leak. What genius thought it was a good idea for the web page I visit in the morning to get the weather forecast to be able to run arbitrary code and to communicate with arbitrary hosts on my local network?
I do understand this sentiment, but isn't the tension here that security improvements by their very nature are designed to break things? Specifically the things we might consider "bad", but really that definition gets a bit squishy at the edges.
This attitude kept IE6 in production well after its natural life should have concluded.
I’m sorry but this proposal is absolutely monumentally important.
The fact that I have to rely on random extensions to accomplish this is unacceptable.
I wish they (Apple/Microsoft/Google/...) would do similar things for USB and Bluetooth.
Lately, every app I install wants bluetooth access to scan all my bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (a short list) that the app is allowed to connect to, and have the OS limit its connections to only those devices. For example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.
I might even prefer if it had to register the device ids and then the user would be prompted, the same way camera access/gps access is prompted. Via the OS, it might see a device that the CVS.app registered for in its manifest. The OS would popup "CVS app would like to connect to device ABC? Just this once, only when the app is running, always" (similar to the way iOS handles location)
By id, I mean some prefix that a company registers for its devices. bose.xxx, app's manifest says it wants to connect to "bose.*" and OS filters.
Similarly for USB and maybe local network devices. Come up with an ID scheme, and have the OS prevent apps from connecting to anything without that ID. Effectively, don't let apps browse the network, USB, or Bluetooth.
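The prefix-matching scheme described above could look something like this — everything here (the manifest shape, the app and device IDs) is invented purely for illustration, not an existing OS API:

```python
from fnmatch import fnmatch

# Hypothetical manifest entries: each app declares which
# device-ID patterns it may connect to.
APP_MANIFEST = {
    "com.bose.app": ["bose.*"],
    "com.cvs.app": ["cvs.*"],
}

def may_connect(app_id: str, device_id: str) -> bool:
    """OS-side check: only allow connections to declared device IDs."""
    patterns = APP_MANIFEST.get(app_id, [])
    return any(fnmatch(device_id, pattern) for pattern in patterns)

print(may_connect("com.bose.app", "bose.qc45"))  # True
print(may_connect("com.bose.app", "cvs.kiosk1")) # False
```

The point is that the filtering happens in the OS, so an app never even sees the devices it didn't declare — it can request a scan, but only matching devices come back.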
I am still holding out hope that eventually at least Apple will offer fake permission grants to applications. Oh, app XYZ "needs" to see my contact list to proceed? Well it gets a randomized fake list, indistinguishable from the real one. Similar with GPS.
I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.
Yeah, I'd like it too. I can't use my bank's app, because it wants some weird permissions like access to contacts; I refuse to give them, because I see no use in it for me, and so it refuses to work.
Also for the camera, just feed them random noise or a user-selectable image/video
In iOS you can share a subset of your contacts. This is functionally equivalent and works as you described for WhatsApp.
>In iOS you can share a subset of your contacts.
the problem is, the app must respect that.
WhatsApp, for all the hate it gets, does.
"Privacy" focused Telegram doesnt-- it wouldnt work unless I shared ALL my contacts-- when I shared a few, it kept complaining I had to share ALL
Is it something specific to the iOS Telegram client?
On Android, Telegram works with access to contacts denied and maintains its own, completely separate, contact list (shared with desktop Telegram and other copies logged in to the same account). I've been using Telegram longer than I've been using a smartphone, and it has a completely separate contact list (as it should be).
And WhatsApp cannot be used without access to contacts: it doesn't let you create a WhatsApp-only contact and complains that it has no place to store it until you grant access to the phone's contact list.
To be honest, I prefer to have separate contact lists on all my communication channels, and even sharing contacts between the phone app and the e-mail app (GMail) bothers me.
Telegram is good in this respect, since it can use its own contact list, not synchronized or shared with anything else, and WhatsApp is not.
I’ve never allowed Telegram on iOS to access my contacts, camera, or microphone and it’s worked just fine.
Looks to me like it was a bug. Not giving access to any contacts broke the app completely but limited access works fine except for an annoying persistent in app notification.
iOS generally solves this through App Store submission reviews so I’m surprised this isn’t a rule and that telegram got away with it. “Apps must not gate functionality behind receiving access to all contacts vs a subset” or something. They definitely do so for location access, for example.
WhatsApp specifically needs phone numbers, and you can filter which contacts you share, but not which fields. So if your family uses WhatsApp, you'd share those contacts, but you can't share ONLY their phone numbers; WhatsApp also gets their birthdays, addresses, personal notes, and any other personal information you might have.
I think this feature is pretty meaningless in the way that it's implemented.
It's also pretty annoying that applications know they have partial permission, so they keep prompting for full permission all the time anyway.
GrapheneOS has this feature (save for faking GPS) fwiw
Apps are not allowed to force you to share your contacts on iOS, report any apps that are asking you to do so as it’s a violation of the App Store TOS.
Like the github 3rd party application integration. "ABC would like to see your repositories, which ones do you want to share?"
Does that UI actually let you choose? IME it just tells me what orgs & repos will be shared, with no option to choose.
Safari doesn't support Web MIDI apparently for this reason (fingerprinting), but it makes using any kind of MIDI web app impossible.
Are you talking about web apps, mobile apps, desktop apps, or browser extensions?
All of them.
Apple does this for iOS 18 via the AccessorySetupKit
> Lately, every app I install, wants bluetooth access to scan all my bluetooth devices.
Blame Apple and Google and their horrid BLE APIs.
An app generally has to request "ALL THE PERMISSIONS!" to get RSSI which most apps are using as a (really stupid, bug prone, broken) proxy for distance.
What everybody wants is "time of flight"--but for some reason that continues to be mostly unsupported.
It's crazy to me that this has always been the default behavior for web browsers. A public website being able to silently access your entire filesystem would be an absurd security hole. Yet all local network services are considered fair game for XHR, and security is left to the server itself. If you are developer and run your company's webapp on your dev machine for testing (with loose or non-existent security defaults), facebook.com or google.com or literally anyone else could be accessing it right now. Heck think of everything people deploy unauthed on their home network because they trust their router's firewall. Does every one of them have the correct CORS configuration?
I majored in CS and I had no idea that was possible: public websites you access have access to your local network. I have to take time to process this. Beside what is suggested in the post, are there any ways to limit this abusive access?
What’s even crazier is that nobody learned this lesson and new protocols are created with the same systematic vulnerabilities.
Talking about MCP agents if that’s not obvious.
> Does every one of them have the correct CORS configuration?
I would guess it's closer to 0% than 0.1%.
The local server has to send Access-Control-Allow-Origin: * for this to work, right?
Are there any common local web servers or services that use that as the default? Not that it’s not concerning, just wondering.
No, simple requests [1] - such as a GET request, or a POST request with text/plain Content-Type - don't trigger a CORS preflight. The request is made, and the browser may block the requesting JS code from seeing the response if the necessary CORS response header is missing. But by that point the request has already been made. So if your local service has a GET endpoint like http://localhost:8080/launch_rockets, or a POST endpoint that doesn't strictly validate the body Content-Type, then any website can trigger it.
[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
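Whether a request skips the preflight comes down to the Fetch spec's "CORS-safelisted" rules. A simplified sketch of that classification (it ignores the safelisted request-header list and header-value length limits, so it's rougher than the real spec):

```python
# CORS-safelisted methods and Content-Types per the Fetch standard:
# requests using only these are sent without an OPTIONS preflight.
SAFE_METHODS = {"GET", "HEAD", "POST"}
SAFE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def is_simple_request(method, content_type=None):
    """True if the browser would send this cross-origin request
    directly, without an OPTIONS preflight (simplified)."""
    if method.upper() not in SAFE_METHODS:
        return False
    if content_type is not None and content_type not in SAFE_CONTENT_TYPES:
        return False
    return True

print(is_simple_request("GET"))                        # True
print(is_simple_request("POST", "text/plain"))         # True
print(is_simple_request("POST", "application/json"))   # False (preflighted)
```

This is why a localhost endpoint that accepts a plain GET, or a POST without checking Content-Type, is reachable from any website: the "simple" request goes out before CORS ever gets a chance to block anything.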
I was thinking in terms of response exfiltration, but yeah, better put that /launch_rockets endpoint behind some auth.
Internet Explorer solved this with its zoning system, right?
https://learn.microsoft.com/en-us/previous-versions/troubles...
Ironically, Chrome partially supported and utilized IE security zones on Windows, though it was not well documented.
Oh yeah forgot about that, amazing.
Although those were typically used to give ActiveX controls on the intranet unfettered access to your machine because IT put it in the group policy. Fun days.
Honestly I just assumed a modern equivalent existed. That it doesn’t is ridiculous. Local network should be a special permission like the camera or microphone.
I guess this would help block Meta's sneaky identification-code sharing between native apps and websites with their SDK on them, communicating serendipitously through localhost, particularly on Android.
[0] https://www.theregister.com/2025/06/03/meta_pauses_android_t...
surreptitiously
While this will help to block many websites that have no business making local connections at all, it's still very coarse-grained.
Most websites that need this permission only need to access one local server. Granting them access to everything violates the principle of least privilege. Most users don't know what's running on localhost or on their local network, so they won't understand the risk.
> Most users don't know what's running on localhost or on their local network, so they won't understand the risk.
Yes, which is why they also won't understand when the browser asks if you'd like to allow the site to visit http://localhost:3146 vs http://localhost:8089. A sensible permission message ("allow this site to access resources on your local network") is better than technical mumbo jumbo which will make them just click "yes" in confusion.
Either way they'll click "yes" as long as the attacker site properly primes them for it.
For instance, on the phishing site they clicked on from an email, they'll first be prompted like:
"Chase need to verify your Local Network identity to keep your account details safe. Please ensure that you click "Yes" on the following screen to confirm your identity and access account."
Yes, that's meaningless gibberish but most people would say:
• "Not sure what that means..."
• "I DO want to access my account, though."
This is true, but you can only protect people from themselves so far. At some point you gotta let them do what they want to do. I don't want to live in a world where Google decides what we are and aren't allowed to do.
In an ideal world, the browser could act as an mDNS client, discovering local services, so that it could then show the pretty name of the relevant service in the security prompt.
In the world we live in, of course, almost nothing on your average LAN has an associated mDNS service advertisement.
They don’t? Every time I install an OS I turn that stuff off, because I don’t fully understand it. Or is avahi et al another thing?
Avahi handles zeroconf networking, which is mDNS and DNS-SD.
On a phone at least, it should be "do you want to allow website A to connect to app B."
(It's harder to do for the rest of the local network, though.)
A comprehensive implementation would be a firewall. Which CIDRs, which ports, etc.
I wish there were an API to build such a firewall, e.g. as a part of a browser extension, but also a simple default UI allowing to give access to a particular machine (e.g. router), to the LAN, to a VPN, based on the routing table, or to "private networks" in general, in the sense Windows ascribes to that. Also access to localhost separately. The site could ask one of these categories explicitly.
> I wish there were an API to build such a firewall, e.g. as a part of a browser extension,
There was in Manifest V2, and it still exists in Firefox.
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
That's the API Chrome removed with Manifest V3. You can still log all web requests, but you can't block them dynamically anymore.
I think something like Tailscale is the way to go here.
I worry that there are problems with IPv6. Can anyone explain to me if there actually is a way to determine whether an IPv6 address is site-local? If not, the proposal is going to have problems on IPv6-only networks.
I have struggled with this issue in the past. I have an IoT application whose web server wants to reject any requests from a non-local address. After failing to find a way to distinguish IPv6 local addresses, I ended up redirecting IPv6 requests to the local IPv4 address. And that was the end of that.
I feel like I would be in a better position to raise concerns if I could confirm that my understanding is correct: that there is no practical way for an application to determine whether an IPv6 address is link- or site-local.
I did experiment with IPv6 "link local" addresses, but these seem to be something else altogether (for use by routers rather than general applications), and don't seem to work for regular application use.
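For what it's worth, the well-known IPv6 scopes can at least be classified programmatically — link-local (fe80::/10), the site-local range deprecated by RFC 3879 (fec0::/10), and unique local addresses (fc00::/7, RFC 4193). A sketch using Python's standard library; note this answers "what scope is this address", not "is this host on my segment":

```python
import ipaddress

def ipv6_scope(host: str) -> str:
    addr = ipaddress.ip_address(host)
    if addr.is_link_local:        # fe80::/10
        return "link-local"
    if addr.is_site_local:        # fec0::/10, deprecated by RFC 3879
        return "site-local"
    if addr in ipaddress.ip_network("fc00::/7"):
        return "unique-local"     # ULA, RFC 4193
    return "global" if addr.is_global else "other"

print(ipv6_scope("fe80::1"))        # link-local
print(ipv6_scope("fd12:3456::1"))   # unique-local
print(ipv6_scope("2607:f8b0::1"))   # global
```

The practical upshot: ULAs (typically fd00::/8) are the closest IPv6 analogue to RFC 1918 space, and a server that wants "local only" can check for them; but a network using only globally-routed addresses internally is indistinguishable from the outside world at this layer, which is the commenter's core problem.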
There is some wiggle room provided by including .local addresses as local servers. But implementation of .local domains seems to be inconsistent across various OSs at present. Raspberry Pi OS, for example, will do mDNS resolution of "some_address" but not of "someaddress.local"; Ubuntu 24.04 will resolve "someaddress.local", but not "someaddress". And neither will resolve "someaddress.local." (which I think was recommended at one point, but is now deprecated and non-functional). Which does seem like an issue worth raising.
And it frustrates the HECK out of me that nobody will allow use of privately issued certs for local network addresses. The "no https for local addresses" thing needs to be fixed.
IPv6 still has the concept of "routable". You just have to decide what site-local means in terms of the routing table.
In old school IPv4 you would normally assign octet two to a site and octet three to a VLAN. Oh and you start with 10.
With IPv6 you have a lot more options.
All IPv6 devices have link local addresses - that's the LAN or local VLAN - a bit like APIPA.
Then you start on .local - that's Apple and DNS and the like and nothing to do with IP addresses. That's name to address.
You can do Let's Encrypt (ACME) for "local network addresses" (I assume you mean RFC 1918 addresses: 10/8, 172.16/12, 192.168/16) - you need to look into DNS-01 and perhaps DNS CNAME. It does require quite some effort.
There is a very good set of reasons why TLS certs are a bit of a bugger to get working effectively these days. There are solutions freely available but they are also quite hard to implement. At least they are free. I remember the days when even packet capture required opening your wallet.
You might look into acme.sh if Certbot fails to work for you. You also might need to bolt down IP addressing in general, IPv4 vs IPv6 and DNS and mDNS (and Bonjour) as concepts - you seem a little hazy on that lot.
Bonne chance, mate
HTTPS doesn't care about IP addresses. It's all based on domain names. You can get a certificate for any domain you own. You can also set said domain to resolve to any address you like, including a "local" one.
NAT has rotted people's brains unfortunately. RFC 1918 is not really the way to tell if something is "local" or not. 25 years ago I had 4 publicly routable IPv4 addresses. All 4 of these were "local" to me despite also being publicly routable.
An IP address is local if you can resolve it and don't have to communicate via a router.
It seems too far gone, though. People seem unable to separate RFC 1918 from the concept of "local network".
> Can anyone explain to me if there is any way to determine whether an inbound IPv6 address is "local"?
No, because it's the antithesis of IPv6 which is supposed to be globally routable. The concept isn't supposed to exist.
Not to mention Google can't even agree on the meaning of "local" - the article states they completely changed the meaning of "local" to be a redefinition of "private" halfway through brainstorming this garbage.
Creating a nonstandard, arbitrary security boundary based on CIDR subnets as an HTTP extension is completely bonkers.
As for your application, you're going about it all wrong. Just assume your application is public-facing and design your security with that in mind. Too many applications make this mistake and design saloon-door security into their "local only" application which results in overreaction such as the insanity that is the topic of discussion here.
".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
It's very useful to have this additional information in something like a network address. I agree, you shouldn't rely on it, but IPv6 hasn't clicked with me yet, and the whole "globally routable" concept is one of the reasons. I hear that, and think, no, I don't agree.
Globally routable doesn't mean you don't have firewalls in between filtering and blocking traffic. You can be globally routable but drop all incoming traffic at what you define as a perimeter. E.g. the WAN interface of a typical home network.
The concept is frequently misunderstood in that IPv4 consumer SOHO "routers" often combine a NAT and routing function with a firewall, but the functions are separate.
It is widely understood that my SOHO router provides NAT for IPv4, and routing+firewall (but no NAT) for IPv6. And it provides absolutely no configurability for the IPv6 firewall (which would be extremely difficult anyway) because all of the IPv6 addresses allocated to devices on my home network are impermanent and short-lived.
You can make those IPv6 IP addresses permanent and long-lived. They don't need to be short-lived addresses.
Also, I've seen lots of home firewalls which will identify a device based on MAC address for match criteria and let you set firewall rules based on those, so even if their IPv6 address does change often it still matches the traffic.
There’s something about ip6 addresses being big as a guid that makes them hard to remember. Seem like random gibberish, like a hash. But I can look at an ip4 address like a phone number, and by looking tell approximately its rules.
Maybe there’s a standard primer on how to grok ip6 addresses, and set up your network but I missed it.
Also devices typically take 2 or 4 ip6 addresses for some reason so keeping on top of them is even harder.
A few tips:
When just looking at hosts in your network with their routable IPv6 address, ignore the prefix. This is the first few segments, probably the first four in most cases for a home network (a /64 network). When thinking about firewall rules or having things talk to each other, ignore things like "temporary" IP addresses.
So looking at this example:
Ignore all those temporary ones. Ignore the longer one. You can ignore 2600:1700:63c9:a421, as that's going to be the same for all the hosts on your network, so you'll see it pretty much everywhere. So, all you really need to remember, if you're really trying to configure things by IP address, is that this is whatever-is-my-prefix::2000.

But honestly, just start using DNS. Ignore IP addresses for most things. We already pretty much ignore MAC addresses and rely on other technologies to automatically map IP to MAC for us. It's pretty simple to get a halfway competent DNS setup going, many home routers will have things working by default, and it's just way easier to do things in general. I don't want to have to remember that my printer is at 192.168.20.132 or 2600:1700:63c9:a421::a210; I just want to go to http://brother or ipp://brother.home.arpa and have it work.
Helps, thanks a lot!
But as you can see this is still an explosion of complexity for the home user. More than 4x (32 --> 128), feels like x⁴ (though might not be accurate).
I like your idea of "whatever..." There should be a "lan" variable and status could be shown factored, like "$lan::2000" to the end user perhaps.
I do use DNS all the time, like "printer.lan", "gateway.lan", etc. But don't think I'm using in the router firewall config. I use openwrt on my router but my knowledge of ipv6 is somewhat shallow.
At home, I run both IPv4 and IPv6. For any device exposed on the Internet, I add a static IPv6 address with the host part the same as the IPv4 address.
example: 2001:db8::192.168.0.42
This makes it very easy to remember, correlate and firewall.
Ok, that parses somehow in Python, matches, and is apparently legit. ;-)
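It does parse: Python's `ipaddress` module accepts exactly that IPv4-embedded notation, where the dotted quad is shorthand for the last 32 bits of the IPv6 address (the addresses here are just the ones from the comment above):

```python
import ipaddress

# The dotted-quad tail stands for the low 32 bits of the address.
addr = ipaddress.ip_address("2001:db8::192.168.0.42")
print(addr)           # 2001:db8::c0a8:2a  (192.168.0.42 == 0xc0a8002a)
print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:c0a8:002a

# Recover the embedded IPv4 address from the low 32 bits.
embedded = ipaddress.ip_address(int(addr) & 0xFFFFFFFF)
print(embedded)       # 192.168.0.42
```

So the mnemonic scheme costs nothing on the wire: it's a perfectly ordinary IPv6 address whose canonical form just happens to hex-encode the IPv4 host part.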
OpenWrt doesn't seem to make IPv6 static assignment easy, unfortunately. That makes sense. I do love the idea of living in a world without NAT.
I don’t: NAT may have been a hack at first, but it’s my favorite feature provided by routers and why I disable ipv6 on my local network
Why do you like NAT?
Does your router being slower and taking more CPU make you feel happy?
Do you enjoy not seeing the correct IP in remote logs, thus making debugging issues harder?
Do you like being able to naively nmap your local network fairly easily?
I like all the computers in my house appearing to remote servers as a single remote host. Avoids leaking details about my home network.
Perf concerns over 32bit numbers ended in the nineties. Who at home cares about remote logs?
@donnachangstein:
The device is an IoT guitar pedal that runs on a Raspberry Pi. In performance, on stage, a Web UI runs on a phone or tablet over a hotspot connection on the Pi, which is NOT internet connected (since there's no expectation that there's a Wi-Fi router or internet access at a public venue). OR the Pi runs on a home wifi network, using a browser-hosted UI on a laptop or desktop. OR, I suppose, over an away-from-home Wi-Fi connection at a studio or rehearsal space.
It is not reasonable to expect my users to purchase domain names and certs for their $60 guitar pedal, which are not going to work anyway, if they are playing away from their home network. Nor is ACME provisioning an option because the device may be in use but unconnected to the internet for months at a time if users are using the Pi Hotspot at home.
I can't use password authentication to get access to the Pi web server, because I can't use HTTPS to conceal the password, and browsers disable access to JavaScript crypto APIs on non-HTTPS pages (not that I'd really trust myself to write JavaScript code to obtain auth tokens from the Pi server anyway), so doing auth over an HTTP connection doesn't really strike me as a serious option either.
Nor is it reasonable to expect my non-technical users to spend hours configuring their networks. It's an IoT device that should be just drop and play (maybe with a one-time device setup that takes place on the Pi).
There is absolutely NO way I am going to expose the server to the open internet without HTTPS and password authentication. The server provides a complex API to the client over which effects are configured and controlled. Way too much surface area to allow anyone on the internet to poke around in. So it uses IPv4 isolation, which is the best I can figure out given the circumstances. It's not like I haven't given the problem serious consideration. I just don't see a solution.
The use case is not hugely different from an IoT toothbrush. But standards organizations have chosen to leave both my (hypothetical) toothbrush and my application utterly defenseless when it comes to security. Is it any surprise that IoT toothbrushes have security problems?
How would YOU see https working on a device like that?
> ".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
Yes. That was my point. It is currently widely ignored.
Grandparent explained that a firewall is also needed with ip6.
I understand that setting it up to delineate is harder in practice. Therein lies the rub.
> can't even agree on the meaning of "local"
Well, who can agree on this? Local network, private network, intranet, Tailscale and VPN, Tor? IPv6 ULA, NAT/CGNAT, SOCKS, transparent proxy? What resources are "local" to me and what resources are "remote"?
This is quite a thorny and sometimes philosophical question. Web developers are working at the OSI Layer 6-7 / TCP/IP Application Layer.
https://en.wikipedia.org/wiki/OSI_model#Comparison_with_TCP/...
Now even cookies and things like CSRF were trying to differentiate "servers" and "origins" and "resources" along the lines of the DNS hierarchy. But this has been fraught with complication, because DNS was not intended to delineate such things, and can't do so cleanly 100% of the time.
Now these proposals are trying to reach even lower in the OSI model - Layer 3, Layer 2. If you're asking "what is on my LAN" or "what is a private network", that is not something that HTTPS or web services are supposed to know. Are you going to ask them to delve into your routing table or test the network interfaces? HTTPS was never supposed to know about your netmask or your next-hop router.
So this is only one reason that there is no elegant solution for the problem. And it has been foundational to the way the web was designed: "given a uniform locator, find this resource wherever it may be, whenever I request it." That was a simpler proposition when the Web was used to publish interesting and encyclopedic information, rather than deliver applications and access sensitive systems.
CORS doesn't stop a POST request, nor a fetch with 'no-cors' supplied in JavaScript. It's that you can't read the response; that doesn't mean the request is not sent by the browser.
Then again, a local app can run a server with a proxy that adds CORS headers to the proxied response, and then you can access any site via the JS fetch/XMLHttpRequest interface. Even an extension is able to modify headers to bypass CORS.
Bypassing CORS is just a matter of editing headers. What's really hard or impossible to bypass is CSP rules.
Now the Facebook app itself is running such a CORS proxy server; even without it, a normal HTTP or WebSocket server is enough to send metrics.
Chrome already has a flag to prevent localhost access, but as said, WebSockets can be used.
Completely banning localhost would be detrimental.
Many users use self-hosted bookmarking apps, note apps, password managers, and similar solutions that rely on a local server.
Explainer by non-Googler
Is the so-called "modern" web browser too large and complex?
I never asked for stuff like "websockets"; I have to disable it. Why?
I still prefer a text-only browser for reading HTML; it does not run Javascript, it does not do websockets, CSS, images or a gazillion other things; it does not even autoload resources
It is relatively small, fast and reliable; very useful
It can read larger HTML files that make so-called "modern" web browsers choke
It does not support online ad services
The companies like Google that force ads on www users are known for creating problems for www users and then proposing solutions to them; why not just stop creating the problems?
Text-only browsers are not a "solution". That is not the point of the comment. Such simpler clients are not a problem.
The point is that gigantic, overly complex "browsers" designed for surveillance and advertising are the problem. They are not a solution.
Going back to text-only browsers is not the solution.
Do note that since the removal of NPAPI plugins years ago, locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost.
It would be really annoying if this use case was made into an unreasonable hassle or killed entirely. (Alternatively, browser developers could've offered a real alternative, but it's a bit late for that now.)
Doesn't most software just register a protocol handler with the OS? Then a website can hand the browser a zoommtg:// link, which the browser opens with zoom ?
Things like Jupyter Notebooks will presumably be unaffected by this, as they're not doing any cross-origin requests.
And likewise, when a command line tool wants you to log in with oauth2 and returns you to a localhost URL, it's a simple redirect not a cross-origin request, so should likewise be allowed?
A common use case, whether for 3D printers, switches, routers, or NAS devices is that you've got a centrally hosted management UI that then sends requests directly to your local devices.
This allows you to use a single centrally hosted website as user interface, without the control traffic leaving your network. e.g. Plex uses this.
I don't think this proposal will stop you visiting the management UI for devices like switches and NASes on the local network. You'll be able to visit http://192.168.0.1 and it'll work just fine?
This is just about blocking cross-origin requests from other websites. I probably don't want every ad network iframe being able to talk to my router's admin UI.
That's not what I'm talking about.
A common example is this:
1. I visit ui.manufacturer.tld
2. I click "add device" and enter 192.168.0.230, repeating this for my other local devices.
3. The website ui.manufacturer.tld now shows me a dashboard with aggregate metrics from all my switches and routers, which it collects by fetch(...) ing data from all of them.
The manufacturers site is just a static page. It stores the list of devices and credentials to connect to them in localStorage.
None of the data ever leaves my network, but I can just bookmark ui.manufacturer.tld and control all of my devices at once.
This is a relatively neat approach providing the same comfort as cloud control, without the privacy nightmare.
If it is truly static site/page, download it and open from local disk. And nudge vendor to release it as archive which can be downloaded and unpacked locally.
It has a multitude of benefits compared to opening it from the vendor's site each time:
1) It works offline.
2) It works if vendor site is down.
3) It works if the vendor restricts access to it due to an acquisition, making it subscription-based, or discontinuing the feature "because fuck you".
4) It works if the vendor goes out of business or pivots to something else.
5) It still works with YOUR devices if vendor decides to drop support for old ones.
6) It still works with YOUR versions of firmwares if vendor decides to push new ones, with features which are user-hostile (I'm looking at you, BambuLab).
7) It cannot be compromised the way the copy on the vendor's site can be. (If your system is compromised, you have bigger problems than a forged UI for devices.) Even the best vendors have data breaches these days.
8) It cannot upload your data if vendor goes rogue.
Downsides? If you really need to update it, you need to re-download it manually. Not a big hassle, IMHO.
> If it is truly static site/page, download it and open from local disk. And nudge vendor to release it as archive which can be downloaded and unpacked locally.
Depending on the browser, file:/// is severely limited in what CORS requests are allowed.
And then there's products like Plex, where it's not a static site, but you still want a central dashboard that connects to your local Plex server directly via CORS.
Why can't local Plex, which you need to install & run anyway (it is already a server), provide its own UI to the browser without 3rd-party sites? It is an absurd design, IMHO. I'll never allow this in my network. It looks like a security nightmare. Today it shows me a dashboard (of what? Several of my Plex servers?), tomorrow it is forced to report pirated movies to the police. No, thanks.
HTTPS, basically. I've gone around and around in circles on this for a device I work on. You'd like to present an HTTPS web UI, because a) you'd like encryption between the UI and the device, and b) browsers lock down a lot of APIs, sometimes arbitrarily, behind being in a 'secure context' (ironically, including the cryptography APIs!). But your device doesn't control its IP address or hostname, and may not even have access to the internet, so there's no way for it to have a proper HTTPS certificate, and a self-signed certificate will create all kinds of scary warnings in the browser (which HTTP will not, ironically).
So manufacturers create all kinds of crazy workarounds, like plex's, to be able to present an HTTPS web page that is easily accessible and can just talk to the device. (Except it's still not that simple, because you can't easily make an HTTP request from an HTTPS context, so plex also jumps through a bunch of hoops to co-ordinate some HTTPS certificate for the local device, which requires an internet connection).
It's a complete mess, and browsers really seem to be keen on blocking any 'let HTTPS work for local devices' solution, even if it were just a simple upgrade to the status quo that would otherwise just be treated like HTTP. Nor will they stop putting useful APIs behind a 'secure context' like an HTTPS certificate implies any level of trust except that a page is associated with a given domain name.
(Someone at Plex seems to have finally gotten through to some of the devs at Chrome, and AFAIK there is now a somewhat reasonable flow that would allow e.g. a progressive webapp to request access to a local device and communicate with it without an HTTPS certificate, which is something, but there's still no way to just host the damn UI on the device without limiting the functionality! And it's Chrome-only, maybe still in preview? Haven't gotten around to trying to implement it yet.)
See this long, painful, multi-year discussion on the topic: https://github.com/WICG/private-network-access/issues/23
It is all very weird.
> a) you'd like encryption between the UI and the device
No, I don't. It is on my local network. If the device has a public IP and I want to browse my collection when I'm away from my local network, then I do, but then Let's Encrypt solved this problem many years ago (10 years!). If the device doesn't have a public IP but I punch a hole in my NAT or install a reverse proxy on the gateway, then I'm tech-savvy enough to obtain a Let's Encrypt cert for it, too.
> b) browsers lock down a lot of APIs, sometimes arbitrarily
Why does a GUI served from a server co-hosted with the media server need any special APIs at all? It can generate all content on the server side, and basic JS is enough to add visual effects for smooth scrolling, drop-down menus, etc.
It all looks over-engineered for the sake of what? Of imitating a desktop app in the browser? It looks like it creates more problems than writing the damn native desktop app. In Qt, for example, which will be not-so-native (but more native than any site or Electron) but works on all 3 major OSes and *BSD from a single source tree.
Even on a local network, you should probably not be sending e.g. passwords around in plaintext. Let's Encrypt is a solution for someone who's tech-savvy enough to set it up, not the average user.
> Its all look over-engineered in the sake of what? Of imitating desktop app in browser?
Pretty much, yeah. And not just desktop app, but mobile app as well. The overhead of supporting multiple platforms, especially across a broad range of devices, is substantial. Web applications sidestep a lot of that and can give you a polished UX across basically every device, especially e.g. around the installation process (because there doesn't need to be one).
> of what? Several my Plex servers?
People commonly use this to browse the collections of their own servers, and the servers of their friends, in a unified interface.
Media from friends is accessed externally, media from your own server is accessed locally for better performance.
> Depending on the browser, file:/// is severely limited in what CORS requests are allowed.
And it is strange to me too. A local (on-disk) site is like a local Electron app without bundling Chrome inside. Why should it be restricted when an Electron app can do everything? It looks illogical.
I agree that the current situation sucks, but that doesn't mean breaking the existing solutions is any better.
That absolutely is a privacy nightmare.
How so? It's certainly better than sending all that traffic through the cloud.
While I certainly prefer stuff I can just self-host, compared to the modern cloud-only reality with WebUSB and stuff, this is a relatively clean solution.
Windows Admin Center, but it's only local, which I rather hate.
That works if you want to launch an application from a website, but it doesn't work if you want to actively communicate with an application from a website.
This needs more detail to make it clear what you are wishing for that will not happen.
It seems like you're thinking of a specific application, or at least use-case. Can you elaborate?
Once you're launching an application, it seems like the application can negotiate with the external site directly if it wants.
#1 use case would be a password manager. It would be best if the browser plugin part can ping, say, the 1Password native app, which runs locally on your PC, and say "Yo I need a password for google.com" - then the native app springs into action, prompts for biometrics, locates the password or offers the user a choice, then returns it directly to the browser for filling.
Sure you can make a fully cloud-reliant PW manager, which has to have your key stored in the browser and fetch your vault from the server, but a lot of us like having that information never have to leave our computers.
Extensions can already use a better mechanism for this (https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...) than starting a local web server.
Browser extensions play by very different rules than websites already. The proposal is for the latter and I doubt it is going to affect the former, other than MAYBE an extra permanent permission.
you missed the point. password managers are one of the many use cases for this feature; that they just so happen to be mostly implemented as extensions does not mean that the feature is only useful for extensions
I don't believe this is true, as https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web... exists. It does need an extension to be installed, but I think that's fair in your comparison with NPAPI.
It would be amazing if that method of communicating with a local app was killed entirely, because it's been a very very common source of security vulnerabilities.
> locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost
if that software runs with a pull approach, instead of a push one, the server becomes unnecessary
bonus: then you won't have websites grossly probing local networks that aren't theirs (ew)
It's harder to run HTML and XML files with XSLT by just opening them in a web browser (things like NUnit test run output). To view these properly now -- to get the CSS, XSLT, images, etc. to load -- you typically have to run a web server at that file path.
Note: this is why the viewers for these tools will spin up a local web server.
With local LLMs and AI it is now common to have different servers for different tasks (LLM, TTS, ASR, etc.) running together where they need to communicate to be able to create services like local assistants. I don't want to have to jump through hoops of running these through SSL (including getting a verified self-signed cert.), etc. just to be able to run a local web service.
I'm not sure any of that is necessary for what we're talking about: locally-installed software that intends to be used by one or more public websites.
For instance, my interaction with local LLMs involves 0 web browsers, and there's no reason facebook.com needs to make calls to my locally-running LLM.
Running HTML/XML files in the browser should be easier, but at the moment it already has the issues you speak of. It might make sense, IMO, for browsers to allow requests to localhost from websites also running on localhost.
> Googlers present a solution no one is asking for,
I'm asking for it. Random web sites have no business poking around my internal network.
>I'm asking for it.
Proof? Link to issue? Mailing list? Anything?
I think you just made that up.
> Proof? ... Anything?
I saw them ask for it in the post you're responding to. I am also asking for it right now. That's 2 people asking for it so far.
> Link to issue? Mailing list?
That is not necessary.
Wow, so Google is sitting on a time machine that can see the future.
Amazing!
What an interesting, non-sequitur response.
One of the very few security inspired restrictions I can wholeheartedly agree with. I don't want random websites be able to read my localhost. I hope it gets accepted and implemented sooner than later.
OTOH it would be cool if random websites were able to open up and use ports on my computer's network, or even on my LAN, when granted permission of course. Browser-based file- and media sharing between my devices, or games if multi-person.
> OTOH it would be cool if random websites were able to open up and use ports on my computer's network
That's what WebRTC does. There's no requirement that WebRTC is used to send video and audio as in a Zoom/Meet call.
That's how WebTorrent works.
https://webtorrent.io/faq
uBlock / uMatrix does this by default, I believe.
I often see sites like Paypal trying to probe 127.0.0.1. For my "security", I'm sure...
It appears to not have been enabled by default on my instance of uBlock; it seems a specific filter list is used to implement this [0]; that filter was un-checked; I have no idea why. The contents of that filter list are here [1]; notice that there are exceptions for certain services, so be sure to read through the exceptions before enabling it.
[0] Filter Lists -> Privacy -> Block Outsider Intrusion into Lan
[1] <https://github.com/uBlockOrigin/uAssets/blob/master/filters/...>
This filter broke twitch for me. I had to create custom rules for twitch if I wanted to use it with this filter enabled.
Would you mind sharing those custom rules?
This has the potential to break rclone's oauth mechanism as it relies on setting the redirect URL to localhost so when the oauth is done rclone (which is running on your computer) gets called.
I guess if the permissions dialog is sensibly worded then the user will allow it.
I think this is probably a sensible proposal but I'm sure it will break stuff people are relying on.
IIUC this should not break redirects. This only affects: (1) fetch/xmlhttprequests (2) resources linked to AND loaded on a page (e.g. images, js, css, etc.)
As noted in another comment this doesn't work unless the server responding provides proper CORS headers allowing the content to be loaded by the browser in that context: so for any request to work the server is either wide open (cors: *) or are cooperating with the requesting code (cors: website.co). The changes prevent communication without user authorization.
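The server-side opt-in described above can be sketched with a stdlib HTTP server. The origin name, route, and payload below are made up for illustration; the point is that a device must explicitly name the site allowed to read its responses (or use `*` and open itself to everyone).

```python
# Sketch: a local device opting in to CORS for exactly one website.
# ALLOWED_ORIGIN and the /stats route are assumptions for illustration.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://ui.manufacturer.tld"  # hypothetical dashboard origin

class DeviceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Opt in for one site only; "*" would let every site read this.
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"uptime": 123}')

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), DeviceHandler)  # OS picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_address[1]}/stats")
allow = resp.headers["Access-Control-Allow-Origin"]
server.shutdown()
print(allow)  # https://ui.manufacturer.tld
```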
Deny any incoming requests using ufw or nftables. Only allow outbound requests by default
Relevant Firefox extension - https://addons.mozilla.org/en-US/firefox/addon/behave/
Isn't it time for disallowing browsers to connect to anything outside same origin pages except for actual navigation?
Servers can do all the hard work of gathering content from here and there.
Is it possible to do this today with browser extensions? I ran noscript 10 years ago and it was really tough. Kinda felt like being gaslit constantly. I could go back, only enabling sites selectively, but it's not going to work for family. Wondering if just blocking cross origin requests would be more feasible.
I do not understand. Doesn't same-origin prevent all of these issues? Why on earth would you extend some protection to resources based on IP address ranges? It seems like the most dubious criteria of all.
I think the problem is that some local servers are not really designed to be as secure as a public server. For example, a local server having a stupid unauthenticated endpoint like "GET /exec?cmd=rm+-rf+/*", which is obviously exploitable, and the same-origin policy does not prevent that.
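A minimal sketch of the kind of naive endpoint described above. The route and command are illustrative, and the command is only recorded here, never executed; the point is that the side effect fires on a bare GET, which any cross-origin page can trigger (e.g. via an img tag) even though the same-origin policy hides the response from it.

```python
# Sketch: an unauthenticated "GET /exec?cmd=..." local endpoint.
# A real device might pass cmd to a shell; here we only record it.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

executed = []  # commands the "device" acted on

class NaiveHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        if "cmd" in query:
            executed.append(query["cmd"][0])  # the dangerous side effect
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), NaiveHandler)  # OS picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any web page can trigger the equivalent with
# <img src="http://device-ip/exec?cmd=...">: the response is opaque to
# the page, but the request - and the side effect - already happened.
urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_address[1]}/exec?cmd=reboot")
server.shutdown()
print(executed)  # ['reboot']
```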
I think you're mistaken about this.
Use case 1 in the document and the discussion made it clear to me.
Browsers allow launching HTTP requests to localhost in the same way they allow my-malicious-website.com to launch HTTP requests to, say, mail.google.com. They can _request_ a resource but that's about it -- everything else, even many things you would expect to be able to do with the downloaded resource, is blocked by the same-origin policy. [1] Heck, we have a million problems already where file:/// websites cannot access resources from http://localhost , and vice versa.
So what's the attack vector exactly? Why it would be able to attack a local device but not attack your Gmail account ( with your browser happily sending your auth cookies) or file:///etc/passwd ?
The only attack I can imagine is that _the mere fact_ of a webserver existing on your local IP is a disclosure of information for someone, but ... what's the attack scenario here again? The only thing they know is you run a webserver, and maybe they can check if you serve something at a specified location.
Does this even allow identifying the router model you use? Because I can think of a bazillion better ways to do it -- including the simple "just assume is the default router of the specific ISP from that address".
[1] https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...
In fact, [1] literally says
> [Same-origin policy] prevents a malicious website on the Internet from running JS in a browser to read data from [...] a company intranet (which is protected from direct access by the attacker by not having a public IP address) and relaying that data to the attacker.
This is specifically in response to the recent Facebook chicanery where their app was listening on localhost and spitting out a unique tracking ID to anything that connects, allowing arbitrary web pages to get the tracking ID and correspondingly identify the user visiting the page.
But this is trying to solve the problem in the wrong place. The problem isn't that the browser is making the connection, it's that the app betraying the user is running on the user's device. The Facebook app is malware. The premise of app store curation is that they get banned for this, right? Make everyone who wants to use Facebook use the web page now.
The existing PNA is easily defeated for bugs that can be triggered with standard cross origin requests. For example PNA does nothing to stop a website from exploiting some EOL devices I have with POST requests and img tags.
This is a much better approach.
The alternative proposal sounds much nicer, but unfortunately was paused due to concerns about devices not being able to support it.
I guess once this is added maybe the proposed device opt in mechanism could be used for applications to cooperatively support access without a permission prompt?
IIRC Flash has a similar design. One Flash app can access the internet, or local network, but not both.
Just wanted to confirm something: this only works for HTTP, right? Browsers don't allow arbitrary TCP requests, right?
Browsers should just allow per-site settings, or a global allow/deny-all, for permission to access localhost.
That way the user will be in control.
Can't you just write an extension that blocks access to domains based on origin?
Then a user could add facebook.com as an origin to block all facebook* sites from sending any request to any registered URL, in this case localhost/127.0.0.1 domains.
The DNR API allows blocking based on initiatorDomains.
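For reference, a static declarativeNetRequest rule along those lines might look roughly like this. This is a sketch of my understanding of the rule format; the domain and filter values are examples, and `initiatorDomains` requires a reasonably recent Chrome:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "initiatorDomains": ["facebook.com"],
    "urlFilter": "||127.0.0.1",
    "resourceTypes": ["xmlhttprequest", "image", "script"]
  }
}
```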
Proposing this in 2025. While probably knowing about this problem since Chrome was released (2008).
Why not treat any local access as if it were an access to a microphone?
I would love for someone with more knowledge to opine on this, because, to me, it seems like it would actually be the most sane default state.
That is literally what this proposal is suggesting.
This used to cause malicious sites to reboot home internet routers around 2013.
Assuming that RFC1918 addresses mean "local" network is wrong. It means "private". Many large enterprises use RFC1918 for private, internal web sites.
One internal site I spend hours a day using has a 10.x.x.x IP address. The servers for that site are on the other side of the country and are many network hops away. It's a big company, our corporate network is very very large.
A better definition of "local IP" would be whether the IP is in the same subnet as the client, i.e. look up the client's own IP and subnet mask and determine if a packet to a given IP would need to be routed through the default gateway.
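That check is easy to express with the standard library, assuming the client knows its own address and netmask (the addresses below are made up). It also illustrates the comment's point: an RFC1918 address can still be many hops away.

```python
# Sketch: "local" = on the same subnet as the client, i.e. reachable
# without going through the default gateway. Addresses are illustrative.
from ipaddress import ip_interface, ip_address

client = ip_interface("10.20.30.40/24")  # my own IP + netmask

def is_on_link(target: str) -> bool:
    # On-link if the target falls inside my interface's network;
    # otherwise the packet would be routed via the default gateway.
    return ip_address(target) in client.network

print(is_on_link("10.20.30.99"))  # True: same /24
print(is_on_link("10.99.0.1"))    # False: RFC1918, but routed
```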
The article spends a lot of effort defining the words "local" and "private" here. It then says:
> Note that local -> local is not a local network request
So your use case won't be affected.
The computer I use at work (and not only mine, many many of them) has a public IP address. Many internal services are on 10.0.0.0/8. How is this being taken into account?
10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 are all private addresses per RFC 1918 and documents superseding it (5735?). If it's like 66.249.73.128/27 or 164.13.12.34/12, those are "global" IPs.
1: https://www.rfc-editor.org/rfc/rfc1918
2: https://www.rfc-editor.org/rfc/rfc5735
3: https://en.wikipedia.org/wiki/Private_network
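Using the ranges from the RFCs above, the classification can be sketched with the stdlib. Note that `ipaddress`'s built-in `is_private` covers more than RFC 1918 (loopback, link-local, etc.), so the three networks are matched explicitly here:

```python
# Sketch: classify an address as RFC 1918 private or not.
from ipaddress import ip_address, ip_network

RFC1918 = [ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    a = ip_address(addr)
    return any(a in net for net in RFC1918)

print(is_rfc1918("10.1.2.3"))       # True
print(is_rfc1918("172.31.0.1"))     # True: inside 172.16.0.0/12
print(is_rfc1918("66.249.73.130"))  # False: global
```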
Yes that's the point: many of our work PCs have global public IPs from something like 128.130.0.0/15 (not this actual block, but something similar), and many internal services are on 10.0.0.0/8. I'm not sure I get exactly how the proposal is addressing this. How does it know that 128.130.0.0/15 is actually internal and should be considered for content loaded from an external site?
The proposal doesn't need to address this because it doesn't even consider the global public IP of 128.130.0.0/15 in your example. If you visit a site on 10.0.0.0/8 that accesses resources on 10.0.0.0/8 it's allowed. But if you visit a random other site on the internet it will be (by default) forbidden to access the internal resource at 10.0.0.0/8.
My reading is this just adds a dialog box before the browser loads RFC1918 ranges. At the IP layer, a laptop with 128.130.0.123 on wlan0 should not be able to access 10.0.10.123:80, but I doubt they bother to sanity-check that. Just blindly assuming all RFC1918, and only RFC1918, is local should do the job for quite a while.
btw, I've seen that kind of network. I was young, and it took me a while to realize that they DHCP assign global IPs and double NAT it. That was weird.
Your computer's own IP address is completely irrelevant. What matters is the site hostname and the IP address it resolves to.
People believe that "my computer" or "my smartphone" has an Internet address, but this is a simplification of how it's really working.
The reality is that each network interface has at least one Internet address, and these should usually all be different.
An ordinary computer at home could be plugged into Ethernet and active on WiFi at the same time. The Ethernet interface may have an IPv4 address and a set of IPv6 addresses, and belong to their home LAN. The WiFi adapter and interface may have a different IPv4 address, and belongs to the same network, or some other network. The latter is called "multi-homing".
If you visit a site that reveals your "public" IP address(es), you may find that your public, routable IPv4 and/or IPv6 addresses differ from the ones actually assigned to your interfaces.
In order to be compliant with TCP/IP standards, your device always needs to respond on a "loopback" address in 127.0.0.0/8, and typically this is assigned to a "loopback" interface.
A network router does not identify with a singular IP address, but could answer to dozens, when many interface cards are installed. Linux will gladly add "alias" IPv4 addresses to most interface devices, and you'll see SLAAC or DHCPv6 working when there's a link-local and perhaps multiple routable IPv6 addresses on each interface.
The GP says that their work computer has a [public] routable IP address. But the same computer could have another interface, or even the same interface has additional addresses assigned to it, making it a member of that private 10.0.0.0/8 intranet. This detail may or may not be relevant to the services they're connecting to, in terms of authorization or presentation. It may be relevant to the network operators, but not to the end-user.
So as a rule of thumb: your device needs at least one IP address to connect to the Internet, but that address is associated with an interface rather than your device itself, and in a functional system, there are multiple addresses being used for different purposes, or held in reserve, and multiple interfaces that grant the device membership on at least one network.
Ideally, in an organization this should be a centrally pushed group policy defining CIDRs.
Like, at home, I have 10/8 and public IPv6 addresses.
As far as I understand that doesn't matter. What matters is the frame's origin and the request.
Many years ago, before it was dropped, IP version 6 had a concept of "site local" addresses, which (if it had applied to version 4) would have encompassed the corporate intranet addresses that you are talking about. Routed within the corporate intranet; but not routed over corporate borders.
Think of this proposal's definition of "local" (always a tricky adjective in networking, and reportedly the proposers here have bikeshedded it extensively) as encompassing both Local Area Network addresses and non-LAN "site local" addresses.
fd00::/8 (within fc00::/7) is still reserved for this purpose (site-local IPv6 addressing).
fc00::/8 (a network block for a registry of organisation-specific assignments for site-local use) is the idea that was abandoned.
Roughly speaking, the following are analogs:
169.254/16 -> fe80::/64 (within fe80::/10)
10/8, 172.16/12, 192.168/16 -> a randomly-generated network (within fd00::/8)
For example, a service I maintain that consists of several machines in a partial WireGuard mesh uses fda2:daf7:a7d4:c4fb::/64 for its peers. The recommendation is no larger than a /48, so a /64 is fine (and I only need the one network, anyway).
fc00::/7 is not globally routable.
So in my case, I guess I need to blame the unconfigurable cable router my ISP provided me with? Since there's no way to provide reservations for IPv6 addresses. :-/
Right. OpenWRT, for example, will automatically generate a random /48 within fd00::/8 to use as a ULA (unique local addressing) prefix for its LAN interfaces, and will advertise those prefixes to its clients. You can also manually configure a specific prefix instead.
e.g. Imagine the following OpenWRT setup:
ULA: fd9e:c023:bb5f::/48
(V)LAN 1: IPv6 assignment hint 1, suffix 1
(V)LAN 2: IPv6 assignment hint 2, suffix ffff
Clients on LAN 1 would be advertised the prefix fd9e:c023:bb5f:1::/64 and automatically configure addresses for themselves within it. The router itself would be reachable at fd9e:c023:bb5f:1::1.
Clients on LAN 2 would be advertised the prefix fd9e:c023:bb5f:2::/64 and automatically configure addresses for themselves within it. The router itself would be reachable at fd9e:c023:bb5f:2::ffff.
Clients on LAN 1 could communicate with clients on LAN 2 (firewall permitting) and vice versa by using these ULA addresses, without any IPv6 WAN connectivity or global-scope addresses.
Is it a gross generalization to say that if you're visiting a site whose name resolves to a private IP address, it's a part of the same organizational entity as your computer is?
The proposal here would consider that site local and thus allowed to talk to local. What are the implications? Your employer whose VPN you're on, or whose physical facility you're located in, can get some access to the LAN where you are.
In the case where you're a remote worker and the LAN is your private home, I bet that the employer already has the ability to scan your LAN anyway, since most employers who are allowing you onto their VPN do so only from computers they own, manage, and control completely.
> Is it a gross generalization to say that if you're visiting a site whose name resolves to a private IP address, it's a part of the same organizational entity as your computer is?
Yes. That's a gross generalization.
I support applications delivered via site-to-site VPN tunnels hosted by third parties. In the customer site the application is accessed via an RFC 1918 address. It is not part of the customer's local network, however.
Likewise, I support applications that are locally-hosted but Internet facing and appear on a non-RFC1918 IP address even though the server is local and part of the Customer's network.
Access control policy really should be orthogonal to network address. Coupling those two will inevitably lead to mismatches to work around. I would prefer some type of user-exposed (and sysadmin-exposed, centrally controllable) method for declaring the network-level access permitted by scripts (as identified by the source domain, probably).
Don't some internet providers do large-scale NAT (CGNAT), so customers each get a 10.x address instead of a public one? I'm not sure if this is a problem or not. It sounds like it could be.
It wouldn’t be important in this scenario, because what your own IP address is doesn’t matter (and most of us are sitting behind a NAT router too, after all).
It would block a site from scanning your other 10.x peers on the same network segment, thinking they’re “on your LAN” but that’s not a problem in my humble opinion.
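Worth noting: the CGNAT shared range is actually 100.64.0.0/10 (RFC 6598), and Python's `ipaddress` module deliberately treats it as neither private nor global, distinct from RFC 1918 space. A small sketch:

```python
import ipaddress

cgnat = ipaddress.ip_address("100.64.1.1")   # RFC 6598 shared address space
rfc1918 = ipaddress.ip_address("10.0.0.1")   # RFC 1918 private space

# CGNAT space is neither "private" nor "global" by design:
print(cgnat.is_private, cgnat.is_global)     # False False
print(rfc1918.is_private, rfc1918.is_global) # True False
```

So a naive "is this local?" check that only looks at the private ranges would treat CGNAT peers as neither side of the boundary.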
Off-topic: Is the placement of the apostrophe right in the title? Should it be "a users' local network" (current version) or "a user's local network"?
It should be "from accessing a user's local network", or "from accessing users' local networks".
Why do you think so? How is "a users' local network" significantly different from "a children's book"?
Why is this a Chrome thing, not an Android thing?
I get that this could happen on any OS, and the proposal is from a browser maker's perspective. But what about the other side of things, an app (not necessarily a browser) talking to an arbitrary localhost address?
Basically any inter-process communication (IPC). https://en.wikipedia.org/wiki/Inter-process_communication . There are fancier IPC mechanisms, but none as widely supported as just sending arbitrary data over a socket. It wouldn't surprise me if e.g. this is how Chrome processes communicate with each other.
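A minimal sketch of that kind of socket-based IPC in Python (the upper-casing "protocol" is made up for illustration; any two local processes that agree on a port can do this):

```python
import socket
import threading

def serve(srv):
    # Accept one connection and echo the payload back upper-cased
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=serve, args=(srv,), daemon=True).start()

# Any other local process that knows the port can connect the same way
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"ping")
reply = cli.recv(1024)               # b"PING"
```

Nothing in this exchange authenticates who is knocking, which is exactly why a browser reaching the same port is a problem.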
Chris Siebenmann weighs in with thoughts on:
Browers[sic] can't feasibly stop web pages from talking to private (local) IP addresses (2019)
https://utcc.utoronto.ca/~cks/space/blog/web/BrowsersAndLoca...
The split horizon DNS model mentioned in that article is to me insane. Your DNS responses should not change based on what network you are connected to. It breaks so many things. For one, caching breaks because DNS caching is simplistic and is only cached with a TTL: no way to tell your OS to associate a DNS cached response to a network.
I understand why some companies want this, but doing it on the DNS level is a massive hack.
If I were the decision maker I would break that use case. (Chrome probably wouldn't though.)
> Your DNS responses should not change based on what network you are connected to.
GeoDNS and similar are very broadly used by services you definitely use every day. Your DNS responses change all the time depending on what network you're connecting from.
Further: why would I want my private hosts to be resolvable outside my networks?
Of course DNS responses should change depending on what network you're on.
> but if you're inside our network perimeter and you look up their name, you get a private IP address and you have to use this IP address to talk to them
In the linked article using the wrong DNS results in inaccessibility. GeoDNS is merely a performance concern. Big difference.
> why would I want my private hosts
Inaccessibility is different. We are talking about accessible hosts requiring different IP addresses to be accessed in different networks.
If you have two interfaces connected to two separate networks, you can absolutely have another host connected to the same two networks. That host will have a different IP for each of their interfaces, you could reach it on either, and DNS on each network should resolve to the IP it's reachable on on that network.
Correct, and this is by design. Keeping in mind "hairpin"-style connections often don't work, also by design (leaving a network then hairpinning back into the same network).
Let's say you have an internal employee portal. Accessing it from somewhere internal goes to an address in private space, while accessing it from home gives you the globally routable address. The external route might have more firewalls / WAFs / IPSes etc in the way. There's no other way you could possibly achieve this than by serving a different IP for each of the two networks, and you can do that through DNS, by having an internal resolver and an external resolver.
> but you could just have two different fqdns
Good luck training your employees to use two different URLs depending on what network they originate from.
I'm surprised you've never seen this before.
Especially for universities it's very common to have the same hostname resolve to different servers, and provide different results, depending on whether you're inside the university network or not.
Some sites may require login if you're accessing them from the internet, but are freely accessible from the intranet.
Others may provide read-write access from inside, but limited read-only access from the outside.
Similar situations with split-horizon DNS are also common in corporate intranets or for people hosting Plex servers.
Ultimately all these issues are caused by NAT and would disappear if we switched to IPv6, but that would also circumvent the OP proposal.
No I haven't seen this before. I have seen however the behavior where login is required from the Internet but not on the university network; I had assumed this is based on checking the source IP of the request.
Similarly the use case of read-write access from inside, but limited read-only access from the outside is also achievable by checking the source IP.
But how do you check the source IP if everyone is behind NAT?
Take the following example (all IPs are examples):
1. University uses 10./8 internally, with 10.1./16 and 10.2./16 being students, 10.3./16 being admin, 10.4. being natsci institute, 10.5. being tech institute, etc.
2. You use radius to assign users to IP ranges depending on their group membership
3. If you access the website from one of these IP ranges, group membership is implied, otherwise you'll have to log in.
4. The website is accessible at 10.200.1.123 internally, and 205.123.123.123 externally with a CDN.
Without NAT, this would just work, and many universities still don't use NAT.
But with NAT, the website won't see my internal IP, just the gateway's IP, so it can't verify group membership.
In some situations I can push routes to end devices so they know 205.123.123.123 is available locally, but that's not always an option.
In this example the site is available externally through Cloudflare, with many other sites on the same IP.
So I'll have to use split horizon DNS instead.
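The resolver-side logic of that split-horizon setup can be sketched like this. The addresses are the example values above; the record table and hostname are hypothetical:

```python
import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")

# Hypothetical zone data: one name, two views
RECORDS = {
    "portal.example.edu": {
        "internal": "10.200.1.123",
        "external": "205.123.123.123",
    },
}

def resolve(name, client_ip):
    # Answer from the view matching the client's source network
    view = "internal" if ipaddress.ip_address(client_ip) in INTERNAL else "external"
    return RECORDS[name][view]

print(resolve("portal.example.edu", "10.1.5.9"))      # 10.200.1.123
print(resolve("portal.example.edu", "198.51.100.7"))  # 205.123.123.123
```

Real deployments do the same thing with two resolvers (or BIND views) rather than one function, but the decision point is identical: the answer depends on where the query comes from.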
Ohh, your Example Documentation was sooo close to being RFC-compliant! Except for those unnecessary abbreviations of CIDR notation, and...
You can use 203.0.113.0/24 in your examples because it is specifically reserved for this purpose by IETF/IANA: https://en.wikipedia.org/wiki/Reserved_IP_addresses#IPv4
I usually try to write comments with proper notation and proper example values, but if — like in this instance — I'm interrupted IRL and lose my draft, I'll focus on getting my idea across at all rather than writing the perfect comment. Even if that leads to excessive abbreviations, slightly off example values, inconsistency between you/I/passive voice or past/present/future tense.
In this case the comment you see is the third attempt, ultimately written on a phone (urgh), but I hope the idea came across nonetheless.
The web is currently just “controlled code execution” on your device. This will never work if not done properly. We need a real “web 3.0” where web apps can run natively and containerized, but done correctly, where they are properly sandboxed. This will bring performance and security.
The underlying problem is that we are trying to run untrusted code safely, with very few restrictions on its capabilities.
Disagree. Untrusted code was thought to be a meaningful term 20-30 years ago when you ran desktop OSs with big name software like Microsoft Word and Adobe, and games. What happened in reality is that this fence had false positives (ie Meta being one of your main adversaries) and an enormous amount of false negatives (all indie or small devs that would have their apps classified as viruses).
The model we need isn’t a boolean form of trust, but rather capabilities and permissions on a per-app, per-site or per-vendor basis. We already know this, but it’s incredibly tricky to design, retrofit and explain. Mobile OSs did a lot here, even if they are nowhere near perfect. For instance, they allow apps (by default even) to have private data that isn’t accessible from other apps on the same device.
Whether the code runs in an app or on a website isn’t actually important. There is no fundamental reason for the web to be constrained except user expectations and the design of permission systems.
This seems like a silly solution, considering we are in the middle of the IPv6 transition, where local networks use public addresses.
Even IPv6 has local devices. Determining whether that's a /64 or a /56 network may need some work, but the concept isn't all that different. Plus, you have ::1 and fe80::, of course.
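Python's `ipaddress` module exposes exactly those scopes, so a rough "could this be local?" classifier looks like the following. This is a sketch of the concept, not the proposal's actual algorithm:

```python
import ipaddress

def could_be_local(ip):
    # Loopback (::1), link-local (fe80::/10), or ULA/private (fd00::/8 etc.)
    a = ipaddress.ip_address(ip)
    return a.is_loopback or a.is_link_local or a.is_private

print(could_be_local("::1"))                  # True
print(could_be_local("fe80::1"))              # True
print(could_be_local("fd9e:c023:bb5f:1::1"))  # True  (ULA)
print(could_be_local("2606:4700::1"))         # False (global unicast)
```

The hard case the parent mentions is a LAN numbered out of global space, which this check cannot see; that's where the /64-or-/56 question comes in.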
Whatever happened to the IPv6 site-local and link-local address ranges (ranges specifically defined never to cross router or WAN boundaries)? They were in the original IPv6 standards, but don't seem to be implemented or supported. Or at least they aren't implemented or supported by my completely unconfigurable home cable router provided by my ISP.
IPv6 in normal ethernet/WLAN-style use requires link-local addresses for neighbour discovery to function (the equivalent of v4's ARP), so it very likely works. It's not meant for normal application usage, though. Site-local was phased out in favour of ULA etc.
But if you're not using global addresses you're probably doing it wrong. Global addressing doesn't mean you're globally reachable, confusing addressing vs reachability is the source of a lot of misunderstandings. You can think of it as "everyone gets their own piece of unique address space, not routed unless you want it to be".
So because IPv6 exists we shouldn’t even try?
It’s insane to me that random internet sites can try to poke at my network or local system for any purpose without me knowing and approving it.
With all we do for security these days this is such a massive hole it defies belief. Ever since I first saw an enterprise thing that just expected end users to run a local utility (really embedded web server) for their website to talk to I’ve been amazed this hasn’t been shut down.
Even in this case, it could be useful to limit the access websites have to local servers within your subnet (/64, etc), which might be a better way to define the “local” network.
(And then corporate/enterprise managed Chrome installs could have specific subnets added to the allow list)
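Defining "local" as the visiting site's own subnet could be sketched like this in Python; `prefix=64` is exactly the assumption being debated above, and an enterprise allow-list would just add more networks to check:

```python
import ipaddress

def same_subnet(a, b, prefix=64):
    # strict=False masks off the host bits of `a` to get its network
    net = ipaddress.ip_network((a, prefix), strict=False)
    return ipaddress.ip_address(b) in net

print(same_subnet("fd9e:c023:bb5f:1::1", "fd9e:c023:bb5f:1::abcd"))  # True
print(same_subnet("fd9e:c023:bb5f:1::1", "fd9e:c023:bb5f:2::1"))     # False
```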
I really hope this gets implemented, and more importantly, I really hope they have the ability to access an HTTP local site from an HTTPS domain.
There are so many excellent home automation and media/entertainment use cases for something like this.
this thing’s leaking. localhost ain’t private if random sites can hit it and get responses. devices still exposing ports like it’s 2003. prompts don’t help, people just click through till it goes away. cors not doing much, it’s just noise now. issue’s been sitting there forever, everyone patches on top but none of these local services even check who’s knocking. just answers. every time.
similar thread: https://news.ycombinator.com/item?id=44179276
The sooner this happens, the better.
Choose one:
Web browsers use sandboxing to keep you safe.
Web browsers can portscan your local network.
Ironic given that on my Mac, Chrome always asks to find other devices on my network but Firefox never does.
Won't this break every local-device oauth flow?
This seems like such a no-brainer, I’m shocked this isn’t already something sites need explicit permission to do
I don’t see this mentioned anywhere but Safari on iOS already does this. If you try to access a local network endpoint you’ll be asked to allow it by Safari, and the permission is per-site.
A browser can't tell if a site is on the local network. Ambiguous addresses may not be on the local network and conversely a local network may use global addresses especially with v6.
What is so hard in blocking apps on android from listening on random ports without permission?
The same thing that makes blocking ports on iOS and macOS so hard: there's barely any firewall on these devices by default, and the ones users may find cause more problems than users will ever think they solve.
Listening on a specific port is one of the most basic things software can possibly do. What's next, blocking apps from reading files?
Plus, this is also about blocking your phone's browser from accessing your printer, your router, or that docker container you're running without a password.
That doesn't seem right. Can't speak to macOS, but on Android every application is sandboxed. Restricting its capabilities is trivial. Android apps certainly ARE blocked from reading files, except for some files in its storage and files the user grants it access to.
Adding two Android permissions would fix this entire class of exploits: "run local network service", and "access local network services" (maybe with a whitelist).
It's not only about android, it's about exploiting local services too..
This should not be possible in the first place. There is no legitimate reason for it. Having users grant "consent" is just a way to make it more OK, not to stop it.
There are definitely legitimate reasons—for example, a browser-based CAD system communicating with a 3D mouse.
Is it just me or am I not seeing any example that isn't pure theory?
And if it is just me, fine I'll jump in - they should also make it so that users have to approve local network access three times. I worry about the theoretical security implications that come after they only approve local network access once.
Personally I had completely forgotten that anyone and anything can do this right now.
TLDR, IIUC: right now, random websites can try accessing content on local IPs. You can blind-load e.g. http://192.168.0.1/cgi-bin/login.cgi from JavaScript, iterating through a gigantic malicious list of such known useful URLs, then grep and send back whatever you want to share with advertisers, or try POSTing backdoors to a printer update page. No, we don't need that.
Of course, OTOH, many webapps today use localhost access to pass tokens and to talk to cooperating apps, but you only need access to 127.0.0.0/8 for that which is harder to abuse, so that range can be default exempted.
Disabling this, as proposed, does not affect your ability to open http://192.168.0.1/login.html, as that's just another "web" site. If JS on http://myNAS.local/search-local.html wants to access http://myLaptop.local:8000/myNasDesktopAppRemotingApi, only then you have to click some buttons to allow it.
Edit: uBlock Origin has a filter for it[1]; it was unchecked in mine.
1: https://news.ycombinator.com/item?id=44184799
> so that range can be default exempted
I disagree. I know it’s done, but I don’t think that makes it safe or smart.
Require the user to OK it and require the server to send a header with the one _exact_ port it will access. Require that the local server _must_ use CORS and allow that server.
No website not loaded from localhost should ever be allowed to just hit random local/private IPs and ports without explicit permission.
The server has to allow cross origin requests for it to return a response though, right?
Honestly I think cross-site requests were a mistake. Tracking cookies, hacks, XSS attacks, etc.
My relationship is with your site. If you want to outsource that to some other domain, do that on your servers, not in my browser.
The mistake was putting CORS on the server side. It should have been part of the browser. "Facebook.com wants to access foo.example.com: y/n?"
But then we would have had to educate users, and ad peddlers would have lost revenue.
Cross-site requests have been built in to the design of the WWW since the beginning. The whole idea of hyperlinking from one place to another, and amalgamating media from multiple sites into a single page, is the essence of the World Wide Web that Tim Berners-Lee conceived at CERN, based on the HyperCard stacks and Gopher and Wais services that had preceded it.
Of course it was only later that cookies and scripting and low-trust networks were introduced.
The WWW was conceived as more of a "desktop publishing" metaphor, where pages could be formatted and multimedia presentations could be made and served to the public. It was later that the browser was harnessed as a cross-platform application delivery front-end.
Also, many sites do carefully try to guard against "linking out" or letting the user escape their walled gardens without a warning or disclaimer. As much as they may rely on third-party analytics and ad servers, most web masters want the users to remain on their site, interacting with the same site, without following an external link that would end their engagement or web session.
CORS != hyperlinks. CORS is about random websites in your browser accessing other domains without your say-so. Websites doing stuff behind your back does feel antithetical to Tim Berners-Lee's ideals...
I'm aware of that but obviously there's a huge difference between the user clicking a link and navigating to a page on another domain and the site making that request on the user's behalf for a blob of JS.
I propose restricting android apps, not websites.
Android apps need UDP port binding to function. You can't do QUIC without UDP. Of course you can (should) restrict localhost bound ports to the namespaces of individual apps, but there is no easy solution to this problem at the moment.
If you rely on users having to click "yes", then you're just making phones harder to use because everyone still using Facebook or Instagram will just click whatever buttons make the app work.
On the other hand, I have yet to come up with a good reason why arbitrary websites need to set up direct connections to devices within the local network.
There's the IPv6 argument against the proposed measures, which requires work to determine if an address is local or global, but the IPv6 space is also much more difficult to enumerate than the IPv4 space that some websites try to scan. That doesn't mean IPv4 addresses shouldn't be protected at all, either. Even with an IPv6-shaped hole, blocking local networks (both IPv4 and local IPv6) by default makes sense for websites originating from outside.
IE did something very similar to this decades ago. They also had a system for displaying details about websites' privacy policies and data sharing. It's almost disheartening to see we're trying to come up with solutions to these problems again.
Android apps obviously shouldn't be able to just open or read a global communication channel on your device. But this applies to websites too.
Thus ignoring local private web servers, and bypassing local network administered zone policy.
Seems like a sleazy move to draw down even more user DNS traffic data, and a worse solution than the default mitigation policy in NoScript =3
Why can browsers do the kinds of things they do at all?
Why does a web browser need USB or Bluetooth support? They don't.
Browsers should not be the universal platform. They’ve become the universal attack vector.
With WebUSB, you can program a microcontroller without needing to install local software. With Web Bluetooth, you can wirelessly capture data from + send commands to that microcontroller.
As a developer, these standards prevent you from needing to maintain separate implementations for Windows/macOS/Linux/Android.
As a user, they let you grant and revoke sandbox permissions in a granular way, including fully removing the web app from your computer.
Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.
WebUSB and Web Bluetooth are opt-in when the site requests a connection/permission, as opposed to unlimited access by default for native apps. And if you don't want to use them, you can choose a browser that doesn't implement those standards.
What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?
I’m ok with needing non-browser software for those things.
> Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.
Sure, until advertising companies find ways around and through those sandboxes because browser authors want the browsers be capable of more, in the name of a cross platform solution. The more a browser can do, the more surface area the sandbox has. (An advertising company makes the most popular browser, by the way.)
> What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?
There isn’t one, other than maybe video game engines, but it doesn’t matter. OS vendors need to work to make cross-platform software possible; it’s their fault we need a cross-platform solution at all. Every OS is a construct, and they were constructed to be different for arbitrary reasons.
A good app-permission model in the browser is much more likely to happen, but I don’t see that really happening, either. “Too inconvenient for users [and our own in-house advertisers/malware authors]” will be the reason.
MacOS handles permissions pretty well, but it could do better. If something wants local network permission, the user gets prompted. If the user says no, those network requests fail. Same with filesystem access. Linux will never have anything like this, nor will Windows, but it’s what security looks like, probably.
Users will say yes to those prompts ultimately, because as soon as users have the ability to say “no” on all platforms, sites will simply gate site functionality behind the granting of those permissions because the authors of those sites want that data so badly.
The only thing that is really going to stop behavior like this is law, and that is NEVER going to happen in the US.
So, short of laws, browsers themselves must stop doing stupid crap like allowing local network access from sites that aren’t on the local network, and nonsense stuff like WebUSB. We need to give up on the idea that anyone can be safe on a platform when we want that platform to be able to do anything. Browsers must have boundaries.
Operating systems should be the police, probably, and not browsers. Web stuff is already slow as hell, and browsers should be less capable, not more capable for both security reasons and speed reasons.
Advertising firms hate this.
> A proposal to restrict sites from accessing a users' local network
A proposal to treat web browsers as malware? Why would a web browser connect to a socket/the internet?
The proposal is directed at the websites in the browser (using JS, embedded images or whatever), not the code that implements the browser.
just the fact that this comes from google is a hard pass for me. they sell so many adwords scams that they clearly do not give a damn about security. “security” from google is just another one of their trojan horses.
Don't post shallow dismissals. The same company runs Project Zero[1], which has a major positive security impact.
[1]: https://googleprojectzero.blogspot.com/
project zero is ZERO compared to the millions of little old ladies around the world getting scammed through adwords. only security big g cares about is its own. they have the tools to laser-in on and punish the subtlest of wrongthink on youtube, yet it’s just too tall of an order to focus the same laser on tech support scammers…
Google loves wreaking havoc on web standards. Is there really anything anyone can do about it at this point? The number of us using alternative browsers are a drop in the bucket when compared to Chrome's market share.
Google open source the implementation of them which any other browser is free to use.
make this malicious website and show me that it works. I have doubts.
I don't like the implications of this. Say you want to host a game that has a LAN play component. That would be illegal.
The CIA isn't going to like this. I bet that Google monopoly case suddenly reaches a new resolution.
I understand the idea behind it and am still kinda chewing on the scope of it all. It will probably break some enterprise applications and cause some help desk or group policy/profile headaches for some.
It would be nice to know when a site is probing the local network. But by the same token, here is Google once again putting barriers on self sufficiency and using them to promote their PaaS goals.
They'll gladly narc on your self-hosted application doing what it's supposed to do, but what about the 23 separate calls to Google CDN, ads, fonts, etc. that every website has your browser make?
I tend to believe that this particular functionality is no longer of any use to Google, which is why they want to deprecate it to raise the barrier of entry for others.
Idk, I like the idea of my browser warning me when a random website I visit tries to talk to my network. if there's a legitimate reason I can still click yes. This is orthogonal to any ads and data collection.
I have this today from macOS. To me it feels more appropriate to have the OS attempt to secure running applications.
No you don’t - you get a single permission prompt for the entire browser. You definitely don’t get any per-site permission options from the OS
Ah I misunderstood, thank you
I agree that any newly proposed standards for the web coming from Google should be met with a skeptical eye — they aren’t good stewards IMO and are usually self-serving.
I’d be interested in hearing what the folks at Ladybird think of this proposal.
On a quick look, isn't this a bit antithetical to the concept of the internet as a decentralized and hierarchical system? You have to route through the public internet to interoperate with the rest of the public internet?