mystifyingpoi 2 days ago

I like this at first glance. The idea of a random website probing arbitrary local IPs (or any IPs, for that matter) with HTTP requests is insane. I wouldn't care if it breaks some enterprise apps or integrations - enterprises could re-enable this "feature" via management tools, normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".

  • buildfocus 2 days ago

    This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.

    The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.

    This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.

    • xp84 2 days ago

      Doesn't CORS just restrict whether the webpage JS context gets to see the response of the target request? The request itself happens anyway, right?

      So the attack vector that I can imagine is that JS on the browser can issue a specially crafted request to a vulnerable printer or whatever that triggers arbitrary code execution on that other device. That code might be sufficient to cause the printer to carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.
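
      For example (hypothetical gateway address; this is the pre-proposal behavior the thread is discussing), any page can run this today:

          // A "simple" GET: no preflight is sent. The promise rejects with a
          // TypeError because the reply carries no CORS headers, but by then
          // the request has already reached the device.
          fetch('http://192.168.1.1/router?reboot=1')
            .then(res => console.log('readable:', res.status))
            .catch(() => console.log('response hidden from JS, but the request was sent'));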

      • jonchurch_ 21 hours ago

        I think CORS is so hard for us to hold in our heads in large part due to how much is stuffed into the algorithm.

        It may send an OPTIONS request, or not.

        It may block a request being sent (in response to OPTIONS) or block a response from being read.

        It may restrict which headers can be set, or read.

        It may downgrade the request you were sending silently, or consider your request valid but the response off limits.

        It is a matrix of independent gates essentially.

        Even the language we use is imprecise. CORS itself is not really doing any of this or blocking things. As others pointed out, it's the Same-Origin Policy that is the strict one, and CORS is really an exception engine that lets us punch through that security layer.

      • tombakt 2 days ago

        No, a preflight (OPTIONS) request is sent by the browser first prior to sending the request initiated by the application. I would be surprised if it is possible for the client browser to control this OPTIONS request more than just the URL. I am curious if anyone else has any input on this topic though.

        Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.

        • varenc a day ago

          This tag:

              <img src="http://192.168.1.1/router?reboot=1">
          
          triggers a local network GET request without any CORS involvement.

          • grrowl a day ago

            I remember back in the day you could embed <img src="http://someothersite.com/forum/ucp.php?mode=logout"> in your forum signature and screw with everyone's sessions across the web

            • lobsterthief 21 hours ago

              Haha I remember that. The solution at the time for many forum admins was to simply state that anyone found to be doing that would be permabanned. Which was enough to make it stop completely, at least for the forums that I moderated. Different times indeed.

              • sedatk 10 hours ago

                Or you could just make the logout route POST-only. Problem solved.
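
                For illustration, a minimal sketch (Express-style, not the actual forum software): an <img> can only ever produce a GET, so a POST-only route shrugs it off. A hidden cross-site form could still POST, though, which is why CSRF tokens remained the fuller fix.

                    const express = require('express');
                    const app = express();

                    // GET /logout no longer exists, so <img src=".../logout">
                    // does nothing.
                    app.post('/logout', (req, res) => {
                      // ...however the session is actually torn down...
                      res.redirect('/');
                    });

                    app.listen(3000);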

            • anthk a day ago

              <img src="C:\con\con"></img>

              • jbverschoor 21 hours ago

                It's essentially the same, as many apps use an HTTP server + HTML client instead of something native or with another IPC.

          • buildfocus a day ago

            The expectation is that this should not work - well-behaved network devices shouldn't accept a blind GET like this for destructive operations, and there are plenty of other good reasons for that. There's no real alternative unless you're also going to block page redirects & links to these URLs, which trigger a similar GET. That would make it impossible to access any local network page without typing it manually.

            While it clearly isn't a hard guarantee, in practice it does seem to generally work: these have been known issues for decades without apparent massive exploits. That CORS restrictions block probing (no response provided) helps make this all significantly more difficult.

            • oasisbob 18 hours ago

              "No true Scotsman allows GETs with side effects" is not a strong argument

              It's not just HTTP where this is a problem. There are enough http-ish protocols where protocol smuggling confusion is a risk. It's possible to send chimeric HTTP requests at devices which then interpret them as a protocol other than http.

              • bawolff 4 hours ago

                Yes, which is why web browsers, way back even in the Netscape Navigator era, had a blacklist of ports that are disallowed.

          • lyu07282 a day ago

            Exactly you can also trigger forms for POST or DELETE etc. this is called CSRF if the endpoint doesn't validate some token in the request. CORS only protects against unauthorized xhr requests. All decades old OWASP basics really.

            • bawolff 4 hours ago

              > Exactly you can also trigger forms for POST or DELETE etc

              You can't do a DELETE from a form. You have to use AJAX, and a cross-origin DELETE needs a preflight.

              To nitpick, CSRF is not the ability to use forms per se, but relying solely on the existence of a cookie to authorize actions with side effects.

            • formerly_proven a day ago

              That highly ranked comments on HN (an audience with way above-average engineering interest in software and security) get this wrong kinda explains why these things keep being an issue.

              • lyu07282 20 hours ago

                I don't know why you are getting downvoted, you do have a point. Some of the comments appear to know what CORS headers are, but neither their purpose nor how they relate to CSRF, which is worrying. It's not meant as disparaging. My university taught a course on OWASP, thankfully, otherwise I'd probably also be oblivious.

                • asmor 13 hours ago

                  If you're going cross-domain with XHR, I'd hope you're mostly sending JSON request bodies and not forms.

                  Though to be fair, a lot of web frameworks have methods to bind named inputs that allow either.

                  • bawolff 4 hours ago

                    This misses the point a bit. CSRF usually applies to people who want only same-domain requests and don't realize that cross-domain is an option for the attacker.

                    In the modern web it's much less of an issue due to SameSite cookies being the default.

        • LegionMammal978 2 days ago

          The idea is, the malicious actor would use a 'simple request' that doesn't need a preflight (basically, a GET or POST request with form data or plain text), and manage to construct a payload that exploits the target device. But I have yet to see a realistic example of such a payload (the paper I read about the idea only vaguely pointed at the existence of polyglot payloads).
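
          For instance, something like this (hypothetical local endpoint) goes out with no preflight at all:

              // A "simple" POST per the CORS spec: text/plain needs no OPTIONS.
              // The page can never read the reply, but the device still
              // receives and processes the body.
              fetch('http://192.168.1.50/api/config', {
                method: 'POST',
                mode: 'no-cors',
                headers: { 'Content-Type': 'text/plain' },
                body: '{"admin_password": "pwned"}', // JSON smuggled as text/plain
              });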

          • MajesticHobo2 2 days ago

            There doesn't need to be any kind of "polyglot payload". Local network services and devices that accept only simple HTTP requests are extremely common. The request will go through and alter state, etc.; you just won't be able to read the response from the browser.

            • EGreg a day ago

              Exactly. People who are answering must not have been aware of “simple” requests not requiring preflight.

          • Sophira 21 hours ago

            I can give an example of this; I found such a vulnerability a few years ago now in an application I use regularly.

            The target application in this case was trying to validate incoming POST requests by checking that the incoming MIME type was "application/json". Normally, you can't make unauthorized XHR requests with this MIME type, as CORS will send a preflight.

            However, because of the way it was checking for this (checking whether the Content-Type header contained the text "application/json"), it was relatively easy to construct a new Content-Type header that bypasses CORS:

            Content-Type: multipart/form-data; boundary=application/json

            It's worth bearing in mind in this case that the payload doesn't actually have to be form data - the application was expecting JSON, after all! As long as the web server doesn't do its own data validation (which it didn't in this case), we can just pass JSON as normal.

            This was particularly bad because the application allowed arbitrary code execution via this endpoint! It was fixed, but in my opinion, something like that should never have been exposed to the network in the first place.
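
            As a sketch, the attacking page could have done something like this (endpoint and payload hypothetical; the real app differed):

                // 'multipart/form-data; boundary=application/json' parses as the
                // CORS-safelisted type multipart/form-data, so no preflight is
                // sent - yet a naive substring check for "application/json"
                // on the server still matches.
                fetch('http://localhost:9000/api/exec', {
                  method: 'POST',
                  mode: 'no-cors',
                  headers: {
                    'Content-Type': 'multipart/form-data; boundary=application/json',
                  },
                  body: JSON.stringify({ cmd: 'do-something-evil' }),
                });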

            • apitman 5 hours ago

              This is a great example; thanks.

          • freeone3000 2 days ago

            Oh, you can only send arbitrary text or form submissions. That’s SO MUCH.

          • chuckadams a day ago

            Some devices don't bother to limit the size of the GET, which can enable a DoS attack at least, a buffer overflow at worst. But I think the most typical vector is a form-data POST, which isn't CSRF-protected because "it's on localhost so it's safe, right?"

            I've been that sloppy with dev servers too. Usually not listening on port 80 but that's hardly Ft Knox.

          • drexlspivey 2 days ago

            It can send a JSON-RPC request to your Bitcoin node and empty your wallet.

            • LegionMammal978 15 hours ago

              Do you know of any such node that doesn't check the Content-Type of requests and also has no authentication?

        • bawolff 4 hours ago

          > No, a preflight (OPTIONS) request is sent by the browser first prior to sending the request initiated by the application.

          Note: preflight is not required for any type of request that browser JS was capable of making before CORS was introduced (except for local network).

          So a simple GET or POST does not require OPTIONS, but if you set a header it might require OPTIONS (unless it's a header you could set in the pre-CORS world).

        • rafram a day ago

          You’re forgetting { mode: 'no-cors' }, which makes the response opaque (no way to read the data) but completely bypasses the CORS preflight request and header checks.

          • jonchurch_ 21 hours ago

            This is missing important context. You are correct that preflight will be skipped, but there are further restrictions when operating in this mode. They don't guarantee your server is safe, but it does force operation under a “safer” subset of verbs and header fields.

            The browser will restrict the headers and methods of requests that can be sent in no-cors mode. (silent censoring in the case of headers, more specifically)

            Anything besides GET, HEAD, POST will result in an error in browser, and not be sent.

            All headers will be dropped besides the CORS safelisted headers [0]

            And Content-Type must be one of application/x-www-form-urlencoded, multipart/form-data, or text/plain. Attempting to use anything else will see the header replaced with text/plain.

            [0] https://developer.mozilla.org/en-US/docs/Glossary/CORS-safel...
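
            To illustrate the silent rewriting (based on my reading of the Fetch spec; address hypothetical):

                // Attempting a non-safelisted Content-Type in no-cors mode
                // doesn't error - the header set is silently ignored, and the
                // string body re-defaults it, so the request leaves as text/plain.
                fetch('http://192.168.0.10/api', {
                  method: 'POST',
                  mode: 'no-cors',
                  headers: { 'Content-Type': 'application/json' }, // ignored
                  body: '{"a": 1}',
                });
                // On the wire: Content-Type: text/plain;charset=UTF-8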

            • rafram 21 hours ago

              That’s just not that big of a restriction. Anecdotally, very few JSON APIs I’ve worked with have bothered to check the request Content-Type. (“Minimal” web frameworks without built-in security middleware have been very harmful in this respect.) People don’t know about this attack vector and don’t design their backends to prevent it.

              • jonchurch_ 21 hours ago

                I agree that it is not a robust safety net. But in the instance you're citing, that's a misconfigured server.

                What framework allows you to setup a misconfigured parser out of the box?

                I don't mean that as a challenge, but as a server framework maintainer I'm genuinely curious. In Express we would definitely allow people to opt into this, but you have to explicitly make the choice to go and configure body-parser's JSON parser to accept all content types via a no-op function for type checking.

                Meaning, it's hard to get into this state!

                Edit to add: there are myriad ways to misconfigure a webserver to make it insecure without realizing. But IMO that is the point of using a server framework! To make it less likely devs will footgun via sane defaults that prevent these scenarios unless someone really wants to make a different choice.
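
                For the curious, the Express opt-in looks roughly like this (a sketch; the no-op type check is the deliberate misconfiguration):

                    const express = require('express');
                    const app = express();

                    // Default: express.json() only parses bodies whose
                    // Content-Type is application/json, so smuggled
                    // text/plain CSRF bodies are never parsed.
                    // Misconfigured: accept any Content-Type at all.
                    app.use(express.json({ type: () => true }));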

                • rafram 19 hours ago

                  SvelteKit for sure, and any other JS framework that uses the built-in Request class (which doesn’t check the Content-Type when you call json()).

                  I don’t know the exact frameworks, but I consume a lot of random undocumented backend APIs (web scraper work) and 95% of the time they’re fine with JSON requests with Content-Type: text/plain.

            • afavour 21 hours ago

              I think you’re making those restrictions out to be bigger than they are.

              Does no-cors allow a nefarious company to send a POST request to a local server, running in an app, containing whatever arbitrary data they’d like? Yes, it does. When you control the server side the inability to set custom headers etc doesn’t really matter.

              • jonchurch_ 20 hours ago

                My intent isn't to convince people this is a safe mode, but to share knowledge in the hope someone learns something new today.

                I didn't mean it to come across that way. The spec does what the spec does; we should all be aware of it so we can make informed decisions.

          • chuckadams 21 hours ago

            Thankfully no-cors also restricts most headers, including setting Content-Type to anything but the built-in form types. So while CSRF doesn't even need a click because of no-cors, it's still not possible to do CSRF against a JSON-only API. Just be sure the server is actually set up to restrict the content type -- most frameworks will "helpfully" accept and convert form data by default.

        • thayne 19 hours ago

          It depends. GET requests are assumed not to have side effects, so they often don't trigger a preflight request (although there are cases where they do). But of course, not all sites follow those semantics, and it wouldn't surprise me if printer or router firmware used GETs to do something dangerous.

          Also, form submission famously doesn't require CORS.
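
          e.g. a hidden auto-submitting form (hypothetical router endpoint) is sent with no preflight and no CORS involvement at all:

              <!-- Classic CSRF: form posts predate CORS, so the browser fires
                   this POST at the local device as soon as the page loads. -->
              <form action="http://192.168.1.1/apply" method="POST" id="f">
                <input type="hidden" name="dns_server" value="203.0.113.66">
              </form>
              <script>document.getElementById('f').submit();</script>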

        • layer8 2 days ago

          There is a limited, but potentially effective, attack surface via URL parameters.

        • rerdavies a day ago

          I can confirm that local websites that don't implement CORS via the OPTIONS request cannot be browsed with mainstream browsers. Does nothing to prevent non-browser applications running on the local network from accessing your website.

          As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.

          If somebody would fix the "no https for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-https websites also have no access to browser crypto APIs, so roll-your-own auth (the horror) isn't an option either. Frustrating!

          • dgoldstein0 a day ago

            I don't believe this is true? As others have pointed out, preflight OPTIONS requests only happen for non-simple requests. CORS response headers are still required to read a cross-origin response, but that still leaves a huge window for a malicious site to send side-effectful requests to local network devices running badly implemented web servers.

          • rerdavies 19 hours ago

            [edit]: I was wrong. Just tested it a moment ago. It turns out NOT to be true. My web server during normal operation is currently NOT getting OPTIONS requests at all.

            Wondering whether I triggered CORS requests when I was struggling with IPv6 problems. Or maybe it triggers when I redirect index.html requests from IPv6 to IPv4 addresses. Or maybe I got caught by the earlier rollout of version one of this proposal? There was definitely a time while I was developing pipedal when none of my images displayed because my web server wasn't doing CORS. But whatever my excuse might be, I was wrong. :-/

      • nbadg 2 days ago

        Or simply perform a timing attack as a way of exploring the local network. I'm not sure if the browser implementation returns immediately after the request is made (e.g. when the fetch API is called) or only once the response is received. Presumably the latter, which would expose it to timing attacks as a way of exploring the network.

        • dgoldstein0 a day ago

          Almost every JS API for making requests is asynchronous, so they do return after the request is made. The exception is synchronous XHR calls, but I'm not sure if those are still supported.

          ... Anyhow, I think it doesn't matter, because you can listen for the error/failure of most async requests. CORS errors are equivalent to network errors - the browser tells the JS it got status code 0 with no further information - but the timing of that could lead to some sort of inference? Hard to say what that would be, though. Maybe if you knew the target webserver was slow but would respond to certain requests, a slower failed local request could mean it actually reached a target device.

          That said, why not just fire off simple HTTP requests with your intended payload? Abusing the CSRF vulnerabilities of local network devices seems far easier than trying to make something out of a timing attack here.

    • sidewndr46 21 hours ago

      This is also a misunderstanding. CORS only applies to the Layer 7 communication; the rest you can figure out from the timing of that.

      Significant components of the browser, such as WebSockets, have no such restrictions at all.

      • James_K 19 hours ago

        Won't the browser still append the "Origin" field to WebSocket requests, allowing servers to reject them?

        • bstsb 13 hours ago

          yes, and that's exactly how Discord's WebSocket communication checks work (allowing them to offer a non-scheme "open in app" from the website).

          they also had some kind of RPC websocket system for game developers, but that appears to have been abandoned: https://discord.com/developers/docs/topics/rpc

      • afiori 19 hours ago

        A WebSocket starts as a normal HTTP request, so it is subject to CORS if the initial request is (e.g. if it was a POST).

        • hnav 18 hours ago

          websockets aren't subject to CORS, they send the initiating webpage in the Origin header but the server has to decide whether that's allowed.
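
          A minimal sketch of that server-side decision, using the Node `ws` package (allowed origin hypothetical):

              const { WebSocketServer } = require('ws');

              const wss = new WebSocketServer({ port: 8080 });

              wss.on('connection', (socket, req) => {
                // Browsers attach Origin to the upgrade request, but
                // non-browser clients can forge it - this only screens out
                // hostile *websites*, not hostile programs.
                if (req.headers.origin !== 'https://app.example.com') {
                  socket.close(1008, 'origin not allowed'); // 1008 = policy violation
                  return;
                }
                // ...handle the trusted connection...
              });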

        • odo1242 19 hours ago

          Unfortunately, the initial WebSocket HTTP request is defined to always be a GET request.

    • rnicholus a day ago

      CORS doesn't protect you from anything. Quite the opposite: it _allows_ cross-origin communication (provided you follow the spec). The same-origin policy is what protects you.

    • friendzis a day ago

      > The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.

      False. CORS only gates non-simple requests (via OPTIONS); simple requests are sent regardless of CORS config, with no gating whatsoever.

    • Aeolun 2 days ago

      How would Facebook do that? They scan all likely local ranges for what could be your phone, and have a web server running on the phone? That seems more like a problem of allowing the phone app to start something like that and keep it running in the background.

      • londons_explore a day ago

        WebRTC allows you to find the local ranges.

        Typically there are only 256 IPs, so a scan of them all is almost instant.

    • IshKebab 2 days ago

      I think this can be circumvented by DNS rebinding, though your requests won't have the authentication cookies for the target, so you would still need some kind of exploit (or a completely unprotected target).

    • hsbauauvhabzb 2 days ago

      CORS prevents the site from accessing the response body. In some scenarios a website could, for example, blindly attempt to authenticate to your router and modify settings by guessing your router's brand/model and password.

    • ameliaquining 2 days ago

      Is this kind of attack actually in scope for this proposal? The explainer doesn't mention it.

    • h4ck_th3_pl4n3t 13 hours ago

      > Local network devices are protected from random websites by CORS

      C'mon. We all know that 99% of the time, Access-Control-Allow-Origin is set to * and not to the specific IP of the web service.

      Also, CORS is not in the control of the user while the proposal is. And that's a huge difference.

    • ars 11 hours ago

      > but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.

      This isn't going to help for that. The locally installed app, and the website, can both, independently, open a connection to a 3rd party. There's probably enough fingerprinting available for the 3rd party to be able to match them.

    • kmeisthax 2 days ago

      THE MYTH OF "CONSENSUAL" REQUESTS

      Client: I consent

      Server: I consent

      User: I DON'T!

      ISN'T THERE SOMEBODY YOU FORGOT TO ASK?

      • cwillu a day ago

        Does anyone remember when the user-agent was an agent of the user?

  • jm4 a day ago

    This sounds crazy to me. Why should websites ever have access to the local network? That presents an entirely new threat model for which we don’t have a solution. Is there even a use case for this for which there isn’t already a better solution?

    • loaph a day ago

      I've used https://pairdrop.net/ before to share files between devices on the same LAN. It obviously wouldn't have to be a website, but it's pretty convenient since all my devices I wanted to share files on already have a browser.

      • A4ET8a8uTh0_v2 a day ago

        Same use case, but I remember getting approval prompts (though come to think of it, those were not mandated, but application-specific prompts to ensure you consciously chose to share/receive items). To your point, there are valid use cases for this, but some tightening would likely be beneficial.

    • necovek 19 hours ago

      Not a local network but a localhost example: due to the lousy private-certificate capability APIs in web browsers, this is commonly used for signing with electronic IDs in countries that issue smartcard certificates to their citizens (common in Europe). Basically, a web page contacts a web server hosted on localhost that is integrated with a local PKCS library, providing a signing and encryption API.

      One of the solutions in the market was open source up to a point (Nowina NexU), but it seems it's gone from GitHub.

      For the local network, you can imagine similar use cases — keep something inside the local network (e.g. an API to an input device; imagine it being a scanner), but enable a server-side function (e.g. OCR) from the vendor's web page. With ZeroConf and DHCP domain-name extensions, it can be a pretty seamless option for developers to consider.

    • Thorrez a day ago

      >Why should websites ever have access to the local network?

      It's just the default. So far, browsers haven't really given different IP ranges different security.

      evil.com is allowed to make requests to bank.com . Similarly, evil.com is allowed to make requests to foo.com even if foo.com DNS resolves to 127.0.0.1 .

      • chuckadams 21 hours ago

        > It's just the default. So far, browsers haven't really given different IP ranges different security.

        I remember having "zone" settings in Internet Explorer 20 years ago, and ISTR it did IP ranges as well as domains. Don't think it did anything about cross-zone security though.

    • EvanAnderson 21 hours ago

      > Is there even a use case for this for which there isn’t already a better solution?

      I deal with a third-party hosted webapp that enables extra functionality when a webserver hosted on localhost is present. The local webserver exposes an API allowing the application to interact more closely with the host OS (think locally-attached devices and servers on the local network). If the locally-installed webserver isn't present, the hosted app hides the extra functionality.

      Limiting browser access to the loopback subnet (127.0.0.0/8) would be fine by me, as a sysadmin, so long as I have the option to enable it for applications where it's desired.

    • Thorrez a day ago

      >That presents an entirely new threat model for which we don’t have a solution.

      What attack do you think doesn't have a solution? CSRF attacks? The solution is CSRF tokens, or checking the Origin header, same as how non-local-network sites protect against CSRF. DNS rebinding attacks? The solution is checking the Host header.
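
      Both checks are a few lines in an Express-style device server (a sketch; hostnames hypothetical, and it ignores IPv6 bracket syntax):

          const ALLOWED_HOSTS = new Set(['192.168.1.10', 'printer.local']);

          app.use((req, res, next) => {
            // CSRF: browsers attach Origin to cross-origin requests.
            const origin = req.headers.origin;
            if (origin && !ALLOWED_HOSTS.has(new URL(origin).hostname)) {
              return res.status(403).send('cross-origin request refused');
            }
            // DNS rebinding: the attacker's hostname resolves to this device,
            // but the Host header still names the attacker's domain.
            const host = (req.headers.host || '').split(':')[0];
            if (!ALLOWED_HOSTS.has(host)) {
              return res.status(403).send('unexpected Host header');
            }
            next();
          });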

    • charcircuit a day ago

      >for which we don’t have a solution

      It's called ZTA, Zero Trust Architecture. Devices shouldn't assume the LAN is secure.

      • udev4096 5 hours ago

        Exactly, LAN is not a "secure" network field. Authenticate everything from everywhere all the time

      • esseph 5 hours ago

        You got grandma running ZTA now?

        This is a problem impacting mass users, not just technical ones.

  • lucideer 2 days ago

    > normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".

    macOS currently does this (per app, not per site) & most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.

    • mastazi 2 days ago

      Do we have any evidence that most users just click yes?

      My parents, who are non-technical, click no by default to everything; sometimes they ask for my assistance when something doesn't work, and often it's because they denied some permission that is essential for an app to work, e.g. maybe they denied access to the microphone for an audio-call app.

      Unless we have statistics, I don't think we can make assumptions.

      • technion a day ago

        The number of "malware" infections I've responded to over the years that involved browser push notifications to Windows desktops is completely absurd. Chrome and Edge clearly ask for permission to enable a browser push.

        The moment a user gets this permission request, as far as I can tell, they will hit approve 100% of the time. We have one office where the staff have complained that it's impossible to look at astrology websites without committing to desktop popups selling McAfee. Which implies those staff, having been trained to hit "no", believe it's impossible to do so.

        (yes, we can disable with a GPO, which I heavily promote, but that org has political problems).

      • Aeolun 2 days ago

        As a counterexample, I find all these dialogs annoying as hell and click yes to almost everything. If I'm installing the app, I have pre-vetted it to ensure it's marginally trustworthy.

      • lucideer a day ago

        I have no statistics, but I wouldn't consider older parents the typical case here. My parents never click yes on anything, but my young colleagues in non-engineering roles in my office do. And I'd say even a decent % of the engineering colleagues do too - especially the vibe coders. And they all spend a lot more time on their computers than my parents.

        • mixmastamyk 19 hours ago

          Interesting parallel between the older parents, who (may have finally learned to) deny, and the young folks, supposed digital natives, a majority of whom don't really understand how computers work.

    • paxys 2 days ago

      People accept permission prompts from apps because they consciously downloaded the app and generally have an idea about the developer and what the app does. If a social media app asks for access to your photos it's easy to understand why; same with a music streamer wanting to connect to your smart speaker.

      A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.

      • poincaredisk 2 days ago

        "Please accept the [tech word salad] popup to verify your identity"

        Maybe this won't fool you, but it would trick 90% of internet users. (And even if it was 20% instead of 90%, that's still way too much.)

        • quacksilver a day ago

          I have seen it posed as 'This site has bot protection. Confirm that you are not a bot by clicking yes', trying to mimic the modern Cloudflare / Google captchas.

      • lucideer a day ago

        To be clear: implementing this in browser on a per site basis would be a massive improvement over in-OS/per-app granularity. I want this popup in my browser.

        But I was just pointing out that, while I'll make good use of it, it still probably won't offer sufficient protection (from themselves) for most.

    • lxgr 21 hours ago

      And annoyingly, for some reason it does not remember this decision properly. Chrome asks me about local access every few weeks, it seems.

      Yes, as a Chromecast user, please do give me a break from the prompts, macOS – or maybe just show them for Airplay with equal frequency and see how your users like that.

    • grokkedit 2 days ago

      problem is: without allowing it, web UIs like Synology's won't work, since they require your browser to connect to the local network... as it is, it's not great

      • planb 2 days ago

        Why? I’d guess requests from a local network site to itself (maybe even to others on the same network) will be allowed.

        • zbuttram 2 days ago

          With the proposal in the OP, I would think so yes. But the MacOS setting mentioned directly above is blanket per-app at the OS level.

        • grokkedit a day ago

          yes, but I'm replying to the comment that explains how macOS currently works

      • jay_kyburz 2 days ago

        This proposal is for websites outside your network contacting inside your network. I assume local IPs will still work.

        • grokkedit a day ago

          I'm replying to the comment that explains how macOS currently works

        • Marsymars 2 days ago

          Note that the proposal also covers loopbacks, so domain names for local access would also still work.

    • mystified5016 2 days ago

      I can't believe that anyone still thinks a popup permission modal offers any type of security. Windows UAC has shown quite definitively that users will always click through any modal in their way without thought or comprehension.

      Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.

      • ameliaquining 2 days ago

        I don't think anyone's under the impression that this is a perfect solution. But it's better than nothing, and the options are this, nothing, or a security barrier that can't be bypassed with a permission prompt. And it was determined that the latter would break too many existing sites that have legitimate (i.e., doing something the end user actively wants) reason to talk to local devices.

      • knome a day ago

        I wonder how much of that is on the modal itself. What if we instead popped up an alert that said "blocked an attempt to talk to your local devices, since this is generally a dangerous thing for websites to do. <dismiss>. To change this for this site, go to settings/site-security", making approval a more annoying, multi-click, deliberate affair, and defaulting the knee-jerk single-click dismissal to the safer option of refusal?

      • A4ET8a8uTh0_v2 a day ago

        Maybe. But eventually they will learn. In the meantime, other users, who at least try to stay somewhat safe (if that is even possible these days), can make appropriate adjustments.

      • lxgr 21 hours ago

        I think it does, in many (but definitely not all) contexts.

        For example, it's pretty straightforward what camera, push notification, or location access means. Contact sharing is already a stretch ("to connect you with your friends, please grant...").

        "Local network access"? Probably not.

      • xp84 2 days ago

        This is so true. The modern Mac is a sea of Allow/Don't Allow prompts, mixed with the slightly more infantilizing alternative of the "Block" / "Open System Preferences" where you have to prove you know what you're doing by manually browsing for the app to grant the permission to, to add it to the list of ones with whatever permission.

        They're just two different approaches with the same flaw: People with no clue how tech works cannot completely protect themselves from any possible attacker, while also having sophisticated networked features. Nobody has provided a decent alternative other than some kind of fully bubble-wrapped limited account using Group Policies, to ban all those perms from even being asked for.

        • donnachangstein 2 days ago

          > The modern Mac is a sea of Allow/Don't Allow prompts

          Remember when they used to mock this as part of their marketing?

          https://www.youtube.com/watch?v=DUPxkzV1RTc

          • GeekyBear a day ago

            Windows Vista would spawn a permissions prompt when users did something as innocuous as creating a shortcut on their desktop.

            Microsoft deserved to be mocked for that implementation.

            • Gigachad a day ago

              macOS shows a permission dialog when I plug my AirPods in to charge. I have no idea what I'm even giving permission for, but it pops up every time.

              • GeekyBear 20 hours ago

                Asking you if you trust a device before opening a data connection to it is simply not the same thing as asking the person who just created a shortcut if they should be allowed to do that.

                • esseph 5 hours ago

                  How do you know the person created the shortcut and not some malware trying to get a user to click on an executable and elevate permissions?

            • AStonesThrow a day ago

              I once encountered malware on my roommate’s Windows 98 system. It was a worm designed to rewrite every image file as a VBS script that would replicate and re-infect every possible file whenever it was clicked or executed. It hid the VBS extensions and masqueraded as the original images.

              Creation of a shortcut on Windows is not necessarily innocuous. It was a common first vector to drop malware as users were accustomed to installing software that did the same thing. A Windows shortcut can hide an arbitrary pathname, arbitrary command-line arguments, a custom icon, and more; these can be modified at any time.

              So whether it was a mistake for UAC to be overzealous or obstructionist, or Microsoft was already being mocked for poor security, perhaps they weren’t wrong to raise awareness about such maneuvers.

              • GeekyBear a day ago

                A user creating a shortcut manually is not something that requires a permissions prompt.

                If you want to teach users to ignore security prompts, then completely pointless nagging is how you do it.

        • Gigachad a day ago

          A better option would be to put Mark Zuckerberg in prison for deploying malware to a massive number of people.

  • broguinn a day ago

    This Web Security lecture by Feross Aboukhadijeh has a great example of Zoom's zero-day from 2019 that allowed anyone to force you to join a Zoom meeting (and even cause arbitrary code execution), using a local server:

    https://www.youtube.com/watch?v=wLgcb4jZwGM&list=PL1y1iaEtjS...

    It's not clear to me from Google's proposal if it also restricts access to localhost, or just your local network - it'd be great if it were both, as we clearly can't rely on third parties to lock down their local servers sufficiently!

    edit: localhost won't be restricted:

    "Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)"

    • Thorrez a day ago

      >edit: localhost won't be restricted:

      It will be restricted. This proposal isn't completely blocking all localhost and local IPs. Rather, it's preventing public sites from communicating with localhost and local IPs. E.g:

      * If evil.com makes a request to a local address it'll get blocked.

      * If evil.com makes a request to a localhost address it'll get blocked.

      * If a local address makes a request to a localhost address it'll get blocked.

      * If a local address makes a request to a local address, it'll be allowed.

      * If a local address makes a request to evil.com it'll be allowed.

      * If localhost makes a request to a localhost address it'll be allowed.

      * If localhost makes a request to a local address, it'll be allowed.

      * If localhost makes a request to evil.com it'll be allowed.

      • broguinn 15 hours ago

        Ahh, thanks for clarifying! It's the origin being compared, not the context - of course.

  • donnachangstein 2 days ago

    [flagged]

    • kulahan 2 days ago

      I agree fully with him. I don’t care what part of your job gets harder, or what software breaks if you can’t make it work without unnecessarily invading my privacy. You could tell me it’s going to shut down the internet for 6 months and I still wouldn’t care.

      You’ll have to come up with a really strong defense for why this shouldn’t happen in order to convince most users.

      • Aeolun 2 days ago

        It just means I run a persistent client on your device that is permanently connected to the mothership, instead of only when you have your browser open.

        • kulahan 14 hours ago

          I’m so glad most people don’t truly consider software devs to be real engineers, because this is a perfect example of why that word deserves so much more respect than this field gives it.

      • donnachangstein 2 days ago

        [flagged]

        • GlacierFox 2 days ago

          I like your "you've been *** my ass for 35 years, please feel free to keep doing it for all eternity" attitude.

    • zaptheimpaler 2 days ago

      I'm sure it will require some work, but this is the price of security. The idea that any website I visit can start pinging/exploiting some random unsecured testing web server I have running on localhost:8080 is a massive security risk.

      • duskwuff 2 days ago

        Or probing your local network for vulnerable HTTP servers, like insecure routers or web cameras. localhost is just the tip of the iceberg.

        • donnachangstein 2 days ago

          Can you define "local network"? Probably not. Most large enterprises own publicly-routable IP space for internal use. Internal doesn't mean 192.168.0.0/24. foo.corp.example.com could resolve to 9.10.11.12 and still be local. What about IPv6? It's a nonsense argument fraught with corner cases.

          • duskwuff 2 days ago

            > Can you define "local network"?

            Sure - a destination is "local" if your machine has a route to that IP which isn't via a gateway.

            If your network is large enough that it consists of multiple routed network segments, and you don't have any ACLs between those segments, then yeah, you won't be fully protected by this browser feature. But you aren't protected right now either, so nothing's getting worse, it's just not getting better for your specific use case.

            • donnachangstein 2 days ago

              > Sure - a destination is "local" if your machine has a route to that IP which isn't via a gateway.

              Fantastic. Well, Google doesn't agree

              The proposal defines it along RFC1918 address space boundaries. The spitballing back and forth in the GitHub issues about which imaginary TLDs they will or won't also consider "local" is absolutely horrifying.

              • account42 a day ago

                Cool so it will protect 99.999% of home networks. Compared to 0% which are protected now. Sounds great!

          • mystifyingpoi 2 days ago

            Not to be snarky, but that's a good example of "perfect being the enemy of good". You are totally right that there are corner cases, sure. But that doesn't stop us from tackling the low hanging fruit first. Which is, as you say, localhost and LAN (if present).

          • eschaton a day ago

            It should not even be able to communicate with the local network at all, it’s a goddamn web page. It should be restricted to just communicate with the server that hosts it and that’s it.

      • donnachangstein 2 days ago

        [flagged]

        • hollerith 2 days ago

          The whole browser is a massive security leak. What genius thought it was a good idea for the web page I visit in the morning to get the weather forecast to be able to run arbitrary code and to communicate with arbitrary hosts on my local network?

    • Wobbles42 2 days ago

      I do understand this sentiment, but isn't the tension here that security improvements by their very nature are designed to break things? Specifically the things we might consider "bad", but really that definition gets a bit squishy at the edges.

    • protocolture a day ago

      This attitude kept IE6 in production well after its natural life should have concluded.

    • aaomidi 2 days ago

      I’m sorry but this proposal is absolutely monumentally important.

      The fact that I have to rely on random extensions to accomplish this is unacceptable.

socalgal2 a day ago

I wish they (Apple/Microsoft/Google/...) would do similar things for USB and Bluetooth.

Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (a short list) that the app is allowed to connect to, and have the OS limit its connections to only those devices. So, for example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.

I might even prefer it if the app had to register the device IDs and the user were then prompted, the same way camera/GPS access is prompted. Via the OS, the app might see a device that the CVS app registered for in its manifest. The OS would pop up: "CVS app would like to connect to device ABC? Just this once / only when the app is running / always" (similar to the way iOS handles location).

By ID, I mean some prefix that a company registers for its devices: bose.xxx, say, so the app's manifest says it wants to connect to "bose.*" and the OS filters.

Similarly for USB and maybe local network devices. Come up with an ID scheme, and have the OS prevent apps from connecting to anything without that ID. Effectively, don't let apps browse the network, USB, or Bluetooth.

  • 3eb7988a1663 a day ago

    I am still holding out hope that eventually at least Apple will offer fake permission grants to applications. Oh, app XYZ "needs" to see my contact list to proceed? Well it gets a randomized fake list, indistinguishable from the real one. Similar with GPS.

    I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.

    • ordu 4 hours ago

      Yeah. I'd like it too. I can't use my bank's app, because it wants some weird permissions like access to contacts. I refuse to give them, because I see no use in it for me, and it refuses to work.

    • yonatan8070 3 hours ago

      Also for the camera, just feed them random noise or a user-selectable image/video

    • nothrabannosir a day ago

      In iOS you can share a subset of your contacts. This is functionally equivalent and works as you described for WhatsApp.

      • shantnutiwari a day ago

        >In iOS you can share a subset of your contacts.

        the problem is, the app must respect that.

        WhatsApp, for all the hate it gets, does.

        "Privacy" focused Telegram doesnt-- it wouldnt work unless I shared ALL my contacts-- when I shared a few, it kept complaining I had to share ALL

        • blacklion a day ago

          Is this something specific to the iOS Telegram client?

          On Android, Telegram works with access to contacts denied and maintains its own, completely separate contact list (shared with desktop Telegram and other copies logged in to the same account). I've been using Telegram longer than I've been using a smartphone, and it has a completely separate contact list (as it should be).

          And WhatsApp cannot be used without access to contacts: it doesn't let you create a WhatsApp-only contact and complains that it has no place to store it until you grant access to the phone's contact list.

          To be honest, I prefer to have separate contact lists for all my communication channels, and even sharing contacts between the phone app and the e-mail app (Gmail) bothers me.

          Telegram is good in this respect: it can use its own contact list, not synchronized or shared with anything else, and WhatsApp is not.

          • kayodelycaon 16 hours ago

            I’ve never allowed Telegram on iOS to access my contacts, camera, or microphone and it’s worked just fine.

          • HnUser12 a day ago

            Looks to me like it was a bug. Not giving access to any contacts broke the app completely, but limited access works fine except for an annoying persistent in-app notification.

        • nothrabannosir a day ago

          iOS generally solves this through App Store submission reviews, so I'm surprised this isn't a rule and that Telegram got away with it. "Apps must not gate functionality behind receiving access to all contacts vs a subset" or something. They definitely do so for location access, for example.

      • WhyNotHugo a day ago

        WhatsApp specifically needs phone numbers, and you can filter which contacts you share, but not which fields. So if your family uses WhatsApp, you'd share those contacts, but you can't share ONLY their phone numbers; WhatsApp also gets their birthdays, addresses, personal notes, and any other personal information you might have on file.

        I think this feature is pretty meaningless in the way that it's implemented.

        It's also pretty annoying that applications know they have partial permission, so they keep prompting for full permission all the time anyway.

    • baobun a day ago

      GrapheneOS has this feature (save for faking GPS) fwiw

    • quickthrowman 18 hours ago

      Apps are not allowed to force you to share your contacts on iOS, report any apps that are asking you to do so as it’s a violation of the App Store TOS.

  • totetsu a day ago

    Like the github 3rd party application integration. "ABC would like to see your repositories, which ones do you want to share?"

    • kuschku a day ago

      Does that UI actually let you choose? IME it just tells me what orgs & repos will be shared, with no option to choose.

  • rjh29 a day ago

    Safari doesn't support Web MIDI apparently for this reason (fingerprinting), but it makes using any kind of MIDI web app impossible.

  • Thorrez a day ago

    Are you talking about web apps, mobile apps, desktop apps, or browser extensions?

  • _bent 18 hours ago

    Apple does this for iOS 18 via the AccessorySetupKit

  • bsder 13 hours ago

    > Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices.

    Blame Apple and Google and their horrid BLE APIs.

    An app generally has to request "ALL THE PERMISSIONS!" to get RSSI, which most apps are using as a (really stupid, bug-prone, broken) proxy for distance.

    What everybody wants is "time of flight"--but for some reason that continues to be mostly unsupported.

paxys 2 days ago

It's crazy to me that this has always been the default behavior for web browsers. A public website being able to silently access your entire filesystem would be an absurd security hole. Yet all local network services are considered fair game for XHR, and security is left to the server itself. If you are a developer and run your company's webapp on your dev machine for testing (with loose or non-existent security defaults), facebook.com or google.com or literally anyone else could be accessing it right now. Heck, think of everything people deploy unauthed on their home networks because they trust their router's firewall. Does every one of them have the correct CORS configuration?

  • 3abiton 17 hours ago

    I majored in CS and I had no idea this was possible: public websites you access have access to your local network. I have to take time to process this. Besides what is suggested in the post, are there any ways to limit this abusive access?

  • Too 16 hours ago

    What’s even crazier is that nobody learned this lesson and new protocols are created with the same systematic vulnerabilities.

    Talking about MCP agents if that’s not obvious.

  • thaumasiotes a day ago

    > Does every one of them have the correct CORS configuration?

    I would guess it's closer to 0% than 0.1%.

  • reassess_blind a day ago

    The local server has to send Access-Control-Allow-Origin: * for this to work, right?

    Are there any common local web servers or services that use that as the default? Not that it’s not concerning, just wondering.

    • meindnoch a day ago

      No, simple requests [1] - such as a GET request, or a POST request with a text/plain Content-Type - don't trigger a CORS preflight. The request is made, and the browser may block the requesting JS code from seeing the response if the necessary CORS response header is missing. But by that point the request has already been made. So if your local service has a GET endpoint like http://localhost:8080/launch_rockets, or a POST endpoint that doesn't strictly validate the body's Content-Type, then any website can trigger it.

      [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
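
      In other words, as things stand, any page you visit can do this (endpoint hypothetical, from the example above):

          // Simple GET, no preflight; it fires even though the response
          // stays opaque to the page.
          fetch('http://localhost:8080/launch_rockets', { mode: 'no-cors' });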

      • reassess_blind a day ago

        I was thinking in terms of response exfiltration, but yeah, better put that /launch_rockets endpoint behind some auth.

pacifika 2 days ago

Internet Explorer solved this with its zoning system, right?

https://learn.microsoft.com/en-us/previous-versions/troubles...

  • donnachangstein 2 days ago

    Ironically, Chrome partially supported and utilized IE security zones on Windows, though it was not well documented.

    • pacifika 2 days ago

      Oh yeah forgot about that, amazing.

  • bux93 a day ago

    Although those were typically used to give ActiveX controls on the intranet unfettered access to your machine because IT put it in the group policy. Fun days.

  • nailer 2 days ago

    Honestly I just assumed a modern equivalent existed. That it doesn’t is ridiculous. Local network should be a special permission like the camera or microphone.

skybrian 2 days ago

While this will help to block many websites that have no business making local connections at all, it's still very coarse-grained.

Most websites that need this permission only need to access one local server. Granting them access to everything violates the principle of least privilege. Most users don't know what's running on localhost or on their local network, so they won't understand the risk.

  • paxys 2 days ago

    > Most users don't know what's running on localhost or on their local network, so they won't understand the risk.

    Yes, which is why they also won't understand when the browser asks if you'd like to allow the site to visit http://localhost:3146 vs http://localhost:8089. A sensible permission message ("allow this site to access resources on your local network") is better than technical mumbo jumbo which will make them just click "yes" in confusion.

    • xp84 2 days ago

      Either way they'll click "yes" as long as the attacker site properly primes them for it.

      For instance, on the phishing site they clicked on from an email, they'll first be prompted like:

      "Chase need to verify your Local Network identity to keep your account details safe. Please ensure that you click "Yes" on the following screen to confirm your identity and access account."

      Yes, that's meaningless gibberish but most people would say:

      • "Not sure what that means..."

      • "I DO want to access my account, though."

      • kevincox a day ago

        This is true, but you can only protect people from themselves so far. At some point you gotta let them do what they want to do. I don't want to live in a world where Google decides what we are and aren't allowed to do.

    • derefr 2 days ago

      In an ideal world, the browser could act as an mDNS client, discovering local services, so that it could then show the pretty name of the relevant service in the security prompt.

      In the world we live in, of course, almost nothing on your average LAN has an associated mDNS service advertisement.

      • mixmastamyk 19 hours ago

        They don’t? Every time I install an OS I turn that stuff off, because I don’t fully understand it. Or is avahi et al another thing?

        • kayodelycaon 16 hours ago

          Avahi handles zeroconf networking, which is mDNS and DNS-SD.

    • skybrian a day ago

      On a phone at least, it should be "do you want to allow website A to connect to app B."

      (It's harder to do for the rest of the local network, though.)

  • nine_k 2 days ago

    A comprehensive implementation would be a firewall. Which CIDRs, which ports, etc.

    I wish there were an API to build such a firewall, e.g. as part of a browser extension, but also a simple default UI allowing the user to give access to a particular machine (e.g. the router), to the LAN, to a VPN based on the routing table, or to "private networks" in general, in the sense Windows ascribes to that. Also, separately, access to localhost. The site could ask for one of these categories explicitly.

    • kuschku a day ago

      > I wish there were an API to build such a firewall, e.g. as a part of a browser extension,

      There was in Manifest V2, and it still exists in Firefox.

      https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...

      That's the API Chrome removed with Manifest V3. You can still log all web requests, but you can't block them dynamically anymore.

    • skybrian 17 hours ago

      I think something like Tailscale is the way to go here.

rerdavies 2 days ago

I worry that there are problems with IPv6. Can anyone explain to me whether there actually is a way to determine if an IPv6 address is site-local? If not, the proposal is going to have problems on IPv6-only networks.

I have struggled with this issue in the past. I have an IoT application whose web server wants to reject any requests from a non-local address. After failing to find a way to distinguish IPv6 local addresses, I ended up redirecting IPv6 requests to the local IPv4 address. And that was the end of that.

I feel like I would be in a better position to raise concerns if I could confirm that my understanding is correct: that there is no practical way for an application to determine whether an IPv6 address is link- or site-local.

I did experiment with IPv6 "link local" addresses, but these seem to be something else altogether (for use by routers rather than general applications), and don't seem to work for regular application use.

There is some wiggle room provided by including .local addresses as local servers. But implementation of .local domains seems to be inconsistent across various OSes at present. Raspberry Pi OS, for example, will do mDNS resolution of "someaddress" but not of "someaddress.local"; Ubuntu 24.04 will resolve "someaddress.local", but not "someaddress". And neither will resolve "someaddress.local." (which I think was recommended at one point, but is now deprecated and non-functional). Which does seem like an issue worth raising.

And it frustrates the HECK out of me that nobody will allow use of privately issued certs for local network addresses. The "no https for local addresses" thing needs to be fixed.

  • gerdesj a day ago

    IPv6 still has the concept of "routable". You just have to decide what site-local means in terms of the routing table.

    In old school IPv4 you would normally assign octet two to a site and octet three to a VLAN. Oh and you start with 10.

    With IPv6 you have a lot more options.

    All IPv6 devices have link local addresses - that's the LAN or local VLAN - a bit like APIPA.

    Then you start on .local - that's Apple and DNS and the like and nothing to do with IP addresses. That's name to address.

    You can do Lets Encrypt (ACME) for "local network addresses" (I assume you mean RFC 1918 addresses: 10/8, 172.16/12, 192.168/16) - you need to look into DNS-01 and perhaps DNS CNAME. It does require quite some effort.

    There is a very good set of reasons why TLS certs are a bit of a bugger to get working effectively these days. There are solutions freely available but they are also quite hard to implement. At least they are free. I remember the days when even packet capture required opening your wallet.

    You might look into acme.sh if Certbot fails to work for you. You also might need to bolt down IP addressing in general, IPv4 vs IPv6 and DNS and mDNS (and Bonjour) as concepts - you seem a little hazy on that lot.

    Bonne chance, mate

  • globular-toast 14 hours ago

    HTTPS doesn't care about IP addresses. It's all based on domain names. You can get a certificate for any domain you own. You can also set said domain to resolve to any address you like, including a "local" one.

    NAT has rotted people's brains unfortunately. RFC 1918 is not really the way to tell if something is "local" or not. 25 years ago I had 4 publicly routable IPv4 addresses. All 4 of these were "local" to me despite also being publicly routable.

    An IP address is local if you can resolve it at the link layer (via ARP or the like) and don't have to communicate via a router.

    It seems too far gone, though. People seem unable to separate RFC 1918 from the concept of "local network".

  • donnachangstein 2 days ago

    > Can anyone explain to me if there is any way to determine whether an inbound IPv6 address is "local"?

    No, because it's the antithesis of IPv6 which is supposed to be globally routable. The concept isn't supposed to exist.

    Not to mention Google can't even agree on the meaning of "local" - the article states they completely changed the meaning of "local" to be a redefinition of "private" halfway through brainstorming this garbage.

    Creating a nonstandard, arbitrary security boundary based on CIDR subnets as an HTTP extension is completely bonkers.

    As for your application, you're going about it all wrong. Just assume your application is public-facing and design your security with that in mind. Too many applications make this mistake and design saloon-door security into their "local only" application which results in overreaction such as the insanity that is the topic of discussion here.

    ".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.

    • ryanisnan a day ago

      It's very useful to have this additional information in something like a network address. I agree, you shouldn't rely on it, but IPv6 hasn't clicked with me yet, and the whole "globally routable" concept is one of the reasons. I hear that, and think, no, I don't agree.

      • donnachangstein a day ago

        Globally routable doesn't mean you don't have firewalls in between filtering and blocking traffic. You can be globally routable but drop all incoming traffic at what you define as a perimeter. E.g. the WAN interface of a typical home network.

        The concept is frequently misunderstood in that IPv4 consumer SOHO "routers" often combine a NAT and routing function with a firewall, but the functions are separate.

        • rerdavies a day ago

          It is widely understood that my SOHO router provides NAT for IPv4, and routing+firewall (but no NAT) for IPv6. And it provides absolutely no configurability for the IPv6 firewall (which would be extremely difficult anyway) because all of the IPv6 addresses allocated to devices on my home network are impermanent and short-lived.

          • vel0city 19 hours ago

            You can make those IPv6 IP addresses permanent and long-lived. They don't need to be short-lived addresses.

            Also, I've seen lots of home firewalls which will identify a device based on MAC address for match criteria and let you set firewall rules based on those, so even if their IPv6 address does change often it still matches the traffic.

            • mixmastamyk 19 hours ago

              There’s something about ip6 addresses being big as a guid that makes them hard to remember. Seem like random gibberish, like a hash. But I can look at an ip4 address like a phone number, and by looking tell approximately its rules.

              Maybe there’s a standard primer on how to grok ip6 addresses, and set up your network but I missed it.

              Also devices typically take 2 or 4 ip6 addresses for some reason so keeping on top of them is even harder.

              • vel0city 18 hours ago

                A few tips:

                When just looking at hosts in your network with their routable IPv6 address, ignore the prefix. This is the first few segments, probably the first four in most cases for a home network (a /64 network). When thinking about firewall rules or having things talk to each other, ignore things like "temporary" IP addresses.

                So looking at this example:

                   Connection-specific DNS Suffix  . : home.arpa
                   IPv6 Address. . . . . . . . . . . : 2600:1700:63c9:a421::2000
                   IPv6 Address. . . . . . . . . . . : 2600:1700:63c9:a421:e17f:95dd:11a:d62e
                   Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:9d5:6286:67d9:afb7
                   Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:4471:e029:cc6a:16a0
                   Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:91bf:623f:d56b:4404
                   Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:ddca:5aae:26b9:a53c
                   Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:fc43:7d0a:7f8:e4c8
                   Link-local IPv6 Address . . . . . : fe80::7976:820a:b5f5:39c3%18
                   IPv4 Address. . . . . . . . . . . : 192.168.20.59
                   Subnet Mask . . . . . . . . . . . : 255.255.255.0
                   Default Gateway . . . . . . . . . : fe80::ec4:7aff:fe7f:d167%18
                                                       192.168.20.254
                
                Ignore all those temporary ones. Ignore the longer one. You can ignore 2600:1700:63c9:a421, as that's going to be the same for all the hosts on your network, so you'll see it pretty much everywhere. So, all you really need to remember if you're really trying to configure things by IP address is this is whatever-is-my-prefix::2000.

                But honestly, just start using DNS. Ignore IP addresses for most things. We already pretty much ignore MAC addresses and rely on other technologies to automatically map IP to MAC for us. It's pretty simple to get a halfway competent DNS setup going (many home routers will have things working by default), and it's just way easier to do things in general. I don't want to have to remember that my printer is at 192.168.20.132 or 2600:1700:63c9:a421::a210. I just want to go to http://brother or ipp://brother.home.arpa and have it work.

                • mixmastamyk 16 hours ago

                  Helps, thanks a lot!

                  But as you can see this is still an explosion of complexity for the home user. More than 4x (32 --> 128), feels like x⁴ (though might not be accurate).

                  I like your idea of "whatever..." There should be a "lan" variable and status could be shown factored, like "$lan::2000" to the end user perhaps.

                  I do use DNS all the time, like "printer.lan", "gateway.lan", etc. But don't think I'm using in the router firewall config. I use openwrt on my router but my knowledge of ipv6 is somewhat shallow.

              • bombela 16 hours ago

                At home, I have both IPv4 and IPv6. For any device exposed on the Internet, I add a static IPv6 address with the host part the same as the IPv4 address.

                example: 2001:db8::192.168.0.42

                This makes it very easy to remember, correlate and firewall.

                • mixmastamyk 16 hours ago

                  Ok, that parses somehow in Python, matches, and is apparently legit. ;-)

                      >>> from ipaddress import IPv6Address as address
                      >>> address('2001:db8::192.168.0.42')
                      IPv6Address('2001:db8::c0a8:2a')
                      >>> int('2a', 16)
                      42
                  
                  Openwrt doesn't seem to make ipv6 static assignment easy unfortunately.
        • ryanisnan a day ago

          That makes sense. I do love the idea of living in a world without NAT.

          • fiddlerwoaroof a day ago

            I don’t: NAT may have been a hack at first, but it’s my favorite feature provided by routers and why I disable ipv6 on my local network

            • TheDong a day ago

              Why do you like NAT?

              Does your router being slower and taking more CPU make you feel happy?

              Do you enjoy not seeing the correct IP in remote logs, thus making debugging issues harder?

              Do you like being able to naively nmap your local network fairly easily?

              • fiddlerwoaroof 6 hours ago

                I like all the computers in my house appearing to remote servers as a single remote host. Avoids leaking details about my home network.

              • mixmastamyk 18 hours ago

                Perf concerns over 32bit numbers ended in the nineties. Who at home cares about remote logs?

    • rerdavies a day ago

      @donnachangstein:

      The device is an IoT guitar pedal that runs on a Raspberry Pi. In performance, on stage, a web UI runs on a phone or tablet over a hotspot connection on the Pi, which is NOT internet connected (since there's no expectation that there's a Wi-Fi router or internet access at a public venue). OR the Pi runs on a home wifi network, using a browser-hosted UI on a laptop or desktop. OR, I suppose, over an away-from-home Wi-Fi connection at a studio or rehearsal space.

      It is not reasonable to expect my users to purchase domain names and certs for their $60 guitar pedal, which are not going to work anyway, if they are playing away from their home network. Nor is ACME provisioning an option because the device may be in use but unconnected to the internet for months at a time if users are using the Pi Hotspot at home.

      I can't use password authentication to get access to the Pi web server, because I can't use HTTPS to conceal the password, and browsers disable access to JavaScript crypto APIs on non-HTTPS pages (not that I'd really trust myself to write JavaScript code to obtain auth tokens from the Pi server anyway), so doing auth over an HTTP connection doesn't really strike me as a serious option either.

      Nor is it reasonable to expect my non-technical users to spend hours configuring their networks. It's an IoT device that should be just drop and play (maybe with a one-time device setup that takes place on the Pi).

      There is absolutely NO way I am going to expose the server to the open internet without HTTPS and password authentication. The server provides a complex API to the client over which effects are configured and controlled. Way too much surface area to allow anyone on the internet to poke around in. So it uses IPv4 isolation, which is the best I can figure out given the circumstances. It's not like I haven't given the problem serious consideration. I just don't see a solution.

      The use case is not hugely different from an IoT toothbrush. But standards organizations have chosen to leave both my (hypothetical) toothbrush and my application utterly defenseless when it comes to security. Is it any surprise that IoT toothbrushes have security problems?

      How would YOU see https working on a device like that?

      > ".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.

      Yes. That was my point. It is currently widely ignored.

      • mixmastamyk 18 hours ago

        Grandparent explained that a firewall is also needed with ip6.

        I understand that setting it up to delineate is harder in practice. Therein lies the rub.

    • AStonesThrow a day ago

      > can't even agree on the meaning of "local"

      Well, who can agree on this? Local network, private network, intranet, Tailscale and VPN, Tor? IPv6 ULA, NAT/CGNAT, SOCKS, transparent proxy? What resources are "local" to me and what resources are "remote"?

      This is quite a thorny and sometimes philosophical question. Web developers are working at the OSI Layer 6-7 / TCP/IP Application Layer.

      https://en.wikipedia.org/wiki/OSI_model#Comparison_with_TCP/...

      Now even cookies and things like CSRF were trying to differentiate "servers" and "origins" and "resources" along the lines of the DNS hierarchy. But this has been fraught with complication, because DNS was not intended to delineate such things, and can't do so cleanly 100% of the time.

      Now these proposals are trying to reach even lower in the OSI model - Layer 3, Layer 2. If you're asking "what is on my LAN" or "what is a private network", that is not something that HTTPS or web services are supposed to know. Are you going to ask them to delve into your routing table or test the network interfaces? HTTPS was never supposed to know about your netmask or your next-hop router.

      So this is only one reason that there is no elegant solution for the problem. And it has been foundational to the way the web was designed: "given a uniform locator, find this resource wherever it may be, whenever I request it." That was a simpler proposition when the Web was used to publish interesting and encyclopedic information, rather than deliver applications and access sensitive systems.

G_o_D a day ago

CORS doesn't stop POST requests, nor a fetch with 'no-cors' supplied in JavaScript. It's just that you can't read the response; that doesn't mean the request is not sent by the browser.

Then again, a local app can run a proxy server that adds CORS headers to the proxied response, and then you can access any site via the JS fetch/XMLHttpRequest interface. Even an extension is able to modify headers to bypass CORS.
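
A sketch of such a localhost CORS proxy in Python, stdlib only (port and path handling simplified; illustrative, not any specific app's actual setup):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. GET /https://example.com/ -> fetched server-side,
            # outside the browser's same-origin rules
            body = urlopen(self.path.lstrip("/")).read()
            self.send_response(200)
            # Permissive CORS header: any page may now read the response
            self.send_header("Access-Control-Allow-Origin", "*")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("127.0.0.1", 9000), ProxyHandler).serve_forever()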

Bypassing CORS is just a matter of editing headers. What's really hard or impossible to bypass is CSP rules.

The Facebook app itself runs such a CORS proxy server; even without one, a normal HTTP or WebSocket server is enough to send metrics.

Chrome already has a flag to prevent localhost access; still, as said, WebSockets can be used.

Completely banning localhost is detrimental.

Many users use self-hosted bookmarking, note-taking, and password-manager solutions that rely on a local server.

1vuio0pswjnm7 12 hours ago

Explainer by non-Googler

Is the so-called "modern" web browser too large and complex

I never asked for stuff like "websockets"; I have to disable it, why

I still prefer a text-only browser for reading HTML; it does not run Javascript, it does not do websockets, CSS, images or a gazillion other things; it does not even autoload resources

It is relatively small, fast and reliable; very useful

It can read larger HTML files that make so-called "modern" web browsers choke

It does not support online ad services

The companies like Google that force ads on www users are known for creating problems for www users and then proposing solutions to them; why not just stop creating the problems

  • 1vuio0pswjnm7 6 hours ago

    Text-only browsers are not a "solution". That is not the point of the comment. Such simpler clients are not a problem.

    The point is that gigantic, overly complex "browsers" designed for surveillance and advertising are the problem. They are not a solution.

  • HumanOstrich 7 hours ago

    Going back to text-only browsers is not the solution.

ronsor 2 days ago

Do note that since the removal of NPAPI plugins years ago, locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost.

It would be really annoying if this use case was made into an unreasonable hassle or killed entirely. (Alternatively, browser developers could've offered a real alternative, but it's a bit late for that now.)

  • michaelt 2 days ago

    Doesn't most software just register a protocol handler with the OS? Then a website can hand the browser a zoommtg:// link, which the browser opens with Zoom?

    Things like Jupyter Notebooks will presumably be unaffected by this, as they're not doing any cross-origin requests.

    And likewise, when a command line tool wants you to log in with oauth2 and returns you to a localhost URL, it's a simple redirect not a cross-origin request, so should likewise be allowed?

    • kuschku 2 days ago

      A common use case, whether for 3D printers, switches, routers, or NAS devices is that you've got a centrally hosted management UI that then sends requests directly to your local devices.

      This allows you to use a single centrally hosted website as user interface, without the control traffic leaving your network. e.g. Plex uses this.

      • michaelt 2 days ago

        I don't think this proposal will stop you visiting the management UI for devices like switches and NASes on the local network. You'll be able to visit http://192.168.0.1 and it'll work just fine?

        This is just about blocking cross-origin requests from other websites. I probably don't want every ad network iframe being able to talk to my router's admin UI.

        • kuschku 2 days ago

          That's not what I'm talking about.

          A common example is this:

          1. I visit ui.manufacturer.tld

          2. I click "add device" and enter 192.168.0.230, repeating this for my other local devices.

          3. The website ui.manufacturer.tld now shows me a dashboard with aggregate metrics from all my switches and routers, which it collects by fetch(...) ing data from all of them.

          The manufacturer's site is just a static page. It stores the list of devices and the credentials to connect to them in localStorage.

          None of the data ever leaves my network, but I can just bookmark ui.manufacturer.tld and control all of my devices at once.

          This is a relatively neat approach providing the same comfort as cloud control, without the privacy nightmare.

          • blacklion a day ago

            If it is truly static site/page, download it and open from local disk. And nudge vendor to release it as archive which can be downloaded and unpacked locally.

            It has a multitude of benefits compared to opening it from the vendor's site each time:

            1) It works offline.

            2) It works if vendor site is down.

            3) It works if vendor restrict access to it due to acquisition, making it subscription-based, discontinuation of feature "because fuck you".

            4) It works if vendor goes out of business or pivot to something else.

            5) It still works with YOUR devices if vendor decides to drop support for old ones.

            6) It still works with YOUR versions of firmwares if vendor decides to push new ones, with features which are user-hostile (I'm looking at you, BambuLab).

            7) It cannot be compromised the way the copy on the vendor's site can be. If your system is compromised, you have bigger problems than a forged UI for devices. Even the best of vendors have data breaches these days.

            8) It cannot upload your data if vendor goes rogue.

            Downsides? If you really need to update it, you need to re-download it manually. Not a big hassle, IMHO.

            • kuschku a day ago

              > If it is truly static site/page, download it and open from local disk. And nudge vendor to release it as archive which can be downloaded and unpacked locally.

              Depending on the browser, file:/// is severely limited in what CORS requests are allowed.

              And then there's products like Plex, where it's not a static site, but you still want a central dashboard that connects to your local Plex server directly via CORS.

              • blacklion a day ago

                Why can't the local Plex, which you need to install & run anyway (it is already a server), provide its own UI to the browser without 3rd-party sites? It is absurd design, IMHO. I'll never allow this on my network. It looks like a security nightmare. Today it shows me a dashboard (of what? Several of my Plex servers?), tomorrow it is forced to report pirated movies to the police. No, thanx.

                • rcxdude 2 hours ago

                  HTTPS, basically. I've gone around and around in circles on this for a device I work on. You'd like to present an HTTPS web UI, because a) you'd like encryption between the UI and the device, and b) browsers lock down a lot of APIs, sometimes arbitrarily, behind being in a 'secure context' (ironically, including the cryptography APIs!). But your device doesn't control its IP address or hostname, and may not even have access to the internet, so there's no way for it to have a proper HTTPS certificate, and a self-signed certificate will create all kinds of scary warnings in the browser (which HTTP will not, ironically).

                  So manufacturers create all kinds of crazy workarounds, like plex's, to be able to present an HTTPS web page that is easily accessible and can just talk to the device. (Except it's still not that simple, because you can't easily make an HTTP request from an HTTPS context, so plex also jumps through a bunch of hoops to co-ordinate some HTTPS certificate for the local device, which requires an internet connection).

                  It's a complete mess, and browsers really seem to be keen on blocking any 'let HTTPS work for local devices' solution, even if it were just a simple upgrade to the status quo that would otherwise just be treated like HTTP. Nor will they stop putting useful APIs behind a 'secure context', as if an HTTPS certificate implied any level of trust beyond a page being associated with a given domain name.

                  (Someone at Plex seems to have finally gotten through to some of the devs at Chrome, and AFAIK there is now a somewhat reasonable flow that would allow e.g. a progressive webapp to request access to a local device and communicate with it without an HTTPS certificate, which is something, but still no way to just host the damn UI on the device without limiting the functionality! And it's Chrome-only, maybe still in preview? Haven't gotten around to trying to implement it yet)

                  See this long, painful, multi-year discussion on the topic: https://github.com/WICG/private-network-access/issues/23

                  • blacklion 2 hours ago

                    It is all very weird.

                    > a) you'd like encryption between the UI and the device

                    No, I don't. It is on my local network. If the device has a public IP and I want to browse my collection when I'm out of my local network, then I do, but then Let's Encrypt solved this problem many years ago (10 years!). If the device doesn't have a public IP but I punch a hole in my NAT or install a reverse proxy on the gateway, then I'm tech-savvy enough to obtain a Let's Encrypt cert for it, too.

                    > b) browsers lock down a lot of APIs, sometimes arbitrarily

                    Why does a GUI served from a server co-hosted with the media server need any special APIs at all? It can generate all content on the server side, and basic JS is enough to add visual effects for smooth scrolling, drop-down menus, etc.

                    It all looks over-engineered for the sake of what? Of imitating a desktop app in the browser? It looks like it creates more problems than writing a damn native desktop app. In Qt, for example, which will be not-so-native (but more native than any site or Electron) but works on all 3 major OSes and *BSD from a single source.

                    • rcxdude 35 minutes ago

                      Even on a local network, you should probably not be sending e.g. passwords around in plaintext. Let's Encrypt is a solution for someone who's tech-savvy enough to set it up, not the average user.

                      > Its all look over-engineered in the sake of what? Of imitating desktop app in browser?

                      Pretty much, yeah. And not just desktop apps, but mobile apps as well. The overhead of supporting multiple platforms, especially across a broad range of devices, is substantial. Web applications sidestep a lot of that and can give you a polished UX across basically every device, especially e.g. around the installation process (because there doesn't need to be one).

                • kuschku a day ago

                  > of what? Several my Plex servers?

                  People commonly use this to browse the collections of their own servers, and the servers of their friends, in a unified interface.

                  Media from friends is accessed externally, media from your own server is accessed locally for better performance.

              • blacklion a day ago

                > Depending on the browser, file:/// is severely limited in what CORS requests are allowed.

                And it is strange to me too. A local (on-disk) site is like a local Electron app without bundling Chrome inside. Why should it be restricted when an Electron app can do everything? It looks illogical.

                • kuschku a day ago

                  I agree that the current situation sucks, but that doesn't mean breaking the existing solutions is any better.

          • account42 a day ago

            That absolutely is a privacy nightmare.

            • kuschku a day ago

              How so? It's certainly better than sending all that traffic through the cloud.

              While I certainly prefer stuff I can just self-host, compared to the modern cloud-only reality with WebUSB and stuff, this is a relatively clean solution.

      • hypercube33 2 days ago

        Windows Admin Center is like this, but it's only local, which I rather hate

    • ronsor 2 days ago

      That works if you want to launch an application from a website, but it doesn't work if you want to actively communicate with an application from a website.

      • fn-mote 2 days ago

        This needs more detail to make it clear what you are wishing for that will not happen.

        It seems like you're thinking of a specific application, or at least use-case. Can you elaborate?

        Once you're launching an application, it seems like the application can negotiate with the external site directly if it wants.

        • xp84 2 days ago

          #1 use case would be a password manager. It would be best if the browser plugin part can ping say, the 1password native app, which runs locally on your pc, and say "Yo I need a password for google.com" - then the native app springs into action, prompts for biometrics, locates the password or offers the user to choose, then returns it directly to the browser for filling.

          Sure you can make a fully cloud-reliant PW manager, which has to have your key stored in the browser and fetch your vault from the server, but a lot of us like having that information never have to leave our computers.

          • spiffyk 2 days ago

            Browser extensions play by very different rules than websites already. The proposal is for the latter and I doubt it is going to affect the former, other than MAYBE an extra permanent permission.

            • cAtte_ 2 days ago

              you missed the point. password managers are one of the many use cases for this feature; that they just so happen to be mostly implemented as extensions does not mean that the feature is only useful for extensions

  • IshKebab 2 days ago

    It would be amazing if that method of communicating with a local app was killed entirely, because it's been a very very common source of security vulnerabilities.

  • ImPostingOnHN 2 days ago

    > locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost

    if that software runs with a pull approach, instead of a push one, the server becomes unnecessary

    bonus: then you won't have websites grossly probing local networks that aren't theirs (ew)

    • rhdunn a day ago

      It's harder to view HTML and XML files with XSLT by just opening them in a web browser (things like NUnit test run output). To view these properly now -- to get the CSS, XSLT, images, etc. to load -- you typically have to run a web server at that file path.

      Note: this is why the viewers for these tools will spin up a local web server.

      With local LLMs and AI it is now common to have different servers for different tasks (LLM, TTS, ASR, etc.) running together where they need to communicate to be able to create services like local assistants. I don't want to have to jump through hoops of running these through SSL (including getting a verified self-signed cert.), etc. just to be able to run a local web service.

      • ImPostingOnHN 21 hours ago

        I'm not sure any of that is necessary for what we're talking about: locally-installed software that intends to be used by one or more public websites.

        For instance, my interaction with local LLMs involves 0 web browsers, and there's no reason facebook.com needs to make calls to my locally-running LLM.

        Running HTML/XML files in the browser should be easier, but at the moment it already has the issues you speak of. It might make sense, IMO, for browsers to allow requests to localhost from websites also running on localhost.

  • donnachangstein 2 days ago

    [flagged]

    • afavour 2 days ago

      > Googlers present a solution no one is asking for,

      I'm asking for it. Random web sites have no business poking around my internal network.

      • moralestapia a day ago

        >I'm asking for it.

        Proof? Link to issue? Mailing list? Anything?

        I think you just made that up.

        • ImPostingOnHN 21 hours ago

          > Proof? ... Anything?

          I saw them ask for it in the post you're responding to. I am also asking for it right now. That's 2 people asking for it so far.

          > Link to issue? Mailing list?

          That is not necessary.

          • moralestapia 20 hours ago

            Wow, so Google is sitting on a time machine that can see the future.

            Amazing!

            • ImPostingOnHN 14 hours ago

              What an interesting, non-sequitur response.

bmacho 20 hours ago

One of the very few security-inspired restrictions I can wholeheartedly agree with. I don't want random websites to be able to read my localhost. I hope it gets accepted and implemented sooner rather than later.

OTOH it would be cool if random websites were able to open up and use ports on my computer's network, or even on my LAN, when granted permission of course. Browser-based file and media sharing between my devices, or multiplayer games.

  • avidiax 18 hours ago

    > OTOH it would be cool if random websites were able to open up and use ports on my computer's network

    That's what WebRTC does. There's no requirement that WebRTC is used to send video and audio as in a Zoom/Meet call.

    That's how WebTorrent works.

    https://webtorrent.io/faq

AdmiralAsshat 2 days ago

uBlock / uMatrix does this by default, I believe.

I often see sites like Paypal trying to probe 127.0.0.1. For my "security", I'm sure...

  • potholereseller 2 days ago

    It appears to not have been enabled by default on my instance of uBlock; it seems a specific filter list is used to implement this [0]; that filter was un-checked; I have no idea why. The contents of that filter list are here [1]; notice that there are exceptions for certain services, so be sure to read through the exceptions before enabling it.

    [0] Filter Lists -> Privacy -> Block Outsider Intrusion into Lan

    [1] <https://github.com/uBlockOrigin/uAssets/blob/master/filters/...>

    • reyqn 20 hours ago

      This filter broke twitch for me. I had to create custom rules for twitch if I wanted to use it with this filter enabled.

      • apazzolini 17 hours ago

        Would you mind sharing those custom rules?

nickcw 2 days ago

This has the potential to break rclone's oauth mechanism as it relies on setting the redirect URL to localhost so when the oauth is done rclone (which is running on your computer) gets called.

I guess if the permissions dialog is sensibly worded then the user will allow it.

I think this is probably a sensible proposal but I'm sure it will break stuff people are relying on.

  • 0xCMP 2 days ago

    IIUC this should not break redirects. This only affects: (1) fetch/xmlhttprequests (2) resources linked to AND loaded on a page (e.g. images, js, css, etc.)

    As noted in another comment, this doesn't work unless the responding server provides proper CORS headers allowing the content to be loaded by the browser in that context: so for any request to work, the server is either wide open (cors: *) or cooperating with the requesting code (cors: website.co). The changes prevent communication without user authorization.
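
    For reference, a minimal sketch of the "cooperating" case on the server side, in stdlib Python (the origin value is just an example):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = b'{"status": "ok"}'
                self.send_response(200)
                # Opt in to cross-origin reads from one specific website only
                self.send_header("Access-Control-Allow-Origin", "https://website.co")
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()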

udev4096 4 hours ago

Deny any incoming requests using ufw or nftables. Only allow outbound requests by default

benob 16 hours ago

Isn't it time for disallowing browsers to connect to anything outside same origin pages except for actual navigation?

Servers can do all the hard work of gathering content from here and there.

  • globular-toast 15 hours ago

    Is it possible to do this today with browser extensions? I ran noscript 10 years ago and it was really tough. Kinda felt like being gaslit constantly. I could go back, only enabling sites selectively, but it's not going to work for family. Wondering if just blocking cross origin requests would be more feasible.

AshamedCaptain 2 days ago

I do not understand. Doesn't same-origin prevent all of these issues? Why on earth would you extend some protection to resources based on IP address ranges? It seems like the most dubious criteria of all.

  • maple3142 a day ago

    I think the problem is that some local servers are not really designed to be as secure as a public server. For example, a local server having a stupid unauthenticated endpoint like "GET /exec?cmd=rm+-rf+/*", which is obviously exploitable and same-origin does not prevent that.
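
    For illustration, roughly what such a careless localhost service looks like in Python (a deliberately insecure sketch; don't run it outside a sandbox):

        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import parse_qs, urlparse

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                parsed = urlparse(self.path)
                if parsed.path == "/exec":
                    cmd = parse_qs(parsed.query).get("cmd", [""])[0]
                    # No auth, no origin check: a remote page can trigger this
                    # via an <img> tag or no-cors fetch, even though it can
                    # never read the response.
                    subprocess.run(cmd, shell=True)
                self.send_response(200)
                self.end_headers()

        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()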

  • fn-mote 2 days ago

    I think you're mistaken about this.

    Use case 1 in the document and the discussion made it clear to me.

    • AshamedCaptain 2 days ago

      Browsers allow launching HTTP requests to localhost in the same way they allow my-malicious-website.com to launch HTTP requests to, say, mail.google.com. They can _request_ a resource but that's about it -- everything else, even many things you would expect to be able to do with the downloaded resource, is blocked by the same origin policy. [1] Heck, we have a million problems already where file:/// websites cannot access resources from http://localhost , and vice versa.

      So what's the attack vector exactly? Why it would be able to attack a local device but not attack your Gmail account ( with your browser happily sending your auth cookies) or file:///etc/passwd ?

      The only attack I can imagine is that _the mere fact_ of a webserver existing on your local IP is a disclosure of information for someone, but ... what's the attack scenario here again? The only thing they know is you run a webserver, and maybe they can check if you serve something at a specified location.

      Does this even allow identifying the router model you use? Because I can think of a bazillion better ways to do it -- including the simple "just assume is the default router of the specific ISP from that address".

      [1] https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...

      In fact, [1] literally says

      > [Same-origin policy] prevents a malicious website on the Internet from running JS in a browser to read data from [...] a company intranet (which is protected from direct access by the attacker by not having a public IP address) and relaying that data to the attacker.

      • AnthonyMouse a day ago

        This is specifically in response to the recent Facebook chicanery where their app was listening on localhost and spitting out a unique tracking ID to anything that connects, allowing arbitrary web pages to get the tracking ID and correspondingly identify the user visiting the page.

        But this is trying to solve the problem in the wrong place. The problem isn't that the browser is making the connection, it's that the app betraying the user is running on the user's device. The Facebook app is malware. The premise of app store curation is that they get banned for this, right? Make everyone who wants to use Facebook use the web page now.

spr-alex 16 hours ago

The existing PNA is easily defeated for bugs that can be triggered with standard cross origin requests. For example PNA does nothing to stop a website from exploiting some EOL devices I have with POST requests and img tags.

This is a much better approach.

foota 2 days ago

The alternative proposal sounds much nicer, but unfortunately was paused due to concerns about devices not being able to support it.

I guess once this is added maybe the proposed device opt in mechanism could be used for applications to cooperatively support access without a permission prompt?

qbane 21 hours ago

IIRC Flash has a similar design. One Flash app can access the internet, or local network, but not both.

geekodour a day ago

Just wanted to confirm something: this only works for HTTP, right? Browsers don't allow arbitrary TCP requests, right?

G_o_D a day ago

Browsers should just allow per-site settings, or a global allow/deny-all, for permission to access localhost

That way the user will be in control

You can also just write an extension that blocks access to domains based on origin

So the user can just add facebook.com as an origin to block all facebook* sites from sending any request to any registered URL, in this case localhost/127.0.0.1 domains

The DNR (declarativeNetRequest) API allows blocking based on initiatorDomains

qwertox a day ago

Proposing this in 2025. While probably knowing about this problem since Chrome was released (2008).

Why not treat any local access as if it were an access to a microphone?

  • A4ET8a8uTh0_v2 a day ago

    I would love for someone with more knowledge to opine on this, because, to me, it seems like it would actually be the most sane default state.

  • dadrian 19 hours ago

    That is literally what this proposal is suggesting.

andromaton 9 hours ago

This used to cause malicious sites to reboot home internet routers around 2013.

profmonocle 2 days ago

Assuming that RFC1918 addresses mean "local" network is wrong. It means "private". Many large enterprises use RFC1918 for private, internal web sites.

One internal site I spend hours a day using has a 10.x.x.x IP address. The servers for that site are on the other side of the country and are many network hops away. It's a big company, our corporate network is very very large.

A better definition of "local IP" would be whether the IP is in the same subnet as the client, i.e. look up the client's own IP and subnet mask and determine if a packet to a given IP would need to be routed through the default gateway.
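
That check is easy to express; here's a sketch with Python's stdlib ipaddress module (the interface address and prefix length are examples you'd read from the OS):

    import ipaddress

    # Pretend the client's interface is configured as 192.168.1.57/24
    iface = ipaddress.ip_interface("192.168.1.57/24")

    def is_on_link(target: str) -> bool:
        # True if a packet would not have to go through the default gateway
        return ipaddress.ip_address(target) in iface.network

    print(is_on_link("192.168.1.1"))  # True: same subnet
    print(is_on_link("10.4.8.15"))    # False: RFC 1918, but routed

The catch is that a web page has no business reading your interface configuration, so the browser would have to do this on the page's behalf.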

  • kccqzy 2 days ago

    The article spends a lot of effort defining the words "local" and "private" here. It then says:

    > Note that local -> local is not a local network request

    So your use case won't be affected.

    • ale42 2 days ago

      The computer I use at work (and not only mine, many many of them) has a public IP address. Many internal services are on 10.0.0.0/8. How is this being taken into account?

      • numpad0 2 days ago

        10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 are all private addresses per RFC1918 and the documents superseding it (5735?). If it's something like 66.249.73.128/27 or 164.13.12.34/12, those are "global" IPs.

        1: https://www.rfc-editor.org/rfc/rfc1918

        2: https://www.rfc-editor.org/rfc/rfc5735

        3: https://en.wikipedia.org/wiki/Private_network

        • ale42 a day ago

          Yes that's the point: many of our work PCs have global public IPs from something like 128.130.0.0/15 (not this actual block, but something similar), and many internal services are on 10.0.0.0/8. I'm not sure I get exactly how the proposal is addressing this. How does it know that 128.130.0.0/15 is actually internal and should be considered for content loaded from an external site?

          • kccqzy 20 hours ago

            The proposal doesn't need to address this because it doesn't even consider the global public IP of 128.130.0.0/15 in your example. If you visit a site on 10.0.0.0/8 that accesses resources on 10.0.0.0/8 it's allowed. But if you visit a random other site on the internet it will be (by default) forbidden to access the internal resource at 10.0.0.0/8.

          • numpad0 a day ago

            My reading is this just adds a dialog box before the browser loads RFC1918 ranges. At the IP layer, a laptop with 128.130.0.123 on wlan0 should not be able to access 10.0.10.123:80, but I doubt they bother to sanity-check that. Just blindly assuming all RFC1918 addresses, and only RFC1918 addresses, are local should do the job for quite a while.

            btw, I've seen that kind of network. I was young, and it took me a while to realize that they DHCP assign global IPs and double NAT it. That was weird.

      • lilyball 2 days ago

        Your computer's own IP address is completely irrelevant. What matters is the site hostname and the IP address it resolves to.

        • AStonesThrow a day ago

          People believe that "my computer" or "my smartphone" has an Internet address, but this is a simplification of how it's really working.

          The reality is that each network interface has at least one Internet address, and these should usually all be different.

          An ordinary computer at home could be plugged into Ethernet and active on WiFi at the same time. The Ethernet interface may have an IPv4 address and a set of IPv6 addresses, and belong to their home LAN. The WiFi adapter and interface may have a different IPv4 address, and belongs to the same network, or some other network. The latter is called "multi-homing".

          If you visit a site that reveals your "public" IP address(es), you may find that your public, routable IPv4 and/or IPv6 addresses differ from the ones actually assigned to your interfaces.

          In order to be compliant with TCP/IP standards, your device always needs to respond on a "loopback" address in 127.0.0.0/8, and typically this is assigned to a "loopback" interface.

          A network router does not identify with a singular IP address, but could answer to dozens, when many interface cards are installed. Linux will gladly add "alias" IPv4 addresses to most interface devices, and you'll see SLAAC or DHCPv6 working when there's a link-local and perhaps multiple routable IPv6 addresses on each interface.

          The GP says that their work computer has a [public] routable IP address. But the same computer could have another interface, or even the same interface has additional addresses assigned to it, making it a member of that private 10.0.0.0/8 intranet. This detail may or may not be relevant to the services they're connecting to, in terms of authorization or presentation. It may be relevant to the network operators, but not to the end-user.

          So as a rule of thumb: your device needs at least one IP address to connect to the Internet, but that address is associated with an interface rather than your device itself, and in a functional system, there are multiple addresses being used for different purposes, or held in reserve, and multiple interfaces that grant the device membership on at least one network.

      • jaywee 2 days ago

        Ideally, in an organization this should be a centrally pushed group policy defining CIDRs.

        Like, at home, I have 10/8 and public IPv6 addresses.

      • kccqzy 2 days ago

        As far as I understand that doesn't matter. What matters is the frame's origin and the request.

  • JdeBP a day ago

    Many years ago, before it was dropped, IP version 6 had a concept of "site local" addresses, which (if it had applied to version 4) would have encompassed the corporate intranet addresses that you are talking about. Routed within the corporate intranet; but not routed over corporate borders.

    Think of this proposal's definition of "local" (always a tricky adjective in networking, and reportedly the proposers here have bikeshedded it extensively) as encompassing both Local Area Network addresses and non-LAN "site local" addresses.

    • aaronmdjones a day ago

      fd00::/8 (within fc00::/7) is still reserved for this purpose (site-local IPv6 addressing).

      fc00::/8 (a network block for a registry of organisation-specific assignments for site-local use) is the idea that was abandoned.

      Roughly speaking, the following are analogs:

      169.254/16 -> fe80::/64 (within fe80::/10)

      10/8, 172.16/12, 192.168/16 -> a randomly-generated network (within fd00::/8)

      For example, a service I maintain that consists of several machines in a partial WireGuard mesh uses fda2:daf7:a7d4:c4fb::/64 for its peers. The recommendation is no larger than a /48, so a /64 is fine (and I only need the one network, anyway).

      fc00::/7 is not globally routable.
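
      A sketch of generating such a random prefix per RFC 4193 in Python (the RFC derives the 40 Global ID bits from a timestamp and MAC address; urandom is a common shortcut):

          import ipaddress, os

          gid = os.urandom(5).hex()  # 40 random bits of Global ID
          prefix = ipaddress.IPv6Network(f"fd{gid[:2]}:{gid[2:6]}:{gid[6:]}::/48")
          print(prefix)                                               # e.g. fda2:daf7:a7d4::/48
          print(prefix.subnet_of(ipaddress.IPv6Network("fc00::/7")))  # True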

      • rerdavies a day ago

        So in my case, I guess I need to blame the unconfigurable cable router my ISP provided me with? Since there's no way to provide reservations for IPv6 addresses. :-/

        • aaronmdjones a day ago

          Right. OpenWRT, for example, will automatically generate a random /48 within fd00::/8 to use as a ULA (unique local addressing) prefix for its LAN interfaces, and will advertise those prefixes to its clients. You can also manually configure a specific prefix instead.

          e.g. Imagine the following OpenWRT setup:

          ULA: fd9e:c023:bb5f::/48

          (V)LAN 1: IPv6 assignment hint 1, suffix 1

          (V)LAN 2: IPv6 assignment hint 2, suffix ffff

          Clients on LAN 1 would be advertised the prefix fd9e:c023:bb5f:1::/64 and automatically configure addresses for themselves within it. The router itself would be reachable at fd9e:c023:bb5f:1::1.

          Clients on LAN 2 would be advertised the prefix fd9e:c023:bb5f:2::/64 and automatically configure addresses for themselves within it. The router itself would be reachable at fd9e:c023:bb5f:2::ffff.

          Clients on LAN 1 could communicate with clients on LAN 2 (firewall permitting) and vice versa by using these ULA addresses, without any IPv6 WAN connectivity or global-scope addresses.

  • xp84 2 days ago

    Is it a gross generalization to say that if you're visiting a site whose name resolves to a private IP address, it's a part of the same organizational entity as your computer is?

    The proposal here would consider that site local and thus allowed to talk to local. What are the implications? Your employer whose VPN you're on, or whose physical facility you're located in, can get some access to the LAN where you are.

    In the case where you're a remote worker and the LAN is your private home, I bet that the employer already has the ability to scan your LAN anyway, since most employers who are allowing you onto their VPN do so only from computers they own, manage, and control completely.

    • EvanAnderson 20 hours ago

      > Is it a gross generalization to say that if you're visiting a site whose name resolves to a private IP address, it's a part of the same organizational entity as your computer is?

      Yes. That's a gross generalization.

      I support applications delivered via site-to-site VPN tunnels hosted by third parties. In the Customer site the application is accessed via an RFC 1918 address. It is not part of the Customer's local network, however.

      Likewise, I support applications that are locally-hosted but Internet facing and appear on a non-RFC1918 IP address even though the server is local and part of the Customer's network.

      Access control policy really should be orthogonal to network address. Coupling those two will inevitably lead to mismatches to work around. I would prefer some type of user-exposed (and sysadmin-exposed, centrally controllable) method for declaring the network-level access permitted by scripts (as identified by the source domain, probably).

    • rjmunro a day ago

      Don't some internet providers do large-scale NAT (CGNAT), so customers each get a 10.x address instead of a public one? I'm not sure if this is a problem or not. It sounds like it could be.

      • xp84 20 hours ago

        It wouldn’t be important in this scenario, because what your own IP address is doesn’t matter (and most of us are sitting behind a NAT router too, after all).

        It would block a site from scanning your other 10.x peers on the same network segment, thinking they’re “on your LAN” but that’s not a problem in my humble opinion.

thesdev 2 days ago

Off-topic: Is the placement of the apostrophe right in the title? Should it be "a users' local network" (current version) or "a user's local network"?

  • IshKebab 2 days ago

    It should be "from accessing a user's local network", or "from accessing users' local networks".

    • AndriyKunitsyn 13 hours ago

      Why do you think so? How is "a users' local network" significantly different from "a children's book"?

rs186 a day ago

Why is this a Chrome thing, not an Android thing?

I get that this could happen on any OS, and the proposal is from browser maker's perspective. But what about the other side of things, an app (not necessarily browser) talking to arbitrary localhost address?

  • will4274 6 hours ago

    Basically any inter-process communication (IPC). https://en.wikipedia.org/wiki/Inter-process_communication . There are fancier IPC mechanisms, but none as widely supported as just sending arbitrary data over a socket. It wouldn't surprise me if e.g. this is how Chrome processes communicate with each other.
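
    The loopback-socket flavor of that is a few lines of Python (a sketch; the port number is arbitrary):

        import socket

        # Process A: a local "server" waiting for a peer on loopback
        srv = socket.create_server(("127.0.0.1", 45678))
        conn, _ = srv.accept()
        print(conn.recv(1024))  # b'hello'

        # Process B, run separately on the same machine:
        #   socket.create_connection(("127.0.0.1", 45678)).sendall(b"hello")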

AStonesThrow 2 days ago

Chris Siebenmann weighs in with thoughts on:

Browers[sic] can't feasibly stop web pages from talking to private (local) IP addresses (2019)

https://utcc.utoronto.ca/~cks/space/blog/web/BrowsersAndLoca...

  • kccqzy 2 days ago

    The split-horizon DNS model mentioned in that article is, to me, insane. Your DNS responses should not change based on what network you are connected to. It breaks so many things. For one, caching breaks, because DNS caching is simplistic and only keyed on a TTL: there's no way to tell your OS to associate a cached DNS response with a particular network.

    I understand why some companies want this, but doing it on the DNS level is a massive hack.

    If I were the decision maker I would break that use case. (Chrome probably wouldn't though.)

    • parliament32 2 days ago

      > Your DNS responses should not change based on what network you are connected to.

      GeoDNS and similar are very broadly used by services you definitely use every day. Your DNS responses change all the time depending on what network you're connecting from.

      Further: why would I want my private hosts to be resolvable outside my networks?

      Of course DNS responses should change depending on what network you're on.

      • kccqzy 2 days ago

        > but if you're inside our network perimeter and you look up their name, you get a private IP address and you have to use this IP address to talk to them

        In the linked article using the wrong DNS results in inaccessibility. GeoDNS is merely a performance concern. Big difference.

        > why would I want my private hosts

        Inaccessibility is different. We are talking about accessible hosts requiring different IP addresses to be accessed in different networks.

        • dwattttt a day ago

          If you have two interfaces connected to two separate networks, you can absolutely have another host connected to the same two networks. That host will have a different IP for each of their interfaces, you could reach it on either, and DNS on each network should resolve to the IP it's reachable on on that network.

        • parliament32 18 hours ago

          Correct, and this is by design. Keeping in mind "hairpin"-style connections often don't work, also by design (leaving a network then hairpinning back into the same network).

          Let's say you have an internal employee portal. Accessing it from somewhere internal goes to an address in private space, while accessing it from home gives you the globally routable address. The external route might have more firewalls / WAFs / IPSes etc in the way. There's no other way you could possibly achieve this than by serving a different IP for each of the two networks, and you can do that through DNS, by having an internal resolver and an external resolver.

          > but you could just have two different fqdns

          Good luck training your employees to use two different URLs depending on what network they originate from.

    • kuschku 2 days ago

      I'm surprised you've never seen this before.

      Especially for universities it's very common to have the same hostname resolve to different servers, and provide different results, depending on whether you're inside the university network or not.

      Some sites may require login if you're accessing them from the internet, but are freely accessible from the intranet.

      Others may provide read-write access from inside, but limited read-only access from the outside.

      Similar situations with split-horizon DNS are also common in corporate intranets or for people hosting Plex servers.

      Ultimately all these issues are caused by NAT and would disappear if we switched to IPv6, but that would also circumvent the OP proposal.

      • kccqzy a day ago

        No I haven't seen this before. I have seen however the behavior where login is required from the Internet but not on the university network; I had assumed this is based on checking the source IP of the request.

        Similarly the use case of read-write access from inside, but limited read-only access from the outside is also achievable by checking the source IP.

        • kuschku a day ago

          But how do you check the source IP if everyone is behind NAT?

          Take the following example (all IPs are examples):

          1. University uses 10./8 internally, with 10.1./16 and 10.2./16 being students, 10.3./16 being admin, 10.4. being natsci institute, 10.5. being tech institute, etc.

          2. You use radius to assign users to IP ranges depending on their group membership

          3. If you access the website from one of these IP ranges, group membership is implied (sketched below); otherwise you'll have to log in.

          4. The website is accessible at 10.200.1.123 internally, and 205.123.123.123 externally with a CDN.

          Without NAT, this would just work, and many universities still don't use NAT.

          But with NAT, the website won't see my internal IP, just the gateway's IP, so it can't verify group membership.

          In some situations I can push routes to end devices so they know 205.123.123.123 is available locally, but that's not always an option.

          In this example the site is available externally through Cloudflare, with many other sites on the same IP.

          So I'll have to use split horizon DNS instead.
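
          Concretely, the membership check in step 3 is something like this (sketch; ranges as in my example above). Behind NAT, every internal client arrives as the gateway's single address, so every branch misses:

              // Sketch: infer group membership from the source IP range.
              // With NAT, the server only ever sees the gateway's IP, so
              // everyone falls through to the interactive login.
              function impliedGroup(clientIp: string): string | null {
                if (clientIp.startsWith("10.1.") || clientIp.startsWith("10.2.")) return "students";
                if (clientIp.startsWith("10.3.")) return "admin";
                if (clientIp.startsWith("10.4.")) return "natsci";
                if (clientIp.startsWith("10.5.")) return "tech";
                return null; // unknown range (e.g. the NAT gateway): require login
              }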

          • AStonesThrow 21 hours ago

            Ohh, your Example Documentation was sooo close to being RFC-compliant! Except for those unnecessary abbreviations of CIDR notation, and...

            You can use 203.0.113.0/24 in your examples because it is specifically reserved for this purpose by IETF/IANA: https://en.wikipedia.org/wiki/Reserved_IP_addresses#IPv4

            • kuschku 20 hours ago

              I usually try to write comments with proper notation and proper example values, but if — like in this instance — I'm interrupted IRL and lose my draft, I'll focus on getting my idea across at all rather than writing the perfect comment. Even if that leads to excessive abbreviations, slightly off example values, inconsistency between you/I/passive voice or past/present/future tense.

              In this case the comment you see is the third attempt, ultimately written on a phone (urgh), but I hope the idea came across nonetheless.

moktonar a day ago

The web is currently just "controlled code execution" on your device, and that will never be safe unless it's done properly. We need a real "web 3.0" where web apps run natively but containerized and properly sandboxed. That would bring both performance and security.

  • graemep a day ago

    The underlying problem is that we are trying to run untrusted code safely, with very few restrictions on its capabilities.

    • klabb3 a day ago

      Disagree. "Untrusted code" was thought to be a meaningful term 20-30 years ago, when you ran desktop OSs with big-name software like Microsoft Word and Adobe, plus games. What happened in reality is that this fence had false positives (i.e. Meta being one of your main adversaries) and an enormous number of false negatives (all the indie or small devs who would have their apps classified as viruses).

      The model we need isn’t a boolean form of trust, but rather capabilities and permissions on a per-app, per-site or per-vendor basis. We already know this, but it’s incredibly tricky to design, retrofit and explain. Mobile OSs did a lot here, even if they are nowhere near perfect. For instance, they allow apps (by default even) to have private data that isn’t accessible from other apps on the same device.

      Whether the code runs in an app or on a website isn’t actually important. There is no fundamental reason for the web to be constrained except user expectations and the design of permission systems.

zajio1am 2 days ago

This seems like a silly solution, considering we are in the middle of the IPv6 transition, where local networks use public addresses.

  • jeroenhd 2 days ago

    Even IPv6 has local devices. Determining whether that's a /64 or a /56 network may need some work, but the concept isn't all that different. Plus, you have ::1 and fe80::, of course.
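
    A sketch of the classification involved (simplified; it treats loopback and link-local fe80::/10 as "local", plus ULA fc00::/7 as an added assumption, since only ::1 and fe80:: are named above):

        // Sketch: decide whether an IPv6 literal counts as "local" for a
        // policy like this. String-prefix checks only; a real implementation
        // would parse the address and handle IPv4-mapped forms too.
        function isLocalV6(addr: string): boolean {
          const a = addr.toLowerCase();
          if (a === "::1") return true;                              // loopback
          if (/^fe[89ab]/.test(a)) return true;                      // fe80::/10 link-local
          if (a.startsWith("fc") || a.startsWith("fd")) return true; // fc00::/7 ULA
          return false; // global unicast: needs the /64-vs-/56 style heuristics
        }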

  • rerdavies a day ago

    Whatever happened to the IPv6 site-local and link-local address ranges (address ranges that were specifically defined as not crossing router or WAN boundaries)? They were in the original IPv6 standards, but don't seem to be implemented or supported. Or at least they aren't implemented or supported by the completely unconfigurable home cable router provided by my ISP.

    • fulafel 20 hours ago

      IPv6 in normal ethernet/wlan-style use requires link-local for functioning neighbour discovery (the equivalent of v4's ARP), so it's very likely it works. It's not meant for normal application usage, though. Site-local was phased out in favour of ULA etc.

      But if you're not using global addresses you're probably doing it wrong. Global addressing doesn't mean you're globally reachable; confusing addressing with reachability is the source of a lot of misunderstandings. You can think of it as "everyone gets their own piece of unique address space, not routed unless you want it to be".

  • MBCook a day ago

    So because IPv6 exists we shouldn’t even try?

    It’s insane to me that random internet sites can try to poke at my network or local system for any purpose without me knowing and approving it.

    With all we do for security these days, this is such a massive hole it defies belief. Ever since I first saw an enterprise product that just expected end users to run a local utility (really an embedded web server) for their website to talk to, I've been amazed this hasn't been shut down.

  • mbreese 2 days ago

    Even in this case, it could be useful to limit the access websites have to local servers within your subnet (/64, etc.), which might be a better way to define the "local" network.

    (And then corporate/enterprise managed Chrome installs could have specific subnets added to the allow list)

eternityforest a day ago

I really hope this gets implemented, and more importantly, I really hope it includes the ability to access an HTTP local site from an HTTPS page.

There are so many excellent home automation and media/entertainment use cases for something like this.

b0a04gl a day ago

this thing's leaking. localhost ain't private if random sites can hit it and get responses. devices are still exposing ports like it's 2003. prompts don't help; people just click through until the dialog goes away. CORS isn't doing much either; it's just noise at this point. the issue's been sitting there forever, everyone patches on top, but none of these local services even check who's knocking. they just answer. every time.

similar thread: https://news.ycombinator.com/item?id=44179276

elansx 20 hours ago

The sooner this happens, the better.

calibas 21 hours ago

Choose one:

Web browsers use sandboxing to keep you safe.

Web browsers can portscan your local network.

Hnrobert42 a day ago

Ironic, given that on my Mac, Chrome always asks to find other devices on my network but Firefox never does.

parliament32 2 days ago

Won't this break every local-device OAuth flow?

otherayden a day ago

This seems like such a no-brainer, I’m shocked this isn’t already something sites need explicit permission to do

grahamj 17 hours ago

I don't see this mentioned anywhere, but Safari on iOS already does this. If you try to access a local network endpoint, you'll be asked to allow it by Safari, and the permission is per-site.

fulafel a day ago

A browser can't tell if a site is on the local network. Private-range addresses may not be on the local network, and conversely a local network may use global addresses, especially with v6.

gostsamo 2 days ago

What is so hard about blocking apps on Android from listening on random ports without permission?

  • jeroenhd 2 days ago

    The same thing that makes blocking ports on iOS and macOS so hard: there's barely any firewall on these devices by default, and the ones users may find cause more problems than they solve.

    Listening on a specific port is one of the most basic things software can possibly do. What's next, blocking apps from reading files?

    Plus, this is also about blocking your phone's browser from accessing your printer, your router, or that docker container you're running without a password.

    • elric a day ago

      That doesn't seem right. I can't speak to macOS, but on Android every application is sandboxed, and restricting its capabilities is trivial. Android apps certainly ARE blocked from reading files, except for some files in their own storage and files the user grants them access to.

      Adding two Android permissions would fix this entire class of exploits: "run local network service", and "access local network services" (maybe with a whitelist).

  • zb3 2 days ago

    It's not only about Android; it's about exploiting local services too.

phkahler a day ago

This should not be possible in the first place. There is no legitimate reason for it. Having users grant "consent" is just a way to make it more OK, not to stop it.

  • auxiliarymoose a day ago

    There are definitely legitimate reasons—for example, a browser-based CAD system communicating with a 3D mouse.

cwilby a day ago

Is it just me or am I not seeing any example that isn't pure theory?

And if it is just me, fine, I'll jump in - they should also make it so that users have to approve local network access three times. I worry about the theoretical security implications that come after they only approve local network access once.

numpad0 2 days ago

Personally I had completely forgotten that anyone and anything can do this right now.

TL;DR, IIUC: right now, random websites can try accessing content on local IPs. A page can blind-load e.g. http://192.168.0.1/cgi-bin/login.cgi from JavaScript, iterating through a gigantic malicious list of known-useful URLs, then grep the results and send back whatever it wants to share with advertisers, or try POSTing backdoors to a printer's update page. No, we don't need that.
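
A sketch of what that probing looks like (illustrative only): CORS hides the response body, but the request still fires, and the error type and timing leak whether anything answered.

    // Sketch: opaque "no-cors" requests can't be read, but whether and
    // how fast they fail still reveals if a device is listening.
    async function probe(origin: string): Promise<boolean> {
      const t0 = performance.now();
      try {
        await fetch(origin, { mode: "no-cors", signal: AbortSignal.timeout(1000) });
        return true;                        // opaque response: host is alive
      } catch {
        return performance.now() - t0 < 50; // fast refusal can also mean "alive"
      }
    }

    // e.g. await probe("http://192.168.0.1/cgi-bin/login.cgi")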

Of course, OTOH, many webapps today use localhost access to pass tokens and to talk to cooperating apps, but you only need access to 127.0.0.0/8 for that, which is harder to abuse, so that range could be exempted by default.

Disabling this, as proposed, does not affect your ability to open http://192.168.0.1/login.html directly, as that's just another "web" site. Only if JS on http://myNAS.local/search-local.html wants to access http://myLaptop.local:8000/myNasDesktopAppRemotingApi do you have to click some buttons to allow it.

Edit: uBlock Origin has a filter for this[1]; it was unchecked in mine.

1: https://news.ycombinator.com/item?id=44184799

  • MBCook a day ago

    > so that range can be default exempted

    I disagree. I know it’s done, but I don’t think that makes it safe or smart.

    Require the user to OK it, and require the server to send a header naming the one _exact_ port it will access. Require that the local server _must_ use CORS and explicitly allow that site.

    No website not loaded from localhost should ever be allowed to just hit random local/private IPs and ports without explicit permission.
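
    For what it's worth, as I understand it Chrome's related Private Network Access draft already sketches a handshake in this direction: a CORS preflight carrying Access-Control-Request-Private-Network: true that the device must explicitly answer. A hypothetical local server opting in (Node-style sketch; the origin and port are illustrative, and the exact-port pinning I'd want on top is not part of any current draft):

        import { createServer } from "node:http";

        // Sketch of a local device explicitly opting in, PNA-style. The
        // Allow-Private-Network header follows Chrome's draft; the origin
        // and port are illustrative.
        createServer((req, res) => {
          const headers = {
            "Access-Control-Allow-Origin": "https://app.example.com",
            "Access-Control-Allow-Private-Network": "true",
          };
          if (req.method === "OPTIONS") { // CORS/PNA preflight
            res.writeHead(204, headers);
            return res.end();
          }
          res.writeHead(200, headers);
          res.end("ok");
        }).listen(8000);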

  • reassess_blind a day ago

    The server has to allow cross-origin requests for the page to read a response, though, right?

Pxtl a day ago

Honestly I think cross-site requests were a mistake. Tracking cookies, hacks, XSS attacks, etc.

My relationship is with your site. If you want to outsource that to some other domain, do that on your servers, not in my browser.

  • elric a day ago

    The mistake was putting CORS on the server side. It should have been part of the browser. "Facebook.com wants to access foo.example.com: y/n?"

    But then we would have had to educate users, and ad peddlers would have lost revenue.

  • AStonesThrow a day ago

    Cross-site requests have been built into the design of the WWW since the beginning. The whole idea of hyperlinking from one place to another, and amalgamating media from multiple sites into a single page, is the essence of the World Wide Web that Tim Berners-Lee conceived at CERN, based on the HyperCard stacks and the Gopher and WAIS services that preceded it.

    Of course it was only later that cookies and scripting and low-trust networks were introduced.

    The WWW was conceived as more of a "desktop publishing" metaphor, where pages could be formatted and multimedia presentations could be made and served to the public. It was later that the browser was harnessed as a cross-platform application delivery front-end.

    Also, many sites do carefully try to guard against "linking out" or letting the user escape their walled gardens without a warning or disclaimer. As much as they may rely on third-party analytics and ad servers, most webmasters want users to remain on their site, interacting with the same site, without following an external link that would end their engagement or web session.

    • elric a day ago

      CORS != hyperlinks. CORS is about random websites in your browser accessing other domains without your say-so. Websites doing stuff behind your back does feel antithetical to Tim Berners-Lee's ideals...

    • Pxtl a day ago

      I'm aware of that, but obviously there's a huge difference between the user clicking a link and navigating to a page on another domain, and the site making that request on the user's behalf for a blob of JS.

owebmaster 2 days ago

I propose restricting android apps, not websites.

  • jeroenhd 2 days ago

    Android apps need UDP port binding to function. You can't do QUIC without UDP. Of course you can (and should) restrict localhost-bound ports to the namespaces of individual apps, but there is no easy solution to this problem at the moment.

    If you rely on users having to click "yes", then you're just making phones harder to use because everyone still using Facebook or Instagram will just click whatever buttons make the app work.

    On the other hand, I have yet to come up with a good reason why arbitrary websites need to set up direct connections to devices within the local network.

    There's the IPv6 argument against the proposed measures, which requires work to determine whether an address is local or global, but the IPv6 space is also much more difficult to enumerate than the IPv4 space that some websites try to scan. That doesn't mean IPv4 addresses shouldn't be protected at all, either. Even with an IPv6-shaped hole, blocking local networks (both IPv4 and local IPv6) by default makes sense for websites originating from outside.

    IE did something very similar to this decades ago. They also had a system for displaying details about websites' privacy policies and data sharing. It's almost disheartening to see we're trying to come up with solutions to these problems again.

  • bmacho 20 hours ago

    Android apps obviously shouldn't be able to just open or read a global communication channel on your device. But this applies to websites too.

Joel_Mckay 21 hours ago

Thus ignoring local private web servers, and bypassing locally administered network zone policy.

Seems like a sleazy move to draw down even more user DNS traffic data, and a worse solution than the default mitigation policy in NoScript =3

naikrovek a day ago

Why can browsers do the kinds of things they do at all?

Why does a web browser need USB or Bluetooth support? It doesn't.

Browsers should not be the universal platform. They’ve become the universal attack vector.

  • auxiliarymoose a day ago

    With WebUSB, you can program a microcontroller without needing to install local software. With Web Bluetooth, you can wirelessly capture data from + send commands to that microcontroller.
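
    The consent model is the key part (sketch; the vendor-ID filter is illustrative): nothing is enumerable until the user picks a device from a browser-drawn chooser.

        // Sketch: WebUSB needs a user gesture plus an explicit pick from a
        // chooser; pages can't silently enumerate devices. 0x2341 is just an
        // illustrative vendor ID filter.
        async function connectBoard(): Promise<void> {
          const device = await navigator.usb.requestDevice({
            filters: [{ vendorId: 0x2341 }], // user sees and selects the device
          });
          await device.open();
          await device.claimInterface(0);
          // ...then device.transferOut(...) to send commands/firmware
        }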

    As a developer, these standards prevent you from needing to maintain separate implementations for Windows/macOS/Linux/Android.

    As a user, they let you grant and revoke sandbox permissions in a granular way, including fully removing the web app from your computer.

    Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.

    WebUSB and Web Bluetooth are opt-in when the site requests a connection/permission, as opposed to unlimited access by default for native apps. And if you don't want to use them, you can choose a browser that doesn't implement those standards.

    What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?

    • naikrovek a day ago

      I’m ok with needing non-browser software for those things.

      > Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.

      Sure, until advertising companies find ways around and through those sandboxes, because browser authors want browsers to be capable of more in the name of a cross-platform solution. The more a browser can do, the more surface area the sandbox has. (An advertising company makes the most popular browser, by the way.)

      > What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?

      There isn’t one, other than maybe video game engines, but it doesn’t matter. OS vendors need to work to make cross-platform software possible; it’s their fault we need a cross-platform solution at all. Every OS is a construct, and they were constructed to be different for arbitrary reasons.

      A good app-permission model in the browser is much more likely to happen, but I don’t see that really happening, either. “Too inconvenient for users [and our own in-house advertisers/malware authors]” will be the reason.

      macOS handles permissions pretty well, but it could do better. If something wants local network permission, the user gets prompted. If the user says no, those network requests fail. Same with filesystem access. Linux will never have anything like this, nor will Windows, but it's what security looks like, probably.

      Users will say yes to those prompts ultimately, because as soon as users have the ability to say “no” on all platforms, sites will simply gate site functionality behind the granting of those permissions because the authors of those sites want that data so badly.

      The only thing that is really going to stop behavior like this is law, and that is NEVER going to happen in the US.

      So, short of laws, browsers themselves must stop doing stupid crap like allowing local network access from sites that aren’t on the local network, and nonsense stuff like WebUSB. We need to give up on the idea that anyone can be safe on a platform when we want that platform to be able to do anything. Browsers must have boundaries.

      Operating systems should be the police, probably, and not browsers. Web stuff is already slow as hell, and browsers should be less capable, not more capable for both security reasons and speed reasons.

xyst a day ago

Advertising firms hate this.

hulitu 2 days ago

> A proposal to restrict sites from accessing a users' local network

A proposal to treat web browsers as malware? Why would a web browser connect to a socket/the internet?

  • loumf 2 days ago

    The proposal is directed at the websites in the browser (using JS, embedded images or whatever), not the code that implements the browser.

hello_computer 2 days ago

just the fact that this comes from google is a hard pass for me. they sell so many adwords scams that they clearly do not give a damn about security. “security” from google is just another one of their trojan horses.

  • fn-mote 2 days ago

    Don't post shallow dismissals. The same company runs Project Zero, which has a major positive security impact.

    [1]: https://googleprojectzero.blogspot.com/

    • hello_computer a day ago

      project zero is ZERO compared to the millions of little old ladies around the world getting scammed through adwords. only security big g cares about is its own. they have the tools to laser-in on and punish the subtlest of wrongthink on youtube, yet it’s just too tall of an order to focus the same laser on tech support scammers…

themikesanto 2 days ago

Google loves wreaking havoc on web standards. Is there really anything anyone can do about it at this point? The number of us using alternative browsers is a drop in the bucket compared to Chrome's market share.

  • charcircuit a day ago

    Google open-sources the implementation, which any other browser is free to use.

bethekidyouwant 20 hours ago

Make this malicious website and show me that it works. I have doubts.

gnarbarian 2 days ago

I don't like the implications of this. Say you want to host a game that has a LAN-play component: that would now be disallowed.

neuroelectron a day ago

The CIA isn't going to like this. I bet the Google monopoly case suddenly reaches a new resolution.

zelon88 2 days ago

I understand the idea behind it and am still kind of chewing on the scope of it all. It will probably break some enterprise applications and cause help desk or group policy/profile headaches for some.

It would be nice to know when a site is probing the local network. But by the same token, here is Google once again putting barriers on self-sufficiency and using them to promote their PaaS goals.

They'll gladly narc on your self-hosted application doing what it's supposed to do, but what about the 23 separate calls to Google CDN, ads, fonts, etc. that every website has your browser make?

I tend to believe this particular functionality is no longer of any use to Google, which is why they want to deprecate it, raising the barrier to entry for others.

  • iforgotpassword 2 days ago

    Idk, I like the idea of my browser warning me when a random website I visit tries to talk to my network. If there's a legitimate reason, I can still click yes. This is orthogonal to any ads and data collection.

    • Henchman21 2 days ago

      I have this today on macOS. To me it feels more appropriate to have the OS attempt to secure running applications.

      • happyopossum 2 days ago

        No you don't - you get a single permission prompt for the entire browser. You definitely don't get any per-site permission options from the OS.

        • Henchman21 2 days ago

          Ah I misunderstood, thank you

  • Henchman21 2 days ago

    I agree that any newly proposed standards for the web coming from Google should be met with a skeptical eye — they aren’t good stewards IMO and are usually self-serving.

    I’d be interested in hearing what the folks at Ladybird think of this proposal.

jenny91 a day ago

On a quick look, isn't this a bit antithetical to the concept of the internet as a decentralized and hierarchical system? You have to route through the public internet to interoperate with the rest of the public internet?