amluto 2 days ago

Just reading the description, I guessed it would be a fountain code. And, indeed, it’s a very very thin wrapper around RaptorQ. So it gets data through a link that drops each packet, independently, with moderate probability, at approximately the maximum rate in the sense of number of packets transmitted divided by total message size.

What it does _not_ do is anything resembling intelligent determination of appropriate bandwidth, let alone real congestion control. And it does not obviously handle the part that fountain codes don't give you for free: a way to stream out data that the sender didn't know in full from the very beginning.
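
For reference, the fountain-code core it wraps looks roughly like this (a minimal sketch against the raptorq crate's public API; the 1400-byte MTU and repair-packet count are illustrative choices, not necessarily nyxpsi's):

  // Encode a message into source + repair packets; decode from any
  // sufficiently large subset, regardless of which packets were lost.
  use raptorq::{Decoder, Encoder, EncodingPacket};

  fn main() {
      let data: Vec<u8> = vec![0xAB; 10_000];

      // Split into ~1400-byte symbols and add 8 repair packets per block.
      let encoder = Encoder::with_defaults(&data, 1400);
      let packets: Vec<Vec<u8>> = encoder
          .get_encoded_packets(8)
          .iter()
          .map(|p| p.serialize())
          .collect();

      // Simulate independent loss by dropping every third packet.
      let mut decoder = Decoder::new(encoder.get_config());
      let mut result = None;
      for (i, pkt) in packets.iter().enumerate() {
          if i % 3 == 0 {
              continue; // lost in transit
          }
          result = decoder.decode(EncodingPacket::deserialize(pkt));
          if result.is_some() {
              break; // enough packets arrived to reconstruct
          }
      }
      assert_eq!(result.unwrap(), data);
  }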

  • commandersaki 2 days ago

    Would a protocol like this be useful for a data diode?

    • Beretta_Vexee 2 days ago

      It seems to me that there are exchanges between the client and the server. So it wouldn't be usable in its current state with a network diode.

      Network diodes have very low packet loss rates; the simplest ones are literally fibre-to-Ethernet transducers with a disconnected return channel and a few centimetres of fibre. There is no external disturbance that could generate significant bit flips. The more complex diodes use multiple transducers, power supplies, and fibres for redundancy, plus protocols to detect when one of the channels is out of order.

      The diode manufacturers don't give details, but we're probably looking at rather simple error-detection and parity codes that can be easily implemented on an ASIC or FPGA to achieve high data rates.

      On top of this you can run unidirectional protocols over UDP. But the diode works mainly at the physical and data-link layers of the OSI model.

      Some diode manufacturers include proxies to simulate an FTP server and make it easier for users who just want to retrieve logs from critical infrastructure.

nu11ptr 2 days ago

This is neat, but I'm a little confused about those benchmark numbers and what they mean exactly. For example, with 10% or 50% packet loss you aren't going to get a TCP stream to do anything reasonable. It will seem to just "pause" and make very, very slow progress. When we talk about loss scenarios, we are typically talking about single-digit loss, and more often well under 1%. Scenarios of 10 to 50% loss are catastrophic, where TCP effectively ceases to function, so if this protocol works well in that environment, it is an impressive feat.

EDIT: It probably also needs to be clarified which TCP congestion-control algorithm they are using. The TCP standards just dictate framing, windowing, etc., but algorithms are free to use their own strategies for retransmission and bursting, and the algorithm used makes a big difference in differing loss scenarios.

EDIT 2: I just noticed the number in parens is transfer success rate. Seeing 0% for 10% and 50% loss for TCP sounds about right. I'm still not sure I understand their UDP numbers, as UDP isn't a stream protocol, so raw transferred data would be 100% minus the loss rate, unless they are using some protocol on top of it.

  • AdamJacobMuller 2 days ago

    With the # in parens being success rate, the timing makes little sense to me.

    A TCP connection with 10% loss will work and transfer data (it's gonna suck and be very, very, very slow), but their TCP 10% loss example is somehow faster?

    TCP being reliable will just get slower with loss until it eventually can't transmit any successful packets within some TCP timeout (or some application-level timeout).

    Even a 50% packet loss connection will work, within some definitions of the word "work". This also brings up the biggest missing point in that chart: all of this depends heavily on latency.

    50% loss on a connection with 1ms latency is much more tolerable than 1% loss on a connection with 1000ms latency and will transfer faster (caveats around algorithms and other things apply, but this is directionally correct).

    A real chart for this would be a graph where X is the packet-loss percentage and Y is the latency, with distinct lines per protocol (really, one line per defined protocol configuration, e.g. TCP CUBIC with Nagle vs. without, with/without some device doing RED in the middle, different RED configurations, etc.; there are many parameters to test here).

    If this sounds negative, it's not; I think the research around effective high-latency protocols is very interesting and important. I was thinking recently (probably due to all the SpaceX news) about what the internet will look like for people on the moon or on Mars. The current internet will just not work for them at all. We will require very creative solutions to make useful open internet connections which aren't locked down to Apple/Facebook/Google/X/Netflix/etc.

    • saurik 2 days ago

      A big problem with TCP's loss avoidance is that it assumes the cause of loss is congestion, so if you use it on a network with random packet loss it can just get slower and slower and slower until it effectively stops working entirely.

      • AdamJacobMuller 2 days ago

        Agreed that it's a problem, but probably not a huge one on the internet today, since most loss is actually a result of congestion.

        It has led to some weird things whereby the L2 protocols (thinking WiFi and LTE/cellular) have their own reliability layer to combat the problem you're describing. I'm not sure if things would be better or worse if they didn't do this and TCP was responsible; the iteration of solutions would be much slower and probably could never be as good as the current situation, where the network presents a less-lossy layer to TCP.

        We have to completely rethink things for interplanetary networking.

        I thought I recognized your username, I remember jailbreaking the original iPhone on IRC with you helping :)

        • saurik a day ago

          Sure, but this is a simulator with random packet loss that has nothing to do with congestion, isn't it?

      • singron 2 days ago

        Similarly, I wonder if nyxpsi has congestion control? It's probably tricky to implement if you are trying to work with a crap network. I guess you could see how packet loss or latency responds to throughput, but then you need to change the throughput periodically while transmitting, which slows down the transfer.

  • daxdev 2 days ago

    Not sure what you're asking then. It uses the standard UDP implementation, and the benchmarks have to do with transferring the entirety of the sample data: not individual packet loss, but receiving the whole content of the message. There is no retransmission. It's all or nothing...

  • fulafel 2 days ago

    Yes; comparing against "UDP" this way especially doesn't make sense.

    • klempner 2 days ago

      What's particularly confusing is that this appears to be coming from the dev team themselves -- this isn't some rando pointing at an interesting, unpolished GitHub repo they found.

      Like, if they're looking for publicity by sharing their GitHub page, I'd expect the readme to have a basic elevator pitch, but their benchmarking section is a giant category error, and it's missing even the most high-level summary of what it is doing to achieve good throughput at high packet loss rates.

iczero 2 days ago

Hi, respectfully, you do not appear to understand how networking works.

> https://github.com/nyxpsi/nyxpsi/blob/bbe84472aa2f92e1e82103...

This is not how you "simulate packet loss". You are not "dropping TCP packets". You are never giving your data to the TCP stack in the first place.

UDP is incomparable to TCP. Your protocol is incomparable to TCP. Your entire benchmark is misguided and quite frankly irrelevant.

As far as I can tell, absolutely no attempt is made whatsoever to retransmit lost packets. Any sporadic failure (for example, a WiFi dropout for 5 seconds) will result in catastrophic data loss. I do not see any connection logic, so your protocol cannot distinguish between connections other than by hoping that ports are never reused.

Have you considered spending less time on branding and more time on technical matters? Or was this supposed to be a troll?

edit: There's no congestion control nor pacing. Every packet is sent as fast as possible. The "protocol" is entirely incapable of streaming operation, and hence message order is not even considered. The entire project is just a thin wrapper over the raptorq crate. Why even bother comparing this to TCP?
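
For what it's worth, a loss simulation that keeps the real stack in the loop drops packets on the path between the endpoints rather than skipping the send call. A minimal sketch of a lossy UDP forwarder (the addresses and loss rate are made up for illustration; for TCP you'd typically reach for something like Linux's netem qdisc instead):

  // Forward UDP datagrams, randomly dropping 10% "in transit", so the
  // protocol under test still executes its full send/receive logic.
  use rand::Rng;
  use std::net::UdpSocket;

  fn main() -> std::io::Result<()> {
      let loss = 0.10;
      let listen = UdpSocket::bind("127.0.0.1:5000")?; // sender -> here
      let upstream = "127.0.0.1:6000"; // -> receiver under test
      let mut buf = [0u8; 65_535];
      let mut rng = rand::thread_rng();
      loop {
          let (n, _src) = listen.recv_from(&mut buf)?;
          if rng.gen::<f64>() < loss {
              continue; // dropped by the simulated network
          }
          listen.send_to(&buf[..n], upstream)?;
      }
  }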

  • acheong08 2 days ago

    The project seems to have been made for their portfolio rather than for any practical purpose (nothing wrong with that). It's just a few hundred lines of hard-coded server/client code wrapping raptorQ.

    • iczero 2 days ago

      They list their email as:

      > For more information or to contact us open a PR or email us at nyxpsi@skill-issue.dev

      This sounds to me like a troll. In any case, I don't think blatantly advertising one's lack of expertise in networking is a good way of improving one's portfolio.

      • jeroenhd 2 days ago

        skill-issue.dev has no A or AAAA records, but it does seem to have MX records, so the email address is valid. Not very professional perhaps, but it's not unroutable at least.

      • acheong08 a day ago

        > I don't think blatantly advertising your lack of expertise in networking is a good way of improving one's portfolio.

        Who’s reading code these days?

        [x] Sounds cool [x] Good number of stars

        Hired

  • vitus 2 days ago

    I'm assuming you're getting downvoted for your tone, but your technical points are spot on.

vitus 2 days ago

Are we assuming that ~all packet loss is due to the physical medium, and almost none due to congestion?

The reason why TCP will usually fall over at high loss rates is because many commonly-used congestion controllers assume that loss is likely due to congestion. If you were to replace the congestion control algorithm with a fixed send window, it'd do just fine under these conditions, with the caveat that you'd either end up underutilizing your links, or you'd run into congestive collapse.

I'm also not at all sure that the benchmark is even measuring what you'd want it to. I cannot see any indication of attempting to retransmit any packets under TCP -- we're just sometimes writing the bytes to a socket (and simulating the delay), and declaring the transfer incomplete when not all the bytes showed up at the other side? You can see that there's something especially fishy in the benchmark results -- the TCP benchmark at 50% loss finishes running in 50% of the time... because you're skipping all the logic in those cases.

https://github.com/nyxpsi/nyxpsi/blob/main/benches/network_b...

The Mathis equation suggests that even with 50% packet loss at 1 ms RTT, the max achievable TCP [edit: Reno, IIRC] throughput is about 16.5 Mbps.
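
For reference, that figure comes from the Mathis et al. approximation (the MSS below is my assumption, picked to roughly match):

  BW ≈ (MSS / RTT) · (C / sqrt(p)),  with C = sqrt(3/2) ≈ 1.22

  MSS = 1200 B, RTT = 1 ms, p = 0.5:
  BW ≈ (1200 · 8 bit / 1 ms) · (1.22 / 0.707) ≈ 16.6 Mbit/s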

HippoBaro 2 days ago

It would be great to know a bit more about the protocol itself in the readme. I'm left wondering whether it's reliable, connection-oriented, stream- or message-based, etc.

dools 2 days ago

Since it is so stable under lossy conditions, it might be a good candidate for VoIP applications. The protocol du jour for VoIP is UDP, because most of the time you really want to just drop your packets and move on rather than try to retransmit. But since the transfer speed in this case seems to be immune to the effects of packet loss, and it performs just as well as UDP in a 0% packet loss environment, it seems like it would produce superior call quality more consistently than either TCP or UDP.

ggm 2 days ago

What are the conditions leading to extreme packet loss in layers 2&3 in the first place?

I can imagine noisy RF, industrial environments, congested links, queueing at the extremes in densely loaded switches, but the thing is: there are usually strategies out there to reduce the congestion. External noise, factory/industrial/adversarial, sure. That is going to exist.

  • AdamJacobMuller 2 days ago

    At 0-1%, basically anything.

    Your WiFi and home network are probably closer to 1% than 0%.

    At 1-10%, you're probably on some kind of shared connection (CMTS or GPON with TDMA) and your provider has an overloaded network design (and you're at the end of the upgrade/split queue).

    At anything above 10%, you're in the realm of weird broken stuff. Congestion on the internet between major providers is a thing, but it is far less common than it was a decade ago; it does still happen though. Major providers have N-way ECMP or LAG links inside their backbones or between providers, and when one of those links breaks in a way which doesn't remove it from the ECMP/LAG, suddenly you drop 1/N of the traffic (where N might be 2 == 50% loss). More common was finding LAG/ECMP where 1/N of the links was oversubscribed but the others were fine, due to unequal traffic distribution.

    > noisy RF, industrial

    Pretty uncommon in my experience, as most of these environments are not as cost-optimized and already know that they will be in weird RF/electrical environments, so they just use wired Ethernet (and even shielded cable for really janky situations).

    While there are strategies to reduce congestion on backbone links, they are not commonly implemented, even less commonly implemented well, and sometimes implemented intentionally poorly.

    • ggm 2 days ago

      There isn't much point in weird algorithmic solutions to 1% packet loss, and most people are living under 5% most of the time. I would say, as an 80/20-class decision, most of us either have layer-2 loss in the link to the provider and then not much loss, or have some corner case of congestion, like intercontinental Zoom.

      Remember, 75% of content now comes from a DC within 1 AS hop of your ISP.

      For non-western economies reliant on mobile IP, there is a level of loss and congestion being seen at the radio packet layer, but that layer is usually dealing with it.

      Starlink has worked out how to manage its TCP loss issues despite steering dishy between sats in orbit on a 15-second timeslot model.

      I'm not trying to dismiss this work: I don't understand the use-case because, from my perspective (admittedly in a western economy, but based in Asia and exposed to the other kind of internet for less developed places), this level of sustained packet loss isn't usual. And when it is, the link layer is usually doing something like FEC to take care of it.

      • zamalek 2 days ago

        Even though 1% loss is relatively common, it is a catastrophic amount of loss, and it goes to show the miracle of TCP. 1% loss does not mean 1% less goodput; it means substantially more than 1%. TCP's (and QUIC's, and what-have-you's) control packets are also subject to that loss, as are individual packet fragments; the probability of retransmissions explodes.
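
        To put a Mathis-style number on it (the MSS and RTT here are assumptions for illustration):

          BW ≈ (1460 · 8 bit / 50 ms) · (1.22 / sqrt(0.01)) ≈ 2.8 Mbit/s

        So 1% random loss at 50 ms RTT caps a single Reno-style flow at roughly 3 Mbit/s even on a gigabit link: a >99% goodput hit, not 1%.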

      • AdamJacobMuller 2 days ago

        I think the use-case for the author is "this is cool", which is the best reason for anything to exist.

  • vasilvv 2 days ago

    Generally, L2 networks are engineered with the assumption that they will carry TCP, and TCP performs really poorly at high loss rates (it depends on the specific congestion control used, but the boundary can be anywhere between 1% and 25%), so they try to ensure at the L2 level that losses are minimal. There are some scenarios in which a network can be engineered around high loss rates (e.g. some data center networks), but those don't use TCP, at least not with traditional loss recovery.

    Error-correction codes at the L4 level are generally only useful for very low-latency situations, since if you can wait for one RTT, you can just have the original sender retransmit the exact packets that got lost, which is inherently more efficient than any ECC.

e-dant 2 days ago

What’s the protocol actually?

All I can see are hardcoded ping/pong “meow” messages going over a hardcoded client and server.

But maybe the ping/pong is part of the protocol?

It’s not clear.

Anyway, this redundancy-based protocol doesn’t seem to take into account that too many packets over the network can be a cause of bad, “overloaded” network conditions.

RaptorQ is a nice addition, though.

  • daxdev 2 days ago

    We are getting there; this was hacked together over the weekend. So for now, enjoy your cat sounds. MEOW

mmastrac 2 days ago

Went looking for fountain codes, was not disappointed [1]. It's a shame these have been locked up for so long -- there's a lot of network infrastructure that could be improved by them.

The patents are hopefully expiring soon.

Generic tornado codes are likely patent-free, the patents having expired a few years ago: https://en.wikipedia.org/wiki/Tornado_code

EDIT: looking a bit deeper into this repo, it's really just a wrapper over the raptorq crate with a basic UDP layer on top. Nothing really novel here that I can see over `raptorq` itself.

[1] https://crates.io/crates/raptorq

  • bb88 2 days ago

    > There's a lot of network infrastructure that could be improved by them.

    One killer feature was multicast streaming of data. The stream could broadcast an extra ~5% of repair packets instead of doing several round trips and retransmissions per receiver.

    Now that I think about it, I wish we actually used multicast.

    • zamadatix 2 days ago

      The hard part about multicast is the scaling overhead of coordinating which streams should go where when it's more than "these couple things on these couple networks". Even then, the fact that it's a dedicated range of IPs instead of part of the public assignments pretty much shut down the idea that it could ever be coordinated at scale outside of a few private networks doing it together.

      It has found good success in IPTV delivery inside providers' own networks, though. Cameras at casinos/hotels/transit systems and the like, too.

      • elcritch 2 days ago

        IPv6 works much better with multicast. I learned about it a few years back, and it's actually core to the IPv6 protocol. That means all IPv6 routers must support multicast.

        There are 2^112 possible global multicast addresses with IPv6 as well (1). Though yeah, you'll still have queuing overloads and other issues.

        1. https://learningnetwork.cisco.com/s/question/0D53i00000Kt0EK...

        • zamadatix 2 days ago

          I'm a big IPv6 fan, but the multicast improvements are oft overstated. The meat-and-potatoes changes are about getting rid of broadcasts in a LAN (in a way where a dumb switch will still treat them as broadcast anyway), not about actual routing of multicast between LANs. There is no requirement that IPv6 routers support routing multicast outside of the LAN (a completely different task). There is no public assignment of ff00::/8; it's still a free-for-all for generic streams. There's nothing that makes routing it across LANs easier to scale and orchestrate (in fact, the protocols for this are separate from IP anyway).

          Effectively, the only real "improvement" for the routed multicast case is that you have more private multicast addresses to pick from.

          • elcritch 10 hours ago

            Yeah, it's unfortunate that multicast outside your own LAN isn't better supported. It makes sense, as it'd require more orchestration above IP.

            Though I'd argue that 112 bits of random addresses makes global registration largely unnecessary. Similarly to the rest of IPv6, the address space is intentionally so large that it allows random address generation with very low collision probability.

        • wmf 2 days ago

          > all ipv6 routers must support multicast

          That doesn't help if all routers have it turned off.

      • xxpor 2 days ago

        Copying packets is brutal in software as well.

        • zamadatix 2 days ago

          Pushing packets in software is generally brutal but multicast/broadcast should be inherently easier. It's less "copy this packet 27 times" and more "instead of receiving 27 packets and sending 27 packets you receive 1 packet and send it 27 times before you remove it from memory". The "hard" part becomes dealing with the queues filling up because you're inherently able to churn out so much more data than you're able to receive vs unicast.

    • ddingus 2 days ago

      Me too.

      SGI had good support for it in IRIX and I am pretty sure I saw one video streaming solution using it on a LAN to maximize the throughput possible.

      Seems like a good tech that fell into disuse.

      • vermilingua 2 days ago

        IP CCTV and IPTV (think hotels) make extensive use of multicast, so it's not completely disused; but it's only practical on a LAN.

        • ddingus 21 hours ago

          Good to know. That was basically the use case for it back in the day too. Multiple video streams.

  • xarope 2 days ago

    TIL about fountain codes and raptor. Thanks!

    I wonder if it opens up new opportunities as an erasure-coding replacement, e.g. for RAID on disks, PAR2 for data recovery, or as a Reed-Solomon replacement for comms?

    • nullc 2 days ago

      These sorts of codes are not really the thing you want for RAID-N. They work best with coding groups of thousands of packets or more.

      For adding redundant blocks to a read-mostly/read-only file (e.g. to correct for sector errors) they could be useful indeed, as that's a case where you might have a few dozen correction packets protecting millions. I'd really like to see some FS develop support for file protection, because on SSDs I'm seeing a LOT more random sector failures than disk failures.

      RS codes are optimal for erasures so you really only want to use something else where there are so many packets in the group that RS code performance would be poor... or where a rateless code would be useful.

    • bb88 2 days ago

      For comms it's better, since raptorq can generate a large number of symbols. One use would be a TCP replacement. If a file can be transmitted in 100 TCP packets and there's 10% loss on the link, then the client needs the lost packets retransmitted, in order.

      With raptorq, the client needs to receive 102 packets (I believe), and these can be any combination of the original 100 packets and the large number of potential recovery packets.
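
      The back-of-envelope (the K+2 threshold is RaptorQ's commonly cited high-probability decode point; the rest is illustrative arithmetic):

        K = 100 source packets, loss p = 0.10
        decode succeeds w.h.p. from any ~K + 2 = 102 received packets
        send N with (1 - p) · N ≥ 102  →  N ≥ 114

      That's a fixed ~14% overhead with zero retransmission round trips.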

rasengan 2 days ago

This is the first time I'm hearing of raptor codes, which seem a lot more efficient for this use case; until now I had been using KCP for it.

vlovich123 2 days ago

IIUC this would work well as a tunnel for TCP/UDP traffic over WiFi and cellular, so that you get the advantages without needing to migrate software. But then again, FEC is already employed by these (maybe not raptor codes, but that's a relatively simple update). Or does tunneling not work?

Is 10% loss common for backbone networks? Maybe if you're dealing with horribly underprovisioned routers that don't have enough RAM for how much traffic they're processing? Otherwise I'm not sure of the use case…

  • aidenn0 2 days ago

    I wish many of these protocols would expose non-FEC versions so that the FEC and retry logic could happen at a higher layer. In particular, G.hn powerline is terrible in the implementations I've seen: since it will retransmit infinitely, a sudden loss of bandwidth during a large bulk transfer can increase ping times to over a minute.

    • vlovich123 2 days ago

      I've thought about this before, since there's lossy data you can just discard instead of delivering (i.e. use FEC to recover video and otherwise throw away bad frames).

      The challenge of course is that this doesn’t work so well:

      1. Packets come in discrete chunks but FEC works best on a stream of data at the bit level (at least in a transmission context)

      2. Lossy links will have their own understanding of the PHY that can’t be explained to applications / needs to happen much more quickly.

      3. There's information that can't be corrupted in the packet, full stop (i.e. the IP and maybe TCP framing).

      2 is really the killer: applications can't respond to noise issues the way the MAC can respond to issues with the WiFi link, for example. And higher layers can't know about all the intermediary hops that might exist, or how to characterize them to tune the FEC parameters.

      3 is also a killer because it means you still need to apply FEC at the PHY to guarantee the packet framing, and TCP requires checksums to pass on the packet. It's really, really difficult to use FEC at the application layer in an end-to-end way. It's really something that works best at the PHY level for transmissions, at least from the brief time I've spent thinking on it. It's possible I messed something up in my analysis, of course.

      • aidenn0 a day ago

        I realized afterwards that what I want isn't all FEC left to higher levels, but the retransmission logic. It is absolutely necessary to detect packet corruption at the link layer, for the MAC address if nothing else. But it is not necessary to infinitely retransmit all data. There are very few protocols that gracefully handle an RTT of over 60 s, so buffering data for more than a few seconds is bonkers.

        I do also think that exposing some knobs for the quantity of FEC to apply to a higher level might make sense; some packets are 100% fine to drop, others are very much not. Standards trying to do this exist, but don't seem to be widely supported.

RobinHirst11 a day ago

This could be great for high-paying sectors that could make good use of it:

Deep Space Communications, Satellite Network Reliability, First Responder Communication in Disaster Zones, Search and Rescue Operations Communication, Disaster Relief Network Communication, Underwater Sensors

adamch 2 days ago

This is interesting. But how would you use it? You'd need to open up a new type of socket (neither TCP nor UDP but nyxpsi) and everything along your network route would need to support it. So it wouldn't be useful with existing mobile networks (middle boxes won't speak the protocol) nor within the data center (because it's used for high packet loss situations). So what's the use case? Custom wireless networks with embedded devices?

  • wmf 2 days ago

    It's using UDP.

  • fragmede 2 days ago

    > middle boxes won't speak the protocol

    Middle boxes only need to speak IP, which they already do, and not block nyxpsi packets (which they probably do block).

    • zamadatix 2 days ago

      You'll still run into trouble with anything that wants to allow everything but tries to do NAT. I'd hazard a guess this actually still uses UDP under the covers for that reason (but I haven't bothered to verify). QUIC and HTTP/3 went that route for the same reason.

      Edit: It does.

      • wyager 2 days ago

        There's not really a sufficiently compelling reason to use anything except UDP for future protocols aiming to replace TCP

nodeshiftcloud 2 days ago

Interesting project! It's great to see efforts to tackle high packet loss scenarios using fountain codes like RaptorQ. However, the benchmarks could use more clarity. Comparing Nyxpsi to TCP and UDP without considering factors like congestion control, latency, and retransmission strategies might not give a complete picture. It would be helpful to see how Nyxpsi performs under different network conditions, especially with varying latencies and in real-world environments. Also, providing more detailed documentation about the protocol's operation and its handling of issues like streaming and congestion control would be beneficial. Looking forward to seeing how this evolves!

  • PhilipRoman 2 days ago

    AI bros will talk about AI content being indistinguishable from real humans and then post things like this...