> But it does not nearly approach the level of systematic prevention of memory unsafety that rust achieves.
Unless I gravely misunderstood Zig when I learned it, the Zig approach to memory safety is to just write a ton of tests fully exercising your functions and let the test allocators find and log all your bugs for you. Not my favorite approach, but your article doesn't seem to take into account this entirely different mechanism.
Yes, testing is Zig's answer. But that quote is right. Testing doesn't achieve the same kind of systematic prevention of memory bugs that rust does (or that GC-based languages like Go, Java, JS, etc. do).
You can write tests to find bugs in any language. C + Valgrind will do most of the same thing for C that the debug allocator will do for zig. But that doesn't stop the avalanche of memory safety bugs in production C code.
I used to write a lot of javascript. At the time I swore by testing. "You need testing anyway - why not use it to find other kinds of bugs too?". Eventually I started writing more and more typescript, and now I can't go back. One day I ported a library I wrote for JSON based operational transform from javascript to typescript. That library has an insane 3:1 test:code ratio or something, and deterministic constraint testing. Despite all of that testing, the typescript type checker still found a previously unknown bug in my codebase.
As the saying goes, tests can only show the presence of bugs, never their absence. To prove your code does not have bugs, you need other approaches - like rust's borrow checker or a runtime GC.
static code analysis tools can also do it. there's no reason why the borrow checker must be in the compiler proper.
There's also no reason to have a separate borrow checker if it could just be integrated in the compiler.
When a compiler has a borrow checker that means the language was already designed to enable borrow checking in the first place. And if a language can let you do borrow checking why would you use a separate tool?
because it gets it out of the fast path compile cycle. do you need a borrow checker for `ls`? Probably not. don't use it. do you need it every time you work through intermediate ideas in a refactor? probably not. just turn it on in CI.
The borrow checker is not the slow part of the Rust compiler and lets me avoid bugs, why would I not always want to use it?
And if you put the borrow checker in the CI you massively increase the latency between writing the code and getting all relevant feedback from the compiler/tooling. This would do the opposite of what you intended.
The borrow checker is very fast.
Also it’s a great way to make sure every library in the ecosystem passes the borrow checker.
why do you expect compile time static analysis to fail at this? unless you're loading a precompiled asset?
I don't. I think compile time static analysis is great. Upthread you said this:
> there's no reason why the borrow checker must be in the compiler proper.
On a technical front, I completely agree. But there's an ecosystem benefit to having the borrow checker as part of the compiler. If the borrow checker wasn't in the compiler proper, lots of people would "accidentally forget" to run it. As a result, lots of libraries would end up on crates.io which fail the borrow checker's checks. And that would be a knock-on disaster for the ecosystem.
But yes, there's nothing stopping you writing a rust compiler without a borrow checker. It wouldn't change the resulting binaries at all.
> lots of people would "accidentally forget" to run it
yeah, like how the sel4 guys accidentally forget to run their static analysis all the time.
You put a badge on CI. If you "forget to run" the static analysis, then people get on you for not running it. Or people get on you if you don't have the badge. Just like how people get on people for not writing programs in rust.
Because you're always going to write some code that the tools can't reason about.
that's true for rust too (hence "unsafe")
"But seatbelts would also work if everybody was just choosing to use them rather than us mandating their fitment and use, so I don't understand why facts are true"
Amusingly, this is even true for the linter: nobody ran the C linter, more or less everybody runs the Rust linter, and the resulting improvement in code quality is everything you'd hope. All humans love to believe they're above average; most are not, and average is by definition a mediocre aspiration. Do better.
what the hell are you talking about. if you are writing security conscious software you should turn on a static checker and proudly show a badge that says "this code is memory safe". if you're writing a custom data pipeline to be used in a niche scientific field where the consumers are you and anyone that wants to repro your pipeline, and everything is in arenas, who the fuck cares. don't bother with static analysis.
If everything is in arenas, lifetimes get much easier.
But, the borrow checker doesn't just check lifetimes. It also checks ownership, and that each value has either a single mutable reference or any number of immutable references at any given time. The optimizer assumes those invariants are maintained in the code. Many of its optimizations wouldn't be sound otherwise.
So, if you could compile code which fails the borrow checker, there's all sorts of weird and wonderful sources of UB eagerly waiting to give you a really bad day - from aliasing issues to thread safety problems to use-after-free bugs. The borrow checker has been around forever in rust. So I don't think anyone has any idea what the implications would be of compiling "bad" code.
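Concretely, the aliasing rule described above looks like this in a toy snippet (not from any real codebase):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Any number of shared (&) references may coexist...
    let a = &v;
    let b = &v;
    assert_eq!(a.len() + b.len(), 6);

    // ...but a mutable reference must be exclusive. Uncommenting this
    // while `a` or `b` is still live is a compile error (E0502):
    // let m = &mut v;

    // Once the shared borrows are done, mutation is allowed again, and
    // the optimizer is free to assume no aliasing existed in between.
    v.push(4);
    assert_eq!(v, [1, 2, 3, 4]);
}
```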
Point being, there are many many individual programs where none of those things you talk about exist. So why not have a programming system where you can actually turn those things off for development velocity.
I'm rejecting the idea that "opt-in" is bad. Opt-out is of course better, but "no choice" is not good.
I suppose you could even ship the test/logging allocator with your production build, and instruct your users to run your program with some option / env var set to activate it. This would let them repro a problem right where it happens, hopefully with some info helpful for debugging attached.
Not a great approach for critical software, but may be much better than what C++ normally offers for e.g. game software, where the development speed definitely trumps correctness.
What that means, though, is that you have a choice between defining memory unsafety away completely with Rust or Swift, or trying to catch memory problems by writing a bunch of additional code in Zig.
I’d argue that ‘a bunch of additional code’ to solve for memory safety is exactly what you’re doing in the ‘defining memory safety away’ example with Rust or Swift.
It’s just code you didn’t write and thus likely don’t understand as well.
This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
That sounds a bit unfair. All that code that we neither wrote nor understood, I think in the case of Rust, it’s either the borrow checker or the compiler itself doing something it does best - i.e., “defining memory safety away”. If that’s the case, then labeling such tooling and language-enforced memory safety mechanisms as “a bunch of additional code…you didn’t write and…don’t understand” appears somewhat inaccurate, no?
It is quite fair as far as Rust is concerned. Simple data structures, like the doubly linked list, are hard problems for Rust.
So? That wasn't the claim. The GP poster said this:
> This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
Writing a linked list in rust isn't difficult because of control flow issues, or because rust makes code harder to debug. (If you've spent any time in rust, you quickly learn that the opposite is true.) Linked lists are simply a bad matchup for the constraints rust's borrow checker puts on your code.
In the same way, writing an OS kernel or a high performance b-tree is a hard problem for javascript. So what? Every language has things it's bad at. Design your program differently or use a different language.
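For the record, the usual escape hatch is to opt into runtime-checked shared ownership - `Rc<RefCell<_>>` with a `Weak` back-pointer - rather than fight the borrow checker. A two-node sketch, not a full list implementation:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Strong pointer forward, weak pointer back, so the
// reference counts don't form a cycle and leak.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn main() {
    let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

    first.borrow_mut().next = Some(Rc::clone(&second));
    second.borrow_mut().prev = Some(Rc::downgrade(&first));

    // Walk forward, then back via the weak pointer.
    assert_eq!(first.borrow().next.as_ref().unwrap().borrow().value, 2);
    let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap();
    assert_eq!(back.borrow().value, 1);
}
```

The borrow/ownership checks haven't disappeared; they've moved to runtime (`RefCell` panics on conflicting borrows), which is exactly the trade the parent comment is gesturing at.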
> This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
The borrow checker only runs at compile-time. It doesn't change the semantic meaning - or the resulting performance - of your code.
The borrow checker makes rust a much more difficult and frustrating language to learn. The compiler will refuse to compile your code entirely if you violate its rules. But there's nothing magical going on in the compiler that changes your program. A rust binary is almost identical to the equivalent C binary.
Weird that Swift is your totem for "managed/collected runtime" and not Java (or C#/.NET, or Go, or even Javascript). I mean, it fits the bill, but it's hardly the best didactic choice.
The point was that basically no one knows Swift, and everyone knows Java. If you want to point out a memory safe language in the "managed garbage-collected runtime" family, you probably shouldn't pick Swift.
I wouldn’t put Swift in the same ‘managed garbage-collected runtime’ family as Java, C#/.NET, Go, and Javascript, so maybe they weren’t trying to do what you think.
Swift is more like a native systems programming language that makes it easy to trade performance for ergonomics (and does so by default).
What if -- stay with me now -- what if we solved it by just writing vastly less code, and having actually reusable code, instead of reinventing every type of wheel in every project? Maybe that's the real secret to sound code. Actual code reuse. I know it's a pipedream, but a man can dream, can't he?
The way we've done code reuse up to this point rarely lives up to its promises.
I don't know what the solution is, but these days I'm a lot more likely to simply copy code over to a new project rather than try to build general purpose libraries.
I feel like that's part of the mess Rust/Swift are getting themselves tangled up in, everything depends on everything which turns evolution into more and more of an uphill struggle.
Why? In C I'd understand. But cargo and the swift package manager work great.
By all means, rewrite little libraries instead of pulling in big ones. But if you're literally copy+pasting code between projects, it doesn't take much work to pull that code out into a shared library.
Yeah that is the opposite take of recent posts that the Cargo/npm package dependence is way too heavy.
Saying we should rely on reusable modules is great and all, but that reusable code is going to be maintained by who now?
There's no sustainable pattern for this yet. Most things survive on the good graces of businesses or free-time development, and many become unmaintained over time - people who want to survive on developing and supporting reusable modules alone might be rarer than the unicorn devs.
False. Fil-C secures C and C++. It’s more comprehensively safe than Rust (Fil-C has no escape hatches). And it’s compatible enough with C/C++ that you can think of it as an alternate clang target.
Fil-C is impressive and neat, but it does add a runtime to enforce memory safety which has a (in most cases acceptable) cost. That's a reasonable strategy, Java and many other langs took this approach. In research, languages like Dala are applying this approach to safe concurrency.
Rust attempts to enforce its guarantees statically which has the advantage of no runtime overhead but the disadvantage of no runtime knowledge.
> but in practice fails, because of pervasive use of `unsafe`.
Yes, in `unsafe` code typically dynamic checks or careful manual review is needed. However, most code is not `unsafe` and `unsafe` code is wrapped in safe APIs.
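A minimal sketch of that safe-API-over-unsafe-implementation pattern (mirroring how the standard library's own `split_first_mut` works):

```rust
// The function checks its precondition, then uses `unsafe` internally.
// Safe callers can't misuse it without going through the checked entry point.
fn split_first_mut(slice: &mut [i32]) -> Option<(&mut i32, &mut [i32])> {
    if slice.is_empty() {
        return None;
    }
    let ptr = slice.as_mut_ptr();
    let len = slice.len();
    // SAFETY: `slice` is non-empty, so `ptr` is valid for one element and
    // `ptr.add(1)` starts a valid slice of `len - 1` elements. The two
    // returned borrows don't overlap.
    unsafe {
        Some((
            &mut *ptr,
            std::slice::from_raw_parts_mut(ptr.add(1), len - 1),
        ))
    }
}

fn main() {
    let mut data = [10, 20, 30];
    let (head, rest) = split_first_mut(&mut data).unwrap();
    *head += 1;
    rest[0] += 1;
    assert_eq!(data, [11, 21, 30]);
}
```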
I'm aware C already has a runtime, this adds to it.
> Yes, in `unsafe` code typically dynamic checks or careful manual review is needed. However, most code is not `unsafe` and `unsafe` code is wrapped in safe APIs.
Those are the excuses I heard from C++ programmers for years.
Memory safety is about guarantees enforced by the compiler. `unsafe` isn't that.
The stuff Fil-C adds is on the same footing as `unsafe` code in Rust- its implementation isn't checked, but its surface area is designed so that (if the implementation is correct) the rest of the program can't break it.
Whether the amount and quality of this kind of code is comparable between the two approaches depends on the specific programs you're writing. Static checking, which can also be applied in more fine-grained ways to parts of the runtime (or its moral equivalent) is an interesting approach, depending on your goals.
> The stuff Fil-C adds is on the same footing as `unsafe` code in Rust- its implementation isn't checked, but its surface area is designed so that (if the implementation is correct) the rest of the program can't break it.
It’s not the same.
The Fil-C runtime is the same runtime in every client of Fil-C. It’s a single common trusted computing base and there’s no reason for it to grow.
On the other hand Rust programmers use unsafe all over the place, not just in some core libraries.
Yeah, that's what I meant by "depends on the specific programs you're writing." Confining unsafe Rust to core libraries is totally something people do.
I mean, again, yeah. I specifically compared the safe API/unsafe implementation aspect, not who writes the unsafe implementation.
To me the interesting thing about Rust's approach is precisely this ability to compose unrelated pieces of trusted code. The type system and dynamic semantics are set up so that things don't just devolve into a yolo-C-style free-for-all when you combine two internally-unsafe APIs: if they are safe independently, they are automatically safe together as well.
The set of internally-unsafe APIs you choose to compose is a separate question on top of that. Maybe Rust, or its ecosystem, or its users, are too lax about this, but I'm not really trying to have that argument. Like I mentioned in my initial comment, I find this interesting even if you just apply it within a single trusted runtime.
One of these days, a project will catch on that's vastly simpler than any memory solution today, yet solves all the same problems, and more robustly too, just like how it took humanity thousands of years to realize how to use levers to build complex machines. The solution is probably sitting right under our noses. I'm not sure it's your project (maybe it is) but I bet this will happen.
That’s a really great attitude! And I think you’re right!
I think that in addition to possibly being the solution to safety for someone, Fil-C is helping to elucidate what memory-safe systems programming could look like, and that might lead to someone building something even better.
> Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (each pointer in memory has a corresponding capability, not visible to the C address space)
In almost all uses of C and C++, the language already has a runtime. In the Gnu universe, it's the combination of libgcc, the loader, the various crt entrypoints, and libc. In the Apple version, it's libcompiler_rt and libSystem.
Fil-C certainly adds more to the runtime, but it's not like there was no runtime before.
It makes it a lot less performant and there is no avoiding or mitigating that downside. C++ is often selected as a language instead of safer options for its unusual performance characteristics even among systems languages in practice.
Fil-C is not a replacement for C++ generally, that oversells it. It might be a replacement for some C++ software without stringent performance requirements or a rigorously performance-engineered architecture. There is a lot of this software, often legacy.
> It makes it a lot less performant and there is no avoiding or mitigating that downside.
You can’t possibly know that.
> C++ is often selected as a language instead of safer options for its unusual performance characteristics even among systems languages in practice.
Is that why sudo, bash, coreutils, and ssh are written in C?
Of course not.
C and C++ are often chosen because they make systems programming possible at all due to their direct access to syscall ABI.
> Fil-C is not a replacement for C++ generally, that oversells it.
I have made no such claim.
Fil-C means you cannot claim - as TFA claims - that it’s impossible to make C and C++ safe. You have to now hedge that claim with additional caveats about performance. And even then you’re on thin ice since the top perf problems in Fil-C are due to immaturity of its implementation (like the fact that linking is hella cheesy and the ABI is even cheesier).
> It might be a replacement for some C++ software without stringent performance requirements or a rigorously performance-engineered architecture. There is a lot of this software, often legacy.
It’s the opposite in my experience. For example, xzutils and simdutf have super low overhead in Fil-C. In the case of SIMD code it’s because using SIMD amortizes Fil-C’s overheads.
> C and C++ are often chosen because they make systems programming possible at all due to their direct access to syscall ABI.
Surely Fil-C cannot provide direct access to syscalls without violating the safety guarantee. There must be something ensuring that what the kernel interprets as a pointer is actually a valid pointer.
> Fil-C means you cannot claim - as TFA claims - that it’s impossible to make C and C++ safe. You have to now hedge that claim with additional caveats about performance. And even then you’re on thin ice since the top perf problems in Fil-C are due to immaturity of its implementation (like the fact that linking is hella cheesy and the ABI is even cheesier).
The world of compilers is littered with corpses of projects that spent years claiming faster performance was right around the corner.
I believe you can make it faster, but how much faster? We'll see.
I think these types of compatibility layers will be a great option moving forward for legacy software. But I have a hard time seeing the case for using Fil-C for new code: all the known disadvantages of C and C++, now combined with performance closer to Java than Rust (if not worse), and high difficulty interoperating with other native code (normally C and C++'s strength!), in exchange for marginal safety improvements over Rust (minus Rust's more general safety culture).
edit: I feel bad writing such a dismissive comment, but it's hard to avoid reacting that way when I see unrealistically rosy portrayals of projects.
> Surely Fil-C cannot provide direct access to syscalls without violating the safety guarantee. There must be something ensuring that what the kernel interprets as a pointer is actually a valid pointer.
This is exactly what Fil-C does.
> all the known disadvantages of C and C++
The main disadvantage of C and C++ is unsafety and fil-C comprehensively fixes that.
> edit: I feel bad writing such a dismissive comment, but it's hard to avoid reacting that way when I see unrealistically rosy portrayals of projects.
How is my portrayal unrealistically rosy?
Even the fact that you know what the current perf costs are is the result of me being brutally honest about its perf.
Okay, I just checked. It does not. I wrote: "There must be something ensuring that what the kernel interprets as a pointer is actually a valid pointer." And sure enough, your runtime manually wraps each Linux syscall to do exactly that:
ioctl is even harder because some ioctls take pointers to structs that themselves contain pointers; on Linux that includes v4l2 and mmc. It looks like you don't handle that properly, judging by:
My point is: having to go through wrapper functions is not what I'd call "direct" access to "ABI". (Also, the wrappers don’t even wrap the syscall ABI directly; they wrap the libc ABI that in turn wraps syscalls.)
You might object that the wrappers are thin enough that they still count as direct. While that's a matter of definitions, I think my previous comment made it clear what _I_ meant when I questioned "direct", given my followup sentence about "actually a valid pointer".
But beyond quibbles about who meant what, this lack of directness matters because it implicates the portability of your approach.
At least as currently implemented, you rely on compiling almost everything (even libc) inside the sandbox, while having a ‘narrow waist’ of syscall wrappers mediating access between the sandbox and the outside world. That should work for most use cases on Linux, where there's already an assumption that different processes can have completely different library stacks, and static linking is common. Even if you want to do a GUI application you should be able to recompile the entire GTK or Qt stack inside the sandbox, and it doesn’t matter if other apps are using different versions of GTK or Qt.
But what about other operating systems? For server and CLI software you can probably still get away with exposing a small syscall/libc API, similar to Cosmopolitan (though that will still require significant effort for each OS). But for GUIs and platform integration more broadly, you’re expected to use the platform-provided libraries that live in your address space. They are usually proprietary, and even when they’re not, the system isn’t designed for multiple versions of the libraries to coexist.
I know I’m not telling you anything you don’t already know. But it’s an important point, because aside from performance, the _other_ big reason that Rust relies on user-written unsafe code is for FFI. If anyone can write their own FFI bindings, as opposed to making all FFI bindings live in a centralized runtime, then it becomes more feasible to scale the mammoth task of writing safe wrappers for all those ABIs. Your approach explicitly rejects user-written unsafe code, so I don’t know how you can possibly end up with reasonable OS library coverage.
Now sure, you didn’t claim anything about GUIs or portability. Perhaps this is more on topic for my previous comment’s point about “high difficulty interoperating with other native code” which you didn’t rebut. But it’s also relevant to “direct access to syscall ABI”, because if there _were_ some way to provide direct access to syscall ABI while remaining memory-safe, then the same approach would probably extend to other system ABIs. For example, a fully CHERI-aware system actually would allow for that. It’s an unfair comparison, because CHERI assumes cooperation from both the hardware and the OS, while you’re trying to run on existing hardware and OSes. I have no idea if we’ll ever see CHERI in general purpose systems. But in exchange, CHERI achieves something that’s otherwise impossible: combining direct system access and memory safety. And I originally read your comment as claiming to do the impossible.
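As a tiny illustration of that decentralized FFI pattern - an `unsafe` binding anyone can write, hidden behind a type that enforces the precondition (here libc's `strlen`, purely as an example):

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

extern "C" {
    // Provided by the platform C library.
    fn strlen(s: *const c_char) -> usize;
}

// The safe wrapper: `&CStr` guarantees a valid, NUL-terminated pointer,
// so safe callers can't hand the unsafe call a bad one.
fn c_string_len(s: &CStr) -> usize {
    unsafe { strlen(s.as_ptr()) }
}

fn main() {
    let s = CStr::from_bytes_with_nul(b"hello\0").unwrap();
    assert_eq!(c_string_len(s), 5);
}
```

The soundness burden is on the binding author, but it's scoped to this one function rather than a centralized runtime - which is the scaling argument above.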
Fil-C’s perf sucks on some workloads. And it doesn’t suck on others.
Extreme examples to give you an idea:
- xzutils had about 1.2x overhead. So slower but totally usable.
- no noticeable overhead in shells, system utilities, ssh, curl, etc. But that’s because they’re IO bound.
- 4x or sometimes maybe even higher overheads for things like JS engines, CPython, Lua, Tcl, etc. Also OpenSSL perf tests are around 4x I think.
But you’re on thin ice if you say that this is a reason why Fil-C will fail. So much of Fil-C’s overhead is due to known issues that I will fix eventually, like the function call ABI (which is hella cheesy right now because I just wanted to have something that works and haven’t actually made it good yet).
There is a third category of memory and other software safety mechanisms: model checking. While it does involve compiling software to a different target -- typically an SMT solver -- it is not a compile-time mechanism like in Rust.
Kani is a model checker for Rust, and CBMC is a model checker for C. I'm not aware of one (yet!) for Zig, but it would not be difficult to build a port. Both Kani and CBMC compile down to goto-c, which is then converted to formulas in an SMT solver.
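For a feel of what a harness checks, here's the bounded-checking idea in plain Rust. This is NOT Kani's actual API (a real harness uses `#[kani::proof]` and `kani::any()`, and hands the search to an SMT solver instead of brute force); it's a toy that exhaustively enumerates a small input domain:

```rust
// Property under check: midpoint stays within [min(a,b), max(a,b)].
fn midpoint(a: u8, b: u8) -> u8 {
    // Overflow-free midpoint; the naive `(a + b) / 2` can overflow u8.
    a / 2 + b / 2 + (a % 2 & b % 2)
}

fn main() {
    // Enumerate every input pair - what a model checker would cover
    // symbolically rather than one concrete value at a time.
    for a in 0..=255u8 {
        for b in 0..=255u8 {
            let m = midpoint(a, b);
            let (lo, hi) = if a <= b { (a, b) } else { (b, a) };
            assert!(lo <= m && m <= hi, "midpoint out of range for {a},{b}");
        }
    }
    println!("property holds for all 65536 input pairs");
}
```

Exhaustive enumeration only works for tiny domains; the point of Kani/CBMC is that the solver covers `u64`-sized domains without enumerating them.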
There isn't a real one yet, but to scratch an itch I tried to build one for Zig. It's not complete nor do I have plans to complete it. https://github.com/ityonemo/clr
If Zig locks down AIR (its function-level intermediate representation), it would be ideal for running model checking of various sorts. Just by looking at AIR I found it possible to:
- identify stack pointer leakage
- perform basic borrow checking
- detect memory leaks
- assign units to variables and track when units are incompatible
If you're filling uninitialized pointers with AAAAAAAA, it might be best to also reserve that memory page and mark it as no-access.
I'm not even joking. Any magic-number pattern used to fill pointers (such as HeapFree filling memory with FEEEEEEE on Windows) should have a corresponding no-access page, just to ensure that the program will instantly fail rather than find a valid memory allocation mapped there. For 32-bit programs, everything past 0x80000000 used to be reserved as kernel memory and produce an access violation when accessed, so the magic numbers were all above 0x80000000. But with large-address-aware programs you don't get that anymore; only manually reserving the 4K memory pages containing the magic numbers will give you the same effect.
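On Linux, reserving the page that holds one of those fill patterns is a few lines of `mmap`. A sketch - it assumes x86-64 Linux with 4 KiB pages, `MAP_FIXED_NOREPLACE` (kernel 4.17+), and that nothing else already occupies the address:

```rust
use std::os::raw::{c_int, c_void};

extern "C" {
    fn mmap(addr: *mut c_void, len: usize, prot: c_int, flags: c_int,
            fd: c_int, offset: i64) -> *mut c_void;
}

const PROT_NONE: c_int = 0;
const MAP_PRIVATE: c_int = 0x02;
const MAP_ANONYMOUS: c_int = 0x20;           // Linux value
const MAP_FIXED_NOREPLACE: c_int = 0x100000; // fail instead of clobbering an existing mapping

fn main() {
    // The page containing the 0xFEEEEEEE fill pattern (4 KiB pages assumed).
    let poison_page = 0xFEEE_E000usize as *mut c_void;
    let p = unsafe {
        mmap(poison_page, 4096, PROT_NONE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0)
    };
    // MAP_FAILED is (void*)-1; on success we got exactly the page we asked for.
    assert_ne!(p as isize, -1, "mmap failed");
    assert_eq!(p, poison_page);
    // From here on, dereferencing a 0xFEEEEEEE-filled pointer faults immediately
    // instead of silently reading whatever allocation happened to land there.
    println!("reserved no-access page at {:p}", p);
}
```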
Maybe not Zig the language, but the fact that all allocating functions in the standard library accept an allocator (and community libraries follow this precedent) does give you much more control in practice.
For example, how would you use a Vec using stack memory for elements, instead of the heap? For the equivalent data structure in Zig (std.ArrayList), it's just a matter of using a stack allocator instead of using a heap allocator, which is an explicit decision either way.
> But it does not nearly approach the level of systematic prevention of memory unsafety that rust achieves.
Unless I gravely misunderstood Zig when I learned it, the Zig approach to memory safety is to just write a ton of tests fully exercising your functions and let the test allocators find and log all your bugs for you. Not my favorite approach, but your article doesn't seem to take into account this entirely different mechanism.
Yes, testing is Zig's answer. But that quote is right. Testing doesn't achieve the same kind of systematic prevention of memory bugs that rust does. (Or GC based languages like Go, Java, JS, etc.).
You can write tests to find bugs in any language. C + Valgrind will do most of the same thing for C that the debug allocator will do for zig. But that doesn't stop the avalanche of memory safety bugs in production C code.
I used to write a lot of javascript. At the time I swore by testing. "You need testing anyway - why not use it to find other kinds of bugs too?". Eventually I started writing more and more typescript, and now I can't go back. One day I ported a library I wrote for JSON based operational transform from javascript to typescript. That library has an insane 3:1 test:code ratio or something, and deterministic constraint testing. Despite all of that testing, the typescript type checker still found a previously unknown bug in my codebase.
As the saying goes, tests can only prove the presence of bugs. They cannot prove your code does not have bugs. For that, you need other approaches - like rust's borrow checker or a runtime GC.
static code analysis tools can also do it. there's no reason why the borrow checker must be in the compiler proper.
There's also no reason to have a separate borrow checker if it could just be integrated in the compiler.
When a compiler has a borrow checker that means the language was already designed to enable borrow checking in the first place. And if a language can let you do borrow checking why would you use a separate tool?
because it gets it out of the fast path compile cycle. do you need a borrow checker for `ls`? Probably not. don't use it. do you need it every time you work through intermediate ideas in a refactor? probably not. just turn it on in CI.
The borrow checker is not the slow part of the Rust compiler and lets me avoid bugs, why would I not always want to use it?
And if you put the borrow checker in the CI you massively increased the latency between writing the code and getting all relevant feedback from the compiler/tooling. This would do the opposite of what you intended.
The borrow checker is very fast.
Also it’s a great way to make sure every library in the ecosystem passes the borrow checker.
why do you expect compile time static analysis to fail at this? unless youre loading a precompiled asset?
I don't. I think compile time static analysis is great. Upthread you said this:
> there's no reason why the borrow checker must be in the compiler proper.
On a technical front, I completely agree. But there's an ecosystem benefit to having the borrow checker as part of the compiler. If the borrow checker wasn't in the compiler proper, lots of people would "accidentally forget" to run it. As a result, lots of libraries would end up on crates.io which fail the borrow checker's checks. And that would be a knock on disaster for the ecosystem.
But yes, there's nothing stopping you writing a rust compiler without a borrow checker. It wouldn't change the resulting binaries at all.
> lots of people would "accidentally forget" to run it
yeah, like how the sel4 guys accidentally forget to run their static analysis all the time.
You put a badge on CI. If you "forget to run" the static analysis, then people get on you for not running it. Or people get on you if you don't have the badge. Just like how people get on people for not writing programs in rust.
Because you're always going to write some code that the tools can't reason about.
thats true for rust too (hence "unsafe")
"But seatbelts would also work if everybody was just choosing to use them rather than us mandating their fitment and use, so I don't understand why facts are true"
Amusingly this is even true for the linter, nobody ran the C linter, more or less everybody runs the Rust linter, the resulting improvement in code quality is everything you'd hope. All humans love to believe they're above average, most are not and average is by definition a mediocre aspiration. Do better.
what the hell are you talking about. if you are writing security conscious software you should turn on a static checker and proudly show a badge that says "this code is memory safe". if youre writing a custom data pipeline to be used in a niche scientific field where the consumers are you and anyone that wants to repro your pipeline, and everything is in arenas, who the fuck cares. don't bother with static analysis.
If everything is in arenas, lifetimes get much easier.
But, the borrow checker doesn't just check lifetimes. It also checks ownership, and that variables either have a single mutable reference or immutable references. The optimizer assumes those invariants are maintained in the code. Many of its optimizations wouldn't be sound otherwise.
So, if you could compile code which fails the borrow checker, there's all sorts of weird and wonderful sources of UB eagerly waiting to give you a really bad day - from aliasing issues to thread safety problems to use-after-free bugs. The borrow checker has been around forever in rust. So I don't think anyone has any idea what the implications would be of compiling "bad" code.
Point being, there are many many individual programs where none of those things you talk about exist. So why not have a programming system where you can actually turn those things off for development velocity.
I'm rejecting the idea that "opt-in" is bad. Opt-out is of course better, but "no choice" is not good.
I suppose you can even ship the test/logging allocator with your production build, and instruct your users to run your program with some option / env var set to activate it. This would allow to repro a problem right where it happens, hopefully with some info helpful for debugging attached.
Not a great approach for critical software, but may be much better than what C++ normally offers for e.g. game software, where the development speed definitely trumps correctness.
What that means, though, is that you have a choice between defining memory unsafely away completely with Rust or Swift, or trying to catch memory problems by a writing a bunch of additional code in Zig.
I’d argue that ‘a bunch of additional code’ to solve for memory safety is exactly what you’re doing in the ‘defining memory safety away’ example with Rust or Swift.
It’s just code you didn’t write and thus likely don’t understand as well.
This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
That sounds a bit unfair. All that code we neither wrote nor understood is, in the case of Rust, either the borrow checker or the compiler itself doing what it does best - i.e., “defining memory safety away”. If that’s the case, then labeling such tooling and language-enforced memory safety mechanisms as “a bunch of additional code…you didn’t write and…don’t understand” seems somewhat inaccurate, no?
It is quite fair as far as rust is concerned. Simple data structures, like the doubly linked list, are hard problems for rust.
So? That wasn't the claim. The GP poster said this:
> This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
Writing a linked list in rust isn't difficult because of control flow issues, or because rust makes code harder to debug. (If you've spent any time in rust, you quickly learn that the opposite is true.) Linked lists are simply a bad match up for the constraints rust's borrow checker puts on your code.
In the same way, writing an OS kernel or a high performance b-tree is a hard problem for javascript. So what? Every language has things it's bad at. Design your program differently or use a different language.
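For the curious, the standard safe-Rust workaround for the linked-list mismatch is to route ownership one way and use weak back-references. This is a sketch of one common design, not the only one (`std::collections::LinkedList` instead uses `unsafe` internally):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Doubly linked node: `next` owns the successor, `prev` is a weak
// back-edge. This sidesteps both the "two owners per node" conflict
// and the reference-cycle leak that two strong Rc links would cause.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn main() {
    let a = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let b = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));
    a.borrow_mut().next = Some(Rc::clone(&b));
    b.borrow_mut().prev = Some(Rc::downgrade(&a));
    assert_eq!(a.borrow().next.as_ref().unwrap().borrow().value, 2);
}
```

The friction is real - the borrow checking moves from compile time into `RefCell`'s runtime checks - which is exactly why people call linked lists a bad match for the language.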
> This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
The borrow checker only runs at compile-time. It doesn't change the semantic meaning - or the resulting performance - of your code.
The borrow checker makes rust a much more difficult and frustrating language to learn. The compiler will refuse to compile your code entirely if you violate its rules. But there's nothing magical going on in the compiler that changes your program. A rust binary is almost identical to the equivalent C binary.
Weird that Swift is your totem for "managed/collected runtime" and not Java (or C#/.NET, or Go, or even Javascript). I mean, it fits the bill, but it's hardly the best didactic choice.
I don't think they said anything about that?
The point was that basically no one knows Swift, and everyone knows Java. If you want to point out a memory safe language in the "managed garbage-collected runtime" family, you probably shouldn't pick Swift.
I wouldn’t put Swift in the same ‘managed garbage-collected runtime’ family as Java, C#/.NET, Go, and Javascript, so maybe they weren’t trying to do what you think.
Swift is more like a native systems programming language that makes it easy to trade performance for ergonomics (and does so by default).
What if -- stay with me now -- what if we solved it by just writing vastly less code, and having actually reusable code, instead of reinventing every type of wheel in every project? Maybe that's the real secret to sound code. Actual code reuse. I know it's a pipedream, but a man can dream, can't he?
The way we've done code reuse up to this point rarely lives up to its promises.
I don't know what the solution is, but these days I'm a lot more likely to simply copy code over to a new project rather than try to build general purpose libraries.
I feel like that's part of the mess Rust/Swift are getting themselves tangled up in, everything depends on everything which turns evolution into more and more of an uphill struggle.
Why? In C I'd understand. But cargo and the swift package manager work great.
By all means, rewrite little libraries instead of pulling in big ones. But if you're literally copy+pasting code between projects, it doesn't take much work to pull that code out into a shared library.
No, this doesn't solve the problem. Libraries have security issues like every other codebase.
Yeah that is the opposite take of recent posts that the Cargo/npm package dependence is way too heavy.
Saying we should rely on reusable modules is great and all, but that reusable code is going to be maintained by who now?
There's no sustainable pattern for this yet, most things are good graces of businesses or free time development, many become unmaintained over time- people who actually want to survive on developing and supporting reusable modules alone might actually be more rare than the unicorn devs.
I meant in programming in general, not specific to Rust or Cargo.
> it seems impossible to secure c or c++
False. Fil-C secures C and C++. It’s more comprehensively safe than Rust (Fil-C has no escape hatches). And it’s compatible enough with C/C++ that you can think of it as an alternate clang target.
Fil-C is impressive and neat, but it does add a runtime to enforce memory safety which has a (in most cases acceptable) cost. That's a reasonable strategy, Java and many other langs took this approach. In research, languages like Dala are applying this approach to safe concurrency.
Rust attempts to enforce its guarantees statically which has the advantage of no runtime overhead but the disadvantage of no runtime knowledge.
Rust attempts to enforce guarantees statically, but in practice fails, because of pervasive use of `unsafe`.
Fil-C doesn't "add a runtime". C already has a runtime (loader, crt, compiler runtime, libc, etc)
> but in practice fails, because of pervasive use of `unsafe`.
Yes, in `unsafe` code typically dynamic checks or careful manual review is needed. However, most code is not `unsafe` and `unsafe` code is wrapped in safe APIs.
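As a concrete illustration of wrapping `unsafe` in a safe API (a hypothetical toy example): the wrapper's checked precondition makes the unchecked operation sound, so safe callers cannot trigger UB no matter how they call it.

```rust
/// Safe API over an unsafe implementation: the bounds check upholds
/// the invariant `get_unchecked` relies on.
fn first_or<T: Copy>(slice: &[T], default: T) -> T {
    if slice.is_empty() {
        default
    } else {
        // SAFETY: we just checked the slice is non-empty,
        // so index 0 is in bounds.
        unsafe { *slice.get_unchecked(0) }
    }
}

fn main() {
    assert_eq!(first_or(&[10, 20], 0), 10);
    assert_eq!(first_or::<i32>(&[], 7), 7);
}
```

The unsafe block still needs manual review, but its blast radius is confined to these few lines rather than the whole program.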
I'm aware C already has a runtime, this adds to it.
> Yes, in `unsafe` code typically dynamic checks or careful manual review is needed. However, most code is not `unsafe` and `unsafe` code is wrapped in safe APIs.
Those are the excuses I heard from C++ programmers for years.
Memory safety is about guarantees enforced by the compiler. `unsafe` isn't that.
The stuff Fil-C adds is on the same footing as `unsafe` code in Rust- its implementation isn't checked, but its surface area is designed so that (if the implementation is correct) the rest of the program can't break it.
Whether the amount and quality of this kind of code is comparable between the two approaches depends on the specific programs you're writing. Static checking, which can also be applied in more fine-grained ways to parts of the runtime (or its moral equivalent) is an interesting approach, depending on your goals.
> The stuff Fil-C adds is on the same footing as `unsafe` code in Rust- its implementation isn't checked, but its surface area is designed so that (if the implementation is correct) the rest of the program can't break it.
It’s not the same.
The Fil-C runtime is the same runtime in every client of Fil-C. It’s a single common trusted compute base and there’s no reason for it to grow.
On the other hand Rust programmers use unsafe all over the place, not just in some core libraries.
Yeah, that's what I meant by "depends on the specific programs you're writing." Confining unsafe Rust to core libraries is totally something people do.
You're equating a core runtime that doesn't grow with libraries written by anyone.
There's no world in which a Fil-C user would write unsafe code. That's not a thing you can do in Fil-C.
Rust users write unsafe code a lot and the language allows it and encourages it even.
> Rust users write unsafe code a lot
This isn't the case.
I mean, again, yeah. I specifically compared the safe API/unsafe implementation aspect, not who writes the unsafe implementation.
To me the interesting thing about Rust's approach is precisely this ability to compose unrelated pieces of trusted code. The type system and dynamic semantics are set up so that things don't just devolve into a yolo-C-style free-for-all when you combine two internally-unsafe APIs: if they are safe independently, they are automatically safe together as well.
The set of internally-unsafe APIs you choose to compose is a separate question on top of that. Maybe Rust, or its ecosystem, or its users, are too lax about this, but I'm not really trying to have that argument. Like I mentioned in my initial comment, I find this interesting even if you just apply it within a single trusted runtime.
I love this shameless self-promotion. ;)
Fil-C is in the cards for my next project.
Thank you for considering it :-)
Hit me up if you have questions or issues. I’m easy to find
One of these days, a project will catch on that's vastly simpler than any memory solution today, yet solves all the same problems, and more robustly too, just like how it took humanity thousands of years to realize how to use levers to build complex machines. The solution is probably sitting right under our noses. I'm not sure it's your project (maybe it is) but I bet this will happen.
That’s a really great attitude! And I think you’re right!
I think in addition to possibly being the solution to safety for someone, Fil-C is helping to elucidate what memory safe systems programming could look like and that might lead to someone building something even better
> It's more comprehensively safe than Rust
Yeah. By adding a runtime.
> Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (each pointer in memory has a corresponding capability, not visible to the C address space)
https://github.com/pizlonator/llvm-project-deluge/tree/delug...
> Yeah. By adding a runtime.
So? That doesn't make it any less safe or useful.
In almost all uses of C and C++, the language already has a runtime. In the Gnu universe, it's the combination of libgcc, the loader, the various crt entrypoints, and libc. In the Apple version, it's libcompiler_rt and libSystem.
Fil-C certainly adds more to the runtime, but it's not like there was no runtime before.
It makes it a lot less performant and there is no avoiding or mitigating that downside. C++ is often selected as a language instead of safer options for its unusual performance characteristics even among systems languages in practice.
Fil-C is not a replacement for C++ generally, that oversells it. It might be a replacement for some C++ software without stringent performance requirements or a rigorously performance-engineered architecture. There is a lot of this software, often legacy.
> It makes it a lot less performant and there is no avoiding or mitigating that downside.
You can’t possibly know that.
> C++ is often selected as a language instead of safer options for its unusual performance characteristics even among systems languages in practice.
Is that why sudo, bash, coreutils, and ssh are written in C?
Of course not.
C and C++ are often chosen because they make systems programming possible at all due to their direct access to syscall ABI.
> Fil-C is not a replacement for C++ generally, that oversells it.
I have made no such claim.
Fil-C means you cannot claim - as TFA claims - that it’s impossible to make C and C++ safe. You have to now hedge that claim with additional caveats about performance. And even then you’re on thin ice since the top perf problems in Fil-C are due to immaturity of its implementation (like the fact that linking is hella cheesy and the ABI is even cheesier).
> It might be a replacement for some C++ software without stringent performance requirements or a rigorously performance-engineered architecture. There is a lot of this software, often legacy.
It’s the opposite in my experience. For example, xzutils and simdutf have super low overhead in Fil-C. In the case of SIMD code it’s because using SIMD amortizes Fil-C’s overheads.
> C and C++ are often chosen because they make systems programming possible at all due to their direct access to syscall ABI.
Surely Fil-C cannot provide direct access to syscalls without violating the safety guarantee. There must be something ensuring that what the kernel interprets as a pointer is actually a valid pointer.
> Fil-C means you cannot claim - as TFA claims - that it’s impossible to make C and C++ safe. You have to now hedge that claim with additional caveats about performance. And even then you’re on thin ice since the top perf problems in Fil-C are due to immaturity of its implementation (like the fact that linking is hella cheesy and the ABI is even cheesier).
The world of compilers is littered with corpses of projects that spent years claiming faster performance was right around the corner.
I believe you can make it faster, but how much faster? We'll see.
I think these types of compatibility layers will be a great option moving forward for legacy software. But I have a hard time seeing the case for using Fil-C for new code: all the known disadvantages of C and C++, now combined with performance closer to Java than Rust (if not worse), and high difficulty interoperating with other native code (normally C and C++'s strength!), in exchange for marginal safety improvements over Rust (minus Rust's more general safety culture).
edit: I feel bad writing such a dismissive comment, but it's hard to avoid reacting that way when I see unrealistically rosy portrayals of projects.
> Surely Fil-C cannot provide direct access to syscalls without violating the safety guarantee. There must be something ensuring that what the kernel interprets as a pointer is actually a valid pointer.
This is exactly what Fil-C does.
> all the known disadvantages of C and C++
The main disadvantage of C and C++ is unsafety and fil-C comprehensively fixes that.
> edit: I feel bad writing such a dismissive comment, but it's hard to avoid reacting that way when I see unrealistically rosy portrayals of projects.
How is my portrayal unrealistically rosy?
Even the fact that you know what the current perf costs are is the result of me being brutally honest about its perf.
I suspect something else is going on.
> This is exactly what Fil-C does.
Okay, I just checked. It does not. I wrote: "There must be something ensuring that what the kernel interprets as a pointer is actually a valid pointer." And sure enough, your runtime manually wraps each Linux syscall to do exactly that:
https://github.com/pizlonator/llvm-project-deluge/blob/6804d...
For harder cases like fcntl, where arguments can be either pointers or integers, you have to enumerate the possible fcntl arguments:
https://github.com/pizlonator/llvm-project-deluge/blob/6804d...
ioctl is even harder because some ioctls take pointers to structs that themselves contain pointers; on Linux that includes v4l2 and mmc. It looks like you don't handle that properly, judging by:
https://github.com/pizlonator/llvm-project-deluge/blob/6804d...
--
My point is: having to go through wrapper functions is not what I'd call "direct" access to "ABI". (Also, the wrappers don’t even wrap the syscall ABI directly; they wrap the libc ABI that in turn wraps syscalls.)
You might object that the wrappers are thin enough that they still count as direct. While that's a matter of definitions, I think my previous comment made it clear what _I_ meant when I questioned "direct", given my followup sentence about "actually a valid pointer".
But beyond quibbles about who meant what, this lack of directness matters because it implicates the portability of your approach.
At least as currently implemented, you rely on compiling almost everything (even libc) inside the sandbox, while having a ‘narrow waist’ of syscall wrappers mediating access between the sandbox and the outside world. That should work for most use cases on Linux, where there's already an assumption that different processes can have completely different library stacks, and static linking is common. Even if you want to do a GUI application you should be able to recompile the entire GTK or Qt stack inside the sandbox, and it doesn’t matter if other apps are using different versions of GTK or Qt.
But what about other operating systems? For server and CLI software you can probably still get away with exposing a small syscall/libc API, similar to Cosmopolitan (though that will still require significant effort for each OS). But for GUIs and platform integration more broadly, you’re expected to use the platform-provided libraries that live in your address space. They are usually proprietary, and even when they’re not, the system isn’t designed for multiple versions of the libraries to coexist.
I know I’m not telling you anything you don’t already know. But it’s an important point, because aside from performance, the _other_ big reason that Rust relies on user-written unsafe code is for FFI. If anyone can write their own FFI bindings, as opposed to making all FFI bindings live in a centralized runtime, then it becomes more feasible to scale the mammoth task of writing safe wrappers for all those ABIs. Your approach explicitly rejects user-written unsafe code, so I don’t know how you can possibly end up with reasonable OS library coverage.
Now sure, you didn’t claim anything about GUIs or portability. Perhaps this is more on topic for my previous comment’s point about “high difficulty interoperating with other native code” which you didn’t rebut. But it’s also relevant to “direct access to syscall ABI”, because if there _were_ some way to provide direct access to syscall ABI while remaining memory-safe, then the same approach would probably extend to other system ABIs. For example, a fully CHERI-aware system actually would allow for that. It’s an unfair comparison, because CHERI assumes cooperation from both the hardware and the OS, while you’re trying to run on existing hardware and OSes. I have no idea if we’ll ever see CHERI in general purpose systems. But in exchange, CHERI achieves something that’s otherwise impossible: combining direct system access and memory safety. And I originally read your comment as claiming to do the impossible.
> a lot less performant
Is this just you speculating? How much is "a lot"? Where's the data? Let's get some benchmarks!
I mean bro isn’t totally wrong.
Fil-C’s perf sucks on some workloads. And it doesn’t suck on others.
Extreme examples to give you an idea:
- xzutils had about 1.2x overhead. So slower but totally usable.
- no noticeable overhead in shells, systems utilities, ssh, curl, etc. But that’s because they’re IO bound.
- 4x or sometimes maybe even higher overheads for things like JS engines, CPython, Lua, Tcl, etc. Also OpenSSL perf tests are around 4x I think.
But you’re on thin ice if you say that this is a reason why Fil-C will fail. So much of Fil-C’s overhead is due to known issues that I will fix eventually, like the function call ABI (which is hella cheesy right now because I just wanted to have something that works and haven’t actually made it good yet).
Related:
How safe is Zig? - https://news.ycombinator.com/item?id=31850347 - June 2022 (254 comments)
How Safe Is Zig? - https://news.ycombinator.com/item?id=26537693 - March 2021 (274 comments)
How Safe Is Zig? - https://news.ycombinator.com/item?id=26527848 - March 2021 (1 comment)
How Safe Is Zig? - https://news.ycombinator.com/item?id=26521539 - March 2021 (1 comment)
There is a third category of memory and other software safety mechanisms: model checking. While it does involve compiling software to a different target -- typically an SMT solver -- it is not a compile-time mechanism like in Rust.
Kani is a model checker for Rust, and CBMC is a model checker for C. I'm not aware of one (yet!) for Zig, but it would not be difficult to build a port. Both Kani and CBMC compile down to goto-c, which is then converted to formulas in an SMT solver.
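A toy way to convey the flavor of model checking without an SMT solver is to exhaustively check a property over a small, bounded input domain. What Kani or CBMC would prove symbolically, this hypothetical example does by brute force:

```rust
// Property: a saturating absolute-value helper never returns a
// negative number. `i8::MIN` has no positive counterpart in i8,
// so a naive `x.abs()` would overflow on that input.
fn safe_abs(x: i8) -> i8 {
    if x == i8::MIN { i8::MAX } else { x.abs() }
}

fn main() {
    // Exhaustive over the full 8-bit domain: 256 cases. A real model
    // checker handles inputs far too large to enumerate like this.
    for x in i8::MIN..=i8::MAX {
        assert!(safe_abs(x) >= 0);
    }
}
```

In Kani the loop would be replaced by a symbolic input (`kani::any()`) and the assertion discharged by the solver for all values at once.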
There isn't a real one yet, but to scratch an itch I tried to build one for Zig. It's not complete nor do I have plans to complete it. https://github.com/ityonemo/clr
If zig locks down the AIR (intermediate representation at the function level) it would be ideal for running model checking of various sorts. Just by looking at AIR I found it possible to:
- identify stack pointer leakage
- basic borrow checking
- detect memory leaks
- assign units to variables and track when units are incompatible
Any good primers on SMT solvers?
Start with this.
https://smt.st/SAT_SMT_by_example.pdf
The algorithms behind SAT / SMT are actually pretty straight-forward. One of these days, I'll get around to publishing an article to demystify them.
If you're filling uninitialized pointers with AAAAAAAA, it might be best to also reserve that memory page and mark it as no-access.
I'm not even joking. Any magic-number pattern used to fill pointers (such as HeapFree filling memory with FEEEEEEE on Windows) should have a corresponding no-access page, just to ensure that the program will instantly fail rather than find a valid memory allocation mapped there. For 32-bit programs, everything past 0x80000000 used to be reserved as kernel memory and would raise an access violation when accessed, which is why the magic numbers were all above 0x80000000. But with large-address-aware programs you don't get that anymore; only manually reserving the 4K memory pages containing the magic numbers will give you the same effect.
that only happens in debug builds.
https://ziglang.org/documentation/master/#undefined
I don't know why this topic keeps coming up. Zig is not safe, period.
Zig gives you the control you need if that's what you want; safety isn't something Zig is chasing.
Safer than C, yeah, but not safe.
Rust = safe. Zig = control.
Pick your weapon for the foe in front of you.
I don't think Zig gives you significantly more control than Rust.
Maybe not Zig the language, but the fact that all allocating functions in the standard library accept an allocator (and community libraries follow this precedent) does give you much more control in practice.
For example, how would you use a Vec using stack memory for elements, instead of the heap? For the equivalent data structure in Zig (std.ArrayList), it's just a matter of using a stack allocator instead of using a heap allocator, which is an explicit decision either way.
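A rough Rust analogue of that pattern (a hypothetical sketch - stable Rust's `Vec` is hard-wired to the heap, pending the unstable allocator API) is a fixed-capacity list whose element storage lives inline on the stack:

```rust
// Toy fixed-capacity "vec" with inline storage, approximating what
// Zig gets from std.ArrayList + a FixedBufferAllocator: no heap use,
// and pushing past capacity fails instead of reallocating.
struct StackVec<T, const N: usize> {
    buf: [Option<T>; N],
    len: usize,
}

impl<T, const N: usize> StackVec<T, N> {
    fn new() -> Self {
        Self { buf: std::array::from_fn(|_| None), len: 0 }
    }

    fn push(&mut self, v: T) -> Result<(), T> {
        if self.len == N {
            return Err(v); // capacity exhausted: no heap fallback
        }
        self.buf[self.len] = Some(v);
        self.len += 1;
        Ok(())
    }

    fn get(&self, i: usize) -> Option<&T> {
        if i < self.len { self.buf[i].as_ref() } else { None }
    }
}

fn main() {
    let mut v: StackVec<i32, 4> = StackVec::new();
    assert!(v.push(1).is_ok());
    assert!(v.push(2).is_ok());
    assert_eq!(v.get(1), Some(&2));
    assert_eq!(v.get(2), None);
}
```

The Zig version needs no such bespoke type - the same `std.ArrayList` works with any allocator you hand it - which is exactly the control-in-practice point being made.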