I've been writing a metaverse client in Rust for almost five years now, which is too long.[1]
Someone else set out to do something similar in C#/Unity and had something going in less than two years.
This is discouraging.
Ecosystem problems:
The Rust 3D game dev user base is tiny.
Nobody ever wrote an AAA title in Rust. Nobody has really pushed the performance issues.
I find myself having to break too much new ground, trying to get things to work that others doing first-person shooters should have solved years ago.
The lower levels are buggy and have a lot of churn.
The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan. Except for Vulkan, they've all had hard-to-find bugs.
There just aren't enough users to wring out the bugs.
Also, too many different crates want to own the event loop.
These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.
Language problems:
Back-references are difficult
"A owns B, and B can find A" is a frequently needed pattern, and one that's hard to express in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
There are three common workarounds:
- Architect the data structures so that you don't need back-references. This is a clean solution but is hard. Sometimes it won't work at all.
- Put everything in a Vec and use indices as references. This has most of the problems of raw pointers, except that you can't get memory corruption outside the Vec. You lose most of Rust's safety. When I've had to chase down difficult bugs in crates written by others, three times it's been due to errors in this workaround.
- Use "unsafe". Usually bad. On the two occasions I've had to use a debugger on Rust code, it's been because someone used "unsafe" and botched it.
Rust needs a coherent way to do single ownership with back references. I've made some proposals on this, but they require much more checking machinery at compile time and better design. Basic concept: works like "Rc::Weak" and "upgrade", with compile-time checking for overlapping upgrade scopes to ensure no "upgrade" ever fails.
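For comparison, here is roughly what the run-time version of that pattern costs today (a minimal sketch; the names are illustrative):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    // "A owns B, B can find A" as it must be written in Rust today:
    // the child holds a Weak back-reference that is upgraded at each
    // use, with a run-time check instead of the compile-time check
    // proposed above.
    struct Parent {
        children: Vec<Rc<RefCell<Child>>>,
    }

    struct Child {
        back: Weak<RefCell<Parent>>, // non-owning back-reference
    }

    fn main() {
        let parent = Rc::new(RefCell::new(Parent { children: Vec::new() }));
        let child = Rc::new(RefCell::new(Child { back: Rc::downgrade(&parent) }));
        parent.borrow_mut().children.push(child.clone());

        // Every access pays for an upgrade, which can fail at run time
        // if the parent has been dropped -- exactly what compile-time
        // checking of upgrade scopes would rule out.
        if let Some(p) = child.borrow().back.upgrade() {
            println!("parent has {} children", p.borrow().children.len());
        }
    }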
"Is-a" relationships are difficult
Rust traits are not objects. Traits cannot have associated data. Nor are they a good mechanism for constructing object hierarchies. People keep trying to do that, though, and the results are ugly.
I should caveat my remarks: although I have studied the Rust specification, I have not written a line of Rust code.
I was quite intrigued with the borrow checker, and set about learning about it. While D cannot be wholly retrofitted with a borrow checker, it can be enhanced with one. A borrow checker has nothing tying it to Rust's syntax, so it should work.
So I implemented a borrow checker for D. It is enabled by adding the `@live` annotation to a function, which turns on the borrow checker for that function. There are no syntax or semantic changes to the language, other than layering on a borrow checker.
Yes, it does data flow analysis, has semantic scopes, yup. It issues errors in the right places, although the error messages are rather basic.
In my personal coding style, I have gravitated towards following the borrow checker rules. I like it. But it doesn't work for everything.
It reminds me of OOP. OOP was sold as the answer to every programming problem. Many OOP languages appeared. But, eventually, things died down and OOP became just another tool in the toolbox. D and C++ support OOP, too.
I predict that over time the borrow checker will become just another tool in the toolbox, and it'll be used for algorithms and data structures where it makes sense, and other methods will be used where it doesn't.
I've been around to see a lot of fashions in programming, which is most likely why D is a bit of a polyglot language :-/
I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is to stop doing pointer arithmetic (use arrays and refs instead).
The language can nail that down for you (D does). What's left are memory allocation errors. Garbage collection fixes that.
pjmlp 43 minutes ago [-]
As discussed multiple times, I see automatic resource management (written this way on purpose), coupled with effects/linear/affine/dependent types for low-level coding, as the way to go.
At least until we get AI driven systems good enough to generate straight binaries.
Rust is to be celebrated for bringing affine types into mainstream, but it doesn't need to be the only way, productivity and performance can be made into the same language.
The way Ada, D, Swift, Chapel, Linear Haskell, and OCaml (with effects and modes) are being improved already shows the way forward.
Then there is the whole world of formal verification and dependent-type languages, but that goes even beyond Rust in what most mainstream developers are willing to learn, and the development experience is still quite rough.
amelius 10 minutes ago [-]
> Someone else set out to do something similar in C#/Unity and had something going in less than two years.
But in that case doesn't the garbage collector ruin the experience for the user? Because that's the argument I always hear in favor of Rust.
_bin_ 9 hours ago [-]
I saw a good talk, though I don't remember the name, that went over the array-index approach. It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.
josephg 4 hours ago [-]
> It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.
I've gone back and forth on this, myself.
I wrote a custom b-tree implementation in Rust for a project I've been working on. I use my own implementation because I need it to be an order-statistic tree, and I need internal run-length encoding. The original version of my b-tree works just like how you'd implement it in C: each internal node / leaf is a raw allocation on the heap.
Because leaves need to point back up the tree, there's unsafe everywhere, and a lot of raw pointers. I ended up with separate Cursor and CursorMut structs which held different kinds of references to the tree itself. Trying to avoid duplicating code for those two cursor types added a lot of complex types and trait magic. The implementation works, and it's fast. But it's horrible to work with, and it never passed MIRI's strict checks. Also, Rust has really bad syntax for interacting with raw pointers.
Recently I rewrote the b-tree to simply use a vec of internal nodes and a vec of leaves. References became array indexes (integers). The resulting code is completely safe Rust. It's significantly simpler to read and work with - there's way less abstraction going on. I think it's about 40% less code. Benchmarks show it's about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is better cache locality.)
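The shape it ended up with is roughly this (an illustrative sketch, not the actual project code):

    // Nodes and leaves live in flat Vecs and refer to each other by
    // integer index, so there are no raw pointers and no unsafe anywhere.
    struct Tree {
        nodes: Vec<Node>,
        leaves: Vec<Leaf>,
        root: usize,
    }

    struct Node {
        parent: Option<usize>, // a back-reference is just an index
        children: Vec<usize>,
    }

    struct Leaf {
        parent: usize,
        items: Vec<u64>, // element type is just an example
    }

    // A cursor is nothing but an index into the leaves Vec, which is
    // why the cursor types got so much simpler.
    struct Cursor {
        leaf: usize,
    }

    impl Tree {
        fn leaf_items(&self, cur: &Cursor) -> &[u64] {
            &self.leaves[cur.leaf].items // bounds-checked on every access
        }
    }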
I think this is indeed peak rust.
It doesn't feel like it, but using an array-index style still preserves many of rust's memory safety guarantees because all array lookups are bounds checked. What it doesn't protect you from is use-after-free bugs.
Interestingly, I think this style would also be significantly more performant in GC languages like javascript and C#, because a single array-of-objects is much simpler for the garbage collector to keep track of than a graph of nodes & leaves which all reference one another. Food for thought!
lenkite 1 minutes ago [-]
One can also use this array-index approach in C++, utilize the `at` methods, and have "memory safety guarantees", no?
pjmlp 39 minutes ago [-]
GC languages like C# don't need these tricks; C# is feature-rich enough to do C++-style low-level programming, and has value types.
akoboldfrying 5 minutes ago [-]
> Recently I rewrote the b-tree to simply use a vec of internal nodes
Doesn't this also require you to correctly and efficiently implement (equivalents of C's) malloc() and free()? IIUC your requirements are more constrained, in that malloc() will only ever be called with a single block size, meaning you could just maintain a stack of free indices -- though if tree nodes are comparable in size to integers this increases memory usage by a significant fraction.
(I just checked and Rust has unions, but they require unsafe. So, on pain of unsafe, you could implement a "traditional" freelist-based allocator that stores the index of the next free block in-place inside the node.)
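For illustration, here's a sketch of that freelist that stays fully safe by using an enum slot instead of a union (illustrative only, not from the parent's code):

    // Each slot either holds a node or the index of the next free slot,
    // so "malloc" and "free" are O(1), at the cost of an enum
    // discriminant per slot instead of an unsafe union.
    enum Slot<T> {
        Occupied(T),
        Free { next_free: Option<usize> },
    }

    struct Pool<T> {
        slots: Vec<Slot<T>>,
        free_head: Option<usize>,
    }

    impl<T> Pool<T> {
        fn new() -> Self {
            Pool { slots: Vec::new(), free_head: None }
        }

        // The single-block-size "malloc": reuse a free slot if one exists.
        fn alloc(&mut self, value: T) -> usize {
            match self.free_head {
                Some(i) => {
                    if let Slot::Free { next_free } = &self.slots[i] {
                        self.free_head = *next_free;
                    }
                    self.slots[i] = Slot::Occupied(value);
                    i
                }
                None => {
                    self.slots.push(Slot::Occupied(value));
                    self.slots.len() - 1
                }
            }
        }

        // "free": push the slot onto the intrusive list of free indices.
        fn free(&mut self, i: usize) {
            self.slots[i] = Slot::Free { next_free: self.free_head };
            self.free_head = Some(i);
        }
    }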
ycombinatrix 2 hours ago [-]
Could std::rc::Weak solve the backreference problem?
Animats 2 hours ago [-]
Weak is very helpful in preventing ownership loops which prevent deallocation.
Weak plus RefCell lets you do back pointers cleanly. You call ".borrow()" to get access to the data protected by a RefCell. The run-time borrow panics if someone else is using the data item. This prevents two simultaneous mutable references to the same data, which Rust forbids.
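A trivial illustration of the panic in question (not from any real codebase):

    use std::cell::RefCell;

    fn main() {
        let cell = RefCell::new(vec![1, 2, 3]);

        let first = cell.borrow_mut();  // first borrow is still alive here...
        let second = cell.borrow_mut(); // ...so this panics: "already borrowed"

        drop(first);
        drop(second);
    }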
Static analysis could check for those potential panics at compile time. If that were implemented, the run-time check, and the potential for a panic, would go away. It's not hard to check, provided that all borrows have limited scope. You just have to determine, conservatively, that no two borrow scopes for the same thing overlap.
If you had that check, it would be possible to have something that behaves like RefCell, but is checked entirely at compile time. Then you know you're free of potential double-borrow panics.
I started a discussion on this on a Rust forum. A problem is that you have to perform that check after generic instantiation (Rust's equivalent of template expansion), and the Rust compiler is not set up to do global analysis at that stage. This idea needs further development.
This check belongs to the same set of checks which prevent deadlocking a mutex against itself.
There's been some work on Rust static deadlock analysis, but it's still a research topic.
josephg 1 hours ago [-]
I didn't consider that. Looking at how weak references work, it might do the job. It would reduce the need for raw pointers and unsafe code. But in exchange, it would add 16 bytes of overhead to every node in my data structure. That's pure overhead - since the reference count of all nodes should always be exactly 1.
However, I'm not sure what the implications are around mutability. I use a Cursor struct which stores a reference to a specific leaf node in the tree. Cursors can walk forward in the tree (cursor.next_entry()). The tree can also be modified at the cursor location (cursor.insert(item)). Modifying the tree via the cursor also updates some metadata all the way up from the leaf to the root.
If the cursor stored an Rc<Leaf> or Weak<Leaf>, I couldn't mutate the leaf item, because rc.get_mut() returns None if there are other strong or weak pointers pointing to the node. (And that will always be the case!) Maybe I could use an Rc<Cell<Leaf>>? But then my pointers down the tree would need the same, and pointers up would be Weak<Cell<Leaf>>, I guess? I have a headache just thinking about it.
Using Rc + Weak would mean less unsafe code, worse performance, and code that's even harder to read and reason about. I don't have an intuitive sense of what the performance hit would be. And it might not be possible to implement this at all, because of the mutability rules.
Switching to an array improved performance, removed all unsafe code and reduced complexity across the board. Cursors got significantly simpler - because they just store an array index. (And inserting becomes cursor.insert(item, &mut tree) - which is simple and easy to reason about.)
I really think the Vec<Node> / Vec<Leaf> approach is the best choice here. If I were writing this again, this is how I'd approach it from the start.
pcwalton 9 hours ago [-]
But Unity game objects are the same way: you allocate them when they spawn into the scene, and you deallocate them when they despawn. Accessing them after you destroyed them throws an exception. This is exactly the same as entity IDs! The GC doesn't buy you much, other than memory safety, which you can get in other ways (e.g. generational indices, like Bevy does).
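For those unfamiliar, the generational-index idea in miniature (an illustrative sketch, not Bevy's actual implementation):

    // Freeing a slot bumps its generation, so stale handles are detected
    // instead of silently aliasing whatever reuses the slot.
    #[derive(Clone, Copy, PartialEq)]
    struct Handle { index: u32, generation: u32 }

    struct Arena<T> {
        slots: Vec<(u32, Option<T>)>, // (generation, value)
        free: Vec<u32>,
    }

    impl<T> Arena<T> {
        fn new() -> Self { Arena { slots: Vec::new(), free: Vec::new() } }

        fn insert(&mut self, value: T) -> Handle {
            if let Some(index) = self.free.pop() {
                let slot = &mut self.slots[index as usize];
                slot.1 = Some(value);
                Handle { index, generation: slot.0 }
            } else {
                self.slots.push((0, Some(value)));
                Handle { index: self.slots.len() as u32 - 1, generation: 0 }
            }
        }

        fn remove(&mut self, h: Handle) {
            let slot = &mut self.slots[h.index as usize];
            if slot.0 == h.generation && slot.1.is_some() {
                slot.0 += 1; // invalidate all outstanding handles
                slot.1 = None;
                self.free.push(h.index);
            }
        }

        // Returns None for stale handles -- the moral equivalent of
        // Unity's exception on a destroyed GameObject, but as a value.
        fn get(&self, h: Handle) -> Option<&T> {
            let slot = self.slots.get(h.index as usize)?;
            if slot.0 == h.generation { slot.1.as_ref() } else { None }
        }
    }

A real ECS layers more on top, but the stale-handle check is the core of it.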
_bin_ 8 hours ago [-]
But in Rust you have to fight the borrow checker a lot with complex referential stuff, and sometimes concede to it. I say this as someone who writes a good bit of Rust and enjoys doing so.
pcwalton 8 hours ago [-]
I just don't, and even less often with game logic which tends to be rather simple in terms of the data structures needed. In my experience, the ownership and borrowing rules are in no way an impediment to game development. That doesn't invalidate your experience, of course, but it doesn't match mine.
Animats 4 hours ago [-]
That's a good comment.
The difference is that I'm writing a metaverse client, not a game. A metaverse client is a rare beast about halfway between an MMO client and a web browser.
It has to do most of the graphical things a 3D MMO client does. But it gets all its assets and gameplay instructions from a server.
From a dev perspective, this means you're not making changes to gameplay by recompiling the client. You make changes to objects in the live world while you're connected to the server. So client compile times (I'm currently at about 1 minute 20 seconds for a recompile in release mode) aren't a big issue.
Most of the level and content building machinery of Bevy or Unity or Unreal Engine is thus irrelevant. The important parts needed for performance are down at the graphics level. Those all exist for Rust, but they're all at the My First Renderer level. They don't utilize the concurrency of Vulkan or multiple CPUs. When you get to a non-trivial world, you need that. Tiny Glade is nice, but it works because it's tiny.
What does matter is high performance and reliability while content is coming in at a high rate and changing. Anything can change at any time, but usually doesn't. So cache type optimizations are important, as is multithreading to handle the content flood.
Content is constantly coming in, being displayed, and then discarded as the user moves around the big world.
All that dynamism requires more complex data structures than a game that loads everything at startup.
Rust's "fearless multiprogramming" is a huge win for performance. I have about 20 threads running, and many are doing quite different things. That would be a horror to debug in C++. In Rust, it's not hard.
(There's a school of thought that says that fast, general purpose renderers are impossible. Each game should have its own renderer. Or you go all the way to a full game engine and integrate gameplay control and the scene graph with the renderer. Once the scene graph gets big enough that (lights x objects) becomes too large to do by brute force, the renderer level needs to cull based on position and size, which means at least a minimal scene graph with a spatial data structure. So now there's an abstraction layering problem - the rendering level needs to see the scene graph. No one in Rust land has solved this problem efficiently. Thus, none of the four available low-level renderers scale well.
I don't think it's impossible, just moderately difficult. I'm currently looking at how to do this efficiently, with some combination of lambdas which access the scene graph passed into the renderer, and caches. I really wish someone else had solved this generic problem, though. I'm a user of renderers, not a rendering expert.)
Meta blew $40 billion on this problem and produced a dud virtual world, but some nice headsets. Improbable blew upwards of $400 million and produced a limited, expensive-to-run system. Metaverses are hard, but not that hard. If you blow some of the basic architectural decisions, though, you never recover.
charlotte-fyi 7 hours ago [-]
The dependency injection framework provided by Bevy also sidesteps a lot of the problems with borrow checking that users might run into, and encourages writing data-oriented code that is generally favorable to borrow checking anyway.
_bin_ 7 hours ago [-]
This is a valid point. I've played a little with Bevy and liked it. I have also not written a triple-A game in Rust, with any engine, but I'm extrapolating the mess that might show up once you have to start using lots of other libraries; Bevy isn't really a batteries-included engine so this probably becomes necessary. Doubly so if e.g. you generate bindings to the C++ physics library you've already licensed and work with.
These are all solvable problems, but in reality, it's very hard to write a good business case for being the one to solve them. Most of the cost accrues to you and most of the benefit to the commons. Unless a corporate actor decides to write a major new engine in Rust or use Bevy as the base for the same, or unless a whole lot of indie devs and part-time hackers arduously work all this out, it's not worth the trouble if you're approaching it from the perspective of a studio with severe limitations on both funding and time.
pcwalton 7 hours ago [-]
Thankfully my studio has given me time to be able to submit a lot of upstream code to Bevy. I do agree that there's a bootstrapping problem here and I'm glad that I'm in a situation where I can help out. I'm not the only one; there are a handful of startups and small studios that are doing the same.
jokethrowaway 7 hours ago [-]
Given my experience with Bevy this doesn't happen very often, if ever.
The only challenge is not having an ecosystem with ready-made everything like you do in "batteries included" frameworks.
You are basically building a game engine and a game at the same time.
We need a commercial engine in Rust or a decade of OSS work. But what features will be considered standard in Unreal Engine 2035?
ArthurStacks 5 hours ago [-]
Nobody is going to be writing code in 2035
Ygg2 4 hours ago [-]
> fight the borrow checker
I see this and I am reminded of when I had to fight 0-based indexing while cutting my teeth in C for a class.
I wonder why no one complains about 0-based indexing anymore. Isn't it weird how you have to go from 0 to length - 1, and implement algorithms differently than in a math book?
nl 1 hours ago [-]
The ground floor in lifts isn't "1", it is "G". Same thing.
otikik 2 hours ago [-]
Not in Lua.
Most languages have abstractions for iterating over an array these days, so that you don't need to use 0 or length - 1.
Maths books aren't being weird. They are counting the way most people learn to count: one apple, two apples, three apples. You don't start with a zeroth apple, then one apple, then two apples, and then conclude that the set of apples contains three.
tacitusarc 4 hours ago [-]
I believe it’s a practicality to simplify pointer arithmetic
Ygg2 4 hours ago [-]
Yes, but why does no one here talk about fighting 0-based indices? Or about switching to Lua because 0-based indices are hard?
Am I the only person who remembers how hard it was to wrap your head around numbers starting at 0 rather than 1?
m-schuetz 3 hours ago [-]
I find indices starting from zero much easier, especially when index/pointer arithmetic is involved, like converting between pixel or voxel indices and coordinates, or indexing into ring buffers. 1-based indexing is one of the reasons I eventually abandoned Mathematica; it got way too cumbersome.
So the reason you don't see many people fighting 0-indexing is that they actually prefer it.
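For example, with 0-based indices these conversions stay clean (illustrative):

    // Pixel index <-> coordinates: plain multiplies and modulos,
    // with no +1/-1 corrections anywhere.
    fn pixel_index(x: usize, y: usize, width: usize) -> usize {
        y * width + x
    }

    fn pixel_coords(i: usize, width: usize) -> (usize, usize) {
        (i % width, i / width)
    }

    // Ring buffer: the next slot wraps with a single modulo.
    fn next_slot(i: usize, capacity: usize) -> usize {
        (i + 1) % capacity
    }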
jthill 3 hours ago [-]
For languages with 0-based array element numbering, say what the numbers are: they're offsets. 0-based arrays have offsets, 1-based arrays have indices.
WalterBright 2 hours ago [-]
> 0 indices are hard?
I started out with BASIC and Fortran, which use 1-based indices. Going to C was a small bump in the road getting used to that, and after that it was Fortran that seemed like the oddball.
Ygg2 1 hours ago [-]
Interesting path. I went BASIC, then Pascal, and then C in college. Honestly, it was such a mind twist.
scott_w 3 hours ago [-]
Yes, I think you are. The challenges people describe with Rust look more difficult than remembering to start from 0 instead of 1…
Ygg2 1 hours ago [-]
I don't think so. One-based numbering is, barring a few particular (spoken) languages, the default. You had to change your counting strategies when going from the regular world to 0-based indices.
Maybe you had the luck of learning a 0-based language first. Then most of the rest were a smooth ride.
My point is you forgot how hard it was because it's now muscle memory. (If you need a recap of the difficulty, learn a language with arbitrary array indexing and set your first array index to something exciting like 5 or -6.) It also means that if you are "fighting the borrow checker" you are still at the pre-"muscle memory" stage of learning Rust.
jayd16 6 hours ago [-]
You can't do possibly-erroneous pointer math on a C# object reference. You don't need to deal with the game life cycle AND the memory life cycle with a GC. In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.
To say it's the same as using array indices is just not true.
pjmlp 4 minutes ago [-]
While we don't need to, we can; that is the beauty of languages like C#, which offer the productivity of automatic memory management and the tools to go low level if desired/needed.
pcwalton 5 hours ago [-]
> You can't do possibly-erroneous pointer math on a C# object reference.
Bevy entity IDs are opaque and you have to try really hard to do arithmetic on them. You can technically do math on instance IDs in Unity too; you might say "well, nobody does that", which is my point exactly.
> You don't need to deal with the game life cycle AND the memory life cycle with a GC.
I don't know what this means. The memory for a `GameObject` is freed once you call `Destroy`, which is also how you despawn an object. That's managing the memory lifecycle.
> In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.
Is there a use for storing data on a dead `GameObject`? I've never had any reason to do so. In any case, if you really wanted to do that in Bevy you could always use an `EntityHashMap`.
saghm 5 hours ago [-]
At least in terms of doing math on indices, I have to imagine you could just wrap the type to make indices opaque. The other concerns seem valid though.
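Something like this (a hypothetical sketch):

    // A newtype keeps the index opaque: arithmetic on it won't compile,
    // and an index for one arena can't be passed to another arena's
    // lookup by accident.
    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    struct NodeId(u32);

    struct Nodes<T> {
        storage: Vec<T>,
    }

    impl<T> Nodes<T> {
        fn get(&self, id: NodeId) -> Option<&T> {
            self.storage.get(id.0 as usize)
        }
    }

    // let bad = NodeId(3) + 1; // error: no `Add` impl for NodeId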
dundarious 8 hours ago [-]
Yes, but regarding use of uninitialized/freed memory, neither GC nor memory safety really helps. Both "only" help with totally incidental, unintentional, small-scale violations.
delusional 3 hours ago [-]
That sounds like Jonathan Blow's "rant" on the subject. You can watch it on YouTube: https://youtu.be/4t1K66dMhWk
nicman23 2 hours ago [-]
pointers sure are useful
janalsncm 9 hours ago [-]
> These crates also get "refactored" every few months, with breaking API changes
I am dealing with similar issues in npm now, as someone who is touching Node dev again. The number of deprecations drives me nuts. Seems like I’m on a treadmill of updating APIs just to have the same functionality as before.
christophilus 8 hours ago [-]
I’ve found the key to the JS ecosystem is to be very picky about what dependencies you use. I’ve got a number of vanilla Bun projects that only depend on TypeScript (and that is only a dev dependency).
It’s not always possible to be so minimal, but I view every dependency as lugging around a huge lurking liability, so the benefit it brings had better far outweigh that big liability.
So far, I’ve only had one painful dependency upgrade in 5 years, and that was Tailwind 3-4. It wasn’t too painful, but it was painful enough to make me glad it’s not a regular occurrence.
whstl 30 minutes ago [-]
I'm finding most of the modern React ecosystem to be made of liabilities.
The constant update cycle of some libraries (hello Router) is problematic in itself, but there are too many fashionable things that sound very good in theory and end up being a huge problem when used in fast-moving projects, like headless UI libraries.
lostb1t 14 minutes ago [-]
The JS ecosystem is by far the worst offender in this area.
schneems 8 hours ago [-]
I wish for ecosystems that would let maintainers ship deprecations with auto-fixing lint rules.
photonthug 8 hours ago [-]
Yeah, not only is the structure of business workflows often resistant to mature software dev workflows, developers themselves increasingly lack the discipline, skills, or interest in backwards compatibility or good initial designs anyway. Add to this the trend that fast-changing software is actually a decent strategy to keep LLMs befuddled, and it's probably going to become an unofficial standard to maintain support contracts.
On that subject, ironically, codegen by AI for AI-related work is often the least reliable due to fast churn. Langchain is a good example of this and also kind of funny: they suggest/integrate gritql for deterministic code transforms rather than using AI directly: https://python.langchain.com/docs/versions/v0_3/.
Overall, mastering things like gritql, ast-grep, and CST tools for code transforms still pays off. For large code bases, no matter how good AI gets, it is probably better to have models use formal/deterministic tools like these rather than trust them with code transformations more directly and just hope for the best.
I occasionally notice libraries or frameworks including OpenRewrite rules in their releases. I've never tried it, though!
molszanski 7 hours ago [-]
Hmmm... strange. I don't have issues like that. Can you show us your package.json?
harles 9 hours ago [-]
I’ve found such changes can actually be a draw at first: “Hey look, progress and activity!” Doubly so as a primarily C++ dev frustrated with legacy choices in the STL. But as you and others point out, living with these changes is a huge pain.
the__alchemist 6 hours ago [-]
Great write-up. I do the array indexing, and get runtime errors by misindexing these more often than I'd like to admit!
I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.
serbuvlad 2 hours ago [-]
I've always thought about this. In my mind there are two ways a language can guarantee memory safety:
* Simply check all array accesses and pointer dereferences, and panic/throw an exception/etc. if we are out of bounds or doing something wrong.
* Guarantee at compile-time that we are always accessing valid memory, to prevent even those panics.
Rust makes a lot of effort to reach the second goal, but, since it gives you integers and arrays, it makes the problem fundamentally insoluble.
The memory it wants so hard to regulate access to is just an array, and a pointer is just an index.
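A tiny illustration of the first approach, as Rust exposes it:

    fn main() {
        let v = vec![10, 20, 30];
        let i = 7;

        // v[i] takes the first route: it panics at run time with
        // "index out of bounds" rather than reading arbitrary memory.
        // The checked accessor surfaces the same situation as a value:
        match v.get(i) {
            Some(x) => println!("got {x}"),
            None => println!("index {i} is out of bounds"),
        }
    }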
ogoffart 2 hours ago [-]
If you're looking for a stable GUI toolkit, there is Slint.
Charon77 7 hours ago [-]
> A owns B, and B can find A
I think you should think less like Java/C# and more like database.
If you have a Comment object that has a parent object, you need to store the parent as a 'reference', because you can't embed the entire parent.
So I'll probably use Box here to refer to the parent.
aystatic 6 hours ago [-]
?? the whole point of Box<T> is to be an owning reference, you can’t have multiple children refer to the same parent object if you use a Box
echelon 10 hours ago [-]
We've got another one on our end. It's much more to do with Bevy than Rust, though. And I wonder if we would have felt the same if we had chosen Fyrox.
> Migration - Bevy is young and changes quickly.
We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice. And the issues we had to deal with were runtime failures, not build time failures. It broke the large libraries we were using, like space_editor, until point releases and bug fixes could land. We ultimately decided to migrate to Three.js.
> The team decided to invest in an experiment. I would pick three core features and see how difficult they would be to implement in Unity.
This is exactly what we did! We feared a total migration, but we decided to see if we could implement the features in Javascript within three weeks. Turns out Three.js got us significantly farther than Bevy, much more rapidly.
pcwalton 10 hours ago [-]
> We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice.
I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.
pcwalton 10 hours ago [-]
> Nobody has really pushed the performance issues.
This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.
I read the original post as saying that no one has pushed the engine to the extent a completed AAA game would in order to uncover performance issues, not that performance is bad or that Bevy devs haven’t worked hard on it.
sapiogram 10 hours ago [-]
Wonderful work!
...although the fact that a 3x speed improvement was available kind of proves their point, even if it may be slightly out of date.
pcwalton 10 hours ago [-]
Most game engines other than the latest in-house AAA engines are leaving comparable levels of performance on the table on scenes that really benefit from GPU-driven rendering (that's not to say all scenes, of course). A Google search for [Unity drawcall optimization] will show how important it is. GPU-driven rendering allows developers to avoid having to do all that optimization manually, which is a huge benefit.
hedora 5 hours ago [-]
Pin and Unpin handle circular references, sort of.
SkiFire13 2 hours ago [-]
Not really, they are just tools to expose circular references (or self-references) that are *already managed by unsafe code*.
ycombinatrix 2 hours ago [-]
std::rc::Weak?
tonyedgecombe 2 hours ago [-]
The GP does mention Rc/Arc.
ycombinatrix 3 minutes ago [-]
Rc & Arc don't have the same behavior as Weak
12_throw_away 12 hours ago [-]
More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.
That is, any sufficiently mature indie game project will end up containing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
pcwalton 9 hours ago [-]
> More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.
> That is, any sufficiently mature indie game project will end up containing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
But using Bevy isn't writing your own game engine. Bevy is 400k lines of code that does quite a lot. Using Bevy right now is more like taking a game engine and filling in some missing bits. While this is significantly more effort than using Unity, it's an order of magnitude less work than writing your own game engine from scratch.
demaga 2 hours ago [-]
But it also doesn't have even 10% of Unity features. Bevy docs themselves warn you that you are probably better off with something like Godot, at least while Bevy is still in early development.
pcwalton 1 hours ago [-]
Over the past year I've been working at my studio to add enough features to Bevy to ship real apps, and Bevy is at the point where one can reasonably do that, depending on your needs.
milesrout 6 hours ago [-]
Please don't quote the entire comment you are replying to.
doctorpangloss 12 hours ago [-]
And yet, if making your own game engine makes the project intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.
xandrius 25 minutes ago [-]
Making an actual indie game can take from 6 months (tiny) to 4-5 years. If you multiply that by 10x, the upper bound would be 40-50 years. Of course, that's not exactly how it would play out, but one has to consider whether the goal is to build a game engine OR a game; doing both at the same time is almost guaranteed failure (statistically speaking).
spullara 11 hours ago [-]
I would bet that if you want to build a game engine and not the game, the game itself is probably not that compelling. Could still break out, like Minecraft, but if someone has an amazing game idea I would think they would want to ship it as fast as possible.
qustrolabe 11 hours ago [-]
If anything, making your own game engine makes the process more frustrating and time consuming, and leads to burnout quicker than ever, especially when your initial goal was just to make a game but instead you're stuck figuring out your own render pipeline or inventing some other wheel. I have a headache just from thinking that at some point in engine development a person would have to spend literal weeks figuring out export to Android, with proper signing and all, when, again, all they wanted is to just make a game.
lolinder 10 hours ago [-]
This seems entirely subjective, most importantly hinging on this part here: "all they wanted is to just make a game".
If you just want to make a game, yes, absolutely just go for Unity, for the same reason why if you just want to ship a CRUD app you should just use an established batteries-included web framework. But indie game developers come in all shapes and some of them don't just want to make a game, some of them actually do enjoy owning every part of the stack. People write their own OSes for fun, is it so hard to believe that people (who aren't you) might enjoy the process of building a game engine?
turtledragonfly 10 hours ago [-]
Speaking as someone who has made their own game engine for their indie game: it really depends on the game, and on the developer's personality and goals. I think you're probably right for the majority of cases, since the majority of games people want to make are reasonably well-served by general-purpose game engines.
But part of the thing that attracted me to the game I'm making is that it would be hard to make in a standard cookie-cutter way. The novelty of the systems involved is part of the appeal, both to me and (ideally) to my customers. If/when I get some of those (:
mjr00 11 hours ago [-]
> And yet, if making your own game engine makes it intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.
Generally, I've seen the exact opposite. People who code their own engines tend to get sucked into the engine and forget that they're supposed to be shipping a game. (I say this as someone who has coded their own engine, multiple times, and ended up not shipping a game--though I had a lot of fun working on the engine.)
The problem is that the fun, cool parts about building your own game engine are vastly outnumbered by the boring parts: supporting level and save data loading/storage, content pipelines, supporting multiple input devices and things like someone plugging in an XBox controller while the game is running and switching all the input symbols to the new input device in real time, supporting various display resolutions and supporting people plugging in new displays while the game is running, and writing something that works on PC/mobile/Switch(2)/XBox/Playstation... all solved problems, none of which are particularly intellectually stimulating to solve correctly.
If someone's finances depend on shipping a game that makes money, there's really no question that you should use Unity or Unreal. Maybe Godot but even that's a stretch. There's a small handful of indie custom game engine success stories, including some of my favorites like The Witness and Axiom Verge, but those are exceptions rather than the rule. And Axiom Verge notably had to be deeply reworked to get a Switch release, because it's built on MonoGame.
whstl 2 minutes ago [-]
After 30 years participating in gamedev communities, I feel like "don't build an engine" was always an empty strawman aimed at nobody, really.
The Venn diagram between the people interested in technical aspects of an engine and in also shipping a game is probably composed of a few hundred individuals, most of them working for studios.
The "kid that wants to make an engine to make an MMO" is gonna do neither.
pornel 9 hours ago [-]
Indeed, there are people who want to make games, and there are people who think they want to make games but really want to make game engines (I'm speaking from experience, having both shipped games and kept a junk drawer of unreleased game engines).
Shipping a playable game involves so so many things beyond enjoyable programming bits that it's an entirely different challenge.
I think it's telling that there are more Rust game engines than games written in Rust.
whartung 6 hours ago [-]
This does not apply just to games, but to most any application designed to be used by human beings, particularly complete strangers.
Typically the “itch is scratched” long before the application is done.
otikik 2 hours ago [-]
This person develops
CooCooCaCha 10 hours ago [-]
My experience is the opposite. Plenty of intellectual stimulation comes from actually making the game. Designing and refining gameplay mechanics, level design, writing shaders, etc.
What really drags you down in games is iteration speed. It can be fun making your own game engine at first, but after a while you just want the damn thing to work so you can try out new ideas.
palata 11 hours ago [-]
I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year. When reasonable, nowadays I always use Rust instead of C++.
But for the vast majority of projects, I believe that C++ is not the right language, meaning that Rust isn't, either.
I feel like many people choose Rust because is sounds like it's more efficient, a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not) or for C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project).
It's a bit like choosing Gentoo "because it's faster" (or worse, because it "sounds cool"). If that's the only reason, it's probably a bad choice (disclaimer: I use and love Gentoo).
lolinder 10 hours ago [-]
I have a personal-use app that has a hot loop that (after extensive optimization) runs for about a minute on a low-powered VPS to compute a result. I started in Java and then optimized the heck out of it with the JVM's (and IntelliJ's) excellent profiling tools. It took one day to eliminate all excess allocations. When I was confident I couldn't optimize the algorithm any further on the JVM I realized that what I'd boiled it down to looked an awful lot like Rust code, so I thought why not, let's rewrite it in Rust. I took another day to rewrite it all.
The result was not statistically different in performance from my Java implementation. Each took the same amount of time to complete. This surprised me, so I made triply sure that I was using the right optimization settings.
Lesson learned: Java is easy to get started with out of the box, memory safe, battle tested, and the powerful JIT means that if warmup times are a negligible factor in your usage patterns your Java code can later be optimized to be equivalent in performance to a Rust implementation.
internetter 9 hours ago [-]
I'd rather write Rust than Java, personally.
noisy_boy 6 hours ago [-]
If I have all the time in the world, sure. When I'm racing against a deadline, I don't want to wrestle with the borrow checker too. Sure, its objections help with the long-term quality of the code and reduce bugs, but that's hard to justify to a manager/process driven by Agile and Sprints. It's quite possible that an experienced Rust dev can be very productive, but there aren't tons of those going around.
Java has the stigma of ClassFactoryGeneratorFactory sticking to it like a nasty smell but that's not how the language makes you write things. I write Java professionally and it is as readable as any other language. You can write clean, straightforward and easy to reason code without much friction. It's a great general purpose language.
im3w1l 2 hours ago [-]
I have found that the ClassFactoryGeneratorFactories sneak up on you. Even if you don't want them, the ecosystem slowly but surely nudges you that way.
noisy_boy 1 hours ago [-]
That has not been my experience. Sure, you don't have any control over the third-party stuff but I haven't seen this issue being widespread in the mainstream third-party libraries I've used e.g. logback, jackson, junit, jedis, pgJDBC etc which are very well known/widely used. The only place I've actually seen proliferation of this was by a contractor, who I suspect, was trying to ensure job security behind impenetrability.
willtemperley 4 hours ago [-]
Java is incredibly productive - it's fast and has the best tooling out there IMO.
Unfortunately it's not a good gaming language. GC pauses aren't really acceptable (something C# also suffers from) and GPU support is limited.
Miguel de Icaza probably has more experience than anyone building game engines on GC platforms and is very vocally moving toward reference counted languages [1]
Java has made great progress with low-pause (~1 ms) garbage collectors like ZGC and Shenandoah over the last ~5 years.
BigJono 1 hours ago [-]
People have 240hz monitors these days, you have a bit over 4ms to render a frame. If that 1ms can be eliminated or amortised over a few frames it's still a big deal, and that's assuming 1ms is the worst case scenario and not the best.
lolinder 9 hours ago [-]
I'd have said the same thing 10 years ago (or, I would have if I were comparing 10-year-old Java with modern Rust), but Java these days is actually pretty ergonomic. Rust's borrow checker balances out the ML-style niceties to bring it down to about Java's level for me, depending on the application.
vips7L 7 hours ago [-]
I’d rather write Java than Rust, personally
dullcrisp 3 hours ago [-]
Wow, way to be un-hip.
ycombinatrix 2 hours ago [-]
>I realized that what I'd boiled it down to looked an awful lot like Rust code
You're no longer writing idiomatic Java at this point - probably with zero object-oriented programming. So you might as well write it in Rust from the get-go.
jamessinghal 2 hours ago [-]
Yes but it would just be the hot loop in this case; the rest of the app can still be in idiomatic Java, and you still get the GC.
somenameforme 2 hours ago [-]
> "I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year."
I don't understand this argument, which I've also seen used against C# quite frequently. When a language offers new features, you're not forced to use them. You generally don't even need to learn them if you don't want to. I do think some restrictions in languages can be highly beneficial, like strong typing, but the difference is that in a weakly typed language that 'feature' is forced upon you, whereas a random new feature in C++ or C# is almost always backwards compatible and opt-in only.
For instance, to take a dated example, consider move semantics in C++. If you never used it anywhere at all, you'd have 0 problems. But once you do, you get lots of neat things for free. And for these sorts of features, I see no reason to ever oppose their endless introduction unless they start to imperil the integrity/performance of the compiler, but that clearly is not happening.
tonyedgecombe 2 hours ago [-]
You can't avoid a lot of this stuff: once libraries start using it or colleagues add it to your codebase, you need to know it. I'd argue you need to know it well before you can decide to exclude it.
wffurr 10 hours ago [-]
>> a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not)
The OP is doing game development. It’s possible to write a performant game in Java but you end up fighting the garbage collector the whole way and can’t use much library code because it’s just not written for predictable performance.
palata 10 hours ago [-]
I didn't mean that the OP should use Java. BTW the OP does not use C++, but Rust.
This said, they moved to Unity, which is C#, which is garbage collected, right?
jayd16 10 hours ago [-]
C# also has "Value Types" which can be stack allocated and passed by value. They're used extensively in game dev.
vips7L 7 hours ago [-]
Hopefully that changes once Java releases their value types.
elabajaba 9 hours ago [-]
The core Unity engine is C++ that you can't access, but all Unity games are written in C#.
Narishma 6 hours ago [-]
Unity games are C#, the engine itself is C++.
neonsunset 10 hours ago [-]
C#/.NET has a huge feature area for low-level/hands-on memory manipulation, which is highly relevant to gamedev.
WalterBright 2 hours ago [-]
The advantage C has over C++ is it won't let you use templates.
wyager 8 hours ago [-]
I write a lot of Rust, but as you say, it's basically a vastly improved version of C++. C++ is not always the right move!
For all my personal projects, I use a mix of Haskell and Rust, which I find covers 99% of the product domains I work in.
Ultra-low level (FPGA gateware): Haskell. The Clash compiler backend lets you compile (non-recursive) Haskell code directly to FPGA. I use this for audio codecs, IO expanders, and other gateware stuff.
Very low-level (MMUless microcontroller hard-realtime) to medium-level (graphics code, audio code): Rust dominates here
High-level (have an MMU, OS, and desktop levels of RAM; not sensitive to ~0.1ms GC pauses): Haskell. It becomes a lot easier to productively crank out "business logic" without worrying about memory management. If you need to specify high-level logic, implement a web server, etc., it's more productive than Rust for that type of thing.
Both languages have a lot of conceptual overlap (ADTs, constrained parametric types, etc.), so being familiar with one provides some degree of cross-training for the other.
goku12 1 hours ago [-]
What do you mean by 'a mix of Haskell and Rust'? Is that a per-project choice or do you use both in a single project? I'm interested in the latter. If so, could you point me to an example?
Another question is about Clash. Your description sounds like the HLS (high-level synthesis) approach, but I thought that Clash used a Haskell-based DSL, making it a true HDL. Could you clarify this? Thanks!
jandrewrogers 9 hours ago [-]
> C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project)
If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.
haberman 8 hours ago [-]
> C++ has been faster than C for a long time.
What is your basis for this claim? C and C++ are both built on essentially the same memory and execution model. There is a significant set of programs that are valid C and C++ both -- surely you're not suggesting that merely compiling them as C++ will make them faster?
There's basically no performance technique available in C++ that is not also available in C. I don't think it's meaningful to call one faster than the other.
jandrewrogers 6 hours ago [-]
This is really an “in theory” versus “in practice” argument.
Yes, with enough code, complexity, and effort you can write roughly equivalent C for most things modern C++ does. However, the economics are so lopsided that almost no one ever writes the equivalent C in complex systems. At some point the development cost is too high, due to the limitations of C's expressiveness and abstractions. Everyone has a finite budget.
I’ve written the same kinds of systems I write now in both C and modern C++. The C equivalent versions require several times the code of C++, are less safe, and are more difficult to maintain. I like C and wrote it for a long time, but the demands of modern systems software are beyond what it can efficiently express. Trying to make it work requires cutting a lot of corners in the implementation in practice. It is still suited to more classically simple systems software, though I really like what Zig is doing in that space.
I used to have a lot of nostalgia for working in C99 but C++ improved so rapidly that around C++17 I kind of lost interest in it.
haberman 40 minutes ago [-]
None of this really supports your claim that "C++ has been faster than C for a long time."
You can argue that C takes more effort to write, but if you write equivalent programs in both (ie. that use comparable data structures and algorithms) they are going to have comparable performance.
In practice, many best-in-class projects are written in C (Lua, LuaJIT, SQLite, LMDB). To be fair, most of these projects inhabit a design space where it's worth spending years or decades refining the implementation, but the combination of performance and code size you can get from these C projects is something that few C++ projects I have seen can match.
For code size in particular, the use of templates makes typical C++ code many times larger than equivalent C. While a careful C++ programmer could avoid this (ie. by making templated types fall back to type-generic algorithms to save on code size), few programmers actually do this, and in practice you end up with N copies of std::vector, std::map, etc. in your program (even the slow fallback paths that get little benefit from type specialization).
WalterBright 2 hours ago [-]
> What is your basis for this claim?
Great question! Here's one answer:
Having written a great deal of C code, I made a discovery about it: the first algorithm and data structure selected for a C program stays there. It survives all the optimizations, refactorings, and improvements. But everyone knows that finding a better algorithm and data structure is where the big wins are.
Why doesn't that happen with C code?
C code is not plastic. It is brittle. It does not bend, it breaks.
This is because C is a low level language that lacks higher level constructs and metaprogramming. (Yes, you can metaprogram with the C preprocessor, a technique right out of hell.) The implementation details of the algorithm and data structure are distributed throughout the code, and restructuring that is just too hard. So it doesn't happen.
A simple example:
Change a value to a pointer to a value. Now you have to go through your entire program changing dots to arrows, and sprinkle stars everywhere. Ick.
Or let's change a linked list to an array. Aarrgghh again.
Higher level features, like what C++ and D have, make this sort of thing vastly simpler. (D does it better than C++, as a dot serves both value and pointer uses.) And so algorithms and data structures can be quickly modified and tried out, resulting in faster code. A traversal of an array can be changed to a traversal of a linked list, a hash table, a binary tree, all without changing the traversal code at all.
C and C++ do have very different memory models, C essentially follows the "types are a way to decode memory" model while C++ has an actual object model where accessing memory using the wrong type is UB and objects have actual lifetimes. Not that this would necessarily lead to performance differences.
When people claim C++ to be faster than C, that is usually understood as C++ provides tools that makes writing fast code easier than C, not that the fastest possible implementation in C++ is faster than the fastest possible implementation in C, which is trivially false as in both cases the fastest possible implementation is the same unmaintainable soup of inline assembly.
The typical example used to claim C++ is faster than C is sorting, where C due to its lack of templates and overloading needs `qsort` to work with void pointers and a pointer to function, making it very hard on the optimiser, when C++'s `std::sort` gets the actual types it works on and can directly inline the comparator, making the optimiser work easier.
ryao 7 hours ago [-]
Try putting objects into two linked lists in C using sys/queue.h and in C++ using the STL. Try sorting the linked lists. You will find C outperforms C++. That is because C’s data structures are intrusive, such that you do not have external nodes pointing to the objects to cause an extra random memory access. The C++ STL requires an externally allocated node that points to the object in at least one of the data structures, since only 1 container can manage the object lifetimes to be able to concatenate its node with the object as part of the allocation. If you wish to avoid having object lifetimes managed by containers, things will become even slower, because now both data structures will have an extra random memory access for every object. This is not even considering the extra allocations and deallocations needed for the external nodes.
That said, external comparators are a weakness of generic C library functions. I once manually inlined them in some performance critical code using the C preprocessor:
It seems like your argument is predicated on using the C++ STL. Most people don’t for anything that matters and it is trivial to write alternative implementations that have none of the weaknesses you are arguing. You have created a bit of a strawman.
One of the strengths of C++ is that it is well-suited to compile-time codegen of hyper-optimized data structures. In fact, that is one of the features that makes it much better than C for performance engineering work.
ryao 4 hours ago [-]
Most C++ code I have seen uses the STL. As for "hyper-optimized" data structures, you already have those in C; see the B-Tree code whose binary search routine I patched to run faster. Nothing C++ adds improves upon what you can do performance-wise in C.
You have other sources of slow downs in C++, since the abstractions have a tendency to hide bloat, such as excessive dynamic memory usage, use of exceptions and code just outright compiling inefficiently compared to similar code in C. Too much inlining can also be a problem, since it puts pressure on CPU instruction caches.
mawww 2 hours ago [-]
C and C++ can be made to generate pretty much the same assembly, sure. I find it much easier to maintain a template function than a macro that expands to a function as you did in the B-Tree code, but reasonable people can disagree on that.
Abstractions can hide bloat for sure, but the lack of abstraction can also push coders towards suboptimal solutions. For example, C code tends to use linked lists just because they're easy to implement, when a dynamic array such as std::vector would have been more performant.
Too much inlining can of course be a problem; the optimizer has loads of heuristics to decide if inlining is worth it or not, and the programmer can always mark the function as `[[gnu::noinline]]` if necessary. Just because C++ makes it possible for the sort comparator to be inlined does not mean it will be.
In my experience, exceptions have a slightly positive impact on codegen (compared to code that actually checks error return values, not code that ignores them) because there is no error checking on the happy path at all. The sad path is greatly slowed down though.
Having worked in highly performance sensitive code all of my career (video game engines and trading software), I would miss a lot of my toolbox if I limited myself to plain C and would expect to need much more effort to achieve the same result.
jandrewrogers 3 hours ago [-]
This is not a convincing argument for C. None of this matches my experience across many companies. In particular, the specific things you cite — excessive dynamic memory usage, exceptions, bloat — are typically only raised by people who don’t actually use C++ in the kinds of serious applications where C++ is the tool of choice. Sure, you could write C++ the way you describe but that is just poor code. You can do that in any language.
For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company. They aren’t compatible with some idiomatic high-performance software architectures, so it would be weird to even turn them on. C++ allows you to strip all bloat at compile time and provides tools to make that easy in a way C could only dream of; this kind of metaprogramming optimization is standard practice. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.
C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C. It isn’t like C++ can’t use C code.
uecker 7 hours ago [-]
In my experience, templates usually cause a lot of bloat that slows things down. Sure, in microbenchmarks it always looks good to specialize everything at compile time; whether that is what you want in a larger project is a different question. And then, a C compiler can specialize a sort routine for your types just fine. It just needs to be able to look into it, i.e. it does not work for qsort from the libc. I agree with your point that C++ comes with fast implementations of algorithms out of the box. In C you need to assemble a toolbox yourself. But once you have done this, I see no downside.
krapht 8 hours ago [-]
I know you're going to reply with "BUT MY PREPROCESSOR", but template specialization is a big win and improvement (see qsort vs std::sort).
ryao 7 hours ago [-]
I have used the preprocessor to avoid this sort of slowdown in the past in a binary search function:
The performance gain comes not from eliminating the function call overhead, but from enabling conditional move instructions to be used in the comparator, which eliminates a pipeline hazard on each loop iteration. There is some gain from eliminating the call overhead, but it is tiny in comparison to eliminating the pipeline hazard.
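The patched routine itself is linked above rather than reproduced here; what follows is only an illustrative sketch of the technique under discussion. `search_le` is a made-up name, and whether the select actually compiles to a cmov is up to the compiler, so the generated assembly is worth checking:

    #include <stddef.h>

    /* Binary search written so the bound update is a simple select.
       With the comparison visible to the compiler (unlike a bsearch-style
       function-pointer comparator), the marked line can compile to a
       conditional move, removing a hard-to-predict branch per iteration.
       Assumes n >= 1. */
    size_t search_le(const int *a, size_t n, int key) {
        const int *base = a;
        while (n > 1) {
            size_t half = n / 2;
            base = (base[half] <= key) ? base + half : base; /* cmov candidate */
            n -= half;
        }
        /* Index of the last element <= key; if key < a[0] this still
           returns 0, so the caller must check a[0] itself. */
        return (size_t)(base - a);
    }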
That said, C++ has its weaknesses too, particularly in its typical data structures, its excessive use of dynamic memory allocation and its exception handling. I gave an example here:
That's not the most general case, but it's better than I expected.
ryao 6 hours ago [-]
Nice catch. I had goofed by omitting optimization when checking this from an iPad.
That said, this brings me back to my original reason for checking, which is that the compiled code did not use a cmov instruction to eliminate unnecessary branching from the loop, so it is probably slower than a binary search that does:
It should be possible to adapt this to benchmark the inlined bsearch() against an implementation designed to encourage the compiler to emit a conditional move to skip a branch, to see which is faster:
My guess is the cmov version will win. I assume this merits a bug report, although I suspect improving it is a low priority, much like my last report in this area:
> If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time.
In certain cases, sure - inlining potential is far greater in C++ than in C.
For idiomatic C++ code that doesn't do any special inlining, probably not.
IOW, you can rework fairly readable C++ code to be much faster by making an unreadable mess of it. You can do that for any language (C included).
But what we are usually talking about when comparing runtime performance in production code is the idiomatic code, because that's how we wrote it. We didn't write our code to resemble the programs from the language benchmark game.
ryao 7 hours ago [-]
I doubt that, because C++ encourages heavy use of dynamic memory allocations and data structures with external nodes. C encourages intrusive data structures, which eliminate many of the dynamic memory allocations done in C++. You can do intrusive data structures in C++ too, but it clashes with the object-oriented idea of encapsulation, since an intrusive data structure touches fields of the objects inside it. I have never heard of someone modifying a class definition just to add objects of that class to a linked list, for example, yet that is what is needed if you want to use intrusive data structures.
While I do not doubt some C++ code uses intrusive data structures, I doubt very much of it does. Meanwhile, C code using <sys/queue.h> uses intrusive lists as if they were second nature. C code using <sys/tree.h> from libbsd uses intrusive trees as if they were second nature. There are also the intrusive AVL trees from libuutil on systems that use ZFS, and there are plenty of other options for such trees, as they are the default way of doing things in C. In any case, you see these intrusive data structures used all over C code, and every time one is used, it is a performance win over the idiomatic C++ way of doing things, since it skips an allocation that C++ would otherwise do.
The use of intrusive data structures can also speed up operations on data structures in ways that are simply not possible with idiomatic C++. If you place the node and key in the same cache line, you can get two memory fetches for the price of one when sorting and searching. You might see decent performance even if they are not in the same cache line, since the hardware prefetcher can predict the second memory access when the key and node are in the same object, while the extra memory access for a key in a C++ STL data structure is unpredictable because it goes to an entirely different place in memory.
You could say that if you have the C++ STL allocate the objects, you can avoid this, but you can only do that for one data structure. If you want the object to be in multiple data structures (which is extremely common in C code that I have seen), you are back to inefficient search/traversal. Your object lifetime also becomes tied to that data structure, so you must be certain in advance that you will never want to use the object outside of it, or else you must do, at a minimum, another memory allocation and some copies that are completely unnecessary in C.
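To make the layout argument concrete, here is a sketch of an intrusive red-black tree in the <sys/tree.h> style, with the key placed beside the embedded node so a comparison during search touches memory the traversal has already fetched. The names are illustrative, and on Linux the header comes from libbsd:

    #include <sys/tree.h>
    #include <stdint.h>
    #include <stdio.h>

    struct item {
        RB_ENTRY(item) link;  /* the tree node, embedded in the object  */
        uint64_t key;         /* adjacent to the links: node and key can
                                 share a cache line                     */
    };

    static int item_cmp(struct item *a, struct item *b) {
        return (a->key > b->key) - (a->key < b->key);
    }

    RB_HEAD(item_tree, item);
    RB_GENERATE(item_tree, item, link, item_cmp)

    int main(void) {
        struct item_tree t = RB_INITIALIZER(&t);
        struct item a = { .key = 2 }, b = { .key = 1 };
        RB_INSERT(item_tree, &t, &a);  /* no node allocation: the links */
        RB_INSERT(item_tree, &t, &b);  /* already live in the objects   */

        struct item needle = { .key = 2 };
        struct item *found = RB_FIND(item_tree, &t, &needle);
        printf("%s\n", found ? "found" : "missing");
        return 0;
    }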
Exception handling in C++ can also silently kill performance if many exceptions are thrown and the code handles them without saying a thing. By not having exception handling, C code avoids this pitfall.
hedora 5 hours ago [-]
OO (implementation inheritance) is frowned upon in modern C++. Also, all production code bases I’ve seen pass -fno-exceptions to the compiler.
delusional 3 hours ago [-]
Ahh yes, now we are getting somewhere. "C++ is faster because it has all these features, no not those features nobody uses those. The STL, no, you rewrite that"
jandrewrogers 2 hours ago [-]
The poster you are responding to is correct. Modern C++ has established idiomatic code practices that are widely used in industry. Imagining how someone could use legacy language features in the most naive possible way, contrary to industry practice, is not a good faith argument. You can do that with any programming language.
You are arguing against what the language was 30-40 years ago. The language has undergone two pretty fundamental revisions since then.
cantrecallmypwd 9 hours ago [-]
> C++ has been faster than C for a long time.
Citation needed.
zxvkhkxvdvbdxz 8 hours ago [-]
> If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.
That's interesting, did ChatGPT tell you this?
djmips 11 hours ago [-]
I agree with you except for the JVM bit - but everyone's application varies
palata 10 hours ago [-]
My point is that there are situations where C++ (or Rust) is required because the JVM wouldn't work, but those are niche.
In my experience, most people who don't want a JVM language "because it is slow" tend to take this as a principle, and when you ask why, their first answer is "because it's interpreted". I would say they are stuck in the 90s, but probably they just don't know and repeat something they have heard.
Similar to someone who would say "I use Gentoo because Ubuntu sucks: it is super slow". I have many reasons to like Gentoo better than Ubuntu as my main distro, but speed isn't one in almost all cases.
twic 8 hours ago [-]
The JVM is excellent for throughput, once the program has warmed up, but it always has much more jitter than a more systemsy language like C++ or Rust. There are definitely use cases where you need to consistently react fast, where Java is not a good choice.
It also struggles with numeric work involving large matrices, because there isn't good support for that built into the language or standard library, and there isn't a well-developed library like NumPy to reach for.
peterashford 11 hours ago [-]
You think the JVM is slow?
mceachen 11 hours ago [-]
IME large linear algebra algos run like molasses in a jvm compared to compiled solutions. You're always fighting the gc.
za3faran 10 hours ago [-]
Do you have any benchmarks to show, out of curiosity?
light_hue_1 9 hours ago [-]
Ok. But we have plenty of C libraries to bind to for that.
They're far slower in Python but that hasn't stopped anyone.
bluGill 10 hours ago [-]
Depends. JVM is fast once hotspot figures things out - but that means the first level is slow and you lose your users.
vips7L 7 hours ago [-]
You can always load JIT caches if you can’t wait for warm up.
artursapek 10 hours ago [-]
Install Gentoo
palata 10 hours ago [-]
As I said, I use Gentoo already ;-).
gerdesj 9 hours ago [-]
Quite.
I was a Gentoo user (daily driver) for around 15 years but the endless compilation cycles finally got to me. It is such a shame because as I started to depart, Gentoo really got its arse in gear with things like user patching etc and no doubt is even better.
It has literally (lol) just occurred to me that some sort of dual partition thing could sort out my main issue with Gentoo.
@system could have two partitions - the running one and the next one that is compiled for and then switched over to on a reboot. @world probably ought to be split up into bits that can survive their libs being overwritten with new ones and those that can't.
Errrm, sorry, I seem to have subverted this thread.
fc417fc802 7 hours ago [-]
You have approximately described guix.
curt15 8 hours ago [-]
Gentoo Silverblue?
VWWHFSfQ 11 hours ago [-]
Rust is very easy when you want to do easy things. You can actually just completely avoid the borrow-checker altogether if you want to. Just .clone(), or Arc/Mutex. It's what all the other languages (like Go or Java) are doing anyway.
But if you want to do a difficult and complicated thing, then Rust is going to raise the guard rails. Your program won't even compile if it's unsafe. It won't let you make a buggy app. So now you need to back up and decide if you want it to be easy, or you want it to be correct.
Yes, Rust is hard. But it doesn't have to be if you don't want it to be.
WD-42 10 hours ago [-]
This argument only goes so far. Would you consider querying a database hard? Most developers would say no. But it's actually a pretty hard problem, if you want to do it safely. In Rust, that difficulty leaks into the crates. I have a project that uses Diesel, and making even a single composable query is a tangle of uppercase Type soup.
This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.
I love Rust. But saying it’s only hard if you are doing hard things is an oversimplification.
izacus 26 minutes ago [-]
> This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.
Most languages used with DBs are just as safe. This propaganda about Rust being more safe than languages with GC needs a rather big [Citation Needed] by the fans.
ben-schaaf 9 hours ago [-]
Building a proper ORM is hard. Querying a database is not. See the postgres crate for an example.
Querying a database while ensuring type safety is harder, but you still don't need an ORM for that. See sqlx.
dudinax 10 hours ago [-]
My feeling is that rust makes easy things hard and hard things work.
goku12 51 minutes ago [-]
I'm not going to deny your experience. But is Rust really that hard? It's a very smooth experience for me - sometimes enough for me to choose it instead of Python.
I know that the compiler complains a lot. But I code with the help of realtime feedback from tools like the language server (rust-analyzer) and bacon. It feels like 'debug as you code'. And I really love the hand holding it does.
palata 10 hours ago [-]
If you use Rust with `.clone()` and Arc/Mutex, why not just using one of the myriad of other modern and memory safe languages like Go, Scala/Kotlin/Java, C#, Swift?
The whole point of Rust is to bring memory safety with zero cost abstraction. It's essentially bringing memory safety to the use-cases that require C/C++. If you don't require that, then a whole world of modern languages becomes available :-).
mtndew4brkfst 7 hours ago [-]
For me personally, doing the clone-everything style of Rust for a first pass means I still have a graceful incremental path to go pursue the harder optimizations that are possible with more thoughtful memory management. The distinction is that I can do this optimization pass continuing to work in Rust rather than considering, and probably discarding, a potential rewrite to a net-new language if I had started in something like Ruby/Python/Elixir. FFI to optimize just the hot paths in a multi-language project has significant downsides and tradeoffs.
Plus in the meantime, even if I'm doing the "easy mode" approach I get to use all of the features I enjoy about writing in Rust - generics, macros, sum types, pattern matching, Result/Option types. Many of these can't be found all together in a single managed/GC'd language, and the list of those that I would consider viable for my personal or professional use is quite sparse.
lostb1t 5 minutes ago [-]
Agree with this; I enjoy Rust and use the same approach.
People say Rust is harsh. I would say it's not that much harder than other languages, just more verbose and demanding.
echelon 11 hours ago [-]
Rust is actually quite suitable for a number of domains where it was never intended to excel.
Writing web service backends is one domain where Rust absolutely kicks ass. I would choose Rust/(Actix or Axum) over Go or Flask any day. The database story is a little rough around the edges, but it's getting better and SQLx is good enough for me.
edit: The downvoters are missing out.
palata 10 hours ago [-]
To me, web dev really sounds like the one place where everything works and it's more a question of what is in fashion. Java, Ruby, Python, PHP, C, C++, Go, Rust, Scala, Kotlin, probably even Swift? And of course NodeJS was made for that, right?
I am absolutely convinced I can find success story of web backends built with all those languages.
goku12 28 minutes ago [-]
There are 3 cases. The first is that you are comfortable with Rust and you just choose it for that. The second is that you're not comfortable with Rust and you choose something else that works for you.
The third is the interesting one. When your service has a lot of traffic and every bit of inefficiency costs you money (node rents) and energy. Rust is an obvious improvement over the interpreted languages. There are also a few rare cases where Rust has enough advantages over Go to choose the former. In general though, I feel that a lot of energy consumption and emissions can be avoided by choosing an appropriate language like Rust and Go.
This would be a strong argument in favor of these languages in the current environmental conditions, if it weren't for 'AI'. Whether it be to train them or run them, they guzzle energy even for problems that could be solved with a search engine. I agree that LLMs can do much more. But I don't think they do enough for the energy they consume.
echelon 9 hours ago [-]
Perhaps. But a comparable Rust backend stack produces a single binary deployable that can absorb 50,000 QPS with no latency caused by garbage collection. You get all of that for free.
The type system and package manager are a delight, and writing with sum types results in code that is measurably more defect free than languages with nulls.
aquariusDue 7 hours ago [-]
Yep, that's precisely it! When dealing with other languages I miss the "match" keyword and being able to open a block anywhere. Sure, sometimes Rust allows you to write terse abominations if you don't exercise a dose of caution and empathy for future maintainers (you included).
Other than the great developer experience in tooling and language ergonomics (as in coherent features not necessarily ease of use) the reason I continue to put up with the difficulties of Rust's borrow checker is because I feel I can work towards mastering one language and then write code across multiple domains AND at the end I'll have an easy way to share it, no Docker and friends needed.
But I don't shy away from the downsides. Rust loads the cognitive burden at the ends. Hard as hell in the beginning when learning it and most people (me included) bounce from it for the first few times unless they have C++ experience (from what I can tell). At the middle it's a joy even when writing "throwaway" code with .expect("Lol oops!") and friends. But when you get to the complex stuff it becomes incredibly hard again because Rust forces you to either rethink your design to fit the borrow checker rules or deal with unsafe code blocks which seem to have their own flavor of C++ like eldritch horrors.
Anyway, would *I* recommend Rust to everyone? Nah, Go is a better proposition as the most bang-for-your-buck language, tooling and ecosystem, UNLESS you're the kind that likes to deal with complexity for the fulfilled promise of one language for almost anything. In even simpler terms: Go is good for most things, Rust can be used for everything.
Also stuff like Maud and Minijinja for Rust are delights on the backend when making old fashioned MPA.
Thanks for coming to my TED talk.
tonyedgecombe 1 hours ago [-]
>Anyway, would I recommend Rust to everyone?
For me it's a question of whether I can get away with garbage collection. If I can then pretty much everything else is going to be twice as productive but if I can't then the options are quite limited and Rust is a good choice.
vips7L 7 hours ago [-]
What language are you using that doesn’t have match? Even Java has the equivalent. The only ones I can think of that don’t are the scripting languages.. Python and JS.
tonyedgecombe 1 hours ago [-]
Does Java have sum types now?
icantcode 7 hours ago [-]
Yeah, anything with nulls ends up with Option<this> and Option<that>, which means unwraps or matches. There is a comment above about good bedrock: Rust works OK with nulls, but it works really well with non-sparse databases (avoiding joins).
ajross 9 hours ago [-]
Yeah, "web services backend" really means "code exercising APIs pioneered by SunOS in 1988". It's easy to be rock solid if your only dependency is the bedrock.
jokethrowaway 7 hours ago [-]
The bar for web services is low, so pretty much anything works as long as it's easy. I wouldn't call them a success story.
When things get complex, you start missing Rust's type system and bugs creep in.
In Node.js there was a notable improvement when TS became the de facto standard, and API development improved significantly (if you ignore the poor tooling, transpiling, building, and TS being too slow). It's still far from perfect, because TS has too many escape hatches and you can't fully trust TS code; with Rust, if it compiles and there is no unsafe (which is rarely a problem in web services), you get a lot of compile-time guarantees for free.
benwilber0 10 hours ago [-]
Tokio + Axum + SQLx has been a total game-changer for me for web dev. It's by far the most productive I've been with any backend web stack.
echelon 9 hours ago [-]
People that haven't tried this are downvoting with prejudice, but they just don't know.
Rust is an absolute gem at web backend. An absolute fucking gem.
efnx 10 hours ago [-]
I think this is a problem of using the right abstractions.
Rust gamedev is the Wild West, and frontier development incurs the frontier tax. You have to put a lot of work into making an abstraction, even before you know if it’s the right fit.
Other “platforms” have the benefit of decades more work sunk into finding and maintaining the right abstractions. Add to that the fact that Rust is an ML in sheep’s clothing, and that games and UI in FP have never been a solved problem (or had much investment), and it’s no wonder Rust isn’t ready. We haven’t even agreed on the best solutions to many of these problems in FP generally, let alone in Rust specifically!
Anyway, long story short, it takes a very special person to work on that frontier, and shipping isn’t their main concern.
klabb3 11 hours ago [-]
The fact that people love the language is an unexpected downside. In my experience the rust ecosystem has an insanely high churn rate. Crates are often abandoned seemingly for no reason, often before even hitting 1.0. My theory is this is because people want to use rust primarily, the domain problem is just a challenge, like a level in a game. Once all the fun parts are solved, they leave it for dead.
Conversely and ironically, this is why I love Go. The language itself is so boring and often ugly, but it just gets out of the way and has the best in class tooling. The worst part is having seen the promised land of eg Rust enums, and not having them in other langs.
meindnoch 11 hours ago [-]
This.
Feeling passionate about a programming language is generally bad for the products made with that language.
frontfor 3 hours ago [-]
Agreed. For the same reason I unironically prefer Java, Go, C++, JS/TS to solve real problems.
bmitc 11 hours ago [-]
I find it interesting how the software industry has done everything it can to ignore F#. This is me just lamenting how I always come back to it as the best general purpose language.
andrewflnr 5 hours ago [-]
Probably the intersection of people who (a) want an advanced ML-style language and (b) are interested in a CLR-based language is very small. But also, doesn't it do some weird thing where it matters in what order the files are included in the compilation? I remember being interested in F# but being turned off by that, and maybe some other weird details.
frontfor 3 hours ago [-]
I don’t want to use a language with an unknown ecosystem. If I need a library to do X, I’m confident I can find it for Go, Java, Python, etc. But I don’t know about F#.
I also don’t want to use a language with questionable hireability.
Blackcatmaxy 2 hours ago [-]
Haven't used F# too much myself, but one of its strong points is that it shares the CLR with C#: you can use any of the many packages meant for C#, and they'll work thanks to the shared runtime.
klabb3 8 hours ago [-]
Huh? Usually languages that are ”ignored” turn out to be ignored for reasons such as poor or proprietary tooling. As an ignorant bystander, how are things like cross-compilation, package management and its associated infrastructure, async IO (epoll, io_uring, etc.), platform support, runtime requirements, FFI support, language servers, and so on?
Are a majority of these available with first-party (or best-in-class) integrated tooling that is trivial to set up on all big three desktop platforms?
For instance, can I compile an F# lib to an iOS framework, ideally with automatically generated bindings for C, C++ or Objective C? Can I use private repo (ie github) urls with automatic overrides while pulling deps?
Generally, the answer to these questions for – let’s call it ”niche” asterisk – languages is ”there is a GitHub project with 15 stars last updated 3 years ago that maybe solves that problem”.
There are tons of amazing languages (or at the very least, underappreciated language features) that didn’t ”make it” because of these boring reasons.
My entire point is that the older and grumpier I get, the less the language itself matters. Sure, I hate it when my favorite elegant feature is missing, but at the end of the day it’s easy to work around. IMO the navel gazing and bikeshedding around languages is vastly overhyped in software engineering.
andrewflnr 5 hours ago [-]
It's been around for a long time and sponsored by Microsoft. I don't know its exact status, but the only reason for it to lack in any of those areas is lack of will.
zxvkhkxvdvbdxz 7 hours ago [-]
The F# compiler is cross-OS and allows cross-compilation (dotnet build --runtime xxx); it's packaged in most Linux distros as dotnet.
klabb3 6 hours ago [-]
Ok that helps! So where does F# shine? Any particular domains?
lynndotpy 13 hours ago [-]
I love Rust, but this lines up with my experience roughly. Especially the rapid iteration. Tried things out with Bevy, but I went back to Godot.
There are so many QoL things which would make Rust better for gamedev without revamping the language. Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev. But that's a really hard sell (and might be harder to implement than I imagine.)
ChadNauseam 12 hours ago [-]
I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!
GHC has an -fdefer-type-errors option that lets you compile and run this code:
    a :: Int
    a = 'a'
    main = print "b"
Which obviously doesn't typecheck since 'a' is not an Int, but will run just fine since the value of `a` is not observed by this program. (If it were observed, -fdefer-type-errors guarantees that you get a runtime panic when it happens.) This basically gives you the no-types Python experience when iterating, then you clean it all up when you're done.
This would be even better in cases where it can be automatically fixed. Just like how `cargo clippy --fix` will automatically fix lint errors whenever it can, there's no reason it couldn't also add explicit coercions of numeric types for you.
ninkendo 4 hours ago [-]
> I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!
I’d go even further and say I wish my whole development stack had a switch I can use to say “I’m not done iterating on this idea yet, cool it with the warnings.”
Unused imports, I’m looking at you… stop bitching that I’m not using this import line simply because I commented out the line that uses it in order to test something.
Stop complaining about dead code just because I haven’t finished wiring it up yet, I just want to unit test it before I go that far.
Stop complaining about unreachable code because I put a quick early return line in this function so that I could mock it to chase down this other bug. I’ll get around to fixing it later, I’m trying to think!
In rust I can go to lib.rs somewhere and #![allow(unused_imports,dead_code,etc)] and then remember to drop it by the time I get the branch ready for review, but that’s more cumbersome than it ought to be. My whole IDE/build/other tooling should have a universal understanding of “this is a work in progress please let me express my thoughts with minimal obstructions” mode.
zaptheimpaler 11 hours ago [-]
Yeah this is my absolute dream language. Something that lets you prototype as easily as Python but then compile as efficiently and safely as Rust. I thought Rust might actually fit the bill here and it is quite good but it's still far from easy to prototype in - lots of sharp edges with say modifying arrays while iterating, complex types, concurrency. Maybe Rust can be something like this with enough unsafe but I haven't tried. I've also been meaning to try more Typescript for this kind of thing.
FacelessJim 10 hours ago [-]
You should give Julia a shot.
That’s basically that. You can start with super dynamic code in a REPL and gradually hammer it into stricter and hyper efficient code. It doesn’t have a borrow checker, but it’s expressive enough that you can write something similar as a package (see BorrowChecker.jl).
jimbokun 11 hours ago [-]
Some Common Lisp implementations like SBCL have supported this style of development for many years. Everything is dynamically typed by default but as you specify more and more types the compiler uses them to make the generated code more efficient.
fc417fc802 7 hours ago [-]
I quite like common lisp but I don't believe any existing implementation gets you anywhere near the same level of compile time safety. Maybe something like typed racket but that's still only doing a fraction of what rust does.
myaccountonhn 10 hours ago [-]
I think OCaml could be such a language, personally. It's like Rust-lite, or a functional Go.
cantrecallmypwd 9 hours ago [-]
Xen and Wall St. folks use it.
tetha 12 hours ago [-]
Yeah, I tinkered for around a year with a Bevy competitor, Amethyst, until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.
In my book, Rust is good at moving runtime-risk to compile-time pain and effort. For the space of C-Code running nuclear reactors, robots and missiles, that's a good tradeoff.
For the space of making an enemy move the other direction of the player in 80% of the cases, except for that story choice, and also inverted and spawning impossible enemies a dozen times if you killed that cute enemy over yonder, and.... and the worst case is a crash of a game and a revert to a save at level start.... less so.
And these are very regular requirements in a game, tbh.
And a lot of _very_silly_physics_exploits_ are safely typed float interactions going entirely nuts, btw. Type safety doesn't help there.
zxvkhkxvdvbdxz 7 hours ago [-]
> Yeh, I've been tinkering around a year with a Bevy-competitor, Amethyst until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.
I don't think your experience with Amethyst merits your conclusion about the state of gamedev in Rust, especially given Amethyst's own take on Bevy [1, 2].
> Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev.
C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.
lynndotpy 7 hours ago [-]
I have written a lot of C# and I would very much not want to use it for gamedev either. I can only speak for my own personal preference.
__loam 12 hours ago [-]
I used to hate the language, but statically typed GDScript feels like the perfect weight for indie development.
IshKebab 12 hours ago [-]
Yeah I haven't really used it much but from what I've seen it's kind of what Python should have been. Looks way better than Lua too.
__loam 12 hours ago [-]
I like it better than python now, but it's still got some quirks. The lack of structs and typed callables are the biggest holes right now imo but you can work around those
Seattle3503 12 hours ago [-]
What numeric types typically need conversions?
koakuma-chan 12 hours ago [-]
The fact you need a usize specifically to index an array (and most collections) is pretty annoying.
anticrymactic 12 hours ago [-]
This could be different in game dev, but in the last years of writing rust (outside of learning the language) I very rarely need to index any collection.
There is a very certain way Rust is supposed to be used, which is a negative on its own, but it will lead to a fulfilling and productive programming experience. (My opinion.) If you need to regularly index something, then you're using the language wrong.
bunderbunder 12 hours ago [-]
I'm no game dev but I have had friends who do it professionally.
Long story short, yes, it's very different in game dev. It's very common to pre-allocate space for all your working data as large statically sized arrays because dynamic allocation is bad for performance. Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely), be more cache-friendly, and make it much easier to make efficient use of SIMD instructions.
This is also fairly common in scientific computing (which is more my wheelhouse), and for the same reason: it's good for performance.
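For illustration, a minimal sketch of that parallel-array layout, with hypothetical field names and a fixed capacity chosen up front:

    /* One array per field instead of an array of structs. A pass that only
       needs positions and velocities streams through exactly those arrays,
       densely packed, and the loop is easy for compilers to vectorize. */
    #define MAX_ENTITIES 4096

    static float pos_x[MAX_ENTITIES];
    static float pos_y[MAX_ENTITIES];
    static float vel_x[MAX_ENTITIES];
    static float vel_y[MAX_ENTITIES];
    static int   entity_count;

    void integrate(float dt) {
        for (int i = 0; i < entity_count; i++) {
            pos_x[i] += vel_x[i] * dt;
            pos_y[i] += vel_y[i] * dt;
        }
    }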
Pet_Ant 12 hours ago [-]
> Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely), be more cache-friendly, and make it much easier to make efficient use of SIMD instructions.
That seems like something that could very easily be turned into a compiler optimisation and enabled with something like an annotation. It would have some issues when calling across library boundaries (a lot like the handling of gradual types), but within a codebase that'd be easy.
crq-yml 9 hours ago [-]
The underlying issue with game engine coding is that the problem is shaped in this way:
* Everything should be random access(because you want to have novel rulesets and interactions)
* It should also be fast to iterate over per-frame(since it's real-time)
* It should have some degree of late-binding so that you can reuse behaviors and assets and plug them together in various ways
* There are no ideal data structures to fulfill all of this across all types of scene, so you start hacking away at something good enough with what you have
* Pretty soon you have some notion of queries and optional caching and memory layouts to make specific iterations easier. Also it all changes when the hardware does.
* Congratulations, you are now the maintainer of a bespoke database engine
You can succeed at automating parts of it, but note that parent said "oftentimes", not "always". It's a treadmill of whack-a-mole engineering, just like every other optimizing compiler; the problem never fully generalizes into a right answer for all scenarios. And realistically, gamedevs probably haven't come close to maxing out what is possible in a systems-level sense of things since the 90's. Instead we have a few key algorithms that go really fast and then a muddle of glue for the rest of it.
rcxdude 9 hours ago [-]
It's not at all easy to implement as an optimisation, because it changes a lot of semantics, especially around references and pointers. It is something that you can e.g. implement using rust procedural macros, but it's far from transparent to switch between the two representations.
(It's also not always a win: it can work really well if you primarily operate on the 'columns', and on each column more or less once per update loop, but otherwise you can run into memory bandwidth limitations. For example, games with a lot of heavily interacting systems and an entity list that doesn't fit in cache will probably be better off with trying to load and update each entity exactly once per loop. Factorio is a good example of a game which is limited by this, though it is a bit of an outlier in terms of simulation size.)
bunderbunder 11 hours ago [-]
Meh. I've tried "SIMD magic wand" tools before, and found them to be verschlimmbessern (German for improvements that make things worse).
At least on the scientific computing side of things, having the way the code says the data is organized match the way the data is actually organized ends up being a lot easier in the long run than organizing it in a way that gives frontend developers warm fuzzies and then doing constant mental gymnastics to keep track of what the program is actually doing under the hood.
I think it's probably like sock knitting. People who do a lot of sock knitting tend to use double-pointed needles. They take some getting used to and look intimidating, though. So people who are just learning to knit socks tend to jump through all sorts of hoops and use clever tricks to allow them to continue using the same kind of knitting needles they're already used to. From there it can go two ways: either they get frustrated, decide sock knitting is not for them, and go back to knitting other things; or they get frustrated, decide magic loop is not for them, and learn how to use double-pointed needles.
djmips 11 hours ago [-]
Very much agree and love your analogy but there is a third option - make a sock knitting machine.
nonameiguess 12 hours ago [-]
I'm not a game dev, but what's a straightforward way of adjusting some channel of a pixel at coordinate X,Y without indexing the underlying raster array? Iterators are fine when you want to perform some operation on every item in a collection but that is far from the only thing you ever might want to do with a collection.
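For what it's worth, the straightforward way is exactly the index arithmetic being alluded to here; assuming a row-major 8-bit RGBA buffer (the function and parameter names are illustrative, not from any particular API):

    #include <stdint.h>
    #include <stddef.h>

    /* Set one channel of the pixel at (x, y): each pixel is 4 bytes and
       rows are width pixels long, so the flat index is computed directly. */
    void set_channel(uint8_t *pixels, size_t width, size_t x, size_t y,
                     size_t channel /* 0=R 1=G 2=B 3=A */, uint8_t value) {
        pixels[(y * width + x) * 4 + channel] = value;
    }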
maccard 10 hours ago [-]
Game dev here. If you’re concerned about performance the only answer to this is a pixel shader, as anything else involves either cpu based rendering or a texture copy back and forth.
fc417fc802 7 hours ago [-]
A compute shader could update some subset of pixels in a texture. It's on the programmer to prevent race conditions though. However that would again involve explicit indexing.
In general I think GP is correct. There is some subset of problems that absolutely requires indexing to express efficiently.
spookie 57 minutes ago [-]
You can manipulate texture coordinate derivatives in order to just sample a subset of the whole texture on a pixel shader and only shade those pixels (basically the same as mipmapping, but you can have the "window" wherever you want really).
This is something you can't do on a compute shader, given you don't have access to the built-in derivative methods (building your own won't be cheaper either).
Still, if you want those changes to persist, a compute shader would be the way to go. You _can_ do it using a pixel shader but it really is less clean and more hacky.
ChadNauseam 12 hours ago [-]
This is getting downvoted but it's kind of true. Indexing collections all the time usually means you're not using iterators enough. (Although iterators become very annoying for fallible code that you want to return a Result, so sometimes it's cleaner not to use them.)
However this problem does still come up in iterator contexts. For example Iterator::take takes a usize.
bunderbunder 12 hours ago [-]
An iterator works if you're sequentially visiting every item in the collection, in the order they're stored. It's terrible if you need random access, though.
Concrete example: pulling a single item out of a zip file, which supports random access, is O(1). Pulling a single item out of a *.tar.gz file, which can only be accessed by iterating it, is O(N).
cantrecallmypwd 8 hours ago [-]
History lesson for the cheap seats in the back:
Compressed tars are terrible for random access because the compression occurs after the concatenation and so knows nothing about inner file metadata, but they are good for streaming and backups. Uncompressed tars are much better for random access. (Tar was used as a backup mechanism for tape; "tar" is short for tape archive.)
Zips are terrible for streaming because their metadata is stored at the end, but are better for 1-pass creation and on-disk random access. (Remember that zip files and programs were created in an era of multiple floppy disk-based backups.)
When fast tar enumeration is desired, at the cost of compatibility and compression potential, it might be worth compressing files and then tarring them, when and if zipping alone isn't achieving enough compression and/or decompression performance. FUSE mounting of compressed tars gets really expensive with terabyte archives.
fc417fc802 7 hours ago [-]
> compressing files and then taring them
Just use squashfs if that is the functionality that you need.
kevincox 12 hours ago [-]
While you maybe "shouldn't" be indexing collections often (which I also don't agree with; there is a reason we have more collections than linked lists, and lookup is important), even just getting the size of a collection, which is often very relevant to business logic, can be quite annoying.
AndrewDucker 11 hours ago [-]
For data that needs to be looked up, mostly I want a hashtable. Not always, but mostly. It's rare that I want to look something up by its position in a list.
Starlevel004 11 hours ago [-]
The actual problem with this is how to add it without breaking type inference for literal numbers.
lynndotpy 10 hours ago [-]
What I mean is, I want to be able to use i32/i64/u32/u64/f32/f64s interchangeably, including (and especially!) in libraries I don't own.
I'm usually working with positive values, and almost always with values within the range of integers f32 can safely represent (+- 16777216.0).
I want to be able to write `draw(x, y)` instead of `draw(x as u32, y as u32)`. I want to write "3" instead of "3.0". I want to stop writing "as".
It sounds silly, but it's enough to kill that gamedev flow loop. I'd love if the Rust compiler could (optionally) do that work for me.
One of the smartest devs I know built his game from scratch in C. Pretty complex game too - 3D open-world management game. It's now successful on steam.
Thing is, he didn't make the game in C. He built his game engine in C, and the game itself in Lua. The game engine is specific to this game, but there's a very clear separation where the engine ends and the game starts. This has also enabled amazing modding capabilities, since mods can do everything the game itself can do. Yes they need to use an embedded scripting language, but the whole game is built with that embedded scripting language so it has APIs to do anything you need.
I agree that the game is amazing from a technical point of view, but look at the reviews and the pace of development. The updates are sparse and slow, and when there is an update, it's barely an improvement. This is one of the disadvantages of creating a game engine from scratch: more time is spent on the engine than the game itself, which may or may not be bad depending on which perspective you look at it from.
pnathan 6 hours ago [-]
This confused me as well. The scripting / engine divide is old and long standing.
ryao 8 hours ago [-]
Do you know why he supports MacOS, but not Linux?
Rohansi 7 hours ago [-]
Most likely because they don't use Linux. Or because it's kind of a minefield to support, with bugs that occur on different distros. Even Unity has its own struggles with Linux support.
They're distributing their game on Steam too so Linux support is next to free via Proton.
fc417fc802 7 hours ago [-]
> it's kind of a mine field to support with bugs that occur on different distros
Non-issue. Pick a single blessed distro. Clearly state that it's the only configuration that you officially support. Let the community sort the rest out.
iFire 6 hours ago [-]
It probably supports Linux via Proton. Done. That was the official Valve recommendation a few years ago; not sure if it still stands.
nu11ptr 13 hours ago [-]
I did the same for my project and moved to Go from Rust. My iteration is much faster, but the code a bit more brittle, esp. for concurrency. Tests have become more important.
Still, given the nature of what my project is (APIs and basic financial stuff), I think it was the right choice. I still plan to write about 5% of the project in Rust and call it from Go, if required, as there is a piece of code that simply cannot be fast enough, but I estimate for 95% of the project Go will be more than fast enough.
klabb3 11 hours ago [-]
> but the code a bit more brittle, esp. for concurrency
Obligatory ”remember to `go run -race`”, that thing is a life saver. I never run into difficult data races or deadlocks and I’m regularly doing things like starting multiple threads to race with cancelation signals, extending timeouts etc. It’s by far my favorite concurrency model.
nu11ptr 10 hours ago [-]
Yep, I do use that, but after getting used to Rust's Send/Sync traits it feels wild and crazy there are no guardrails now on memory access between threads. More a feel thing than reality, but I just find I need to be a bit more careful.
akkad33 12 hours ago [-]
Is calling Rust from Go fast? Last time I checked the interface between C and Go is very slow
nu11ptr 11 hours ago [-]
No, it is not all that fast after the CGo call marshaling (Rust would need to compile to the C ABI). I would essentially call in to Rust to start the code, run it in its own thread pool and then call into Rust again to stop it. The time to start and stop don't really matter as this is code that runs from minutes to hours and is embarrassingly parallel.
spiffyk 12 hours ago [-]
I have no experience with FFI between C and Go, could anyone shed some light on this? They are both natively compiled languages – why would calls between them be much slower than any old function call?
atombender 9 hours ago [-]
There are two reasons:
• Go uses its own custom ABI and resizeable stacks, so there's some overhead to switching where the "Go context" must be saved and some things locked.
• Go's goroutines are a kind of preemptive green thread where multiple goroutines share the same OS thread. When calling C, the goroutine scheduler must jump through some hoops to ensure that this caller doesn't stall other goroutines on the same thread.
Calling C code from Go used to be slow, but over the last 10 years much of this overhead has been eliminated. In Go 1.21 (which came with major optimizations), a C call was down to about 40ns [1]. There are now some annotations you can use to further help speed up C calls.
And a P/Invoke call can be as cheap as a direct C call, at 1-4 ns.
In Unity, Mono and/or IL2CPP's interop mechanism also ends up in the ballpark of direct call cost.
fsmv 11 hours ago [-]
There's some type translation and the Go runtime needs to turn some things off before calling out to C
dralley 12 hours ago [-]
Rust is no different from C in that respect.
dangoodmanUT 12 hours ago [-]
it's reasonably fast now
palata 11 hours ago [-]
> I still plan to write about 5% of the project in Rust and call it from Go, if required
And chances are that it won't be required.
ryanisnan 13 hours ago [-]
This seems like the right call. When it comes to projects like these, efficiency is almost everything. Speaking about my own experiences, when I hit a snag in productivity in a project like this, it's almost always a death-knell.
I too have a hobby-level interest in Rust, but doing things in Rust is, in my experience, almost always just harder. I mean no slight to the language, but this has universally been my experience.
mikepurvis 12 hours ago [-]
The advantages of correctness, memory safety, and a rich type system are worth something, but I expect it's a lot less when you're up against the value of a whole game design ecosystem with tools, assets, modules, examples, documentation, and ChatGPT right there to tell you how it all fits together.
Perhaps someday there will be a comparable game engine written in Rust, but it would probably take a major commercial sponsor to make it happen.
ryanisnan 12 hours ago [-]
One of the challenges I never quite got over completely, was that I was always fighting rust fundamentals, which tells me I never fully assimilated into thinking like a rustacean.
This was more of a me-problem, but I was constantly having to change my strategy to avoid fighting the borrow-checker, manage references, etc. In any case, it was a productivity sink.
mikepurvis 11 hours ago [-]
I bet, and that's particularly difficult when so much of modern game dev is just repeating extremely well-worn patterns— moving entities around and providing for scripted and emergent interactions between those entities and the player(s).
That's not to say that games aren't a very cool space to be in, but the challenges have moved beyond the code. Particularly in the indie space, for 10+ years it's been all about story, characters, writing, artwork, visual identity, sound and music design, pacing, unique gameplay mechanics, etc. If you're making a game in 2025 and the hard part is the code, then you're almost certainly doing it wrong.
peterashford 10 hours ago [-]
This was my experience with Rust. I've bounced off it a few times and I think I've decided its just not for me.
bionhoward 6 hours ago [-]
Personally, I don’t think of it as fighting, more like “compiler assistance” —
you want to make some change, so you adjust a struct or a function signature, and then your IDE highlights all the places where changes are necessary with red squigglies.
Once you’re done playing whack-a-mole with the red squigglies, and tests pass, you know there’s no weird random crash hiding somewhere
wavemode 12 hours ago [-]
It is a question of tradeoffs. Indie studios should be happy to trade off some performance in exchange for more developer productivity (since performance is usually good enough anyway in an indie game, which usually don't have millions of entities, meanwhile developer productivity is a common failure point).
ChadNauseam 12 hours ago [-]
I love Bevy, but Unity is a weapon when it comes to quickly iterating and making a game. I think the Bevy developers understand that they have a long way to go before they get there. The benefits of Bevy (code-first, Rust, open source) still make me prefer it over Unity, but Unity is ridiculously batteries-included.
Many of the negatives in the post are positives to me.
> Each update brought with it incredible features, but also a substantial amount of API thrash.
This is highly annoying, no doubt, but the API now is just so much better than it used to be. Keeping backwards compatibility is valuable once a product is mature, but like how you need to be able to iterate on your game, game engine developers need to be able to iterate on their engine. I admit that this is a debuff to the experience of using Bevy, but it also means that the API can actually get better (unlike Unity which is filled with historical baggage, like the Text component).
noelwelsh 11 hours ago [-]
Not a game dev, but thought I'd mess around with Bevy and Rust to learn a bit more about both. I was surprised that my code crashed at runtime due to basics I expected the type system to catch. The fancy ECS system may be great for AAA games, but it breaks the basic connections between data and use that type systems rely on. I felt that Bevy was, unfortunately, the worst of both worlds: slow iteration without safety.
rellfy 7 hours ago [-]
I've always liked the concept of ECS, but I agree with this, although I have very limited experience with Bevy. If I were to write a game in Rust, I would most likely not choose ECS and Bevy, for two reasons: 1. Bevy will have lots of breaking changes, as pointed out in the post, and 2. ECS is almost always not required -- you can make performant games without ECS, and with your own engine you retain full control over breaking changes and API design compromises.
I think all posts I have seen regarding migrating away from writing a game in Rust were using Bevy, which is interesting. I do think Bevy is awesome and great, but it's a complex project.
ezekiel68 9 hours ago [-]
This is a personal project that had the specific goal of the person's brother, who was not a coder, being able to contribute to the project. On top of that, they felt the need to continuously upgrade to the latest version of the underlying game engine instead of locking to a version.
I have worked as a professional dev at game studios many would recognize. Those studios which used Unity didn't even upgrade Unity versions often unless a specific breaking bug got fixed. Same for those studios which used DirectX. Often a game shipped with a version of the underlying tech that was hard locked to something several years old.
The other points in the article are all valid, but the two factors above held the greatest weight as to why the project needed to switch (and the article says so -- it was an API change in Bevy that was "the straw that broke the camel's back").
A friend of mine wrote an article 25+ years ago about using C++-based scripting (it compiled to C++). My friend is a super smart engineer, but I don't think he was thinking of those poor scripters who would have to wait on iteration times. Granted, 25 years ago the teams were small, but nowadays the number of scripters you would have on a AAA game is probably a dozen, if not two or three dozen or even more!
Imagine all of them waiting on compile... Or trying to deal with correctness, etc.
k__ 13 hours ago [-]
Good for them.
From a dev perspective, I think, Rust and Bevy are the right direction, but after reading this account, Bevy probably isn't there yet.
For a long time, Unity games felt sluggish and bloated, but somehow they got that fixed. I played some games lately that run pretty smoothly on decade old hardware.
byearthithatius 13 hours ago [-]
Love to have this comparison analysis. Huge LOC difference between Rust and C# (64k -> 17k!!!), though I am sure that is mostly due to access to additional external libraries that did things they had written by hand in Rust.
bob1029 12 hours ago [-]
> I am sure that is mostly access to additional external libraries that did things they wrote by hand in Rust
This is the biggest reason I push for C#/.NET in "serious business" where concerns like auditing and compliance are non-negotiable aspects of the software engineering process. Virtually all of the batteries are included already.
For example, which 3rd party vendors we use to build products is something that customers in sectors like banking care deeply about. No one is going to install your SaaS product inside their sacred walled garden if it depends on parties they don't already trust or can't easily vet themselves. Microsoft is a party that virtually everyone can get on board with in these contexts. No one has to jump through a bunch of hoops to explain why the bank should trust System or Microsoft namespaces. Having ~everything you need already included makes it an obvious choice if you are serious about approaching highly sensitive customers.
bunderbunder 12 hours ago [-]
I worked in a regulated space at one time, and my understanding is that this is a big reason they chose .NET over Java. Java relies a lot more on third-party libraries, which makes getting things certified harder.
Log4shell was a good example of a relative strength of .NET in this area. If a comparable bug had happened in .NET's standard logging tooling, we likely would have seen all of the first-party .NET framework patched fairly shortly after, in a single coordinated release that we could upgrade to with minimal fuss. Meanwhile, at my current job we've still got standing exceptions allowing vulnerable version of log4j in certain services because they depend on some package that still has a hard dependency on a vulnerable version, which they in turn say they can't fix yet because they're waiting on one of their transitive dependencies to fix it, and so on. We can (and do) run periodic audits to confirm that the vulnerable parts of log4j aren't being used, but being able to put the whole thing in the past within a week or two would be vastly preferable to still having to actively worry about it 5 years later.
The relative conciseness of C# code that the parent poster mentioned was also a factor. Just shooting from the hip, I'd guess that I can get the same job done in about 2/3 as much code when I'm using C# instead of Java. Assuming that's accurate, that means that with Java we'd have had 50% more code to certify, 50% more code to maintain, 50% more code to re-certify as part of maintenance...
CharlieDigital 12 hours ago [-]
Hugely underrated aspect of .NET. If a CVE surfaces, there's a team at Microsoft that owns the code and is going to patch and ship a fix.
mawadev 10 hours ago [-]
In sectors that are critical here in the EU, nobody allows C# and Microsoft, due to long-term licensing woes. It's Java and FOSS all the way down. SaaS also is not a thing unless it runs on-prem.
dgellow 9 hours ago [-]
C# and Microsoft are in all critical places in Europe. What are you talking about?
neonsunset 10 hours ago [-]
What kind of nonsense is this? EU is perfectly happy to use .NET-based languages as all of them, and the platform itself, are MIT (in fact, it's pretty popular out here).
CharlieDigital 13 hours ago [-]
C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility; in fact, I feel it enhances legibility in many cases.
> The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance.
This is also a highly underrated aspect of C#: its surface area has largely remained stable since v1, with few breaking changes (though there are some valid complaints about the keyword bloat that results!). So the historical volume of extremely well-written documentation is a boon for LLMs. While you may get outdated patterns (e.g. not using the latest language features for terseness), you will not likely get non-working code, because of the large and stable set of first-party dependencies (whereas outdated 3rd-party dependencies in Node often lead to breaking incompatibilities with the latest packages on NPM).
> It was also a huge boost to his confidence and contributed to a new feeling of momentum. I should point out that Blake had never written C# before.
Often overlooked with C# is its killer feature: productivity. Yes, when you get a "batteries included" framework and those "batteries" are quite good, you can be productive. Having a centralized repository for first party documentation is also a huge boon for productivity. When you have an extremely broad, well-written, well-organized standard library and first party libraries, it's very easy to ramp up productivity versus finding different 3rd party packages to fill gaps. Entity Framework, for example, feels miles better to me than Prisma, TypeORM, Drizzle, or any option on Node.js. Having first party rate limiting libraries OOB for web APIs is great for productivity. Same for having first party OpenAPI schema generators.
Less time wasted sifting through half-baked solutions.
> Code size shrank substantially, massively improving maintainability. As far as I can tell, most of this savings was just in the elimination of ECS boilerplate.
C# has three "super powers" to reduce code bloat: really rich runtime reflection, first-class expression trees, and Roslyn source generators that generate code on the fly. Used correctly, these can remove a lot of boilerplate and "templatey" code.
---
I make the case that many teams that outgrow JS/TS on Node.js should look to C# because of its congruence to TS[0] before Go, Java, Kotlin, and certainly not Rust.
C# has aged better, but I feel like Java 8 is approaching ANSI C levels of tool solidity. If only Swing weren't so ugly. They should poach Raymond Chen to make "Java 8 Remastered"; I like his blog posts. There's probably a DOS joke in there. Also, they should just use the JavaFX namespace so I don't have to change my code, and I want the lawyer here to laugh too.
quotemstr 11 hours ago [-]
> Java 8
Why would you use Java 8?
atombender 8 hours ago [-]
C# is a great language, but it's been hampered by slow transition towards AOT.
My understanding (not having used it much, precisely because of this) is that AOT is still quite lacking; not very performant and not so seamless when it comes to cross-platform targeting. Do you know if things have gotten better recently?
I think that if Microsoft had dropped the old .NET platform (the CLR and so on) sooner and really nailed the AOT experience, they might have had a chance at competing with Go and even Rust and C++ for some things, but I suspect that ship has sailed, as it has for languages like D and Nim.
neonsunset 8 hours ago [-]
C# (well, .NET, because that's what does JIT/AOT compilation of the bytecode) is not transitioning to AOT. NativeAOT is just one of the ways to publish .NET applications for scenarios where it is desirable. Having JIT is a huge boon to a number of scenarios too, for example it is basically impossible to implement a competitive Regex engine with JIT compilation for the patterns in Go (aside from other limitations like not having SIMD primitives).
throw_m239339 12 hours ago [-]
> C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility; in fact, I feel it enhances legibility in many cases.
C# and .NET are one of the most mature platforms for development of any kind. It's just that online, it carries some sort of anti-Microsoft stigma...
But a lot of AA or indie games are written in C# and they do fine. It's not just C++ or Rust in that industry.
People tend to be influenced by opinions online but often the real world is completely different. Been using C# for a decade now and it's one of the most productive languages I have ever used: easy to set up, powerful toolchains... and yes, a lot of closed-source libs in the .NET ecosystem, but the open source community is large too by now.
CharlieDigital 12 hours ago [-]
> People tend to be influenced by opinions online but often the real world is completely different.
Unfortunately, my experience has been that C#'s lack of popularity online translates into a lot of misunderstandings about the language and thus many teams simply do not consider it.
Some folks still think it's Windows-only. Some folks think you need to use Visual Studio. Some think it's too hard to learn. Lots of misconceptions lead to teams overlooking it for more "hyped" languages like Rust and Go.
bob1029 11 hours ago [-]
You don't need to use Visual Studio, but it really makes a difference in the overall experience.
I think there may also be some misunderstandings regarding the purchase models around these tools. Visual Studio 2022 Professional is possible to outright purchase for $500 [0] and use perpetually. You do NOT need a subscription. I've got a license key printed on paper that I can use to activate my copy each time.
Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.
> Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.
That's just the way it is, especially with startups, who I think would benefit the most from C#, because -- believe it or not -- I actually think that most startups would be able to move faster with C# on the backend than with TypeScript.
dicytea 11 hours ago [-]
> Some folks think you need to use Visual Studio
How's the LSP support nowadays? I remember reading a lot of complaints about how badly done the LSP is compared to Visual Studio.
CharlieDigital 11 hours ago [-]
Pretty good.
I started using Visual Studio Code exclusively around 2020 for C# work and it's been great. Lightweight and fast. I did try Rider and 100% it is better if you are open to paying for a license and if you need more powerful refactoring, but I find VSC to be perfectly usable and I prefer its "lighter" feel.
nh2 13 hours ago [-]
The article says it's 64k -> 17k.
byearthithatius 11 hours ago [-]
Updated, good catch haha
Ygg2 12 hours ago [-]
That's not unexpected; they went from Bevy, which is more of a game framework than a proper game engine.
I mean, you could also write about how we went from 1M lines of C# in our mostly custom engine to 10k of Unreal C++.
taylorallred 12 hours ago [-]
I love Rust and wanted to use it for gamedev, but I just had to admit to myself that it wasn't a good fit. Rust is a very good choice for user-space systems-level programming (i.e. compilers, proxies, databases, etc.). For gamedev, all of the explicitness that Rust requires around ownership/borrowing and types tends to just get in the way and not provide a lot of value. Games should be built to be fast, but the programmer should be able to focus almost completely on game logic rather than low-level details.
littlestymaar 3 hours ago [-]
Bevy solves the ownership/borrowing issues entirely with its ECS design though.
I had two groups of students (complete Rust beginners) ship a basic FPS and a tower defense as learning projects using Bevy, and their feedback was that they didn't fight the language at all.
The problem that remains is that as soon as you go from a toy game to an actual one, you realize that Bevy still has tons of work to do before it can be considered productive.
999900000999 12 hours ago [-]
Unity is still probably the best game engine for smaller games with Unreal being better for AAA.
The problem is that you make a deal with the devil. You end up shipping a binary full of phone-home spyware, and if you don't use Unity in the exact way the general license intends, they can and will try to force you into the more expensive industrial license.
However, the ease of actually shipping a game can't be matched.
Godot has a bunch of issues all over the place, a community more intent on self praise than actually building games. It's free and cool though.
I don't really enjoy Godot like I enjoy Unity, but I've been using Unity for over a decade. I might just need to get over it.
yyyk 12 hours ago [-]
GC isn't a big problem for many types of apps/games, and most games don't care about memory safety. Rust's advantages aren't so important in this domain, while its complexity remains. No surprise he prefers C# for this.
maccard 10 hours ago [-]
Disagree on both points. Anyone who has shipped a game in Unity has dealt with object pooling, flipping to structs instead of classes, avoiding string interpolation, and replacing idiomatic APIs with out parameters and reused collections.
Similarly, anyone who has shipped a game in unreal will know that memory issues are absolutely rampant during development.
But, the cure rust presents to solve these for games is worse than the disease it seems. I don’t have a magic bullet either..
pornel 5 hours ago [-]
I'm shocked that Beat Saber is written in C# & Unity. That's probably the most timing sensitive game in the world, and they've somehow pulled it off.
neonsunset 4 hours ago [-]
There's another highly timing-sensitive game - osu!, which is also written in C# (on top of a custom engine).
Rohansi 7 hours ago [-]
This is a mostly Unity-specific issue. Unity unfortunately has a potato for a GC. This is not even an exaggeration - it uses Boehm GC. Unity does not support Mono's better GC (SGen). .NET has an even better GC (and JIT) that Unity can't take advantage of because they are built on Mono still.
Other game engines exist which use C# with .NET or at least Mono's better GC. When using these engines a few allocations won't turn your game into a stuttery mess.
Just wanted to make it clear that C# is not the issue; the engine most people use, including the subject of this thread, is the main issue.
loeg 11 hours ago [-]
Not just GC -- performance in general is a total non-issue for a 2d tile-based game. You just don't need the low-level control that Rust or C++ gives you.
trealira 7 hours ago [-]
I wouldn't say it's a non-issue. I've played 2D tile-based, pixel art games where the framerate dropped noticeably with too many sprites on screen, even though it felt like a 3DS should have been able to run them, and my computer isn't super low-end, either. You have more leeway, but it's possible to make a badly optimized 2D game to the point where performance becomes an issue again.
loeg 4 hours ago [-]
These are gross, macro-level design problems; not the kind of thing where C# vs C++/Rust makes any difference.
palata 11 hours ago [-]
Except that C# is memory safe.
foderking 11 hours ago [-]
great summary
seivan 12 hours ago [-]
[dead]
_QrE 12 hours ago [-]
> I failed to fairly evaluate my options at the start of the project.
The more projects I do, the more time I find that I dedicate to just planning things up front. Sometimes it's fun to just open a game engine and start playing with it (I too have an unfair bias in this area, but towards Godot [https://godotengine.org/]), but if I ever want to build something to release, I start with a spreadsheet.
gh0stcat 12 hours ago [-]
Do you think you needed that time to play around in the engine? Can a beginner even know what to plan for if they don't fully understand the game engine itself? I am older, so I know the benefits of planning, but I sometimes find that I need to persuade myself to plan a little less, just to get more in tune with the idioms and behaviors of the tool I am working in.
_QrE 12 hours ago [-]
I think even if you don't have much experience with tools, you can still plan effectively, especially now with LLMs that can give you an idea of what you're in for.
But if you're doing something for fun, then you definitely don't need much planning, if any - the project will probably be abandoned halfway through anyways :)
excerionsforte 12 hours ago [-]
I love Rust, but I would not try to make a full-fledged game with it without patience. This post is not so much about moving away from Rust as about Bevy not being enjoyable in its current form.
Bevy is in its early stages. I'm sure more Rust game engines will come up and make things easier. That said, Godot was a great experience for me but doesn't run well on mobile for what I was making. I enjoy using Flutter Flame now (honestly, different game engines for different genres or preferences), but as Godot continues to get better, I personally would use Godot. I'd try Unity or Unreal as well if I just wanted to focus on making a game and less on engine quirks and bugs.
That's an excellent article - it's great when people share not only their victories, but also their mistakes and what they learned from them.
That said, regarding both rapid gameplay-mechanic iteration and modding - would that not generally be solved via a scripting language on top of the core engine? Or is Rust + Bevy not supposed to be engine-level development, and actually supposed to solve the gameplay development use case too? This is very much not my area of expertise; I'm just genuinely curious.
talldan 2 hours ago [-]
It does solve the gameplay development use case too. Bevy encourages using lots of small 'systems' to build out logic. These are functions that can spawn entities or query for entities in the game world and modify them, and there's also a way to schedule when these systems should run.
I don't think Bevy has a built-in way to integrate with other languages like Godot does; it's probably too early in the project's life for that to be on the roadmap.
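To make the 'systems' idea concrete, here's a minimal sketch of a Bevy system (assuming roughly Bevy 0.11-era APIs; method and scheduling names have shifted across the 0.x releases, which is part of the churn discussed elsewhere in this thread):

    use bevy::prelude::*;

    #[derive(Component)]
    struct Velocity(Vec3);

    // A "system" is just a function; Bevy injects the resources and
    // queries it asks for and runs it on the configured schedule.
    fn apply_velocity(time: Res<Time>, mut query: Query<(&mut Transform, &Velocity)>) {
        for (mut transform, velocity) in &mut query {
            transform.translation += velocity.0 * time.delta_seconds();
        }
    }

    fn main() {
        App::new()
            .add_plugins(DefaultPlugins)
            .add_systems(Update, apply_velocity) // run every frame
            .run();
    }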
skeptrune 12 hours ago [-]
>I wanted UI to be easy to build, fast to iterate, and moddable. This was an area where we learned a lot in Rust and again had a good mental model for comparison.
I feel like this harkens back to the general principle of being a software developer and not an "<insert-language-here>" developer.
Choose tools that expose you to more patterns and help to further develop your taste. Don't fixate on a particular syntax.
meisel 9 hours ago [-]
Aren't there some scripting languages designed around seamless interop with Rust that could be used here for scripting/prototyping? Not that it would fix all the issues in that blog post, but maybe some of them.
stemlord 5 hours ago [-]
Unity is predatory. I work in a small studio which is part of a larger company (only 5 of us use Unity), and they have suddenly decided to hold our accounts hostage until we upgrade to the Industry license because of the revenue our parent company makes, even though that's completely separate cash flow from what our studio actually works with. The Industry license is $5000 PER SEAT PER YEAR. Absolutely batshit crazy expensive for a single piece of software. We will never be able to afford that. So we are switching over to Unreal. It's really sad what Unity has become.
jmpavlec 5 hours ago [-]
Definitely not cheap, but I assume developer cost and migrating to unreal is probably not cheap either. I'm not too familiar with either engine, are they similar enough that it's "cheaper" to migrate? I imagine that sets back release dates as well.
Such a crappy thing for a company to do.
rorylaitila 7 hours ago [-]
For my going-on-5-year side game project, this is why I can only write in vanilla tools (Java, TypeScript) and with small libraries that are easy to replace. I would lose all motivation if I had to refactor my game and update the engine every time I came back to it. But also, I don't have the pressure of ever finishing the game...
chaosprint 10 hours ago [-]
I completely understand, and it's not the first time I've heard of people switching from Bevy to Unity. btw Bevy 0.16 just came out in case you missed the discussion:
In my personal opinion, a paradox of truly open-source projects (meaning community projects, not pseudo-open-source from commercial companies) is that development tends toward diversity. While this leads to more and more cool things appearing, there always needs to be a balance with sustainable development.
Commercial projects, at least, always have a clear goal: to sell. For this goal, they can hold off on doing really cool things. Or they think about differentiated competition. Perhaps if the purpose were commercial, an editor would be the primary goal (let me know if this is already on the roadmap).
---
I don't think the language itself is the problem. The situation where you have to use mature solutions for efficiency is more common in games and apps.
For example, I've seen many people who have had to give up Bevy, Dioxus, and Tauri.
But I believe for servers, audio, CLI tools, and even agent systems, Rust is absolutely my first choice.
I've recently been rewriting Glicol (https://glicol.org) after 2 years, starting from embedded devices and switching to crates like Chumsky, and I feel the ecosystem has improved a lot compared to before.
So I still have 100% confidence in Rust.
nrvn 11 hours ago [-]
> Bevy is young and changes quickly. Each update brought with it incredible features, but also a substantial amount of API thrash
> Bevy is still in the early stages of development. Important features are missing. Documentation is sparse. A new version of Bevy containing breaking changes to the API is released approximately once every 3 months.
I would choose Bevy if and only if I would like to be heavily involved in the development of Bevy itself.
And never for anything that requires a steady foundation.
Programming language does not matter. Choose the right tool for job and be pragmatic.
eYrKEC2 8 hours ago [-]
I like not getting paged at night, so I like APIs written in Rust.
DarkmSparks 8 hours ago [-]
The best language for game logic is Lua; switching to C# probably isn't going to help any... IMHO.
Rohansi 7 hours ago [-]
What makes Lua the best for game logic? You don't even have types to help you out with Lua.
trealira 7 hours ago [-]
Yeah, I actually recently tried making a game in Lua using LOVE2D, and then making the same one in C with Raylib, and I didn't feel like Lua itself gave me all that much. I don't think Lua is best for game logic so much as it's the easiest language to embed in a game written in C or C++. That said, maybe some of its unique features, like its coroutines, or stuff relating to metatables, could be useful in defining game logic. I was writing very boring, procedural, occasionally somewhat object-oriented code either way.
Rohansi 7 hours ago [-]
Lua would definitely help with iteration times vs. C/C++/Rust but C# compiles very quickly. Especially in Unity where you have an editor that keeps assets cached and can hot reload code changes (with a plugin).
Coroutines can definitely be very useful for games and they're also available in C#.
hyllos 12 hours ago [-]
To what extent did the implementation in C# benefit from the clarified requirements (so that the Rust experience could be seen as prototyping mixed with production)?
Was it, in large part, just a major refactor in a different language (admittedly with much more proven components)?
mamcx 11 hours ago [-]
This can be summarized in a simple way: UI is totally another world.
There is no chance for any language, no matter how good it is, to match even the most horrendous (the web!) but full-featured UI toolkit.
I bet, 1000%, that it is easier to do an OS, a database engine, etc. than to try to match Qt, Delphi, Unity, etc.
---
I made a decision that has become the most productive and problem-free approach to making UIs in my 30 years doing this:
1- Use the de-facto UI toolkit as-is (HTML, SwiftUI, Jetpack Compose). Ignore any tool that promises cross-platform UI (so that is HTML, but I mean: I don't try to do HTML in Swift, ok?).
2- Use the same idea as HTML: send plain data with the full fidelity of what you want to render: Label(text=.., size=..).
3- Render it directly with the native UI toolkit.
Yes, this is more or less htmx/tailwindcss (I got the inspiration from them).
This means my logic is all Rust: I pass serializable structs to the UI front-end and render directly from them. Critically, the UI toolkit is nearly devoid of any logic more complex than what you see in a mustache template language. It does no localization, formatting, etc. Only UI composition.
I don't care that I need to code in different ways, different APIs, different flows, and visually divergent UIs.
IT IS GREAT.
After the pain of boilerplate, doing the next screen/component/whatever is so ridiculously simple that it feels like cheating.
So, the problem is not Rust. It's not F#, or Lisp. It's that UI is a kind of beast that is impervious to improvement by language alone.
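A minimal sketch of that data-passing idea on the Rust side, assuming serde/serde_json; the Ui/Label/Column names are illustrative, not from the parent's actual code:

    use serde::Serialize;

    #[derive(Serialize)]
    #[serde(tag = "kind")]
    enum Ui {
        Label { text: String, size: u32 },
        Column { children: Vec<Ui> },
    }

    fn balance_screen(cents: i64) -> Ui {
        // Formatting, localization, etc. happen here in Rust;
        // the native toolkit only composes what it is sent.
        Ui::Column {
            children: vec![Ui::Label {
                text: format!("Balance: ${}.{:02}", cents / 100, cents % 100),
                size: 16,
            }],
        }
    }

    fn main() {
        let payload = serde_json::to_string(&balance_screen(12345)).unwrap();
        println!("{payload}"); // each native front-end renders this as-is
    }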
peterashford 11 hours ago [-]
I disagree. The issue, which the article mentions, is iteration time. They were having issues iterating on gameplay, not UI.
My own experiences with game dev and Rust (which are separate experiences, I should add) resonate with what the article is expressing. Iterating on systems is common in gamedev, and Rust is slow to iterate on because its precision ossifies systems. This is GREAT for safety; it's crap for momentum and fluidity.
maccard 10 hours ago [-]
This is why game engines embed scripting languages. Who gives a crap if the engine takes 12 hours to compile if 80% of the team are writing Lua in a hot-reload loop?
peterashford 9 hours ago [-]
Yeah but no-one is recompiling the engine. This is just about gameplay code
maccard 9 hours ago [-]
Which is why I said
> this is why game engines embed scripting languages
api 11 hours ago [-]
> I bet, 1000%, that is easier to do a OS, a database engine, etc that try to match QT, Delphi, Unity, etc.
I 100% agree. A modern mature UI toolkit is at least equivalent to a modern game engine in difficulty. GitHub is strewn with the corpses of abandoned FOSS UI toolkits that got 80% of the way there only to discover that the other 20% of the problem is actually 20000% of the work.
The only way you have a chance developing a UI toolkit is to start in full self awareness of just how hard this is going to be. Saying "I am going to develop a modern UI toolkit" is like saying "I am going to develop a complete operating system."
Even worse: a lot of the work that goes into a good UI toolkit is the kind of work programmers hate: endless fixing of nit-picky edge case bugs, implementation of standards, and catering to user needs that do not overlap with one's own preferences.
cyprx 3 hours ago [-]
Wow, every Rust topic has an uncountable number of comments; it's indeed a successful language.
corysama 8 hours ago [-]
Professional high-performance C++ game engine dev here. At a glance, their game looks great. But, to be frank, it also looks like it could have been made in the DOS era with sufficient effort.
Going hard with Rust ECS was not the appropriate choice here. Even a 1000x speed hit would be preferable if it gained speed of development. C# and Unity is a much smarter path for this particular game.
But, that’s not a knock on Rust. It’s just “Right tool for the job.”
cantrecallmypwd 9 hours ago [-]
API churn is so expensive, largely unnecessary, and rarely value-add. It's an anti-pattern that makes otherwise promising things unusable.
I rarely touch game dev, but that made me think Godot wasn't very suitable.
ziddoap 13 hours ago [-]
I also would have liked to have seen the pro/con lists for each of the potential choices.
I've been toying with the idea of making a 2d game that I've had on my mind for awhile, but have no game development experience, and am having trouble deciding where to start (obviously wanting to avoid the author's predicament of choosing something and having to switch down the line).
jerf 12 hours ago [-]
The key is, you gotta be pretty cold in the analysis. It's probably more important to avoid what you hate than to lean in too hard to what you love, unless your terminal goal is to work in $FAVE_LANG. Too many people claim they want to make a game, but their actions show that their terminal goal was actually to work in their favorite language. I don't care if your goal is just to work in your favorite language, I just think you need to be brutally honest with yourself on that front.
Probably the best thing in your case is: look at the top three engines you could consider, spend maybe four hours gathering what look like pros and cons, then just pick one and go. Don't overestimate your attachment to your first choice. You'll learn more just from finishing a tutorial for any of them than you can possibly learn with analysis in advance.
elktown 10 hours ago [-]
This goes for a lot of things in tech, unfortunately. For example, being stuck in an SRE/devops amusement park can be incredibly frustrating and surprisingly resource-intensive.
Sometimes it feels like we could use some kind of a temperance movement, because if one can just manage to walk the line one can often reap great rewards. But the incentives seem to be pointing in the opposite direction.
ziddoap 11 hours ago [-]
Thanks, I appreciate the comment! I'm certain that my goal is not to work in a specific language, but to bring a long-time idea to life, and ideally minimize the amount of avoidable headaches along the way.
You're probably right that it'd be best to just jump in and get going with a few of them rather than analyze the choice to death (as I am prone to do when starting anything).
kllrnohj 12 hours ago [-]
> We wrote extensive pros and cons, emphasizing how each option fared by the criteria above: Collaboration, Abstraction, Migration, Learning, and Modding.
Would you really expect Godot to win out over Unity given those priorities? Godot is pretty awesome these days, but it's still going to be behind for those priorities vs. Unity or Unreal.
GardenLetter27 13 hours ago [-]
I wondered the same - the separate C# build might be a bit of a hassle still though.
But they also could have combined Rust parts and C# parts if they needed to keep some of what they had.
jryan49 12 hours ago [-]
One of the complaints in the article was about using a framework early in its dev cycle. I imagine they were just picking what is safe at this point and didn't want to get burned again.
shadowgovt 12 hours ago [-]
Excellent write-up.
On the topic of rapid prototyping: most successful game engines I'm aware of hit this issue eventually. They eventually solve it by dividing into infrastructure (implemented in your low-level language) and game logic / application logic / scripting (implemented in something far more flexible and, usually, interpreted; I've seen Lua used for this, Python, JavaScript, and I think Unity's C# also fits this category?).
For any engine that would have used C++ instead, I can't think of a good reason to not use Rust, but most games with an engine aren't written in 100% C++.
lovegrenoble 10 hours ago [-]
Why not the awesome Gamemaker engine?
lostmsu 10 hours ago [-]
Related: just tried to switch to Rust when starting a new project. The main motivation was the combination of fearless concurrency and exhaustive error handling - things that were very painful in the more mature endeavor.
Gave up after 3 days for 3 reasons:
1. Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling and a few astronomical units away from Visual Studio. Extract function barely works.
2. Crates with non-Rust dependencies are nearly impossible to debug as debuggers don't evaluate expressions. So, if you have a Rust wrapper for Ogg reader, you can't look at ogg_file.duration() in the debugger because that requires function evaluation.
3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.
With these roadblocks I would never have gotten the "mature" project to the point, where dealing with hard to debug concurrency issues and funky unforeseen errors became necessary.
airstrike 8 hours ago [-]
Curious what kind of project that was. Were you making a GUI by any chance?
lostmsu 5 hours ago [-]
No, the new project that I tried Rust for is a voice API (VAD, Whisper, etc). Got disappointed because, for example, the codec is just a wrapper around libopus. So it doesn't provide safety guarantees, and finding a crate that would build without issues was a challenge.
neonsunset 10 hours ago [-]
> 3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.
Depending on your scenario, you may want either one or another. Shipping pre-compiled binaries carries its own risks, and you are at the mercy of the library author making sure to include one for your platform. I found wiring up MSBuild to be more painful than the way it is done in Rust with the cc crate; often I would prefer the package to also build its other-language components for my specific platform, with extra optimization flags I pass in.
But yes, in .NET it creates sort of an impedance mismatch since all the managed code assemblies you get from your dependencies are portable and debuggable, and if you want to publish an application for a specific new target, with those it just works, be it FreeBSD or WASM. At the same time, when it works - it's nicer than having to build everything from scratch.
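For reference, a minimal build.rs sketch of the cc-crate approach mentioned above; the vendored file path is made up for illustration, and `cc` would need to be listed under [build-dependencies] in Cargo.toml:

    // build.rs
    fn main() {
        cc::Build::new()
            .file("vendor/foo.c") // hypothetical vendored C source
            .opt_level(2)
            .compile("foo"); // produces libfoo.a and tells Cargo to link it
    }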
lostmsu 5 hours ago [-]
The big advantage of precompiled binaries is that the hundreds of people who download the package don't have to figure out the build steps over and over again.
Risks are real though.
shmerl 12 hours ago [-]
Using poor-quality AI suggestions as a reason not to use Rust is a super weird argument. Something is very wrong with such an idea. What's going to be next, avoiding everything where AI performs poorly?
Scripting being flexible is a proper idea, but that's not an argument against Rust either. Rather it's an argument for more separation between scripting machinery and the core engine.
For example Godot allows using Rust for game logic if you don't want to use GDScript, and it's not really messing up the design of their core engine. It's just more work to allow such flexibility of course.
The rest of the arguments are more in the familiarity / learning curve group, so nothing new in that sense (Rust is not the easiest language).
tptacek 12 hours ago [-]
Yes, a lot of people are reasonably going to decide to work in environments that are more legible to LLMs. Why would that surprise you?
The rest of your comment boils down to "skills issue". I mean, OK. But you can say that about any programming environment, including writing in raw assembly.
shmerl 9 hours ago [-]
The first argument sounds like a major fallacy to me. It doesn't surprise me, but I find it extremely wrong.
tptacek 9 hours ago [-]
Why?
shmerl 8 hours ago [-]
Because it's a discouragement of learning based on the mediocrity of AI. I find that such an idea perpetuates the mediocrity (not just of the AI itself but of whatever it's used for).
It's like saying: I don't want to learn how to write a good story because AI always suggests a bad one anyway. Maybe that delivers the idea better.
tptacek 8 hours ago [-]
It's not at all clear to me what this has to do with the practical delivery of software. In languages that LLMs handle well, with a careful user (ie, not a vibe coder; someone reading every line of output and subjecting most of it to multiple cycles of prompting) the code you end up with is basically indistinguishable from the replacement-level code of an expert in the language. It won't hit that human expert's peaks, but it won't generally sink below their median. That's a huge accelerator for actually delivering projects, because, for most projects, most of the code need only be replacement-grade.
Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing? Like, the same reason I'd use only hand tools when doing joinery in Japanese-style woodworking? There's a place for that! But most woodworkers... use table saws and routers.
mwcampbell 6 hours ago [-]
> Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing?
The strongest reason I can think of to discard this kind of automation, and do so proudly, is that it's effectively plagiarizing from all of the experts whose code was used in the training data set without their permission.
tptacek 4 hours ago [-]
No plausible advance in nanotechnology could produce a violin small enough to capture how badly I feel about our profession being "plagiarized" after decades of rationalizing about the importance of Star Wars to the culture justifying movie piracy.
Artists can come at me with this concern all they want, and I feel bad for them. No software developer can.
I disagree with you about the "plagiaristic" aspect of LLM code generation. But I also don't think our field has a moral leg to stand on here, even if I didn't disagree with you.
mwcampbell 30 minutes ago [-]
I'm not making an argument from grievance about my own code being plagiarized. I actually don't care if my own code is used without even the attribution required by the permissive licenses it's released under; I just want it to be used. I do also write proprietary code, but that's not in the training datasets, as far as I know. But the training datasets do include code under a variety of open-source licenses, both permissive and copyleft, and some of those developers do care how their code is used. We should respect that.
As for our tendency to disrespect the copyrights of art, clearly we've always been in the wrong about this, and we should respect the rights of artists. The fact that we've been in the wrong about this doesn't mean we should redouble the offense by also plagiarizing from other programmers.
And there is evidence that LLMs do plagiarize when generating code. I'll just list the most relevant citations from Baldur Bjarnason's book _The Intelligence Illusion_ (https://illusion.baldurbjarnason.com/), without quoting from that copyrighted work.
It's not about the delivery of software; it's about the avoidance of learning based on the mediocrity of AI. I.e., the original post literally cites LLMs being poor at suggestions for Rust as a reason to avoid it.
That implies that proponents of such an approach don't want to pursue learning that requires them to exceed the mediocrity level set by the AI they rely on.
For me it's obvious that it has a major negative impact on many things.
tptacek 4 hours ago [-]
Your premise here being that any software not written in Rust must be mediocre? Wouldn't it be more productive to just figure out how to evolve LLM tooling to work well with Rust? Most people do not write Rust, so this is not a very compelling argument.
shmerl 3 hours ago [-]
Rust is just an example in this case, not essential to the point. If someone evolves LLMs to work better with Rust, they will still be mediocre at something else, and using that as an excuse for avoidance is problematic in itself; that's what I'm saying.
Basically, learn Rust based on whether it helps solve your issues better, not on whether some LLM is useless or not useless in this case.
quantified 11 hours ago [-]
It's a weird idea now, but it won't be weird soon. As devs and organizations further buy into AI-first coding, anything not well-served by AI will be treated as second-class. Another thread here brought up the risk that AI will limit innovation by not being well-trained on new things.
shmerl 9 hours ago [-]
I agree that such a trend exists, but it's extremely unhealthy, and developers, if anyone, should have more of a clue about how bad it is.
brokencode 12 hours ago [-]
Developers often pick languages and libraries based on the strength of their developer tools. Having great dev tools was a major reason Ruby on Rails took off, for example.
Why exclude AI dev tools from this decision making? If you don’t find such tools useful, then great, don’t use them. But not everybody feels the same way.
bsaul 12 hours ago [-]
It could be a weird argument, but as a Rust newcomer, I have to say it's really something that jumps out at you. LLMs are practically useless for anything non-basic, and Rust contains a lot of non-basic things.
quantified 11 hours ago [-]
So, what are the chances that the pendulum swings to lower-level programming via LLM-generated C/C++ if LLM-generated Rust doesn't emerge? Note that this question is a context switch from gaming to something larger. For gaming, it could easily be that the engine and culture around it (frequent regressions, etc) are the bigger problems than the language.
bsaul 9 hours ago [-]
I haven't coded in C/C++ in years, but friends who do, and who work on non-trivial codebases in those languages, have had a really crappy experience with LLMs too.
A friend of mine only understood why i was so impressed by LLMs once he had to start coding a website for his new project.
My feeling is that low-level / system programming is currently at the edge of what LLMs can do. So i'd say that languages that manage to provide nice abstractions around those types of problems will thrive. The others will have a hard time gaining support among young developers.
jokethrowaway 7 hours ago [-]
Congrats on the rewrite!
I think the worst issue was the lack of a ready-made solution. Those 67k lines of Rust contain a good chunk of a game engine.
The second worst issue was that you targeted an unstable framework - I would have focused on a single version and shipped the entire game with it, no matter how good the goodies in the new version.
I know it's likely the last thing you want to do, but you might be in a great position to improve Bevy. I understand open-sourcing it comes with IP challenges, but it would be good to find a champion with read access within Bevy to parse your code and come up with OSS packages (cleaned of any game-specific logic) based on the countless problems you must have solved in those extra 50k lines.
quotemstr 12 hours ago [-]
Rust is fine as a low-level systems programming language. It's a huge improvement over C and (because of memory safety) a decent improvement over C++. However, most applications don't need a low-level systems programming language, and trying to shoehorn one in where it doesn't belong just leads to sadness without commensurate benefit. Rust does not
* automatically make your program fast;
* eliminate memory leaks;
* eliminate deadlocks; or
* enforce your logical invariants for you.
Sometimes people mention that independent of performance and safety, Rust's pattern-matching and its traits system allow them to express logic in a clean way at least partially checked at compile time. And that's true! But other languages also have powerful type systems and expressive syntax, and these other languages don't pay the complexity penalty inherent in combining safety and manual memory management because they use automatic memory management instead --- and for the better, since the vast majority of programs out there don't need manual memory management.
I mean, sure, you can Arc<Box<Whatever>> many of your problems away, but at that point, your global reference counting just becomes a crude form of manual garbage collection. You'd be better off with a finely-tuned garbage collector instead --- one like Unity (via the CLR and Mono) has.
And you're not really giving anything up this way, either. If you have some compute kernel that's a bottleneck, then thanks to the easy FFIs these high-level languages have, you can just write that one bit of code in a lower-level language without bringing systems considerations into your whole program.
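A small sketch of two of those points at once: safe Rust happily leaks a reference cycle, and shared mutability pushes you toward Rc/RefCell, i.e. reference counting as the crude garbage collection described above:

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
        *a.next.borrow_mut() = Some(b); // cycle: refcounts never reach zero
        // Compiles cleanly, runs cleanly, and leaks both nodes; the borrow
        // checker guarantees memory safety, not the absence of leaks.
    }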
killme2008 8 hours ago [-]
I completely agree with you—Rust is not well-suited for application development. Application development requires rapid iteration, acceptable performance, and most importantly, a large developer community and a rich software ecosystem.
Languages like Go, JavaScript, C#, or Java are much better choices for this purpose. Rust is still best suited for scenarios where traditional systems languages excel, such as embedded systems or infrastructure software that needs to run for extended periods.
johng 11 hours ago [-]
I signed up for the mailing list. The game looks interesting, I hope there is a Mac version in the future.
morning-coffee 12 hours ago [-]
Expect many more commits like #12. ;)
nickkell 10 hours ago [-]
Awww that's not fair.
C# actually has fairly good null-checking now. Older projects would have to migrate some code to take advantage of it, but new projects are pretty much using it by default.
I'm not sure what the situation is with Unity though - aren't they usually a few versions behind the latest?
adamnemecek 12 hours ago [-]
For anyone considering Rust for gamedev, check out the Fyrox engine.
Sorry, but this engine had (has?) problems rendering a simple rectangle with an alpha-channel texture no longer than 3 months ago (I'm assuming it was fixed).
Is it normal for the Rust ecosystem to recommend software at this level of maturity?
Very useful writeup, thank you for taking the time to do it.
PS: I love the art style of the game.
WesolyKubeczek 13 hours ago [-]
Somehow I can't read this with uBlock Origin on. Hm.
ryanisnan 13 hours ago [-]
Strange, I had no such issue.
nottorp 11 hours ago [-]
Me neither. Default uBlock Origin settings though, maybe the OP is more strict.
worik 12 hours ago [-]
Migrating away from Bevy is the main thrust.
Rust is a niche language, and there is no evidence it is going to do well in the game space.
Unity and C# sound like a much better business choice for this. Choosing a system/language....
> My love of Rust and Bevy meant that I would be willing to bear some pain
....that is not a good business case.
Maybe one day there will be a Rust game engine that can compete with Unity; there probably already are, in niches.
forrestthewoods 13 hours ago [-]
Rust is not good for video game gameplay logic. The ownership model of Rust can not represent the vast majority of allocations.
I love Rust. It’s not for shipping video games. No Tiny Glade doesn’t count.
Edit: don’t know why you’re downvoting. I love Rust. I use it at my job and look for ways to use it more. I’ve also shipped a lot of games. And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.
pcwalton 8 hours ago [-]
> No Tiny Glade doesn’t count.
> And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Well, sure, if you arbitrarily exclude the popular game written in Rust, then of course there are no popular games written in Rust :)
> And maybe not Switch although I’m less certain.
I have talked to Nintendo SDK engineers about this and been told Rust is fine. It's not an official part of their toolchain, but if you can make Rust work they don't care.
forrestthewoods 7 hours ago [-]
Yeah, in my haste I mixed up my rants. The bane of typing at work in between things.
Tiny Glade is indeed a Rust game. So there is one! I am not aware of a second. But it's not really a Bevy game. It uses the ECS crate from Bevy.
And for something like Gnorp, Rust is probably a decent choice.
queuebert 12 hours ago [-]
> The ownership model of Rust can not represent the vast majority of allocations.
What allocations can you not do in Rust?
forrestthewoods 12 hours ago [-]
Gameplay code is a big bag of mutable data that lives for relatively unknown amounts of time. This is the antithesis of Rust.
The Unity GameObject/Component model is pretty good. It's very simple. And clearly very successful. This architecture cannot be represented in Rust. There are a dozen ECS crates, but no one has replicated the world's most popular gameplay system architecture. Because they can't.
yuriks 10 hours ago [-]
Which part of that architecture is impossible in Rust? Actually an honest question, I'm wondering if I'm missing something.
From what I remember from my Unity days (which granted, were a long time ago), GameObjects had their own lifecycle system separate from the C# runtime and had to be created and deleted using Destroy and Create calls in the Unity API. Similarly, components and references to them had to be created and retrieved using the GetComponent calls, which internally used handles, rather than being raw GC pointers. Runtime allocation of objects frequently caused GC issues, so you were practically required to pre-allocate them in an object pool anyway.
I don't see how any of those things would be impossible or even difficult to implement in Rust. In fact, this model is almost exactly what I used to see evangelized all the time for C++ engines (using safe handles and allocator pools) in GDC presentations back then.
In my view, as someone who has not really interacted with or explored Rust gamedev much, the issue is more that Bevy has been attempting to present an overly ambitious API, as opposed to focusing on a simpler, less idealistic one, and since it is the poster child for Rust game engines, people keep tripping over those problems.
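For what it's worth, a minimal sketch of the handle-and-pool pattern described above, using generational indices so that stale handles fail safely instead of dangling (names here are illustrative, not from any particular engine):

    #[derive(Clone, Copy, PartialEq)]
    struct Handle { index: u32, generation: u32 }

    struct Slot<T> { generation: u32, value: Option<T> }

    struct Pool<T> { slots: Vec<Slot<T>>, free: Vec<u32> }

    impl<T> Pool<T> {
        fn new() -> Self { Pool { slots: Vec::new(), free: Vec::new() } }

        fn create(&mut self, value: T) -> Handle {
            if let Some(index) = self.free.pop() {
                let slot = &mut self.slots[index as usize];
                slot.value = Some(value);
                Handle { index, generation: slot.generation }
            } else {
                self.slots.push(Slot { generation: 0, value: Some(value) });
                Handle { index: (self.slots.len() - 1) as u32, generation: 0 }
            }
        }

        // Like Unity's Destroy: the slot is recycled and old handles go stale.
        fn destroy(&mut self, h: Handle) {
            if let Some(slot) = self.slots.get_mut(h.index as usize) {
                if slot.generation == h.generation && slot.value.is_some() {
                    slot.value = None;
                    slot.generation += 1;
                    self.free.push(h.index);
                }
            }
        }

        // A stale handle returns None instead of a dangling reference.
        fn get_mut(&mut self, h: Handle) -> Option<&mut T> {
            let slot = self.slots.get_mut(h.index as usize)?;
            if slot.generation == h.generation { slot.value.as_mut() } else { None }
        }
    }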
queuebert 4 hours ago [-]
> ... big bag of mutable data that lives for relatively unknown amounts of time. This is the antithesis of Rust.
I'm sorry, but I still don't understand. There are myriad heap collections and even fancy stuff like Rc<Box<T>> or RefCell<T>. What am I missing here?
Is it as simple as global void pointers in C? No, but it's way safer.
uecker 2 hours ago [-]
Somehow I doubt Unity uses global void pointers in C. Not that one would have to use global void pointers when using C.
koakuma-chan 13 hours ago [-]
You could probably write the core in Rust and use some sort of scripting for gameplay logic. Warframe's gameplay logic is written in Lua.
WinstonSmith84 12 hours ago [-]
The headline is a bit sensational here and should rather have been "Migrating away from Bevy". That's not (really) comparing C# to Rust (and Lua, but that one is missing), but rather comparing game engines, where the language is secondary. Obviously Unity is the leader here (with Unreal) - despite all its flaws.
dismalaf 12 hours ago [-]
> No Tiny Glade doesn’t count.
Tiny Glade is also the buggiest Steam game I've ever encountered (bugs from disappearing cursor to not launching at all). Incredibly poor performance as well for a low poly game, even if it has fancy lighting...
koakuma-chan 12 hours ago [-]
Isn't Veloren doing pretty good?
forrestthewoods 12 hours ago [-]
No. No one plays Veloren. It’s a toy project for programmers.
No offense to the project. It’s cool and I’m glad it exists. But if you were to plot the top 2000 games on Steam by time played there are, I believe, precisely zero written in Rust.
adamrezich 11 hours ago [-]
> Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.
What evidence do you have for this statement? It kind of doesn't make any sense on its face. Binaries are binaries, no matter what tools are used to compile them. Sure, you might need to use whatever platform-specific SDK stuff to sign the binary or whatever, but why would Rust in particular be singled out as being forbidden?
Despite not being yet released publicly, Jai can compile code for PlayStation, Xbox, and Switch platforms (with platform-specific modules not included in the beta release, available upon request provided proof of platform SDK access).
forrestthewoods 11 hours ago [-]
Sony mandates you use their toolchain. You don’t get to ship whatever you want on their console. They have a very thorough TRC check you must pass before you get to ship.
adamrezich 11 hours ago [-]
Rust being forbidden on a platform, and Rust being unsupported out-of-the-box with the SDK toolchain, seem to me like they're rather different things?
Philpax 11 hours ago [-]
...why does Tiny Glade not count?
Ciantic 12 hours ago [-]
> Rust can not represent the vast majority of allocations
Do you mean cyclic types?
Rust being low-level, nobody prevents one from implementing garbage-collected types, and I've been looking into this myself: https://github.com/Manishearth/rust-gc
It's "Simple tracing (mark and sweep) garbage collector for Rust", which allows cyclic allocations with simple `Gc<Foo>` syntax. Can't vouch for that implementation, but something like this would be good for many cases.
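A hedged sketch of what that looks like, based on rust-gc's documented Gc/GcCell types and derive macros (treat the exact API as an assumption):

    use gc::{Finalize, Gc, GcCell, Trace};

    #[derive(Trace, Finalize)]
    struct Node {
        neighbors: GcCell<Vec<Gc<Node>>>,
    }

    fn main() {
        let a = Gc::new(Node { neighbors: GcCell::new(Vec::new()) });
        let b = Gc::new(Node { neighbors: GcCell::new(Vec::new()) });
        // A back-reference cycle that Rc would leak; mark-and-sweep
        // collection can reclaim it once both roots go out of scope.
        a.neighbors.borrow_mut().push(b.clone());
        b.neighbors.borrow_mut().push(a.clone());
    }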
ninjis 12 hours ago [-]
The "Learning" point drives home a concern my brother-in-law and I were talking about recently. As LLMs become more entrenched as a tool, they may inevitably become the crutch that actually holds back innovation. Individuals and teams may be hesitant to explore or adopt bleeding edge technologies specifically because LLMs don't know about them or don't know enough about them yet.
How is that different from choosing not to adopt a technology because it’s not widely used therefore not widely documented? It’s the timeless mantra of “use boring tech” that seems to resurface every once in a while. It’s all about the goal: do you want to build a viable product, quickly, or do you want to learn and contribute to a specific tech stack? That’s the trade off most of the time.
Bolwin 12 hours ago [-]
It's a lot worse. A high-quality project can have great documentation and guides that make it easy for a human to use, but an LLM won't handle it well until there's a lot of code and documentation out there using it.
And if it's not already popular, that won't happen.
tptacek 11 hours ago [-]
No, this doesn't ring true: long before there were LLMs, people were selecting languages and stacks because of the quality and depth of their community.
But also: there is a lot of Rust code out there! And a cubic fuckload of high-quality written material about the language, its idioms, and its libraries, many of which are pretty famous. I don't think this issue is as simple as it's being made out to be.
littlestymaar 3 hours ago [-]
It's not Rust in particular, but Bevy, the game engine, which is much newer than Rust and still has many breaking changes between versions.
It's a bit like Rust in 2014: you would never have had enough material for LLMs to train on.
tayo42 11 hours ago [-]
Isn't this article an example of that? There might be a lot of Rust code, but if the APIs are changing frequently, it's all outdated and leads to unusable outputs.
no-dr-onboard 11 hours ago [-]
I see this quite a bit with Rust. I honestly cringe when people get up in arms about someone taking their project out of the Rust community.
The same can be said of books as of programming languages:
"Not every ___ deserves to be read/used"
If the learning curve is so steep and/or the documentation so convoluted that it puts off newcomers, then perhaps it's just not a language that's fit for widespread adoption. That's actually fine.
"Thanks for your work on the language, but this one just isn't for me"
"Thanks for writing that awfully long book, but this one just isn't for me"
There's no harm in saying either of those statements. You shouldn't be disparaged for saying that Rust just didn't work out for your case. More power to the author.
bigfatkitten 11 hours ago [-]
Rust attracts a religious fervour that you'll almost never see associated with any other language. That's why posts like this make the front page and receive over 200 comments.
If you switched from Java to C# or vice versa, nobody would care.
J_Shelby_J 9 hours ago [-]
A religious fervor against it: no one is in the comments telling the OP he’s wrong.
gregschlom 12 hours ago [-]
I was actually meaning to post this as an Ask HN question but never found the time to word it well. Basically: what happens to new frameworks and technologies in the age of widespread LLM-assisted coding? Will users be reluctant to adopt bleeding-edge tools because the LLMs can't assist as well? Will the companies behind the big frameworks put more resources toward documenting them in a way that makes it easy for LLMs to learn from?
n_ary 12 hours ago [-]
Actually, here in my corner of the EU, only the prominent, big-tech-backed, well-documented, and battle-tested tools are the most marketable skills. So: React, 50 new jobs; but you worked with Svelte/SolidJS, what is that? Java/PHP/Python/Ruby/JS, adequate jobs. Go/Rust/Zig/Crystal/Nim, what are these? Go has gained some popularity in recent years, and I can spot Rust once in a blue moon. Anything requiring near-metal work is always C/C++.
Availability of documentation and tooling, widespread adoption, and access to people already trained on someone else's dime make a skill deemed safe for hiring decisions. Sometimes a narrow tech is spotted in the wild, but mostly because some senior/staff engineer wanted to experiment with something and it became part of production because management saw no issue. That will sometimes open doors for practitioners of those stacks, but the probability is akin to getting hit by a lightning strike.
binary132 12 hours ago [-]
This is just reality outside of the early stage startup. The US tech industry and its social networks are very dominated by trendy startup ideas, but the reality is still the major tried-and-true platforms.
timeon 12 hours ago [-]
Maybe it is not the regulations that are holding the EU back.
rad_gruchalski 12 hours ago [-]
Another way to look at it: working on the bleeding edge will become a competitive advantage and a signal of how competent the team is. „Do they consume it” vs „do they own it”.
n_ary 12 hours ago [-]
Or a signal that someone did not think about the bus factor and the future of the project when most of the team jumped ship.
this_user 11 hours ago [-]
Constantly chasing the latest tech trends has probably done more harm than good, because more often than not it turns out that the latest hype technology does not actually deliver what the marketing promised. Look at NoSQL, and MongoDB especially, as recent examples. Most people who blindly jumped on the MDB bandwagon would probably have been better off just using Postgres, and they later had to spend a lot of resources migrating away from Mongo.
To me, constantly chasing the latest trends signals a lack of experience in a team and an absence of focus on what is actually important, which is delivering the product.
IgorPartola 12 hours ago [-]
This already happens. "Is your new framework popular on GitHub and on Stack Overflow?" is a metric people use. LLMs are currently mostly capable of just adapting documentation, blog posts, and answers on SO. So they add a thin veneer on top of those resources.
px1999 11 hours ago [-]
I expect it will wind up like search engines where you either submit urls for indexing/inclusion or wait for a crawl to pick your information up.
Until the tech catches up it will have a stifling effect on progress toward and adoption of new things (which imo is pretty common of new/immature tech, eg how culture has more generally kind of stagnated since the early 2000s)
gs17 11 hours ago [-]
Hopefully, tools can adapt to integrate documentation better. I've already run into this with GitHub Copilot, trying to use Svelte 5 with it is a battle despite it being released most of a year ago.
inerte 12 hours ago [-]
There’s another future where reasoning models get better with larger context windows, and you can throw a new programming language or framework at it and it will do a pretty good job.
PaulKeeble 12 hours ago [-]
We already see quite a lot of that effect with tooling. A language can't really get much traction until it's got a build system, packaging, and all the IDE support we expect; however productive the language is, it loses out in practice if it's hard to work with and doesn't just fit into our CI/CD systems.
wewtyflakes 11 hours ago [-]
Doesn't this mean that new tech will have to demonstrate material advantages that outweigh the LLM inertia in order to be adopted? This sounds good to me; so much framework churn seems to be code fashion rather than function. Now if someone releases a new framework, they need to demonstrate real value first. People who are smart enough to read the docs and absorb the material of a new, better framework will now have a competitive advantage; this all seems good.
dogprez 12 hours ago [-]
I think it's a good point, and I experienced the same thing when playing with SDL3 the other day. So even established languages with new APIs can be problematic.
However, I had a different takeaway when playing with Rust+AI. Having a language that has strict compile-time checks gave me more confidence in the code the AI was producing.
I did see Cursor get in an infinite loop where it couldn't solve a borrow checker problem and it eventually asked me for help. I prefer that to burying a bug.
adamrezich 11 hours ago [-]
I had the same issue a few months ago when I was trying to ask LLMs about Box2D 3.0. I kept getting answers that were either for Box2D 2.x, or some horrific mashup of 2.x and 3.0.
Now Box2D 3.1 has been released and there's zero chance any of the LLMs are going to emit any useful answers that integrate the newly introduced features and changes.
breuleux 11 hours ago [-]
I have that worry as well, but it may not be as bad as I feared. I am currently developing a Python serialization/deserialization library based on advanced multiple dispatch, so it is fairly different from how existing libraries work. Nonetheless, if I ask LLMs (using Cursor) to write new functionality or plugins within my framework, they are surprisingly adept at it, even with limited guidance. I expect it'll only get better in the next few years. Perhaps a set of AI directives and examples for new technologies would suffice.
In any case, there has always been a strong bias towards established technologies that have a lot of available help online. LLMs will remain better at using them, but as long as they are not completely useless on new technologies, they will also help enthusiasts and early adopters work with them and fill in the gaps.
mbrumlow 12 hours ago [-]
I don’t think we will have a lack of people who explore and know, beyond others, how to do things.
LLMs will make people productive. But they will at the same time elevate those with real skill and passion to create good software. In the meantime there will be some market confusion, and some engineers who are mediocre might find themselves in demand like top-end engineers. But over time, companies and markets will realize, and top dollar will go to those select engineers who know how to do things with and without LLMs.
Lots of people are afraid of LLMs and think it is the end of the software engineer. It is and it is not. It's the end of the "CLI engineer" or the "front-end engineer" and all those specializations that were an attempt to require less skill and pay less. But the systems engineers who know how computers work, who can take all week describing what happens when you press enter on a keyboard at google.com, will only be pressed into higher demand. This is because the single-skill "engineer" won't really be a thing.
tl;dr: LLMs won't kill software engineering; it's a reset. It will cull those who chose this path on a rubric only because it paid well.
doug_durham 11 hours ago [-]
What innovation? Languages with curly braces versus BEGIN/END? There is no innovation going on in computer languages. Rust is C with better ergonomics and rigorous memory management. This was made possible with better processors which made more elaborate compilers practical. It all gets compiled by LLVM down to the same object code. I think we are moving to an era of "read-only" languages. Languages that have horrible writing ergonomics yet are easy to understand when read. Humans won't write code. They will review code.
jdprgm 11 hours ago [-]
I've noticed this effect even with well-established tech, just in degrees of popularity. I've recently been working on a Swift/SwiftUI project, and the experience with LLMs compared to something like web dev with React is noticeably different/worse, which I mostly attribute to there being at least 20 times less Swift-specific content on the web.
cvwright 11 hours ago [-]
There are a ton of Swift /SwiftUI tutorials out there for every new technology.
The problem is, they’re all blogspam rehashes of the same few WWDC talks. So they all have the same blindspots and limitations, usually very surface level.
pkkm 11 hours ago [-]
Is that different from what is happening already? A lot of people won't adopt a language/technology unless it has a huge repository of answers on StackOverflow, mature tooling, and a decent hiring pool.
I'm not saying you're definitely wrong, but if you think that LLMs are going to bring qualitative change rather than just another thing to consider, then I'm interested in why.
gwd 12 hours ago [-]
New languages / packages / frameworks may need to collaborate with LLM providers to provide good training material. LLM-able training material may be the next important documentation thing.
Another potentially interesting avenue of research would be to explore allowing LLMs to use "self-play" to explore new things.
SoKamil 12 hours ago [-]
How can it compete with the vast amount of codebases from GitHub already in the training data? For LLMs, more data equals better results, so people will naturally be drawn to the better completions they get with already-established frameworks and languages.
It would be hard to produce organic data on all the ways your technology can be (ab)used.
huijzer 12 hours ago [-]
It’s the same now. I’ve spent arguably too much time trying to avoid Python, and it has cost me a whole lot of time. You keep running into bugs and have to implement much more yourself if you go off the beaten path (see also [1]). I don’t regret it since I learned a lot, but it’s definitely not always the easiest path. To this day I wonder whether maybe I should have taken the simple route.
A showerthought I had recently was that newly-written software may have a perverse incentive to be intentionally buggy such that there will be more public complaints/solutions for said software, which gives LLMs more training data to work with.
doctorpangloss 11 hours ago [-]
Unity was a better choice for game engine long before the existence of LLMs.
calvinmorrison 11 hours ago [-]
It's not even about innovation. I had a new Laravel project that I was chopping around to play with some new library, and I couldn't get the dumbest stuff to work. Of course I went back to read the docs and - ah, Laravel 19 or whatever is using config/bootstrap.php again, and neither ChatGPT nor I could figure out why it wasn't working.
Unfortunately, with a lot of libraries and services, I don't think ChatGPT understands the differences between versions, or it would be hard for it to. At least I have found that when writing scriplets for RT, PHP tooling, etc. The web world moves fast enough (and RT moves hella slow) that it confuses libraries and interfaces across versions.
It'd really need a wider project context where it can go look at how those includes, or functions, or whatever work instead of relying on 'built in' knowledge.
"Assume you know nothing, go look at this tool, api endpoint or, whatever, read the code, and tell me how to use it"
krapht 13 hours ago [-]
The article title is half-true. It wasn't so much they migrated away from Rust, but that they migrated away from Bevy, which is an alpha quality game engine.
I wouldn't have read the article if it'd been labeled that, so kudos to the blog writer, I guess.
jonas21 11 hours ago [-]
What are some non-alpha quality Rust game engines? If the answer is "there are none", then I'd say the title is accurate.
ivanjermakov 13 hours ago [-]
More surprising part for me is not migrating from Rust/Bevy, but migrating _to_ C#/Unity.
Although points mentioned in the post are quite valid.
koakuma-chan 12 hours ago [-]
Where would you migrate to?
gh0stcat 12 hours ago [-]
Not OP, but there still seems to be a widespread sentiment that Unity is not a "safe" platform to migrate to, because of its relatively antagonistic approach to monetization compared to open source game engines. I do think it makes sense to also consider Godot, given that his coworker is his brother, who is stated to be new to game development; Godot has a scripting language even simpler than C#, more like Python. Additionally, one might expect that someone more into Rust would prefer the C++ integration that Unreal offers. I think the timeline had an effect here too, as it's only recently that people have been taking Godot more seriously.
sadeshmukh 12 hours ago [-]
Maybe Godot? The recent Unity scandal is not great for developers.
pjmlp 12 hours ago [-]
People forget that Unity and Unreal are industry darlings for a reason.
The amount of platforms they support, the amount of features they support, many of which could be a PhD thesis in graphics programming, the tooling, the store,....
Personally, literally anything except Unity. The fact that they tried to retroactively change terms on developers means that it will be a long time before I feel comfortable trusting they won't try it again.
Jyaif 12 hours ago [-]
They mentioned ABI and the ability to create mods, which are Rust things.
Here's a thought experiment:
Would Minecraft have been as popular if it had been written in Rust instead of Java?
legobmw99 12 hours ago [-]
I mean, we already have a sort-of answer, because the "Bedrock Edition" of Minecraft is written in C++, and it is indeed less popular on PC (on console, it's the only option, so _overall_ it might win out) and does lack any real modding scene
rcxdude 9 hours ago [-]
Indeed. Java is sufficiently dynamic/decompilable that a game written in it can be heavily modded without adding specific support. C++ is much harder (depending on the game engine), though not impossible. If you do add modding support, then everything is much better regardless of language (see Factorio, written in C++ and with a huge modding scene, because it was basically written with modding in mind; Lua certainly helps with that, of course).
im3w1l 29 minutes ago [-]
I actually disagree with that. Decompilation based mods can completely change anything and everything about the game. Scripting based mods can only change things within the boundaries allowed by the devs of the original game.
jedisct1 12 hours ago [-]
The problem with Rust is that almost everything is still at an alpha stage. The vast majority of crates are at version 0.x and are eventually abandoned, replaced, or subject to constant breaking changes.
While the language itself is great and stable, the ecosystem is not, and reverting to more conservative options is often the most reasonable choice, especially for long-term projects.
the_mitsuhiko 12 hours ago [-]
I really don’t think Rust is a good match for game dev. Both because of the borrow checker which requires a lot of handles instead of pointers and because compile times are just not great.
But outside of games the situation looks very different. “Almost everything” is just not at all accurate. There are tons of very stable and productive ecosystems in Rust.
pcwalton 8 hours ago [-]
> I really don’t think Rust is a good match for game dev. Both because of the borrow checker which requires a lot of handles instead of pointers and because compile times are just not great.
I completely disagree, having been doing game dev in Rust for well over a year at this point. I've been extremely productive in Bevy, because of the ECS. And Unity compile times are pretty much just as bad (it's true, if you actually measure how long that dreaded "Reloading Domain" screen takes).
littlestymaar 12 hours ago [-]
The borrow checker is mostly a strawman in this discussion: the post is about using Bevy as an engine, and Bevy uses an ECS that manages the lifetime of objects for you automatically. You will never have an issue with the borrow checker when using Bevy, not even once.
pclmulqdq 12 hours ago [-]
Everything in every ECS system is done with handles, but the parent comment is correct that many games use hairballs of pointers all over the place (which become handles with ECS). There is never a borrow checker issue with handles, since they divorce the concept of a pointer from the concept of ownership.
rcxdude 9 hours ago [-]
I wouldn't say 'almost everything', but there are some areas which require a huge amount of time and effort to build a mature solution for, UI and game engines being one, where there are still big gaps.
justmarc 12 hours ago [-]
I totally disagree here.
I don't even look at crate versions, but the stuff works, and very well. The resulting code is stable and robust, and the crates save an inordinate amount of development time. It's like Lego for high-end, high-performance code.
With Rust and the crates you can build actual, useful stuff very quickly. Hit a bug in a crate or have missing functionality? contribute.
Software is something that is almost always a work in progress and almost never perfect, and done. It's something you live with. Try any of this in C or C++.
pjmlp 12 hours ago [-]
They might be unsafe, but there is enough tooling to pick from approximately 60 and 50 years of industrial use, respectively.
littlestymaar 12 hours ago [-]
Well, on the flip side with C++ some of it hasn't been updated beyond very basic maintenance and you can't even understand the code if you are just familiar with more modern C++…
pjmlp 12 hours ago [-]
Well, it is upon each one to be good at their craft.
If not, the language they pick doesn't really make a difference in the end.
It is like complaining that playing a musical instrument in a band or orchestra requires too much effort, naturally.
littlestymaar 12 hours ago [-]
Except here you are a trained pianist and the tour manager gave you a pipe organ or a harpsichord.
pjmlp 11 hours ago [-]
Speaking as someone with musical background, that is where we discover those that actually understand music, from those that kind of get by.
Great musicians make a symphony out of what they can get their hands on.
littlestymaar 12 hours ago [-]
It's still true for game dev indeed, but for back-end or CLI tools it hasn't been true in like 7 years or so.
Ygg2 12 hours ago [-]
> The problem with Rust is that almost everything is still at an alpha stage.
Replace Rust with Bevy and language with framework, you might have a point. Bevy is still in alpha, it's lacking plenty of things, mainly UI and an easy way to have mods.
As for almost everything is at an alpha stage, yeah. Welcome to OSS + SemVer. Moving to 1.x makes a critical statement. It's ready for wider use, and now we take backwards compatibility seriously.
But hurray! Commercial interest won again, and now you have to change engines again, once the Unity Overlords decide to go full Shittification on your poorly paying ass.
pclmulqdq 12 hours ago [-]
Unfortunately, it is a failing of many projects in the Rust sphere that they spend quite a lot longer in 0.x than other projects. Rust language and library features themselves often spend years in nightly before making it to a release build.
You can also always go from 1.0 to 2.0 if you want to make breaking changes.
Ygg2 12 hours ago [-]
> Unfortunately, it is a failing of many projects in the Rust sphere that they spend quite a lot longer in 0.x than other projects
Yes. Because it makes a promise about backwards compatibility.
> Rust language and library features themselves often spend years in nightly before making it to a release build.
So did Java's. And Rust probably has a fraction of its budget.
In defense of long nightly periods: more than once, stabilizing a feature early, like negative impls or never types, would have caused huge backwards-breaking changes.
> You can also always go from 1.0 to 2.0 if you want to make breaking changes.
Yeah, just like Python!
And split the community and double your maintenance burden. Or just pretend 2.0 is 1.1 and have the downstream enjoy the pain of migration.
bigstrat2003 12 hours ago [-]
> And split the community and double your maintenance burden.
If you choose to support 1.0 sure. But you don't have to. Overall I find that the Rust community is way too leery of going to 1.0. It doesn't have to be as big a burden as they make it out to be, that is something that comes down to how you handle it.
Ygg2 12 hours ago [-]
> If you choose to support 1.0 sure.
If you choose not to, then people wait for x.0 where x approaches infinity. I.e. they lose confidence in your crates/modules/libraries.
I mean, a big part of why I don't 1.x my OSS projects (not just Rust) is that I don't consider them finished yet.
pclmulqdq 5 hours ago [-]
Godot launched 0.1 in February 2014 and got to 1.0 in December 2014.
The distance in time between the launches of Unreal Engine 4 and Unreal Engine 5 was 8 years (April 2014 to April 2022). Unreal Engine 5 development started in May 2020 and had an early access release in May 2021.
Bevy launched 0.1 in 2020 and is at 0.16 now in 2025. 5 years later and no 1.0 in sight.
If you want people to use your OSS projects (maybe you don't), you have to accept that perfect is the enemy of good.
At this point, regulators and legislators are trying to force people to use the Rust ecosystem - if you want a non-GC language that is "memory safe," it's pretty much the de facto choice. It is long past time for the ecosystem to grow up.
Ygg2 4 hours ago [-]
> Godot launched 0.1 in February 2014 and got to 1.0 in December 2014.
Yeah because that's when it was open sourced, NOT DEVELOPED.
> Godot has been an in-house engine for a long time and the priority of new features were always linked to what was needed for each game and the priorities of our clients.
I checked the history, and it was known under another name, Larvita.
> If you want people to use your OSS project
Seeing how currently I have about 0.1 parts of me working on it, no, I don't want to give people a false sense of security.
> At this point, regulators and legislators are trying to force people to use the Rust ecosystem
Not ecosystem. Language. Ecosystem is a plus.
Furthermore, the issue Bevy has is more that there aren't any good, mature GUI libraries for Rust, because cross-OS GUIs were, are, and will be a shit show.
Granted it's a shit show that can be directed with enough money.
BuyMyBitcoins 12 hours ago [-]
>”reverting to more conservative options”
From what I’ve heard about the Rust community, you may have made an unintentionally witty pun.
monkeyelite 13 hours ago [-]
It’s incredible how many projects and articles have been written around ECS, with very little to show for it.
Quake 1-3 use a single array of structs, with sometimes-unused properties. Is your game more complex than Quake 3?
The “ECS” upgrade to that is having an array for each component type but just letting there be gaps:
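A minimal sketch of that layout in Rust (names are illustrative; real ECSes layer generational indices and sparse/dense storage on top of this):

    // One Vec per component type, indexed by entity id; None marks a gap.
    struct World {
        positions: Vec<Option<[f32; 3]>>,
        velocities: Vec<Option<[f32; 3]>>,
    }

    impl World {
        // A "system" walks the component arrays in lockstep and skips the gaps.
        fn integrate(&mut self, dt: f32) {
            for (pos, vel) in self.positions.iter_mut().zip(&self.velocities) {
                if let (Some(p), Some(v)) = (pos.as_mut(), vel.as_ref()) {
                    for i in 0..3 {
                        p[i] += v[i] * dt;
                    }
                }
            }
        }
    }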
Hype as usual, too many people waste time on how to implement engines, instead of how to make a game fun to play.
cogman10 12 hours ago [-]
The important part of ECS (IMO) is more that it's a pattern that others recognize and less that it's necessarily the best pattern to use.
dist-epoch 12 hours ago [-]
Quake 1-3 were written for computers where memory was not much slower than the CPU, which is not the situation today.
But yeah, probably you don't need an ECS for 90% of the games.
fooker 12 hours ago [-]
Memory is sometimes faster today!
pornel 5 hours ago [-]
In absolute terms yes, but relative to the CPU speed memory is ridiculously slow.
Quake struggled with the number of objects even in its days. What you've got in the game was already close to the maximum it could handle. Explosions spawning giblets could make it slow down to a crawl, and hit limits of the client<>server protocol.
The hardware got faster, but users' expectations have increased too. Quake 1 updated the world state at 10 ticks per second.
monkeyelite 40 minutes ago [-]
> Quake struggled with the number of objects even in its days.
Because of the memory bandwidth of iterating over the entities? No way. Every other part - rendering, culling, network updates, etc. - is far worse.
Let's restate. In 1998 this got you 1024 entities at 60 FPS. The entire array could now fit in the L2 cache of a modern desktop.
And I already advised a simple change to improve memory layout.
> Quake 1 updated the world state at 10 ticks per second
That’s not a constraint in Quake 3 - which has the same architecture. So it’s not relevant.
> users' expectations have increased too
Your game is more complex than quake 3? In what regard?
eftychis 11 hours ago [-]
This comment might not be liked by the usual commenters in these threads, but I think it is worth stressing:
First: I have experience with Bevy and other game engine frameworks, including Unreal, and I consider myself a seasoned Rust, C, etc. developer.
I could sympathize with what was stated by the author.
I think the issue here is (mainly) Bevy. It is just not even close to the standard yet (if ever). It is hard for any generic game engine to compete with Unity/Godot, never mind the de facto standard of Unreal.
But if you are a C# developer already using Unity, and not a C++ developer in Unreal, going to Bevy, a bloated framework that is missing features, makes little sense. [And here is also the minor issue that, if you are a C# developer, honestly you don't care about low-level code, or about not having a garbage collector.]
Now, if you are a C++ developer and use Unreal, the only point in moving to Rust (which I would argue for the usual reasons) is if Unreal supports Rust. Otherwise, there is nothing that even compares to Unreal (that is not a custom-made game engine).
JeremyBarbosa 11 hours ago [-]
As someone who has used Bevy in the past, that was my reading as well. It is an incredible tool, but some of the things mentioned in the article like the gnarly function signature and constant migrations are known issues that stop a lot of people from using it. That's not even to mention the strict ECS requirement if your game doesn't work well around it. Here is a good reddit thread I remember reading about some more difficulties other people had with Bevy:
The way Bevy is written about in online discussions obfuscates this. Someone who is new to game development could be confused into thinking Bevy is a fair competitor to the other engines you mentioned, and could equate Bevy with Rust, or Bevy with Rust in game dev. I think stomping this out is critical to expectation management, and perhaps to Rust's future in game dev.
From my experience, one has to take Rust discussions with a grain of salt, because shortcomings and disclosures are often handwaved and/or omitted.
the__alchemist 9 hours ago [-]
I've learned to do the same. I see this in the embedded world as well.
And within rust, I've learned to look beyond the most popular and hyped tools; they are often not the best ones.
yyyk 3 hours ago [-]
>if you are a C# developer, honestly you don't care about low level code, or not having a garbage collector.
You can go low level in C#**, just like Rust can avoid the borrow checker. It's just not a good tradeoff for most code in most games.
** value types/unsafe/pointers/stackalloc etc.
neonsunset 2 hours ago [-]
Structs in C# or F# are not low-level per se; they simply are a choice, and one used frequently in gamedev. So is stackalloc, because using it is just 'var things = (stackalloc Thing[5])', where the type of `things` is Span<Thing>. The keyword is a bit niche, but it's very normal to see it in code that cares about avoiding allocations.
Note that going more hands-on with these is not the same as violating memory safety - C# even has ref and byreflike struct lifetime analysis specifically to ensure this is not an issue (https://em-tg.github.io/csborrow/).
yyyk 42 minutes ago [-]
Right, it depends on how far one wants to go to avoid allocations. structs and spans are safe. But one can go even deeper and pin pointers and do Unsafe.AsPointer and get a de-facto (unsafe) union out of it....
Imo the place for Rust in game dev isn't in games at all, but in base libraries and tools. Writing your proc generation library in Rust as an isolated package you can call in isolation, or similar, is where it's useful.
eftychis 11 hours ago [-]
I agree. [Unless fully adopted by a serious game engine, of course.]
Rust's "superpower" is substituting critical C++ code in-place, with the goal of ensuring correctness and soundness. And increasing the development velocity as a result.
nine_k 11 hours ago [-]
Sounds like "Migrating away from Bevy towards Unity"; the Rust to C# transition is mostly a technical consequence.
Bevy: unstable, constantly regressing, with weird APIs here and there, in flux, so LLMs can't handle it well.
Unity: rock-solid, stable, well-known, featureful, LLMs know it well. You ought to choose it if you want to build the game, not hack on the engine, be its internal language C#, Haskell, or PHP. The language is downstream from the need to ship.
Sleaker 12 hours ago [-]
Anyone else get an empty page on mobile Firefox when they try to go to the article? All that renders for me is a comment entry box. If I go back to news I can see the article list just fine.
firesteelrain 12 hours ago [-]
Works for me on mobile Chrome
fotta 12 hours ago [-]
Same on mobile safari
gbuk2013 13 hours ago [-]
Don’t see any content on that article for some reason (from iPhone)
maartenscholl 13 hours ago [-]
I experienced the same; I had to disable my adblocker to view it. The content seems to be inside a tag `<article class="social-sharing">`, but I am unsure whether this is what triggered my adblocker.
janice1999 13 hours ago [-]
Adblocking seems to cause issues with the site. Disabling uBlock Origin worked for me as did readability mode in Firefox.
dmitrygr 11 hours ago [-]
Honey, a new incantation to summon Cthulhu just dropped.
I've been around to see a lot of fashions in programming, which is most likely why D is a bit of a polyglot language :-/
I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
The language can nail that down for you (D does). What's left are memory allocation errors. Garbage collection fixes that.
At least until we get AI driven systems good enough to generate straight binaries.
Rust is to be celebrated for bringing affine types into the mainstream, but it doesn't need to be the only way; productivity and performance can coexist in the same language.
The way Ada, D, Swift, Chapel, Linear Haskell, and OCaml effects and modes are being improved already shows the way forward.
Then there is the whole space of formal verification and dependent-type languages, but that goes even beyond Rust in what most mainstream developers are willing to learn, and the development experience is still quite rough.
But in that case doesn't the garbage collector ruin the experience for the user? Because that's the argument I always hear in favor of Rust.
I've gone back and forth on this, myself.
I wrote a custom b-tree implementation in Rust for a project I've been working on. I use my own implementation because I need it to be an order-statistic tree, and I need internal run-length encoding. The original version of my b-tree works just like how you'd implement it in C. Each internal node / leaf is a raw allocation on the heap.
Because leaves need to point back up the tree, there's unsafe everywhere, and a lot of raw pointers. I ended up with separate Cursor and CursorMut structs which held different kinds of references to the tree itself. Trying to avoid duplicating code for those two cursor types added a lot of complex types and trait magic. The implementation works, and it's fast. But it's horrible to work with, and it never passed MIRI's strict checks. Also, Rust has really bad syntax for interacting with raw pointers.
Recently I rewrote the b-tree to simply use a vec of internal nodes and a vec of leaves. References became array indexes (integers). The resulting code is completely safe Rust. It's significantly simpler to read and work with - there's way less abstraction going on. I think it's about 40% less code. Benchmarks show it's about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is better cache locality.)
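A rough sketch of that index-based shape (not the commenter's actual code; names are hypothetical):

    // Nodes and leaves live in flat vectors; "pointers" are plain indices.
    // Parent links are just more indices, so no Rc cycles and no unsafe.
    struct Tree<T> {
        nodes: Vec<Node>,
        leaves: Vec<Leaf<T>>,
        root: usize, // index into nodes
    }

    struct Node {
        parent: Option<usize>, // index into nodes; None at the root
        children: Vec<usize>,  // indices into nodes or leaves
    }

    struct Leaf<T> {
        parent: usize, // index into nodes: the back-reference
        items: Vec<T>,
    }

    // A cursor is now just a pair of plain integers, trivially copyable.
    #[derive(Clone, Copy)]
    struct Cursor {
        leaf: usize,   // index into leaves
        offset: usize, // position within the leaf
    }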
I think this is indeed peak Rust.
It doesn't feel like it, but the array-index style still preserves many of Rust's memory safety guarantees, because all array lookups are bounds checked. What it doesn't protect you from is use-after-free bugs.
Interestingly, I think this style would also be significantly more performant in GC languages like javascript and C#, because a single array-of-objects is much simpler for the garbage collector to keep track of than a graph of nodes & leaves which all reference one another. Food for thought!
Doesn't this also require you to correctly and efficiently implement (equivalents of C's) malloc() and free()? IIUC your requirements are more constrained, in that malloc() will only ever be called with a single block size, meaning you could just maintain a stack of free indices -- though if tree nodes are comparable in size to integers this increases memory usage by a significant fraction.
(I just checked and Rust has unions, but they require unsafe. So, on pain of unsafe, you could implement a "traditional" freelist-based allocator that stores the index of the next free block in-place inside the node.)
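For what it's worth, that kind of free list can also be written in safe Rust by using an enum where C would use the in-place union; a sketch (illustrative, under the single-block-size assumption above):

    // Each slot either holds a live node or the index of the next free slot,
    // so the free list lives inside the vector itself - no unsafe union needed.
    enum Slot<T> {
        Occupied(T),
        Free { next_free: Option<usize> },
    }

    struct Arena<T> {
        slots: Vec<Slot<T>>,
        free_head: Option<usize>,
    }

    impl<T> Arena<T> {
        fn alloc(&mut self, value: T) -> usize {
            if let Some(i) = self.free_head {
                // Reuse the slot at the head of the free list.
                let old = std::mem::replace(&mut self.slots[i], Slot::Occupied(value));
                if let Slot::Free { next_free } = old {
                    self.free_head = next_free;
                }
                i
            } else {
                self.slots.push(Slot::Occupied(value));
                self.slots.len() - 1
            }
        }

        fn free(&mut self, i: usize) {
            self.slots[i] = Slot::Free { next_free: self.free_head };
            self.free_head = Some(i);
        }
    }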
Static analysis could check for those potential panics at compile time. If that were implemented, the run-time check, and the potential for a panic, would go away. It's not hard to check, provided that all borrows have limited scope. You just have to determine, conservatively, that no two borrow scopes for the same thing overlap.
If you had that check, it would be possible to have something that behaves like RefCell, but is checked entirely at compile time. Then you know you're free of potential double-borrow panics.
I started a discussion on this on a Rust forum. A problem is that you have to perform that check after template expansion, and the Rust compiler is not set up to do global analysis after template expansion. This idea needs further development.
This check belongs to the same set of checks which prevent deadlocking a mutex against itself. There's been some work on Rust static deadlock analysis, but it's still a research topic.
However, I'm not sure what the implications are around mutability. I use a Cursor struct which stores a reference to a specific leaf node in the tree. Cursors can walk forward in the tree (cursor.next_entry()). The tree can also be modified at the cursor location (cursor.insert(item)). Modifying the tree via the cursor also updates some metadata all the way up from the leaf to the root.
If the cursor stored a Rc<Leaf> or Weak<Leaf>, I couldn't mutate the leaf item because rc.get_mut() returns None if there are other strong or weak pointers pointing to the node. (And that will always be the case!). Maybe I could use a Rc<Cell<Leaf>>? But then my pointers down the tree would need the same, and pointers up would be Weak<Cell<Leaf>> I guess? I have a headache just thinking about it.
Using Rc + Weak would mean less unsafe code, worse performance and code thats even harder to read and reason about. I don't have an intuitive sense of what the performance hit would be. And it might not be possible to implement this at all, because of mutability rules.
Switching to an array improved performance, removed all unsafe code and reduced complexity across the board. Cursors got significantly simpler - because they just store an array index. (And inserting becomes cursor.insert(item, &mut tree) - which is simple and easy to reason about.)
I really think the Vec<Node> / Vec<Leaf> approach is the best choice here. If I were writing this again, this is how I'd approach it from the start.
The difference is that I'm writing a metaverse client, not a game. A metaverse client is a rare beast, about halfway between an MMO client and a web browser. It has to do most of the graphical things a 3D MMO client does. But it gets all its assets and gameplay instructions from a server.
From a dev perspective, this means you're not making changes to gameplay by recompiling the client. You make changes to objects in the live world while you're connected to the server. So client compile times (I'm currently at about 1 minute 20 seconds for a recompile in release mode) aren't a big issue.
Most of the level and content building machinery of Bevy or Unity or Unreal Engine is thus irrelevant. The important parts needed for performance are down at the graphics level. Those all exist for Rust, but they're all at the My First Renderer level. They don't utilize the concurrency of Vulkan or multiple CPUs. When you get to a non-trivial world, you need that. Tiny Glade is nice, but it works because it's tiny.
What does matter is high performance and reliability while content is coming in at a high rate and changing. Anything can change at any time, but usually doesn't. So cache type optimizations are important, as is multithreading to handle the content flood. Content is constantly coming in, being displayed, and then discarded as the user moves around the big world. All that dynamism requires more complex data structures than a game that loads everything at startup.
Rust's "fearless multiprogramming" is a huge win for performance. I have about 20 threads running, and many are doing quite different things. That would be a horror to debug in C++. In Rust, it's not hard.
(There's a school of thought that says that fast, general purpose renderers are impossible. Each game should have its own renderer. Or you go all the way to a full game engine and integrate gameplay control and the scene graph with the renderer. Once the scene graph gets big enough that (lights x objects) becomes too large to do by brute force, the renderer level needs to cull based on position and size, which means at least a minimal scene graph with a spatial data structure. So now there's an abstraction layering problem - the rendering level needs to see the scene graph. No one in Rust land has solved this problem efficiently. Thus, none of the four available low-level renderers scale well.
I don't think it's impossible, just moderately difficult. I'm currently looking at how to do this efficiently, with some combination of lambdas which access the scene graph passed into the renderer, and caches. I really wish someone else had solved this generic problem, though. I'm a user of renderers, not a rendering expert.)
Meta blew $40 billion dollars on this problem and produced a dud virtual world, but some nice headsets. Improbable blew upwards of $400 million and produced a limited, expensive to run system. Metaverses are hard, but not that hard. If you blow some of the basic architectural decisions, though, you never recover.
These are all solvable problems, but in reality, it's very hard to write a good business case for being the one to solve them. Most of the cost accrues to you and most of the benefit to the commons. Unless a corporate actor decides to write a major new engine in Rust or use Bevy as the base for the same, or unless a whole lot of indie devs and part-time hackers arduously work all this out, it's not worth the trouble if you're approaching it from the perspective of a studio with severe limitations on both funding and time.
The only challenge is not having an ecosystem with ready made everything like you do in "batteries included" frameworks. You are basically building a game engine and a game at the same time.
We need a commercial engine in Rust or a decade of OSS work. But what features will be considered standard in Unreal Engine 2035?
I see this and I am reminded of when I had to fight 0-indexing while cutting my teeth in C, for class.
I wonder why no one complains about 0-indexing anymore. Isn't it weird how you have to go from 0 to length - 1, and implement algorithms differently than in a math book?
Most languages have abstractions for iterating over an array so that you don’t need to use 0 or length-1 these days
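In Rust terms, for instance (one language among many with these helpers):

    fn main() {
        let v = vec![10, 20, 30];
        // No explicit 0 or v.len() - 1 anywhere:
        for x in &v {
            println!("{x}");
        }
        for (i, x) in v.iter().enumerate() {
            println!("{i}: {x}"); // an index only when you actually need one
        }
        // First and last without index arithmetic:
        println!("{:?} {:?}", v.first(), v.last()); // Some(10) Some(30)
    }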
Am I the only person that remembers how hard it was to wrap your head around numbers starting at 0, rather than 1?
So the reason why you don't see many people fighting 0-indexing is because they actually prefer it.
I started out with BASIC and Fortran, which use 1 based indices. Going to C was a small bump in the road getting used to that, and then it's Fortran which is the oddball.
Maybe you had the luck of learning a 0-based language first. Then most of them were a smooth ride.
My point is that you forgot how hard it is because it's now muscle memory (if you need a recap of the difficulty, learn a language with arbitrary array indexing and set your first array index to something exciting like 5 or -6). It also means that if you are "fighting the borrow checker" you are still at the pre-"muscle memory" stage of learning Rust.
To say it's the same as using array indices is just not true.
Bevy entity IDs are opaque and you have to try really hard to do arithmetic on them. You can technically do math on instance IDs in Unity too; you might say "well, nobody does that", which is my point exactly.
> You don't need to deal with the game life cycle AND the memory life cycle with a GC.
I don't know what this means. The memory for a `GameObject` is freed once you call `Destroy`, which is also how you despawn an object. That's managing the memory lifecycle.
> In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.
Is there a use for storing data on a dead `GameObject`? I've never had any reason to do so. In any case, if you really wanted to do that in Bevy you could always use an `EntityHashMap`.
I am dealing with similar issues in npm now, as someone who is touching Node dev again. The number of deprecations drives me nuts. Seems like I’m on a treadmill of updating APIs just to have the same functionality as before.
It’s not always possible to be so minimal, but I view every dependency as lugging around a huge lurking liability, so the benefit it brings had better far outweigh that big liability.
So far, I’ve only had one painful dependency upgrade in 5 years, and that was Tailwind 3-4. It wasn’t too painful, but it was painful enough to make me glad it’s not a regular occurrence.
The constant update cycles of some libraries (hello Router) is problematic in itself, but there's too many fashionable things that sound very good in theory but end up being a huge problem when used in fast-moving projects, like headless UI libraries.
On that subject, ironically, codegen by AI for AI-related work is often the least reliable due to fast churn. Langchain is a good example of this, and also kind of funny: they suggest/integrate gritql for deterministic code transforms rather than using AI directly: https://python.langchain.com/docs/versions/v0_3/.
Overall, mastering things like gritql, ast-grep, and CST tools for code transforms still pays off. For large code bases, no matter how good AI gets, it is probably better to have it use formal/deterministic tools like these than to trust it with code transformations more directly and just hope for the best.
eg: https://docs.openrewrite.org/recipes/java/migrate/joda/jodat...
I occasionally notice libraries or frameworks including OpenRewrite rules in their releases. I've never tried it, though!
I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.
* Simply check all array accesses and pointer dereferences, and panic/throw an exception/etc. if we are doing something wrong.
* Guarantee at compile-time that we are always accessing valid memory, to prevent even those panics.
Rust makes a lot of effort to reach the second goal, but, since it gives you integers and arrays, it makes the problem fundamentally insoluble.
The memory it wants so hard to regulate access to is just an array, and a pointer is just an index.
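A contrived Rust illustration of that point: every access is bounds checked and the program is 100% safe code, yet a stale index quietly reads the wrong data, which is the use-after-free pattern reborn as indices.

    fn main() {
        let mut users = vec!["alice", "bob"];
        let bob = 1; // an "index as pointer" to bob

        users.remove(0); // the array is reorganized...
        users.push("mallory");

        // No panic, no unsafe, every access in bounds - and still wrong:
        println!("{}", users[bob]); // prints "mallory", not "bob"
    }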
I think you should think less like Java/C# and more like a database.
If you have a Comment object that has a parent object, you need to store the parent as a 'reference', because you can't embed the entire parent.
So I'll probably use Box here to refer to the parent.
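Taking the database analogy literally, the 'reference' can also just be an id acting as a foreign key, with the comments stored in one table-like map; a hypothetical sketch:

    use std::collections::HashMap;

    // Rows keyed by id; the parent is a foreign key, not an owned value.
    struct Comment {
        id: u64,
        parent: Option<u64>, // None for top-level comments
        body: String,
    }

    struct Thread {
        comments: HashMap<u64, Comment>,
    }

    impl Thread {
        // Following the "foreign key" is one extra lookup.
        fn parent_of(&self, id: u64) -> Option<&Comment> {
            self.comments.get(&id)?.parent.and_then(|p| self.comments.get(&p))
        }
    }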
> Migration - Bevy is young and changes quickly.
We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice. And the issues we had to deal with were runtime failures, not build time failures. It broke the large libraries we were using, like space_editor, until point releases and bug fixes could land. We ultimately decided to migrate to Three.js.
> The team decided to invest in an experiment. I would pick three core features and see how difficult they would be to implement in Unity.
This is exactly what we did! We feared a total migration, but we decided to see if we could implement the features in Javascript within three weeks. Turns out Three.js got us significantly farther than Bevy, much more rapidly.
I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.
This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.
[1]: https://bevyengine.org/news/bevy-0-16/
...although the fact that a 3x speed improvement was available kind of proves their point, even if it may be slightly out of date.
That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
> That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
But using Bevy isn't writing your own game engine. Bevy is 400k lines of code that does quite a lot. Using Bevy right now is more like taking a game engine and filling in some missing bits. While this is significantly more effort than using Unity, it's an order of magnitude less work than writing your own game engine from scratch.
If you just want to make a game, yes, absolutely just go for Unity, for the same reason why if you just want to ship a CRUD app you should just use an established batteries-included web framework. But indie game developers come in all shapes and some of them don't just want to make a game, some of them actually do enjoy owning every part of the stack. People write their own OSes for fun, is it so hard to believe that people (who aren't you) might enjoy the process of building a game engine?
But part of the thing that attracted me to the game I'm making is that it would be hard to make in a standard cookie-cutter way. The novelty of the systems involved is part of the appeal, both to me and (ideally) to my customers. If/when I get some of those (:
Generally, I've seen the exact opposite. People who code their own engines tend to get sucked into the engine and forget that they're supposed to be shipping a game. (I say this as someone who has coded their own engine, multiple times, and ended up not shipping a game--though I had a lot of fun working on the engine.)
The problem is that the fun, cool parts about building your own game engine are vastly outnumbered by the boring parts: supporting level and save data loading/storage, content pipelines, supporting multiple input devices and things like someone plugging in an XBox controller while the game is running and switching all the input symbols to the new input device in real time, supporting various display resolutions and supporting people plugging in new displays while the game is running, and writing something that works on PC/mobile/Switch(2)/XBox/Playstation... all solved problems, none of which are particularly intellectually stimulating to solve correctly.
If someone's finances depend on shipping a game that makes money, there's really no question that you should use Unity or Unreal. Maybe Godot but even that's a stretch. There's a small handful of indie custom game engine success stories, including some of my favorites like The Witness and Axiom Verge, but those are exceptions rather than the rule. And Axiom Verge notably had to be deeply reworked to get a Switch release, because it's built on MonoGame.
The Venn diagram between the people interested in technical aspects of an engine and in also shipping a game is probably composed of a few hundred individuals, most of them working for studios.
The "kid that wants to make an engine to make an MMO" is gonna do neither.
Shipping a playable game involves so so many things beyond enjoyable programming bits that it's an entirely different challenge.
I think it's telling that there are more Rust game engines than games written in Rust.
Typically the “itch is scratched” long before the application is done.
What really drags you down in games is iteration speed. It can be fun making your own game engine at first but after awhile you just want the damn thing to work so you can try out new ideas.
But for the vast majority of projects, I believe that C++ is not the right language, meaning that Rust isn't, either.
I feel like many people choose Rust because it sounds more efficient, a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not) or for C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project).
It's a bit like choosing Gentoo "because it's faster" (or worse, because it "sounds cool"). If that's the only reason, it's probably a bad choice (disclaimer: I use and love Gentoo).
The result was not statistically different in performance from my Java implementation. Each took the same amount of time to complete. This surprised me, so I made triply sure that I was using the right optimization settings.
Lesson learned: Java is easy to get started with out of the box, memory safe, battle tested, and the powerful JIT means that if warmup times are a negligible factor in your usage patterns your Java code can later be optimized to be equivalent in performance to a Rust implementation.
Java has the stigma of ClassFactoryGeneratorFactory sticking to it like a nasty smell but that's not how the language makes you write things. I write Java professionally and it is as readable as any other language. You can write clean, straightforward and easy to reason code without much friction. It's a great general purpose language.
Unfortunately it's not a good gaming language. GC pauses aren't really acceptable (which C# also suffers from) and GPU support is limited.
Miguel de Icaza probably has more experience than anyone building game engines on GC platforms and is very vocally moving toward reference counted languages [1]
[1] https://www.youtube.com/watch?v=tzt36EGKEZo
Java has made great progress with low-pause (~1 ms) garbage collectors like ZGC and Shenandoah since ~5 years ago.
You're no longer writing idiomatic Java at this point - probably with zero object-oriented programming - so you might as well write it in Rust from the get-go.
I don't understand this argument, which I've also seen used against C#, quite frequently. When a language offers new features, you're not forced to use them. You generally don't even need to learn them if you don't want to. I do think some restrictions in languages can be highly beneficial, like strong typing, but the difference is that in a weakly typed language that 'feature' is forced upon you, whereas a random new feature in C++ or C# is almost always backwards compatible and opt-in only.
For instance, to take a dated example, consider move semantics in C++. If you never used it anywhere at all, you'd have 0 problems. But once you do, you get lots of neat things for free. And for this sort of feature, I see no reason to ever oppose their endless introduction unless it starts to imperil the integrity/performance of the compiler, which is clearly not happening.
The OP is doing game development. It’s possible to write a performant game in Java but you end up fighting the garbage collector the whole way and can’t use much library code because it’s just not written for predictable performance.
This said, they moved to Unity, which is C#, which is garbage collected, right?
For all my personal projects, I use a mix of Haskell and Rust, which I find covers 99% of the product domains I work in.
Ultra-low level (FPGA gateware): Haskell. The Clash compiler backend lets you compile (non-recursive) Haskell code directly to FPGA. I use this for audio codecs, IO expanders, and other gateware stuff.
Very low-level (MMUless microcontroller hard-realtime) to medium-level (graphics code, audio code): Rust dominates here
High-level (have an MMU, OS, and desktop levels of RAM, not sensitive to ~0.1ms GC pauses): Haskell becomes a lot easier to productively crank out "business logic" without worrying about memory management. If you need to specify high-level logic, implement a web server, etc. it's more productive than Rust for that type of thing.
Both languages have a lot of conceptual overlap (ADTs, constrained parametric types, etc.), so being familiar with one provides some degree of cross-training for the other.
Another question is about Clash. Your description sounds like the HLS (high-level synthesis) approach, but I thought that Clash used a Haskell-based DSL, making it a true HDL. Could you clarify this? Thanks!
If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.
What is your basis for this claim? C and C++ are both built on essentially the same memory and execution model. There is a significant set of programs that are valid C and C++ both -- surely you're not suggesting that merely compiling them as C++ will make them faster?
There's basically no performance technique available in C++ that is not also available in C. I don't think it's meaningful to call one faster than the other.
Yes, you can write most things in modern C++ in roughly equivalent C with enough code, complexity, and effort. However, the disparate economics are so lopsided that almost no one ever writes the equivalent C in complex systems. At some point, the development cost is too high due to the limitations of the expressiveness and abstractions. Everyone has a finite budget.
I’ve written the same kinds of systems I write now in both C and modern C++. The C versions require several times the code of the C++ equivalents, are less safe, and are more difficult to maintain. I like C and wrote it for a long time, but the demands of modern systems software are beyond what it can efficiently express. Trying to make it work requires cutting a lot of corners in the implementation in practice. It is still suited to classically simple systems software, though I really like what Zig is doing in that space.
I used to have a lot of nostalgia for working in C99 but C++ improved so rapidly that around C++17 I kind of lost interest in it.
You can argue that C takes more effort to write, but if you write equivalent programs in both (ie. that use comparable data structures and algorithms) they are going to have comparable performance.
In practice, many best-in-class projects are written in C (Lua, LuaJIT, SQLite, LMDB). To be fair, most of these projects inhabit a design space where it's worth spending years or decades refining the implementation, but the combination of performance and code size you can get from these C projects is something that few C++ projects I have seen can match.
For code size in particular, the use of templates makes typical C++ code many times larger than equivalent C. While a careful C++ programmer could avoid this (ie. by making templated types fall back to type-generic algorithms to save on code size), few programmers actually do this, and in practice you end up with N copies of std::vector, std::map, etc. in your program (even the slow fallback paths that get little benefit from type specialization).
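Rust monomorphization has the same failure mode, for what it's worth, and the usual mitigation is exactly the fallback described above: a tiny generic shim over one shared non-generic body. A sketch:

    use std::path::Path;

    // The generic shim is duplicated per caller type P, but it is tiny;
    // the real work compiles exactly once.
    pub fn load<P: AsRef<Path>>(path: P) -> std::io::Result<Vec<u8>> {
        load_inner(path.as_ref())
    }

    fn load_inner(path: &Path) -> std::io::Result<Vec<u8>> {
        // One copy of this body in the binary, however many P's exist.
        std::fs::read(path)
    }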
Great question! Here's one answer:
Having written a great deal of C code, I made a discovery about it. The first algorithm and data structure selected for a C program stays there. It survives all the optimizations, refactorings, and improvements. But everyone knows that finding a better algorithm and data structure is where the big wins are.
Why doesn't that happen with C code?
C code is not plastic. It is brittle. It does not bend, it breaks.
This is because C is a low level language that lacks higher level constructs and metaprogramming. (Yes, you can metaprogram with the C preprocessor, a technique right out of hell.) The implementation details of the algorithm and data structure are distributed throughout the code, and restructuring that is just too hard. So it doesn't happen.
A simple example:
Change a value to a pointer to a value. Now you have to go through your entire program changing dots to arrows, and sprinkle stars everywhere. Ick.
Or let's change a linked list to an array. Aarrgghh again.
Higher level features, like what C++ and D have, make this sort of thing vastly simpler. (D does it better than C++, as a dot serves both value and pointer uses.) And so algorithms and data structures can be quickly modified and tried out, resulting in faster code. A traversal of an array can be changed to a traversal of a linked list, a hash table, a binary tree, all without changing the traversal code at all.
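Rust sits on the same side of this divide: a traversal written against an iterator interface doesn't care which container is behind it. A minimal sketch:

    use std::collections::{BTreeSet, LinkedList};

    // One traversal, written once, works for any container that can be
    // iterated; swap the data structure without touching this code.
    fn total(values: impl IntoIterator<Item = u64>) -> u64 {
        values.into_iter().sum()
    }

    fn main() {
        let v: Vec<u64> = vec![1, 2, 3];
        let l: LinkedList<u64> = [1, 2, 3].into_iter().collect();
        let t: BTreeSet<u64> = [1, 2, 3].into_iter().collect();
        assert_eq!(total(v), 6);
        assert_eq!(total(l), 6);
        assert_eq!(total(t), 6);
    }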
When people claim C++ is faster than C, that is usually understood to mean that C++ provides tools that make writing fast code easier than C does, not that the fastest possible C++ implementation beats the fastest possible C implementation. The latter is trivially false, since in both cases the fastest possible implementation is the same unmaintainable soup of inline assembly.
The typical example used to claim C++ is faster than C is sorting: C, lacking templates and overloading, needs `qsort` to work with void pointers and a pointer to function, which is very hard on the optimiser, while C++'s `std::sort` sees the actual types it works on and can inline the comparator directly, making the optimiser's job easier.
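The same contrast can be reproduced within Rust itself: a non-capturing closure handed to `sort_by` is monomorphized (and thus inlinable), while routing it through a function pointer puts you back in qsort territory. A sketch:

    use std::cmp::Ordering;

    // qsort-style: the comparator is an opaque function pointer, so the
    // optimizer has to work much harder to inline it at the call site.
    fn sort_with_fn_ptr(data: &mut [i32], cmp: fn(&i32, &i32) -> Ordering) {
        data.sort_by(cmp);
    }

    fn main() {
        let mut a = [3, 1, 2];
        // std::sort-style: sort_by is specialized for this closure's
        // unique type, so the comparison code can be inlined directly.
        a.sort_by(|x, y| x.cmp(y));

        let mut b = [3, 1, 2];
        sort_with_fn_ptr(&mut b, |x, y| x.cmp(y));
        assert_eq!((a, b), ([1, 2, 3], [1, 2, 3]));
    }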
That said, external comparators are a weakness of generic C library functions. I once manually inlined them in some performance critical code using the C preprocessor:
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
One of the strengths of C++ is that it is well-suited to compile-time codegen of hyper-optimized data structures. In fact, that is one of the features that makes it much better than C for performance engineering work.
You have other sources of slow downs in C++, since the abstractions have a tendency to hide bloat, such as excessive dynamic memory usage, use of exceptions and code just outright compiling inefficiently compared to similar code in C. Too much inlining can also be a problem, since it puts pressure on CPU instruction caches.
Abstractions can hide bloat for sure, but the lack of abstraction can also push coders towards suboptimal solutions. For example, C code tends to use linked lists just because they're easy to implement, when a dynamic array such as std::vector would have been more performant.
Too much inlining can of course be a problem; the optimizer has loads of heuristics to decide whether inlining is worth it, and the programmer can always mark the function as `[[gnu::noinline]]` if necessary. Just because C++ makes it possible for the sort comparator to be inlined doesn't mean it will be.
In my experience, exceptions have a slightly positive impact on codegen (compared to code that actually checks error return values, not code that ignores them) because there is no error checking on the happy path at all. The sad path is greatly slowed down though.
Having worked in highly performance sensitive code all of my career (video game engines and trading software), I would miss a lot of my toolbox if I limited myself to plain C and would expect to need much more effort to achieve the same result.
For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company. They aren’t compatible with some idiomatic high-performance software architectures, so it would be weird to even turn them on. C++ lets you strip all of that bloat at compile time and provides tools that make it easy in a way C could only dream of; it is a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.
C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C. It isn’t like C++ can’t use C code.
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
The performance gain comes not from eliminating the function overhead, but enabling conditional move instructions to be used in the comparator, which eliminates a pipeline hazard on each loop iteration. There is some gain from eliminating the function overhead, but it is tiny in comparison to eliminating the pipeline hazard.
That said, C++ has its weaknesses too, particularly in its typical data structures, its excessive use of dynamic memory allocation and its exception handling. I gave an example here:
https://news.ycombinator.com/item?id=43827857
Honestly, I think these weaknesses are more severe than qsort being unable to inline the comparator.
This does not work if the compiler cannot look into the function, but the same is true in C++.
Edit: It sort of works for the bsearch() standard library function:
https://godbolt.org/z/3vEYrscof
However, it optimized the binary search into a linear search. I wanted to see it implement a binary search, so I tried with a bigger array:
https://godbolt.org/z/rjbev3xGM
Now it calls bsearch instead of inlining the comparator.
That's not the most general case, but it's better than I expected.
That said, this brings me to my original reason for checking this, which is to say that it did not use a cmov instruction to eliminate unnecessary branching from the loop, so it is probably slower than a binary search that does:
https://en.algorithmica.org/hpc/data-structures/binary-searc...
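For reference, the branchless shape that page describes looks roughly like this (a sketch in Rust; whether the compiler actually lowers the index update to a cmov depends on the target and optimization settings):

    // Branchless lower bound: the comparison feeds an arithmetic index
    // update instead of a taken/not-taken branch, which the compiler
    // can lower to a conditional move, avoiding the pipeline hazard.
    // Returns the first position whose element is >= key (clamped to
    // the last index when key is larger than everything).
    fn lower_bound(a: &[u32], key: u32) -> usize {
        let mut base = 0usize;
        let mut len = a.len();
        while len > 1 {
            let half = len / 2;
            base += usize::from(a[base + half - 1] < key) * half;
            len -= half;
        }
        base
    }

    fn main() {
        let a = [1u32, 3, 5, 7, 9];
        assert_eq!(lower_bound(&a, 5), 2);
        assert_eq!(lower_bound(&a, 6), 3);
    }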
That had been the entire motivation behind this commit to OpenZFS:
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
It should be possible to adapt this to benchmark both the inlined bsearch() against an implementation designed to encourage the compiler to emit a conditional move to skip a branch to see which is faster:
https://github.com/scandum/binary_search
My guess is the cmov version will win. I assume this merits a bug report, although I suspect improving it is a low priority, much like my last report in this area:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110001
In certain cases, sure - inlining potential is far greater in C++ than in C.
For idiomatic C++ code that doesn't do any special inlining, probably not.
IOW, you can rework fairly readable C++ code to be much faster by making an unreadable mess of it. You can do that for any language (C included).
But what we are usually talking about when comparing runtime performance in production code is the idiomatic code, because that's how we wrote it. We didn't write our code to resemble the programs from the language benchmark game.
While I do not doubt some C++ code uses intrusive data structures, I doubt very much of it does. Meanwhile, C code using <sys/queue.h> uses intrusive lists as if they were second nature. C code using <sys/tree.h> from libbsd uses intrusive trees as if they were second nature. There is also the intrusive AVL trees from libuutil on systems that use ZFS and there are plenty of other options for such trees, as they are the default way of doing things in C. In any case, you see these intrusive data structures used all over C code and every time one is used, it is a performance win over the idiomatic C++ way of doing things, since it skips an allocation that C++ would otherwise do.
The use of intrusive data structures also can speed up operations on data structures in ways that are simply not possible with idiomatic C++. If you place the node and key in the same cache line, you can get two memory fetches for the price of one when sorting and searching. You might even see decent performance even if they are not in the same cache line, since the hardware prefetcher can predict the second memory access when the key and node are in the same object, while the extra memory access to access a key in a C++ STL data structure is unpredictable because it goes to an entirely different place in memory.
You could say that if you have the C++ STL allocate the objects, you can avoid this, but you can only do that for one data structure. If you want the object to be in multiple data structures (which is extremely common in C code that I have seen), you are back to inefficient search/traversal. Your object's lifetime also becomes tied to that data structure, so you must be certain in advance that you will never want to use it outside of that data structure, or else you must do, at a minimum, another memory allocation and some copies that are completely unnecessary in C.
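To make the layout argument concrete, here is a rough sketch of the two shapes being compared (illustrative struct names; the non-intrusive variant models the common C++ pattern of a container holding pointers to objects that must live in several containers at once):

    // Intrusive node, <sys/tree.h> style: the links live inside the
    // object, right next to the key, so comparing the key during a
    // search and following a link touch the same cache line.
    struct Order {
        left: *mut Order,
        right: *mut Order,
        key: u64,
        // ... payload ...
    }

    // Non-intrusive: the container owns a separately allocated node
    // that points at the object, so reaching the key during a search
    // costs a second, unpredictable fetch into an unrelated allocation.
    struct MapNode {
        left: *mut MapNode,
        right: *mut MapNode,
        value: *mut Order, // the key lives behind another pointer
    }

    fn main() {
        // The intrusive version also needs no extra node allocation.
        println!("Order: {} bytes", std::mem::size_of::<Order>());
        println!("MapNode: {} bytes", std::mem::size_of::<MapNode>());
    }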
Exception handling in C++ also can silently kill performance if you have many exceptions thrown and the code handles it without saying a thing. By not having exception handling, C code avoids this pitfall.
You are arguing against what the language was 30-40 years ago. The language has undergone two pretty fundamental revisions since then.
Citation needed.
That's interesting, did ChatGPT tell you this?
In my experience, most people who don't want a JVM language "because it is slow" tend to take this as a principle, and when you ask why their first answer is "because it's interpreted". I would say they are stuck in the 90s, but probably they just don't know and repeat something they have heard.
Similar to someone who would say "I use Gentoo because Ubuntu sucks: it is super slow". I have many reasons to like Gentoo better than Ubuntu as my main distro, but speed isn't one in almost all cases.
It also struggles with numeric work involving large matrices, because there isn't good support for that built into the language or standard library, and there isn't a well-developed library like NumPy to reach for.
They're far slower in Python but that hasn't stopped anyone.
I was a Gentoo user (daily driver) for around 15 years but the endless compilation cycles finally got to me. It is such a shame because as I started to depart, Gentoo really got its arse in gear with things like user patching etc and no doubt is even better.
It has literally (lol) just occurred to me that some sort of dual partition thing could sort out my main issue with Gentoo.
@system could have two partitions - the running one and the next one that is compiled for and then switched over to on a reboot. @world probably ought to be split up into bits that can survive their libs being overwritten with new ones and those that can't.
Errrm, sorry, I seem to have subverted this thread.
But if you want to do a difficult and complicated thing, then Rust is going to raise the guard rails. Your program won't even compile if it's memory-unsafe. It won't let you build on a buggy foundation. So now you need to back up and decide if you want it to be easy, or you want it to be correct.
Yes, Rust is hard. But it doesn't have to be if you don't want it to be.
This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.
I love Rust. But saying it’s only hard if you are doing hard things is an oversimplification.
Most languages used with DBs are just as safe. This propaganda about Rust being more safe than languages with GC needs a rather big [Citation Needed] by the fans.
Querying a database while ensuring type safety is harder, but you still don't need an ORM for that. See sqlx.
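For anyone who hasn't seen it, sqlx checks your SQL against the actual schema at compile time, no ORM involved. A sketch, assuming a Postgres pool and a `users` table with a non-null `name` column (the `query!` macro needs DATABASE_URL available at build time):

    use sqlx::PgPool;

    async fn user_name(pool: &PgPool, id: i64) -> Result<String, sqlx::Error> {
        // query! verifies the SQL and the result's column types against
        // the live schema when the crate is compiled.
        let row = sqlx::query!("SELECT name FROM users WHERE id = $1", id)
            .fetch_one(pool)
            .await?;
        Ok(row.name)
    }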
I know that the compiler complains a lot. But I code with the help of realtime feedback from tools like the language server (rust-analyzer) and bacon. It feels like 'debug as you code'. And I really love the hand holding it does.
The whole point of Rust is to bring memory safety with zero cost abstraction. It's essentially bringing memory safety to the use-cases that require C/C++. If you don't require that, then a whole world of modern languages becomes available :-).
Plus in the meantime, even if I'm doing the "easy mode" approach, I get to use all of the features I enjoy about writing in Rust - generics, macros, sum types, pattern matching, Result/Option types. Many of these can't be found all together in a single managed/GC'd language, and the list of those that I would consider viable for my personal or professional use is quite sparse.
People are saying Rust is harsh; I would say it's not that much harder than other languages, just more verbose and demanding.
Writing web service backends is one domain where Rust absolutely kicks ass. I would choose Rust/(Actix or Axum) over Go or Flask any day. The database story is a little rough around the edges, but it's getting better and SQLx is good enough for me.
edit: The downvoters are missing out.
I am absolutely convinced I can find success story of web backends built with all those languages.
The third is the interesting one. When your service has a lot of traffic and every bit of inefficiency costs you money (node rents) and energy. Rust is an obvious improvement over the interpreted languages. There are also a few rare cases where Rust has enough advantages over Go to choose the former. In general though, I feel that a lot of energy consumption and emissions can be avoided by choosing an appropriate language like Rust and Go.
This would be a strong argument in favor of these languages in the current environmental conditions, if it weren't for 'AI'. Whether it be to train them or run them, they guzzle energy even for problems that could be solved with a search engine. I agree that LLMs can do much more. But I don't think they do enough for the energy they consume.
The type system and package manager are a delight, and writing with sum types results in code that is measurably more defect free than languages with nulls.
Other than the great developer experience in tooling and language ergonomics (as in coherent features not necessarily ease of use) the reason I continue to put up with the difficulties of Rust's borrow checker is because I feel I can work towards mastering one language and then write code across multiple domains AND at the end I'll have an easy way to share it, no Docker and friends needed.
But I don't shy away from the downsides. Rust loads the cognitive burden at the ends. Hard as hell in the beginning when learning it, and most people (me included) bounce off it the first few times unless they have C++ experience (from what I can tell). In the middle it's a joy, even when writing "throwaway" code with .expect("Lol oops!") and friends. But when you get to the complex stuff it becomes incredibly hard again, because Rust forces you to either rethink your design to fit the borrow checker's rules or deal with unsafe code blocks, which seem to have their own flavor of C++-like eldritch horrors.
Anyway, would *I* recommend Rust to everyone? Nah, Go is a better proposition as the most bang-for-your-buck language, tooling, and ecosystem, UNLESS you're the kind that likes to deal with complexity for the fulfilled promise of one language for almost anything. In even simpler terms: Go is good for most things; Rust can be used for everything.
Also stuff like Maud and Minijinja for Rust are delights on the backend when making old fashioned MPA.
Thanks for coming to my TED talk.
For me it's a question of whether I can get away with garbage collection. If I can then pretty much everything else is going to be twice as productive but if I can't then the options are quite limited and Rust is a good choice.
When things get complex, you start missing Rust's type system and bugs creep in.
In Node.js there was a notable improvement when TS became the de-facto standard and API development improved significantly (if you ignore the poor tooling: transpiling, building, TS being too slow). It's still far from perfect, because TS has too many escape hatches and you can't fully trust TS code; with Rust, if it compiles and there is no unsafe (rarely a problem in web services), you get a lot of compile-time guarantees for free.
Rust is an absolute gem at web backend. An absolute fucking gem.
Rust gamedev is the Wild West, and frontier development incurs the frontier tax. You have to put a lot of work into making an abstraction, even before you know if it’s the right fit.
Other “platforms” have the benefit of decades more work sunk into finding and maintaining the right abstractions. Add to that the fact that Rust is an ML in sheep’s clothing, and that games and UI in FP have never been solved problems (or had much investment), and it’s no wonder Rust isn’t ready. We haven’t even agreed on the best solutions to many of these problems in FP, let alone Rust specifically!
Anyway, long story short, it takes a very special person to work on that frontier, and shipping isn’t their main concern.
Conversely and ironically, this is why I love Go. The language itself is so boring and often ugly, but it just gets out of the way and has the best in class tooling. The worst part is having seen the promised land of eg Rust enums, and not having them in other langs.
Feeling passionate about a programming language is generally bad for the products made with that language.
I also don’t want to use a language with questionable hireability.
Cross compilation, package manager and associated infrastructure, async io (epoll, io_uring etc), platform support, runtime requirements, FFI support, language server, etc.
Are a majority of these things available with first party (or best in class) integrated tooling that are trivial to set up on all big three desktop platforms?
For instance, can I compile an F# lib to an iOS framework, ideally with automatically generated bindings for C, C++ or Objective C? Can I use private repo (ie github) urls with automatic overrides while pulling deps?
Generally, the answer to these questions for (let's call them "niche") languages is "there is a GitHub project with 15 stars, last updated 3 years ago, that maybe solves that problem".
There are tons of amazing languages (or at the very least, underappreciated language features) that didn’t ”make it” because of these boring reasons.
My entire point is that the older and grumpier I get, the less the language itself matters. Sure, I hate it when my favorite elegant feature is missing, but at the end of the day it’s easy to work around. IMO the navel gazing and bikeshedding around languages is vastly overhyped in software engineering.
There are so many QoL things which would make Rust better for gamedev without revamping the language. Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev. But that's a really hard sell (and might be harder to implement than I imagine.)
GHC has an -fdefer-type-errors option that lets you compile and run code like this (an illustrative example):
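    main :: IO ()
    main = do
      let a = 'a' :: Int     -- ill-typed: a Char literal annotated as Int
      putStrLn "still runs"  -- fine at runtime, since `a` is never forced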
Which obviously doesn't typecheck since 'a' is not an Int, but will run just fine since the value of `a` is not observed by this program. (If it were observed, -fdefer-type-errors guarantees that you get a runtime panic when it happens.) This basically gives you the no-types Python experience when iterating, then you clean it all up when you're done.

This would be even better in cases where it can be automatically fixed. Just like how `cargo clippy --fix` will automatically fix lint errors whenever it can, there's no reason it couldn't also add explicit coercions of numeric types for you.
I’d go even further and say I wish my whole development stack had a switch I can use to say “I’m not done iterating on this idea yet, cool it with the warnings.”
Unused imports, I’m looking at you… stop bitching that I’m not using this import line simply because I commented out the line that uses it in order to test something.
Stop complaining about dead code just because I haven’t finished wiring it up yet, I just want to unit test it before I go that far.
Stop complaining about unreachable code because I put a quick early return line in this function so that I could mock it to chase down this other bug. I’ll get around to fixing it later, I’m trying to think!
In rust I can go to lib.rs somewhere and #![allow(unused_imports,dead_code,etc)] and then remember to drop it by the time I get the branch ready for review, but that’s more cumbersome than it ought to be. My whole IDE/build/other tooling should have a universal understanding of “this is a work in progress please let me express my thoughts with minimal obstructions” mode.
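One way to make that slightly less error-prone is to gate the allows on debug_assertions, so they can never survive into a release build; a sketch for the crate root:

    // Only relax the lints in debug builds; release builds (and thus a
    // release check in CI) still get the full set of warnings.
    #![cfg_attr(debug_assertions, allow(dead_code, unused_imports, unused_variables))]

    use std::collections::HashMap; // unused: silent in debug builds only

    fn not_wired_up_yet() {} // dead code: same

    fn main() {}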
In my book, Rust is good at moving runtime risk to compile-time pain and effort. For the space of C code running nuclear reactors, robots, and missiles, that's a good tradeoff.
For the space of making an enemy move the other direction of the player in 80% of the cases, except for that story choice, and also inverted and spawning impossible enemies a dozen times if you killed that cute enemy over yonder, and.... and the worst case is a crash of a game and a revert to a save at level start.... less so.
And these are very regular requirements in a game, tbh.
And a lot of _very_silly_physics_exploits_ are safely typed float interactions going entirely nuts, btw. Type safety doesn't help there.
I don't think your experience with Amethyst merits your conclusion about the state of gamedev in Rust, especially given Amethyst's own take on Bevy [1, 2].
1: https://web.archive.org/web/20220719130541mp_/https://commun...
2: https://web.archive.org/web/20240202140023/https://amethyst....
C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.
There is a very certain way Rust is supposed to be used, which is a negative on its own, but it will lead to a fulfilling and productive programming experience. (My opinion.) If you need to regularly index something, then you're using the language wrong.
Long story short, yes, it's very different in game dev. It's very common to pre-allocate space for all your working data as large, statically sized arrays, because dynamic allocation is bad for performance. Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely), be more cache-friendly, and make it much easier to use SIMD instructions efficiently.
This is also fairly common in scientific computing (which is more my wheelhouse), and for the same reason: it's good for performance.
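In Rust terms the difference is just which side of the Vec the struct sits on. A minimal sketch of both layouts, with a hypothetical particle system:

    // Array-of-structs: each particle's fields sit together.
    #[allow(dead_code)] // shown only for comparison with the SoA layout
    struct Particle { pos: [f32; 3], vel: [f32; 3], mass: f32 }
    #[allow(dead_code)]
    struct WorldAos { particles: Vec<Particle> }

    // Parallel arrays (struct-of-arrays): each field packed densely,
    // cache-friendly and easy to hand to SIMD.
    struct WorldSoa {
        pos: Vec<[f32; 3]>,
        vel: Vec<[f32; 3]>,
        mass: Vec<f32>,
    }

    impl WorldSoa {
        fn integrate(&mut self, dt: f32) {
            // This pass never touches `mass`, so it streams through
            // exactly the bytes it needs and nothing else.
            for (p, v) in self.pos.iter_mut().zip(&self.vel) {
                for i in 0..3 {
                    p[i] += v[i] * dt;
                }
            }
        }
    }

    fn main() {
        let mut w = WorldSoa {
            pos: vec![[0.0; 3]; 4],
            vel: vec![[1.0; 3]; 4],
            mass: vec![1.0; 4],
        };
        w.integrate(0.016);
    }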
That seems like something that could very easily be turned into a compiler optimisation and enabled with something like an annotation. It would have some issues when calling across library boundaries (a lot like the handling of gradual types), but within one codebase that'd be easy.
* Everything should be random access(because you want to have novel rulesets and interactions)
* It should also be fast to iterate over per-frame(since it's real-time)
* It should have some degree of late-binding so that you can reuse behaviors and assets and plug them together in various ways
* There are no ideal data structures to fulfill all of this across all types of scene, so you start hacking away at something good enough with what you have
* Pretty soon you have some notion of queries and optional caching and memory layouts to make specific iterations easier. Also it all changes when the hardware does.
* Congratulations, you are now the maintainer of a bespoke database engine
You can succeed at automating parts of it, but note that parent said "oftentimes", not "always". It's a treadmill of whack-a-mole engineering, just like every other optimizing compiler; the problem never fully generalizes into a right answer for all scenarios. And realistically, gamedevs probably haven't come close to maxing out what is possible in a systems-level sense of things since the 90's. Instead we have a few key algorithms that go really fast and then a muddle of glue for the rest of it.
(It's also not always a win: it can work really well if you primarily operate on the 'columns', and on each column more or less once per update loop, but otherwise you can run into memory bandwidth limitations. For example, games with a lot of heavily interacting systems and an entity list that doesn't fit in cache will probably be better off with trying to load and update each entity exactly once per loop. Factorio is a good example of a game which is limited by this, though it is a bit of an outlier in terms of simulation size.)
At least on the scientific computing side of things, having the way the code says the data is organized match the way the data is actually organized ends up being a lot easier in the long run than organizing it in a way that gives frontend developers warm fuzzies and then doing constant mental gymnastics to keep track of what the program is actually doing under the hood.
I think it's probably like sock knitting. People who do a lot of sock knitting tend to use double-pointed needles. They take some getting used to and look intimidating, though. So people who are just learning to knit socks tend to jump through all sorts of hoops and use clever tricks to allow them to continue using the same kind of knitting needles they're already used to. From there it can go two ways: either they get frustrated, decide sock knitting is not for them, and go back to knitting other things; or they get frustrated, decide magic loop is not for them, and learn how to use double-pointed needles.
In general I think GP is correct. There is some subset of problems that absolutely requires indexing to express efficiently.
This is something you can't do on a compute shader, given you don't have access to the built-in derivative methods (building your own won't be cheaper either).
Still, if you want those changes to persist, a compute shader would be the way to go. You _can_ do it using a pixel shader but it really is less clean and more hacky.
However this problem does still come up in iterator contexts. For example Iterator::take takes a usize.
Concrete example: pulling a single item out of a zip file, which supports random access, is O(1). Pulling a single item out of a *.tar.gz file, which can only be accessed by iterating it, is O(N).
Compressed tars are terrible for random access because the compression occurs after the concatenation and so knows nothing about inner file metadata, but they're good for streaming and backups. Uncompressed tars are much better for random access. (Tar was used as a backup mechanism to tape; hence "tape archive".)
Zips are terrible for streaming because their metadata is stored at the end, but are better for 1-pass creation and on-disk random access. (Remember that zip files and programs were created in an era of multiple floppy disk-based backups.)
When fast tar enumeration is desired, at the cost of compatibility and compression potential, it might be worth compressing files and then tarring them, when and if zipping alone isn't achieving enough compression and/or decompression performance. FUSE mounting of compressed tars gets really expensive with terabyte archives.
Just use squashfs if that is the functionality that you need.
I'm usually working with positive values, and almost always with values within the range of integers f32 can safely represent (+- 16777216.0).
I want to be able to write `draw(x, y)` instead of `draw(x as u32, y as u32)`. I want to write "3" instead of "3.0". I want to stop writing "as".
It sounds silly, but it's enough to kill that gamedev flow loop. I'd love if the Rust compiler could (optionally) do that work for me.
[1] https://docs.rs/num-traits/latest/num_traits/trait.Num.html
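Short of a compiler mode, one partial workaround is to take generic parameters that widen losslessly, so call sites lose the casts even though the definition pays for it once. A sketch with a hypothetical draw function:

    fn draw(x: impl Into<f64>, y: impl Into<f64>) {
        // Do the one lossy narrowing in a single place.
        let (x, y) = (x.into() as f32, y.into() as f32);
        // ... hand off to the real renderer ...
        println!("draw at ({x}, {y})");
    }

    fn main() {
        draw(3, 4);           // plain integer literals: no `as`, no `.0`
        draw(1.5f32, 200u32); // mixed types also pass through unchanged
    }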
Thing is, he didn't make the game in C. He built his game engine in C, and the game itself in Lua. The game engine is specific to this game, but there's a very clear separation where the engine ends and the game starts. This has also enabled amazing modding capabilities, since mods can do everything the game itself can do. Yes they need to use an embedded scripting language, but the whole game is built with that embedded scripting language so it has APIs to do anything you need.
For those who are curious - the game is 'Sapiens' on Steam: https://store.steampowered.com/app/1060230/Sapiens/
They're distributing their game on Steam too so Linux support is next to free via Proton.
Non-issue. Pick a single blessed distro. Clearly state that it's the only configuration that you officially support. Let the community sort the rest out.
Still, given the nature of what my project is (APIs and basic financial stuff), I think it was the right choice. I still plan to write about 5% of the project in Rust and call it from Go, if required, as there is a piece of code that simply cannot be fast enough, but I estimate for 95% of the project Go will be more than fast enough.
Obligatory ”remember to `go run -race`”, that thing is a life saver. I never run into difficult data races or deadlocks and I’m regularly doing things like starting multiple threads to race with cancelation signals, extending timeouts etc. It’s by far my favorite concurrency model.
• Go uses its own custom ABI and resizeable stacks, so there's some overhead to switching where the "Go context" must be saved and some things locked.
• Go's goroutines are a kind of preemptive green thread where multiple goroutines share the same OS thread. When calling C, the goroutine scheduler must jump through some hoops to ensure that this caller doesn't stall other goroutines on the same thread.
Calling C code from Go used to be slow, but over the last 10 years much of this overhead has been eliminated. In Go 1.21 (which came with major optimizations), a C call was down to about 40ns [1]. There are now some annotations you can use to further help speed up C calls.
[1] https://shane.ai/posts/cgo-performance-in-go1.21/
In Unity, Mono and/or IL2CPP's interop mechanism also ends up in the ballpark of direct call cost.
And chances are that it won't be required.
I too have a hobby-level interest in Rust, but doing things in Rust is, in my experience, almost always just harder. I mean no slight to the language, but this has universally been my experience.
Perhaps someday there will be a comparable game engine written in Rust, but it would probably take a major commercial sponsor to make it happen.
This was more of a me-problem, but I was constantly having to change my strategy to avoid fighting the borrow-checker, manage references, etc. In any case, it was a productivity sink.
That's not to say that games aren't a very cool space to be in, but the challenges have moved beyond the code. Particularly in the indie space, for 10+ years it's been all about story, characters, writing, artwork, visual identity, sound and music design, pacing, unique gameplay mechanics, etc. If you're making a game in 2025 and the hard part is the code, then you're almost certainly doing it wrong.
You want to make some change, so you adjust a struct or a function signature, and then your IDE highlights all the places where changes are necessary with red squigglies.
Once you're done playing whack-a-mole with the red squigglies, and tests pass, you know there's no weird random crash hiding somewhere.
Many of the negatives in the post are positives to me.
> Each update brought with it incredible features, but also a substantial amount of API thrash.
This is highly annoying, no doubt, but the API now is just so much better than it used to be. Keeping backwards compatibility is valuable once a product is mature, but like how you need to be able to iterate on your game, game engine developers need to be able to iterate on their engine. I admit that this is a debuff to the experience of using Bevy, but it also means that the API can actually get better (unlike Unity which is filled with historical baggage, like the Text component).
I think all posts I have seen regarding migrating away from writing a game in Rust were using Bevy, which is interesting. I do think Bevy is awesome and great, but it's a complex project.
I have worked as a professional dev at game studios many would recognize. Those studios which used Unity didn't even upgrade Unity versions often unless a specific breaking bug got fixed. Same for those studios which used DirectX. Often a game shipped with a version of the underlying tech that was hard locked to something several years old.
The other points in the article are all valid, but the two factors above held the greatest weight as to why the project needed to switch (and the article says so -- it was an API change in Bevy that was "the straw that broke the camel's back").
Hot reloading! Iteration!
A friend of mine wrote an article 25+ years ago about using C++-based scripting (it compiled to C++). My friend is a super smart engineer, but I don't think he was thinking of those poor scripters who would have to wait on iteration times. Granted, 25 years ago the teams were small, but nowadays the number of scripters you would have on a AAA game is probably a dozen, if not two or three dozen, or even more!
Imagine all of them waiting on compile... Or trying to deal with correctness, etc.
From a dev perspective, I think, Rust and Bevy are the right direction, but after reading this account, Bevy probably isn't there yet.
For a long time, Unity games felt sluggish and bloated, but somehow they got that fixed. I played some games lately that run pretty smoothly on decade old hardware.
This is the biggest reason I push for C#/.NET in "serious business" where concerns like auditing and compliance are non-negotiable aspects of the software engineering process. Virtually all of the batteries are included already.
For example, which 3rd party vendors we use to build products is something that customers in sectors like banking care deeply about. No one is going to install your SaaS product inside their sacred walled garden if it depends on parties they don't already trust or can't easily vet themselves. Microsoft is a party that virtually everyone can get on board with in these contexts. No one has to jump through a bunch of hoops to explain why the bank should trust System or Microsoft namespaces. Having ~everything you need already included makes it an obvious choice if you are serious about approaching highly sensitive customers.
Log4shell was a good example of a relative strength of .NET in this area. If a comparable bug had happened in .NET's standard logging tooling, we likely would have seen all of the first-party .NET framework patched fairly shortly after, in a single coordinated release that we could upgrade to with minimal fuss. Meanwhile, at my current job we've still got standing exceptions allowing vulnerable version of log4j in certain services because they depend on some package that still has a hard dependency on a vulnerable version, which they in turn say they can't fix yet because they're waiting on one of their transitive dependencies to fix it, and so on. We can (and do) run periodic audits to confirm that the vulnerable parts of log4j aren't being used, but being able to put the whole thing in the past within a week or two would be vastly preferable to still having to actively worry about it 5 years later.
The relative conciseness of C# code that the parent poster mentioned was also a factor. Just shooting from the hip, I'd guess that I can get the same job done in about 2/3 as much code when I'm using C# instead of Java. Assuming that's accurate, that means that with Java we'd have had 50% more code to certify, 50% more code to maintain, 50% more code to re-certify as part of maintenance...
Less time wasted sifting through half-baked solutions.
C# has three "super powers" to reduce code bloat: its really rich runtime reflection, first-class expression trees, and Roslyn source generators that generate code on the fly. Used correctly, these can remove a lot of boilerplate and "templatey" code.

---
I make the case that many teams that outgrow JS/TS on Node.js should look to C# because of its congruence to TS[0] before Go, Java, Kotlin, and certainly not Rust.
[0] https://typescript-is-like-csharp.chrlschn.dev/
Why would you use Java 8?
My understanding (not having used it much, precisely because of this) is that AOT is still quite lacking; not very performant and not so seamless when it comes to cross-platform targeting. Do you know if things have gotten better recently?
I think that if Microsoft had dropped the old .NET platform (CLR and so on) sooner and really nailed the AOT experience, they may have had a chance at competing with Go and even Rust and C++ for some things, but I suspect that ship has sailed, as it has for languages like D and Nim.
C# and .NET form one of the most mature platforms for development of all kinds. It's just that online it carries some sort of anti-Microsoft stigma...
But a lot of AA or indie games are written in C# and they do fine. It's not just C++ or Rust in that industry.
People tend to be influenced by opinions online, but often the real world is completely different. I've been using C# for a decade now and it's one of the most productive languages I have ever used: easy to set up, powerful toolchains... and yes, a lot of closed-source libs in the .NET ecosystem, but the open-source community is large too by now.
Some folks still think it's Windows-only. Some folks think you need to use Visual Studio. Some think it's too hard to learn. Lots of misconceptions lead to teams overlooking it for more "hyped" languages like Rust and Go.
I think there may also be some misunderstandings regarding the purchase models around these tools. Visual Studio 2022 Professional is possible to outright purchase for $500 [0] and use perpetually. You do NOT need a subscription. I've got a license key printed on paper that I can use to activate my copy each time.
Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.
[0] https://www.microsoft.com/en-us/d/visual-studio-professional...
How's the LSP support nowadays? I remember reading a lot of complaints about how badly done the LSP is compared to Visual Studio.
I started using Visual Studio Code exclusively around 2020 for C# work and it's been great. Lightweight and fast. I did try Rider and 100% it is better if you are open to paying for a license and if you need more powerful refactoring, but I find VSC to be perfectly usable and I prefer its "lighter" feel.
I mean, you could also write about how we went from ~1M lines of C# code in our mostly custom engine to 10k lines of Unreal C++.
I had two groups of students (complete Rust beginners) ship a basic FPS and a tower defense as learning projects using Bevy, and their feedback was that they didn't fight the language at all.
The problem that remains is that as soon as you go from a toy game to an actual one, you realize that Bevy still has tons of work to do before it can be considered productive.
The problem is you make a deal with the devil. You end up shipping a binary full of phone home spyware, if you don't use Unity in the exact way the general license intends they can and will try to force you into the more expensive industrial license.
However, the ease of actually shipping a game can't be matched.
Godot has a bunch of issues all over the place, a community more intent on self praise than actually building games. It's free and cool though.
I don't really enjoy Godot like I enjoy Unity , but I've been using Unity for over a decade. I might just need to get over it.
Similarly, anyone who has shipped a game in unreal will know that memory issues are absolutely rampant during development.
But, the cure rust presents to solve these for games is worse than the disease it seems. I don’t have a magic bullet either..
Other game engines exist which use C# with .NET or at least Mono's better GC. When using these engines a few allocations won't turn your game into a stuttery mess.
Just wanted to make it clear that C# is not the issue - just the engine most people use, including the topic of this thread, is the main issue.
The more projects I do, the more time I find that I dedicate to just planning things up front. Sometimes it's fun to just open a game engine and start playing with it (I too have an unfair bias in this area, but towards Godot [https://godotengine.org/]), but if I ever want to build something to release, I start with a spreadsheet.
But if you're doing something for fun, then you definitely don't need much planning, if any - the project will probably be abandoned halfway through anyways :)
Bevy is in its early stages. I'm sure more Rust game engines will come up and make things easier. That said, Godot was a great experience for me, but it doesn't run well on mobile for what I was making. I enjoy using Flutter Flame now (honestly, different game engines suit different genres or preferences), but as Godot continues to get better, I'd personally use Godot. I'd try Unity or Unreal as well if I just wanted to focus on making a game and less on engine quirks and bugs.
https://bevyengine.org/learn/quick-start/introduction/
That said regarding both rapid gameplay mechanic iteration and modding - would that not generally be solved via a scripting language on top of the core engine? Or is Rust + Bevy not supposed to be engine-level development, and actually supposed to solve the gameplay development use-case too? This is very much not my area of expertise, I'm just genuinely curious.
I don't think Bevy has a built-in way to integrate with other languages like Godot does, it's probably too early in the project's life for that to be on the roadmap.
I feel like this harkens to the general principle of being a software developer and not an "<insert-language-here>" developer.
Choose tools that expose you to more patterns and help to further develop your taste. Don't fixate on a particular syntax.
Such a crappy thing for a company to do.
https://news.ycombinator.com/item?id=43787012
In my personal opinion, a paradox of truly open-source projects (meaning community projects, not pseudo-open-source from commercial companies) is that development tends toward diversity. While this leads to more and more cool things appearing, it always needs to be balanced against sustainable development.
Commercial projects, at least, always have a clear goal: to sell. For this goal, they can hold off on doing really cool things, or they think about differentiated competition. Perhaps if the purpose were commercial, an editor would be the primary goal (let me know if this is already on the roadmap).
---
I don't think the language itself is the problem. The situation where you have to use mature solutions for efficiency is more common in games and apps.
For example, I've seen many people who have had to give up Bevy, Dioxus, and Tauri.
But I believe for servers, audio, CLI tools, and even agent systems, Rust is absolutely my first choice.
I've recently been rewriting Glicol (https://glicol.org) after 2 years. Starting from embedded devices and switching to crates like Chumsky, I feel the ecosystem has improved a lot compared to before.
So I still have 100% confidence in Rust.
> Bevy is still in the early stages of development. Important features are missing. Documentation is sparse. A new version of Bevy containing breaking changes to the API is released approximately once every 3 months.
I would choose Bevy if and only if I would like to be heavily involved in the development of Bevy itself.
And never for anything that requires a steady foundation.
Programming language does not matter. Choose the right tool for job and be pragmatic.
Coroutines can definitely be very useful for games and they're also available in C#.
There is no chance for any language, no matter how good it is, to match the most horrendous (web!) but full-featured UI toolkit.
I bet, 1000%, that it is easier to build an OS, a database engine, etc. than to match Qt, Delphi, Unity, etc.
---
I made a decision that has become the most productive and problem-free approach to making UIs in my 30 years of doing this:
1- Use the de-facto UI toolkit as-is (HTML, SwiftUI, Jetpack Compose). Ignore any tool that promises cross-platform UI (so that leaves HTML, but I mean: I don't try to do HTML in Swift, OK?).
2- Use the same idea as HTML: send plain data with the full fidelity of what you want to render: Label(text=..., size=...).
3- Render it directly from the native UI toolkit.
Yes, this is more or less htmx/tailwindcss (I got the inspiration from them).
This means my logic is all in Rust; I pass serializable structs to the UI front end and render directly from them. Critically, the UI toolkit is nearly devoid of any logic more complex than what you see in a mustache-style template language. It does not do localization, formatting, etc. Only UI composition.
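Concretely, that payload can be as dumb as a serialized enum of widgets; a sketch of the idea (hypothetical Widget type, assuming serde and serde_json):

    use serde::Serialize;

    // The full fidelity of what to render, and nothing else: no
    // behavior, no formatting logic, just composition data.
    #[derive(Serialize)]
    #[serde(tag = "kind")]
    enum Widget {
        Label { text: String, size: u32 },
        Column { children: Vec<Widget> },
    }

    fn main() {
        let ui = Widget::Column {
            children: vec![Widget::Label { text: "Total: 42".into(), size: 16 }],
        };
        // Each native front end deserializes this and maps it 1:1 onto
        // its own toolkit's widgets.
        println!("{}", serde_json::to_string(&ui).unwrap());
    }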
I don't care that I need to code in different ways, different apis, different flows, and visually divergent UIs.
IS GREAT.
After the pain of the initial boilerplate, doing the next screen/component/whatever is so ridiculously simple that it feels like cheating.
So, the problem is not Rust. It is not F#, or Lisp. It is that UI is a kind of beast that is impervious to improvement by language alone.
> this is why game engines embedded scripting languages
I 100% agree. A modern mature UI toolkit is at least equivalent to a modern game engine in difficulty. GitHub is strewn with the corpses of abandoned FOSS UI toolkits that got 80% of the way there only to discover that the other 20% of the problem is actually 20000% of the work.
The only way you have a chance developing a UI toolkit is to start in full self awareness of just how hard this is going to be. Saying "I am going to develop a modern UI toolkit" is like saying "I am going to develop a complete operating system."
Even worse: a lot of the work that goes into a good UI toolkit is the kind of work programmers hate: endless fixing of nit-picky edge case bugs, implementation of standards, and catering to user needs that do not overlap with one's own preferences.
Going hard with Rust ECS was not the appropriate choice here. Even a 1000x speed hit would be preferable if it gained speed of development. C# and Unity is a much smarter path for this particular game.
But, that’s not a knock on Rust. It’s just “Right tool for the job.”
I rarely touch game dev but that made me think Godot wasn't very suitable
I've been toying with the idea of making a 2d game that I've had on my mind for awhile, but have no game development experience, and am having trouble deciding where to start (obviously wanting to avoid the author's predicament of choosing something and having to switch down the line).
Probably the best thing in your case is: look at the top three engines you could consider, spend maybe four hours gathering what look like pros and cons, then just pick one and go. Don't overestimate your attachment to your first choice. You'll learn more just from finishing a tutorial for any of them than you can possibly learn with analysis in advance.
Sometimes it feels like we could use some kind of a temperance movement, because if one can just manage to walk the line one can often reap great rewards. But the incentives seem to be pointing in the opposite direction.
You're probably right that it'd be best to just jump in and get going with a few of them rather than analyze the choice to death (as I am prone to do when starting anything).
Would you really expect Godot to win out over Unity given those priorities? Godot is pretty awesome these days, but it's still going to be behind for those priorities vs. Unity or Unreal.
But they also could have combined Rust parts and C# parts if they needed to keep some of what they had.
On the topic of rapid prototyping: most successful game engines I'm aware of hit this issue eventually. They eventually solve it by dividing into infrastructure (implemented in your low-level language) and game logic / application logic / scripting (implemented in something far more flexible and, usually, interpreted; I've seen Lua, Python, and JavaScript used for this, and I think Unity's C# also fits this category?).
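That division is cheap to set up in Rust these days; a minimal sketch assuming the mlua crate for the scripting half:

    use mlua::{Lua, Result};

    fn main() -> Result<()> {
        // Infrastructure (fast, compiled) lives in Rust and exposes
        // data to the scripting layer...
        let lua = Lua::new();
        lua.globals().set("player_speed", 4.2)?;

        // ...while game logic lives in a script that designers can
        // edit and reload without recompiling the engine.
        lua.load(r#"
            print("player speed is " .. player_speed)
        "#).exec()?;
        Ok(())
    }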
For any engine that would have used C++ instead, I can't think of a good reason to not use Rust, but most games with an engine aren't written in 100% C++.
Gave up after 3 days for 3 reasons:
1. Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling and a few astronomical units away from Visual Studio. Extract function barely works.
2. Crates with non-Rust dependencies are nearly impossible to debug, as debuggers don't evaluate expressions. So, if you have a Rust wrapper for an Ogg reader, you can't look at ogg_file.duration() in the debugger, because that requires function evaluation.
3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.
With these roadblocks, I would never have gotten the "mature" project to the point where dealing with hard-to-debug concurrency issues and funky unforeseen errors became necessary.
Depending on your scenario, you may want either one or another. Shipping pre-compiled binaries carries its own risks and you are at the mercy of the library author making sure to include the one for your platform. I found wiring up MSBuild to be more painful than the way it is done in Rust with cc crate, often I would prefer for the package to also build its other-language components for my specific platform, with extra optimization flags I passed in.
But yes, in .NET it creates sort of an impedance mismatch since all the managed code assemblies you get from your dependencies are portable and debuggable, and if you want to publish an application for a specific new target, with those it just works, be it FreeBSD or WASM. At the same time, when it works - it's nicer than having to build everything from scratch.
Risks are real though.
Scripting being flexible is a proper idea, but that's not an argument against Rust either. Rather it's an argument for more separation between scripting machinery and the core engine.
For example Godot allows using Rust for game logic if you don't want to use GDScript, and it's not really messing up the design of their core engine. It's just more work to allow such flexibility of course.
The rest of the arguments are more in the familiarity / learning curve group, so nothing new in that sense (Rust is not the easiest language).
The rest of your comment boils down to "skills issue". I mean, OK. But you can say that about any programming environment, including writing in raw assembly.
It's like saying "I don't want to learn how to write a good story because AI always suggests a bad one anyway." Maybe that delivers the idea better.
Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing? Like, the same reason I'd use only hand tools when doing joinery in Japanese-style woodworking? There's a place for that! But most woodworkers... use table saws and routers.
The strongest reason I can think of to discard this kind of automation, and do so proudly, is that it's effectively plagiarizing from all of the experts whose code was used in the training data set without their permission.
Artists can come at me with this concern all they want, and I feel bad for them. No software developer can.
I disagree with you about the "plagiaristic" aspect of LLM code generation. But I also don't think our field has a moral leg to stand on here, even if I didn't disagree with you.
As for our tendency to disrespect the copyrights of art, clearly we've always been in the wrong about this, and we should respect the rights of artists. The fact that we've been in the wrong about this doesn't mean we should redouble the offense by also plagiarizing from other programmers.
And there is evidence that LLMs do plagiarize when generating code. I'll just list the most relevant citations from Baldur Bjarnason's book _The Intelligence Illusion_ (https://illusion.baldurbjarnason.com/), without quoting from that copyrighted work.
https://arxiv.org/abs/2202.07646
https://dl.acm.org/doi/10.1145/3447548.3467198
https://papers.nips.cc/paper/2020/hash/1e14bfe2714193e7af5ab...
That implies that proponents of such an approach don't want to pursue learning that requires them to exceed the mediocrity level set by the AI they rely on.
For me it's obvious that it has a major negative impact on many things.
Basically, learn Rust based on whether it's helping solve your issues better, not on whether some LLM is useless or not useless in this case.
Why exclude AI dev tools from this decision making? If you don’t find such tools useful, then great, don’t use them. But not everybody feels the same way.
A friend of mine only understood why i was so impressed by LLMs once he had to start coding a website for his new project.
My feeling is that low-level / system programming is currently at the edge of what LLMs can do. So i'd say that languages that manage to provide nice abstractions around those types of problems will thrive. The others will have a hard time gaining support among young developers.
I think the worst issue was the lack of ready-made solutions. Those 67k lines of Rust contain a good chunk of a game engine.
The second worst issue was that you targeted an unstable framework - I would have focused on a single version and shipped the entire game with it, no matter how good the goodies in the new version.
I know it's likely the last thing you want to do, but you might be in a great position to improve Bevy. I understand open sourcing it comes with IP challenges, but it would be good to find a champion with read access within Bevy to parse your code and come up with OSS packages (cleaned up with any specific game logic) based on the countless problems you must have solved in those extra 50k lines.
Rust will not:
* automatically make your program fast;
* eliminate memory leaks;
* eliminate deadlocks; or
* enforce your logical invariants for you.
Sometimes people mention that independent of performance and safety, Rust's pattern-matching and its traits system allow them to express logic in a clean way at least partially checked at compile time. And that's true! But other languages also have powerful type systems and expressive syntax, and these other languages don't pay the complexity penalty inherent in combining safety and manual memory management because they use automatic memory management instead --- and for the better, since the vast majority of programs out there don't need manual memory management.
I mean, sure, you can Arc<Box<Whatever>> many of your problems away, but at that point your global reference counting just becomes a crude form of manual garbage collection. You'd be better off with a finely-tuned garbage collector instead, like the one Unity has (via the CLR and Mono).
And you're not really giving anything up this way either. If you have some compute kernel that's a bottleneck, thanks to easy FFIs these high-level languages have, you can just write that one bit of code in a lower-level language without bringing systems consideration to your whole program.
Languages like Go, JavaScript, C#, or Java are much better choices for this purpose. Rust is still best suited for scenarios where traditional system languages excel, such as embedded systems or infrastructure software that needs to run for extended periods.
C# actually has fairly good null-checking now. Older projects would have to migrate some code to take advantage of it, but new projects are pretty much using it by default.
I'm not sure what the situation is with Unity though - aren't they usually a few versions behind the latest?
https://fyrox.rs/
here's a web demo
https://fyrox.rs/assets/demo/animation/index.html
Is it normal for the Rust ecosystem to suggest software at this level of maturity?
https://github.com/FyroxEngine/Fyrox/discussions/725
PS: I love the art style of the game.
Rust is a niche language, there is no evidence it is going to do well in the game space.
Unity and C# sound like a much better business choice for this. Choosing a system/language....
> My love of Rust and Bevy meant that I would be willing to bear some pain
....that is not a good business case.
Maybe one day there will be a Rust game engine that can compete with Unity; there probably already are some, in niches.
I love Rust. It’s not for shipping video games. No, Tiny Glade doesn’t count.
Edit: don’t know why you’re downvoting. I love Rust. I use it at my job and look for ways to use it more. I’ve also shipped a lot of games. And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.
> And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Well, sure, if you arbitrarily exclude the popular game written in Rust, then of course there are no popular games written in Rust :)
> And maybe not Switch although I’m less certain.
I have talked to Nintendo SDK engineers about this and been told Rust is fine. It's not an official part of their toolchain, but if you can make Rust work they don't care.
Tiny Glade is indeed a Rust game. So there is one! I am not aware of a second. But it’s not really a Bevy game. It uses the ECS crate from Bevy.
Egg on my face. Regrets.
And for something like Gnorp, Rust is probably a decent choice.
What allocations can you not do in Rust?
The Unity GameObject/Component model is pretty good. It’s very simple. And clearly very successful. This architecture cannot be represented in Rust. There are a dozen ECS crates, but no one has replicated the world’s most popular gameplay system architecture. Because they can’t.
From what I remember from my Unity days (which, granted, were a long time ago), GameObjects had their own lifecycle system separate from the C# runtime and had to be created and destroyed using Instantiate and Destroy calls in the Unity API. Similarly, components and references to them had to be created and retrieved using the AddComponent and GetComponent calls, which internally used handles rather than raw GC pointers. Runtime allocation of objects frequently caused GC issues, so you were practically required to pre-allocate them in an object pool anyway.
I don't see how any of those things would be impossible or even difficult to implement in Rust. In fact, this model is almost exactly what I used to see evangelized all the time for C++ engines (using safe handles and allocator pools) in GDC presentations back then.
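To make that concrete, here is a minimal sketch of the handle-and-pool pattern in Rust (all names hypothetical): the pool owns every object, everyone else holds copyable handles, and a generation counter turns use-after-destroy into a failed lookup instead of a dangling pointer.

```rust
// A generational handle: an index into the pool plus a generation
// counter that detects use-after-destroy.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Handle {
    index: u32,
    generation: u32,
}

struct Slot<T> {
    generation: u32,
    value: Option<T>,
}

// The pool owns all objects; everyone else holds Handles.
struct Pool<T> {
    slots: Vec<Slot<T>>,
    free: Vec<u32>,
}

impl<T> Pool<T> {
    fn new() -> Self {
        Pool { slots: Vec::new(), free: Vec::new() }
    }

    fn create(&mut self, value: T) -> Handle {
        if let Some(index) = self.free.pop() {
            let slot = &mut self.slots[index as usize];
            slot.value = Some(value);
            Handle { index, generation: slot.generation }
        } else {
            self.slots.push(Slot { generation: 0, value: Some(value) });
            Handle { index: (self.slots.len() - 1) as u32, generation: 0 }
        }
    }

    fn destroy(&mut self, h: Handle) {
        if let Some(slot) = self.slots.get_mut(h.index as usize) {
            if slot.generation == h.generation && slot.value.is_some() {
                slot.value = None;
                slot.generation += 1; // invalidate outstanding handles
                self.free.push(h.index);
            }
        }
    }

    // Stale handles return None instead of dangling.
    fn get(&self, h: Handle) -> Option<&T> {
        let slot = self.slots.get(h.index as usize)?;
        if slot.generation == h.generation { slot.value.as_ref() } else { None }
    }
}

fn main() {
    let mut pool = Pool::new();
    let player = pool.create("player");
    assert_eq!(pool.get(player), Some(&"player"));
    pool.destroy(player);
    assert_eq!(pool.get(player), None); // like Unity's destroyed-object check
}
```

This is essentially the safe-handles-plus-pools design from those GDC talks: a destroyed object's slot can be reused, but old handles to it simply stop resolving.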
In my view, as someone who has not really interacted with or explored Rust gamedev much, the issue is more that Bevy has been attempting to present an overly ambitious API, as opposed to focusing on a simpler, less idealistic one, and since it is the poster child for Rust game engines, people keep tripping over those problems.
I'm sorry, but I still don't understand. There are myriad heap collections and even fancy stuff like Rc<Box<T>> or RefCell<T>. What am I missing here?
Is it as simple as global void pointers in C? No, but it's way safer.
Tiny Glade is also the buggiest Steam game I've ever encountered (bugs ranging from a disappearing cursor to not launching at all). Incredibly poor performance as well for a low-poly game, even if it has fancy lighting...
No offense to the project. It’s cool and I’m glad it exists. But if you were to plot the top 2000 games on Steam by time played there are, I believe, precisely zero written in Rust.
What evidence do you have for this statement? It kind of doesn't make any sense on its face. Binaries are binaries, no matter what tools are used to compile them. Sure, you might need to use whatever platform-specific SDK stuff to sign the binary or whatever, but why would Rust in particular be singled out as being forbidden?
Despite not yet being released publicly, Jai can compile code for PlayStation, Xbox, and Switch platforms (with platform-specific modules not included in the beta release, available upon request provided proof of platform SDK access).
Do you mean cyclic types?
Rust being low-level, nothing prevents one from implementing garbage-collected types, and I've been looking into this myself: https://github.com/Manishearth/rust-gc
It's "Simple tracing (mark and sweep) garbage collector for Rust", which allows cyclic allocations with simple `Gc<Foo>` syntax. Can't vouch for that implementation, but something like this would be good for many cases.
And if it's not already popular, that won't happen.
But also: there is a lot of Rust code out there! And a cubic fuckload of high-quality written material about the language, its idioms, and its libraries, many of which are pretty famous. I don't think this issue is as simple as it's being made out to be.
It's a bit like Rust in 2014: you would never have had enough material for LLMs to train on.
The same can be said of books as of programming languages:
"Not every ___ deserves to be read/used"
If the learning curve is so steep, or the documentation so convoluted, that it discourages newcomers, then perhaps it's just not a language that's fit for widespread adoption. That's actually fine.
"Thanks for your work on the language, but this one just isn't for me" "Thanks for writing that awfully long book, but this one just isn't for me"
There's no harm in saying either of those statements. You shouldn't be disparaged for saying that Rust just didn't work out for your case. More power to the author.
If you switched from Java to C# or vice versa, nobody would care.
Availability of documentation and tooling, widespread adoption, and access to developers already trained on someone else's dime make a stack a safe hiring decision. Sometimes niche tech is spotted in the wild, but mostly because some senior/staff engineer wanted to experiment and it became part of production when management saw no issue. That can open doors for practitioners of those stacks, but the probability is akin to getting hit by a lightning strike.
To me constantly chasing the latest trends means lack of experience in a team and absence of focus on what is actually important, which is delivering the product.
Until the tech catches up, it will have a stifling effect on progress toward, and adoption of, new things (which IMO is pretty common with new/immature tech, e.g. how culture has more generally kind of stagnated since the early 2000s).
However, I had a different takeaway when playing with Rust+AI. Having a language that has strict compile-time checks gave me more confidence in the code the AI was producing.
I did see Cursor get in an infinite loop where it couldn't solve a borrow checker problem and it eventually asked me for help. I prefer that to burying a bug.
Now Box2D 3.1 has been released and there's zero chance any of the LLMs are going to emit any useful answers that integrate the newly introduced features and changes.
In any case, there has always been a strong bias towards established technologies that have a lot of available help online. LLMs will remain better at using them, but as long as they are not completely useless on new technologies, they will also help enthusiasts and early adopters work with them and fill in the gaps.
LLMs will make people productive. But at the same time they will elevate those with real skill and passion to create good software. In the meantime there will be some market confusion, and some engineers who are mediocre might find themselves in demand like top-end engineers. But over time, companies and markets will realize, and top dollar will go to those select engineers who know how to do things with and without LLMs.
Lots of people are afraid of LLMs and think it is the end of the software engineer. It is and it is not. It’s the end of the “CLI engineer” or the “front-end engineer” and all those specializations that were an attempt to require less skill and pay less. But the systems engineers who know how computers work, who can take all week describing what happens when you press Enter on a keyboard at google.com, will only be pressed into higher demand. This is because the single-skill “engineer” won’t really be a thing.
tl;dr: LLMs won't kill software engineering; it's a reset. It will cull those who chose such a path only because it paid well.
The problem is, they’re all blogspam rehashes of the same few WWDC talks. So they all have the same blindspots and limitations, usually very surface level.
I'm not saying you're definitely wrong, but if you think that LLMs are going to bring qualitative change rather than just another thing to consider, then I'm interested in why.
Another potentially interesting avenue of research would be to explore allowing LLMs to use "self-play" to explore new things.
[1]: https://huijzer.xyz/posts/killer-domain/
Unfortunately, with a lot of libraries and services, I don't think ChatGPT understands the differences between versions, or it would be hard for it to. At least I have found that with writing scriptlets for RT, PHP tooling, etc. The web world moves fast enough (and RT moves hella slow) that it confuses libraries and interfaces across versions.
It'd really need a wider project context where it can go look at how those includes, or functions, or whatever actually work, instead of relying on 'built-in' knowledge.
"Assume you know nothing, go look at this tool, api endpoint or, whatever, read the code, and tell me how to use it"
I wouldn't have read the article if it'd been labeled that, so kudos to the blog writer, I guess.
Although the points mentioned in the post are quite valid.
The number of platforms they support, the number of features they support (many of which could be a PhD thesis in graphics programming), the tooling, the store, ...
Here's a thought experiment: Would Minecraft have been as popular if it had been written in Rust instead of Java?
While the language itself is great and stable, the ecosystem is not, and reverting to more conservative options is often the most reasonable choice, especially for long-term projects.
But outside of games the situation looks very different. “Almost everything” is just not at all accurate. There are tons of very stable and productive ecosystems in Rust.
I completely disagree, having been doing game dev in Rust for well over a year at this point. I've been extremely productive in Bevy, because of the ECS. And Unity compile times are pretty much just as bad (it's true, if you actually measure how long that dreaded "Reloading Domain" screen takes).
I don't even look at crate versions, but the stuff works, very well. The resulting code is stable and robust, and the crates save an inordinate amount of development time. It's like Lego for high-end, high-performance code.
With Rust and the crates you can build actual, useful stuff very quickly. Hit a bug in a crate or find missing functionality? Contribute.
Software is almost always a work in progress and almost never perfect and done. It's something you live with. Try any of this in C or C++.
If not, the language they pick doesn't really make a difference in the end.
It is like complaining that playing a musical instrument well enough to be in a band or orchestra requires too much effort. Naturally, it does.
Great musicians make a symphony out of what they can get their hands on.
Replace "Rust" with "Bevy" and "language" with "framework", and you might have a point. Bevy is still in alpha; it's lacking plenty of things, mainly UI and an easy way to support mods.
As for almost everything being at an alpha stage: yeah, welcome to OSS + SemVer. Moving to 1.x makes a critical statement: it's ready for wider use, and now we take backwards compatibility seriously.
But hurray! Commercial interest won again, and now you have to change engines again, once the Unity Overlords decide to go full Shittification on your poorly paying ass.
You can also always go from 1.0 to 2.0 if you want to make breaking changes.
Yes. Because it makes a promise about backwards compatibility.
> Rust language and library features themselves often spend years in nightly before making it to a release build.
So did Java's. And Rust probably has a fraction of its budget.
In defense of long nightly periods: more than once, stabilizing features like negative impls and never types early would have caused huge backwards-breaking changes.
> You can also always go from 1.0 to 2.0 if you want to make breaking changes.
Yeah, just like Python!
And split the community and double your maintenance burden. Or just pretend 2.0 is 1.1 and have the downstream enjoy the pain of migration.
If you choose to support 1.0 sure. But you don't have to. Overall I find that the Rust community is way too leery of going to 1.0. It doesn't have to be as big a burden as they make it out to be, that is something that comes down to how you handle it.
If you choose not to, then people wait for x.0 where x approaches infinity. I.e. they lose confidence in your crates/modules/libraries.
I mean, a big part of why I don't 1.x my OSS projects (not just Rust) is that I don't consider them finished yet.
The distance in time between the launches of Unreal Engine 4 and Unreal Engine 5 was 8 years (April 2014 to April 2022). Unreal Engine 5 development started in May 2020 and had an early access release in May 2021.
Bevy launched 0.1 in 2020 and is at 0.16 now in 2025. 5 years later and no 1.0 in sight.
If you want people to use your OSS projects (maybe you don't), you have to accept that perfect is the enemy of good.
At this point, regulators and legislators are trying to force people to use the Rust ecosystem - if you want a non-GC language that is "memory safe," it's pretty much the de facto choice. It is long past time for the ecosystem to grow up.
Yeah because that's when it was open sourced, NOT DEVELOPED.
See https://godotengine.org/article/first-public-release/
> Godot has been an in-house engine for a long time and the priority of new features were always linked to what was needed for each game and the priorities of our clients.
I checked the history, and it was known by another name: Larvita.
> If you want people to use your OSS project
Seeing how only about 0.1 of me is currently working on it, no, I don't want to give people a false sense of security.
> At this point, regulators and legislators are trying to force people to use the Rust ecosystem
Not ecosystem. Language. Ecosystem is a plus.
Furthermore, the issue Bevy has is more that there aren't any good, mature GUI libraries for Rust, because cross-OS GUIs were, are, and will be a shit show.
Granted it's a shit show that can be directed with enough money.
From what I’ve heard about the Rust community, you may have made an unintentionally witty pun.
Quake 1-3 use a single array of structs, with sometimes-unused properties. Is your game more complex than Quake 3?
The “ECS” upgrade to that is having an array for each component type but just letting there be gaps:
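Roughly, as a minimal sketch (field names hypothetical):

```rust
// One Vec per component type, indexed by entity id; a None gap
// means "this entity doesn't have that component".
struct World {
    positions: Vec<Option<[f32; 3]>>,
    velocities: Vec<Option<[f32; 3]>>,
}

impl World {
    fn physics_tick(&mut self, dt: f32) {
        // Process only entities that have both components.
        for (pos, vel) in self.positions.iter_mut().zip(&self.velocities) {
            if let (Some(p), Some(v)) = (pos, vel) {
                for i in 0..3 {
                    p[i] += v[i] * dt;
                }
            }
        }
    }
}
```

Each component array stays contiguous and cache-friendly; a gap simply skips the entity.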
But yeah, you probably don't need an ECS for 90% of games.
Quake struggled with the number of objects even in its day. What you got in the game was already close to the maximum it could handle. Explosions spawning giblets could make it slow down to a crawl and hit limits of the client<>server protocol.
The hardware got faster, but users' expectations have increased too. Quake 1 updated the world state at 10 ticks per second.
Because of the memory bandwidth of iterating the entities? No way. Every other part - rendering, culling, network updates, etc. - is far worse.
Let’s restate. In 1998 this got you 1024 entities at 60 FPS. The entire array could now fit in the L2 cache of a modern desktop (even at ~1 KB per entity, that's only ~1 MB).
And I already advised a simple change to improve memory layout.
> Quake 1 updated the world state at 10 ticks per second
That’s not a constraint in Quake 3 - which has the same architecture. So it’s not relevant.
> users' expectations have increased too
Your game is more complex than Quake 3? In what regard?
First: I have experience with Bevy and other game engines and frameworks, including Unreal. And I consider myself a seasoned Rust, C, etc. developer.
I could sympathize with what was stated by the author.
I think the issue here is (mainly) Bevy. It is just not even close to the standard yet (if ever). It is hard for any generic game engine to compete with Unity/Godot, never mind the de facto standard, Unreal.
But if you are a C# developer already using Unity, and not C++ in Unreal, moving to Bevy, a bloated framework that is missing features, makes little sense. [And there is also the minor issue that if you are a C# developer, honestly, you don't care about low-level code or not having a garbage collector.]
Now if you are a C++ developer and use Unreal, the only point in moving to Rust (which I would argue for, for the usual reasons) is if Unreal supported Rust. Otherwise, there is nothing that even compares to Unreal (that is not a custom-made game engine).
https://old.reddit.com/r/rust_gamedev/comments/13wteyb/is_be...
I wonder how something simpler in the Rust world, like macroquad[0], would have worked out for them (superpowers from Unity's maturity aside).
[0] https://macroquad.rs/
From my experience, one has to take Rust discussions with a grain of salt, because shortcomings are often handwaved and disclosures omitted.
And within Rust, I've learned to look beyond the most popular and hyped tools; they are often not the best ones.
You can go low level in C#**, just like Rust can avoid the borrow checker. It's just not a good tradeoff for most code in most games.
** value types/unsafe/pointers/stackalloc etc.
Note that going more hands-on with these is not the same as violating memory safety - C# even has ref and byref-like struct lifetime analysis specifically to ensure this is not an issue (https://em-tg.github.io/csborrow/).
>https://em-tg.github.io/csborrow/
Oooh... I didn't know scoped refs existed.
Bevy: unstable, constantly regressing, with weird APIs here and there, in flux, so LLMs can't handle it well.
Unity: rock-solid, stable, well-known, featureful; LLMs know it well. You ought to choose it if you want to build the game, not hack on the engine, whether its internal language is C#, Haskell, or PHP. The language is downstream of the need to ship.