I watched the first half hour of this earlier this week. I was surprised at just how differently two people can view the world. I'm not sure I would be as dogmatic as him, but just using the first few points, I think you can make strong cases against:
1. Everything is exposed as an API with little to no insight into inner workings (black box)
2. Everything should be broken down into modules at a level where one person works on that module
3. He shows a video of his video editor, saying it supports multiple inputs (like two keyboards or two mice), then says no platform supports it, but if they ever do, it will work
4. Don't implement 'good enough apis'
I hope that anybody who has ever worked on software understands that there are virtues to doing exactly the opposite of what is described in each of these points. Even if you can make an argument for any of these, you would have to qualify them with so many exceptions that you would effectively negate the entire argument.
I spent a lot of evenings early in my career watching similar videos, hoping to find some magic bullet in how people better than me do what I do. People make a living doing this on the conference circuit. Maybe it is a fool's errand to try to distill something as complex and situationally dependent as software into a video, but I'm having a hard time finding any major insights in all of the videos I watched.
ribelo 2 hours ago [-]
He basically just described the FCIS[0] architecture—the same one Gary Bernhardt laid out thirteen years ago. We love reinventing the wheel. Czaplicki did it with Elm, Abramov with Redux, day8 did it with re-frame, and the beat goes on.
I’m still amazed it isn’t obvious: every piece of software should be a black box with a pin-hole for input and an even tinier pin-hole for output. The best code I’ve ever touched worked exactly like that and maintaining it was a pleasure, everything else was garbage. I push this rule in every project I touch.
Could you point to the place in the video that you felt resembled this principle most? I don’t really see the connection, but am open to it.
leecommamichael 2 hours ago [-]
I think the mistake people make when trying to teach this stuff is in generalizing too much.
His input layer is good because it helped him emulate local multiplayer with keyboard and mouse. It solved _his problem_.
The graphics layer is good because it is so much easier to work with than OpenGL _for him_.
The library wrappers are good for him because they solve _his desire_ to run portably while maintaining the smallest possible interface.
This stuff matters to Eskil because he’s:
- just one person
- has deep expertise in what he’s wrapping (win32, OpenGL)
- wants to make an impressive program solo
I think his expertise, historical perspective, and culture make it feel as if this is the only way to do this very hard task, so he wants to share it. It helps him believe his way is right that many great and reliable projects are done in C89.
I think the truth at this point is that folks still using old versions of C have, on average, more experience than everyone else. It’s not just the language that’s making them strong, either. It’s having evolved with the platforms.
That leaves only the question of whether it makes a huge difference to really stick with one language over the decades. I know we’ve all heard both sides of that quandary.
mjr00 3 hours ago [-]
> Don't ever implement good-enough-for-now APIs
Agree in theory; in practice this is impossible. Even if you're an absolute domain expert in whatever you're doing, software and requirements will evolve and you will end up needing to implement something for which your current API is not suitable. Just ask S3's ListObjectsV2 or Go's `encoding/json/v2`.
I push back hard on this one because a lot of developers will try to be "clever" and basically turn their API into something like `handle(action, params) -> result` or an equivalent, and now you have an API which theoretically can handle any requirement, but does so in a way that's extremely unfriendly and painful to use.
It's better to accept that your API will change at some point and have a versioning and deprecation strategy in place. With well-architected software this usually isn't hard.
imglorp 3 hours ago [-]
Yes, and also don't try to anticipate everything by implementing features that won't be used soon, i.e. "just in case". If "soon" turns into "never", any unused feature is basically dead code and represents future costs and constraints to maintain or remove it.
leecommamichael 2 hours ago [-]
Yeah, it’s too hard a rule. In reality, interfaces have points of resistance, where they aren’t really helping you do what you’re wanting done, and fitting-points, where they do what you need done.
I’d argue it’s your job to strike some kind of balance there. If you know you’re working with something stable, why settle for good-enough? Well, because the task of assessing what is stable requires years of mistakes, and good-enough varies drastically between languages. I think I see a point here for using primitive C; there’s hardly a type-system to waste time with, but you can cut yourself with macros. This is why I use Odin.
hk1337 2 hours ago [-]
I would add a caveat...
Don't ever implement good-enough-for-now APIs without a plan to come back and fix them, and a deadline for doing so
Most of the time "good-enough-for-now" really is just "good-enough-forever".
corytheboyd 1 hour ago [-]
I’ve never once seen “come back and fix with deadline” work. It’s really only the “with deadline” part that does not work. Good teams will constantly be judging whether now is the right time to fix the thing, and that fluid process works pretty well. Setting deadlines for tech debt just leads to the people setting the deadlines getting upset and quitting when they are inevitably missed. Save the urgency for when it’s warranted, like discovering a time bomb in the code or a scaling limit that is about to be breached. People will take you more seriously, you will be happier, they will be happier, hooray.
By all means, shoot for absolute perfection on your own projects. I work very, VERY differently on my own code than I do on work code, and I get different types of satisfaction out of both (though of course massively prefer working solo).
namuol 36 minutes ago [-]
Ages ago, before I had any real professional experience, I was blown away by Steenberg’s demos and enamored by his tech talks. The demos still impress me and have aged well, but today I’m glad I didn’t fall into the trap of taking his development advice as scripture.
His name has been popping up a lot more recently. I would be worried about the impact he might be having on young programmers, but honestly I doubt his worst advice would survive in industry anyway. I didn’t watch this particular video but his recent talk at Better Software Conference was illuminating, in a bad way.
kthxb 12 minutes ago [-]
I feel like a lot of his takes -- like C89 being the best -- may be true in the context of the kind of complex desktop applications he seems to develop, but not universally applicable.
Still, he gives a lot of good advice and fundamental engineering rules.
boricj 3 hours ago [-]
I've seen relational database schemas degenerate into untyped free-for-all key-value stores because they weren't expressive enough. I've seen massive code duplication (and massive amounts of bugs) because no one invested in a quality core set of libraries to address the basic use-cases of a given platform. I've seen systems crumble under tech debt because of chronic prioritization of features over maintenance.
I've worked on large software projects. The only ones I've met that weren't dreadful to work on had engineering and management working together with a shared, coherent vision.
tracker1 1 hour ago [-]
I think the "how easy will this be to replace" question should be a guiding factor in some architectural decisions. Modular code and abstractions can add a lot of value, but if you do it in a way that locks you down and makes everything it touches a multiple in terms of complexity, is it really worth it?
One thing I often do when modularizing, especially for re-sharing or external use, is sit down and think about the ergonomics of what I'm creating and how I would like to be able to use/consume said module... often writing documentation before writing the code.
habitue 4 hours ago [-]
Started watching, but "C89 is the safe option for long lived software" kind of turned me off. There are plenty of safe, long-lived, stable languages out there where you don't have to manually manipulate memory. At the very least, this guy should be advocating for Java.
But even that's too far really. Like it or not, "shiny fad" languages like Python & Javascript have been around forever, support for them isn't going away, this should be a non-concern at the architectural level. (Bigger language concerns: is it performant enough for my expected use? Does it have strong types to help with correctness? Can I hire programmers who know it? etc)
alexott 3 hours ago [-]
It really depends. I once attended a talk by an architect at a car company who described the need to develop and support car software for 20-30 years: a few years before release, 10-20 years of production, and critical fixes after end of support. And that includes not only the software itself, but all the compilers, etc.
mahalex 3 hours ago [-]
> languages like Python & Javascript have been around forever, support for them isn't going away
???
Python 2 went out of support five years ago.
habitue 3 hours ago [-]
I mean C89 has no support, it's not getting an update or a bugfix, the standard is what it is. So if vendor support is your overriding concern, you should be constantly updating your software to LTS versions.
I meant support in terms of there's an active community of people using the language and building things with it. It's not going to die as a language like Algol 68 or Pascal.
leecommamichael 3 hours ago [-]
C89 still has an active community of people using the language and building things with it.
In addition to this, its existence and validity are still relied on by basically every other language via standard library, transitive dependency, or calling convention. Heck, practically every C++ project probably depends on it.
The Linux Kernel, ffmpeg, SQLite, curl and many others chose C89 and often consider moving to C99, but most do not. Each of those projects also writes at length, via newsletter or blog, about why they’re still not quite ready to update (and sometimes why they are).
mahalex 3 hours ago [-]
Is there an active community of people using Python 2 and building things with it? Meanwhile, there are plenty of actively maintained compilers for C89.
unclad5968 2 hours ago [-]
I have two different compilers that implement C89 on my computer right now and I know of at least one other. How much support do you require before you consider something supported?
kod 4 hours ago [-]
The "one module should be written by only one person" dogma is kind of interesting.
But I got to the "my wrapper around SDL supports multiple simultaneous mouse inputs, even though no operating system does" and noped out. YAGNI, even in a large project
leecommamichael 3 hours ago [-]
He’s sitting at a system that thousands of people built together simultaneously. We have gripes with our OSes but they’re all capable of nearly perfect uptime (depending on the hardware and workload.) So I am not convinced individuals need to own modules. I think it’s good for things to work that way, but not necessary.
I didn’t find much fault at all with what he’s saying about SDL. It’s just an example of the “layered design” he’s advocating for. You may have drawn your conclusion a little early; he immediately follows up with his API for graphics, which is actually a very practical example. He’s really just saying people should consider writing their own APIs even if it’s implemented with a library, because you can select exactly how complex the API needs to be for your app’s requirements. This makes the task of replacing a dependency much simpler.
He’s actually trying to prevent relying on things you don’t need. YAGNI.
barbazoo 3 hours ago [-]
> YAGNI
“You Aren’t Gonna Need It”
gashmol 3 hours ago [-]
Aside - Why do we need the word "architecting" anyway? Why not just use designing?
Architecting at least somewhat harkens to engineering; where there are costs, limits, tolerances, and to some degree aesthetics.
gashmol 2 hours ago [-]
I'm pretty sure every engineering field calls it designing. Perhaps software devs feel a need to inflate what they actually do.
corytheboyd 1 hour ago [-]
In my (software) experience, the terms are basically interchangeable. Some people will violently defend “architect right, design wrong” and others the opposite, so, uh, it’s pretty hard for me, a normal person, to care much about which word is right for the “you sit down and think before you build” part of software engineering.
yuvadam 3 hours ago [-]
Obligatory mention of A Philosophy of Software Design by John Ousterhout, arguably the most important book every developer should read to understand proper modularization and complexity management.
[0] https://www.destroyallsoftware.com/screencasts/catalog/funct...