I feel like Wayne is taking the Agile maxim "requirements always change" too literally. Agile doesn't mean "every requirement always changes forever".
In most live production environments today, requirements do keep changing — security, compliance, customer behavior, scaling — even when teams think they're done.
Agile isn’t making an empirical prediction ("all requirements will mutate endlessly"); it’s a philosophical posture toward uncertainty.
Wayne misses this interpretative nuance.
tbrownaw 4 hours ago [-]
> Agile isn’t making an empirical prediction ("all requirements will mutate endlessly"); it’s a philosophical posture toward uncertainty
At some point this philosophy has to result in something concrete.
How much ongoing effort should be put into handling the possibility that this particular requirement might change?
Swizec 3 hours ago [-]
> How much ongoing effort should be put into handling the possibility that this particular requirement might change?
How likely is it that the world freezes and stops changing around your software? This includes business processes, dependencies, end-user expectations, regulations, etc.
In general that’s the difference between a product and a project. Even Coca Cola keeps tweaking its recipe based on ingredient availability, changes in manufacturing, price optimizations, logistics developments, etc.
Hell, COBOL and FORTRAN still get regular updates every few years! Because the software written in them is still under active maintenance and has evolving needs.
rightbyte 1 hour ago [-]
> Even Coca Cola keeps tweaking its recipe
Yeah, and they should stop. Have there been any big changes except the "New Coke" that never reached my home town?
rTX5CMRXIfFG 4 hours ago [-]
You have to be able to distinguish between general and specific theories, so that you don’t expect the general to provide you the specific.
fredo2025 7 hours ago [-]
I agree with Wayne that the needs of the user don’t seem to end, even if your project or contract completes. Either the need is to keep maintaining it, put a twist on it, radically change it, or abandon it for something else.
I don’t agree on testing. It’s been a long time since I bought into that; even tests written for uncertain behavior, to build confidence, are a form of tech debt, because the developer who follows you must decide whether to maintain each test or delete it, and its value doesn’t usually last. An exception would be verifying the expected behavior of a library or service that must stay consistent, but that is not the job of most developers.
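The exception mentioned above (a library or service whose behavior must stay consistent) is often handled with a pinning test that freezes the currently observed behavior, so an upgrade that changes it fails loudly instead of silently. A minimal sketch in Python; the pinned JSON formatting here is a hypothetical example of such a contract:

```python
# Hypothetical pinning test: freezes the observed behavior of a
# dependency (here, the stdlib json module) that downstream code
# is assumed to rely on.
import json

def test_json_round_trip_is_stable():
    payload = {"id": 7, "tags": ["a", "b"], "nested": {"ok": True}}
    encoded = json.dumps(payload, sort_keys=True)
    # Pinned: the exact key order and separators we currently depend on.
    assert encoded == '{"id": 7, "nested": {"ok": true}, "tags": ["a", "b"]}'
    # Round-trip property: decoding must recover the original value.
    assert json.loads(encoded) == payload
```

Unlike a test of your own logic, this one is cheap to keep: it either still passes after a dependency bump, or it tells the next developer exactly which assumed behavior moved.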
AndrewKemendo 5 hours ago [-]
Preface: A Formally verified end to end application with associated state machine(s) is kind of my engineering holy grail - so I’m a likely mark for this article.
However, the author never actually makes a good case for FV other than to satisfy hard-core OCD engineers like ourselves. Maybe the author feels we all know their opinion, but it seems like the author is arguing against a poster of Claude Shannon.
If the system is - for all intents and purposes - deterministically solving the subset of problems for the customer, and you never build the state machine, then who cares?
My argument is “there isn’t one” — that’s provided we’re in a business context where new features are ALWAYS more beneficial to the business inputs than formal verification.
If a business requirement requires formal verification then the argument is also moot, because it is part of the business requirement, and so it's not optional, it's a feature.
Come to think of it, I'm not really sure I've ever seen software created on behalf of a business that has formal verification, where FM was not a mandatory requirement of the application and it wasn't a research project.
The last time I saw formal state machines built against a Formally Verified system it was from a bored 50-year-old unicorn engineer doing it on a simple C application.
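For what it's worth, "building the state machine" doesn't have to mean full formal verification; even an explicit transition table with an exhaustive closure check catches illegal states cheaply. A minimal sketch, where the states and events are hypothetical:

```python
# Minimal explicit state machine with a checked invariant -- a
# lightweight sketch of "build the state machine", not full FV.
# The states, events, and transitions below are made up for illustration.
STATES = {"idle", "running", "done"}
EVENTS = {"start", "finish", "abort"}
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "finish"): "done",
    ("running", "abort"): "idle",
}

def step(state, event):
    """Apply one event; an undefined transition raises instead of corrupting state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} --{event}-->")

def check_closure():
    """Exhaustively verify every defined transition uses known states and events."""
    for (state, event), target in TRANSITIONS.items():
        assert state in STATES and event in EVENTS and target in STATES
```

Because the state space is finite and enumerated, `check_closure` is a tiny brute-force model check: it proves the table can never step outside `STATES`, which is the kind of guarantee the article is after, at near-zero cost.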
shoo 5 hours ago [-]
> we’re in a business context where new features are ALWAYS more beneficial to the business inputs than formal verification.
Another way of framing this is "what is the impact (to the business / to the customers / to the users) of shipping a defect?". In a lot of contexts the impact of shipping defects is relatively low -- say SaaS applications providing a non-critical service, where defects, once noticed, can usually be fixed by rolling back to the last good version server side. In some contexts the impact of shipping defects is very high, say if the defect gets baked into hardware and ships before it is detected, and fixing it would require a recall that would bankrupt the company, or if a defect could kill the customers/users or crash the space probe or so on.
xlii 3 hours ago [-]
> In some contexts the impact of shipping defects is very high (…)
I agree, however I think many overestimate how frequent those environments are. Almost everything can be updated (and that includes dumb appliances, with hardware chips replaced by a technician), and the only real question is what your reliability vector is.
At the far end of the spectrum there's the Two Generals Problem, space bit flips, and so much complexity that it's mind-blowing. I've seen with my own eyes industry-wide screwups that were fixed with a month of phone calls and exchanged paper slips, so it's not like we (as humans) cannot live with unreliable systems.
I've been researching formal verification for a while, and IMO it's not fit for general use due to a lack of ergonomics. I might have some ideas for how to solve that, but I'd rather try to put them in a commercial box <insert dr evil meme>
hdjrudni 18 minutes ago [-]
Things are often fixable, but if you keep breaking things for your user you're going to develop a reputation for being unstable and your customers will leave.
ipaddr 4 hours ago [-]
This may seem counterintuitive but new features often alienate customers. It's not because of formal verification it's because a percentage of customers don't want change.