The reason coding the shortest path first feels better is that you hit milestones early. However, it is a major generator of technical debt, and fleshing out the project is the hard part that takes longer and introduces breaking changes.
Taking time to plan, getting the API planned in advance, generalising the code (obviously not TOO much), and so on might feel less rewarding at first, because the early milestones are the hard part. But do it right and you end up seeing everything fall into place in quick succession, according to plan - and that’s a much greater sense of achievement IMO. It is often also a quicker way of seeing large projects to completion, though it doesn’t always feel like it at the time.
Building a Proof of Concept allows you to get the best of both worlds. It allows you to be naive, it allows you to write the bare minimum, and it gets something working that others can try out and give feedback on. As a bonus, it doesn’t generate technical debt, because you don’t build on the PoC - you use it as a reference while building the actual thing.
JonChesterfield 670 days ago [-]
> because you don’t build on the PoC
Ah yes, I remember that idea. Naturally we shipped the proof of concept.
Also "it is known" that rewrites from scratch are bad things, so it got iterated on instead of replaced.
Based on the internal architecture of other software I've seen I think this is a relatively popular development strategy.
Etheryte 670 days ago [-]
There is a somewhat malicious compliance way of avoiding this, and that's to write the proof of concept in an obscure language or using some other set of tools that makes it impossible to pick up as-is by the rest of the team. The downside is that it also adds an additional cost to you when building it.
ZaoLahma 670 days ago [-]
Does not work and makes life worse.
In an earlier life I worked at a very Java-centered company, and we had a C# component that our "normal" Java blob communicated with, left behind by an old team member right before he left the company.
So no one in the team wanted to touch the C# parts, but once per year or so we had to when we (sometimes unknowingly) introduced backwards incompatible changes to the API that the C# component was using for communicating with our Java blob.
Last thing we were thinking about before I left was... NOT to rewrite the C# parts in Java properly, but to wrap all of it in a thin Java layer that we could use as an adapter when the API inevitably changed.
OkayPhysicist 670 days ago [-]
How inflexible do you have to be that nobody on a team bothered to pick up C#? Java and C# are probably the two most similar languages that are still considered distinct.
ZaoLahma 670 days ago [-]
It was never about competence. Most of us were of different backgrounds.
It was about the need to track down, secure licenses for and install the dev environment that you'd use for those 4-8 hours per year that it took to add support for the new API.
The entire company lived in the JVM otherwise.
kmstout 670 days ago [-]
"I myself find it faster to work out algorithms in APL; then I translate these to PL/I for matching to the system environment."
--Fred Brooks, The Mythical Man Month, Ch. 12.
ghosty141 670 days ago [-]
Obscure is not even needed sometimes; for embedded stuff you can just use a language that doesn't run fast enough on the target hardware.
TeMPOraL 670 days ago [-]
I did that once, almost a decade ago, when I was working on some PHP backend code implementing some moderately complex data processing in the context of venue reservations (the complexity came from the fact that venues could sometimes be reserved fractionally, and reservations could sometimes overlap, based on a variety of factors). I had a huge mental block then, caused by overall burnout on the project I got stuck in.
Back then I'd had my first crush on Lisp, which made me think a lot in terms of higher-order functions. When I noticed that thinking "in lispy ways" let me avoid mental blocks, I used this to finally figure out the solution to my work problem; I then implemented a quick and dirty prototype in Common Lisp, tested it, and proceeded to rewrite it in PHP.
Obviously, it was a slog, but half-way through crying and thinking things like "if only PHP had #'mapcan...", I finally figured that, with a little abuse of some more obscure (back then) PHP features, I could implement in PHP all the high-level constructs I used in my Lisp solution. So I did, and then I was able to trivially translate the whole solution to PHP.
End result: it worked, but it's good that the CEO didn't manage to hire those promised additional developers for that project, because I do not envy anyone who would have to read through my hacky solution implemented on top of a non-idiomatic Common Lisp emulation layer...
So sure, write the PoC in whatever obscure language you like, but be aware that past certain size/burnout level, someone (or you) may figure it's easier to port the language you used than just the PoC.
majewsky 670 days ago [-]
I can't remember seeing Greenspun's Tenth Rule invoked so literally.
fho 670 days ago [-]
That story sounds very familiar to me ... I was working in Java but used Haskell for toy projects ... the amount of "if only Java had" thoughts was insane and I tried to do things with Generics that just weren't possible.
In the end I actually was able to use Haskell "in anger" for a work project ... that quickly cured me, and actually getting things done in Haskell became the same slog.
loloquwowndueo 670 days ago [-]
I’ve seen entire teams get fired for using a non-approved language to build the PoC :(
btbuildem 670 days ago [-]
I've done this a number of times, it seems to work quite well, and I don't think of it as malicious.
A "softer" way of handling this is to assign the prototyping / PoC development to a different team. The tribality / process worship / power dynamics between teams virtually guarantee that no code will be reused between the PoC and prod.
delecti 670 days ago [-]
Do you have experience with that second approach actually working? I'm currently stuck maintaining and upgrading multiple microservices that are a total mess because nobody understood how they were built, and we just had to cope with their weird spaghetti.
btbuildem 670 days ago [-]
The way I've seen it work is that the PoC defines the features, then the dev team breaks this down and implements it any way they see fit. It's up to the teams to maintain their best practices and interoperability of the different solutions. Whoever built the PoC doesn't face any of these constraints; their focus is to help finalize the functional spec.
auggierose 670 days ago [-]
You also lose knowledge gained through building the PoC.
btbuildem 670 days ago [-]
I think that depends on why you built the PoC in the first place. Looking through the comments here I get the impression most people are looking at it as "is this technically feasible", while in my experience a PoC / prototype is usually made for business / usability validation ("is this valuable / will people use it").
pyrale 670 days ago [-]
> I think this is a relatively popular development strategy.
...And this is why PoCs should be small-scale but technically sound things, rather than a shortcut competition. People are rarely going to complain about a PoC delivered a week late, but they are definitely going to complain later on, after they ship the PoC against your opinion and development slows down.
sumtechguy 670 days ago [-]
Even if you make it clear they will continue to use it.
I have one bit of code "I do not claim any knowledge of" that I bashed out in under a day with promises that it would be re-written soon. Five years on I still get questions about it. I get questions because it is a PoC that does one thing very well and everything else badly.
btbuildem 670 days ago [-]
I strongly disagree; I think you misunderstood the purpose of a PoC.
We build these things to flesh out ideas, to put something in front of PMs and ideally customers, to get feedback, and to make sure that we have a good bead on what the Most Important Problem is.
PoCs are disposable, sticks-and-duct-tape contraptions to demo features, and maybe to let a small subset of users play with them for a limited time. They are by no means starting points for actual development.
jeromino 669 days ago [-]
By that time the POC shipper got promoted for releasing fast, and no longer does dev on that team, and the replacements get harangued for delivering late. Business doesn't care who created the problem in the first place.
mejutoco 670 days ago [-]
People can complain about anything. That does not mean they are right.
If a PoC is later extended, it will of course have limitations. We do not need to change the meaning of PoC to full product to preemptively solve that.
Instead, when a PoC is done everybody involved needs to understand the implications. If people insist on misinterpreting them, that is on them. Those are political problems, not technical ones. They can be solved by aligning incentives.
TLDR. A PoC is a PoC, not a full product.
pyrale 670 days ago [-]
> Those are political problems, not technical ones. [...] TLDR. A PoC is a PoC, not a full product.
Politics eat technological semantics for breakfast though. It's up to you to decide whether this is a hill you want to die on.
> They can be solved by aligning incentives.
Often enough, that means shipping the PoC.
mejutoco 670 days ago [-]
> Instead, when a PoC is done everybody involved needs to understand the implications.
> Politics eat technological semantics for breakfast though.
I see where you are coming from, but I have to disagree. I am not debating semantics, but reality. A prototype will always have limitations. That is the definition of a prototype.
I understand that in reality it is sometimes worth pushing through (startup), and sometimes it is not needed but is demanded anyway. I am not starry-eyed, but it is useful to know what the reality is before one tries to bend it or adapt to it.
If you want, Nature/laws of physics eats politics for breakfast :)
Aligning incentives in this case could mean giving the developer a stake in the outcome (a bonus), having competent people and clear deliverables, and rewarding products that work well and convert - not just speed or busyness.
Basically you are assuming a sort of feudal relationship between someone handing down arbitrary deadlines (and whatever else you must do) and someone in charge of implementing it who has no say in anything and obeys blindly. That is not how the best engineering is done. Places exist (I have worked in some of them) where people can be professional, and a lot still gets done at the end of the day, even more than with the stick-and-harsh-words approach.
A certain amount of back and forth is healthy and can produce much better outcomes for the company.
pyrale 670 days ago [-]
> If you want, Nature/laws of physics eats politics for breakfast :)
It would be that politics eats tech, but culture eats politics.
And strengthening your PoCs is a predictable cultural consequence of any organisation where leadership forces PoCs into production.
For leadership, aligning incentives means preventing engineers from suffering the consequences of PoCs pushed to production. What you described may work in some situations.
> A certain amount of back and forth is healthy and can produce much better outcomes for the company.
The reaction I described is a kind of back and forth, if a bit conflictual. However, it's leadership's responsibility to ensure that communication happens in a non-conflictual way.
kristiandupont 670 days ago [-]
How do you differentiate? To me, building a PoC is exactly that: create the shortest path, in order to (in)validate the concept.
atlantic 670 days ago [-]
Exactly. The proof-of-concept app should address the core technical obstacles in the new project, and prove they are surmountable in the simplest possible way.
btbuildem 670 days ago [-]
That's a dev-centric perspective.
Alternately, a PoC should address the core business and usability questions. Everything else, including technical feasibility, is secondary and trivially solved with enough resources.
In the order of importance, the first question is "is this worth doing"; only then do you ask "how can we do this".
atlantic 665 days ago [-]
Good point. I'm a dev, so by the time this kind of project reaches me, the business angle has (presumably) been sorted out, and it's the technical feasibility that is in question.
irq-1 670 days ago [-]
“Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. Hence plan to throw one away; you will, anyhow.”
- Fred Brooks
oweiler 670 days ago [-]
Just make sure that it ends in the bin and is not shipped. I often write the PoC as a Kotlin script, with dependencies declared inline with @file annotations, no tests, just the bare minimum. No one would ask to ship this.
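Roughly the shape such a script takes - a sketch only; the dependency coordinates and endpoint below are placeholders, not from any real project:

```kotlin
// poc.main.kts -- throwaway PoC script, meant for the bin, not for shipping
@file:DependsOn("com.squareup.okhttp3:okhttp:4.12.0")

import okhttp3.OkHttpClient
import okhttp3.Request

// Bare minimum: hit one endpoint and dump the response. No error handling, no tests.
val client = OkHttpClient()
val request = Request.Builder().url("https://example.com/api/items").build()
client.newCall(request).execute().use { response ->
    println(response.body?.string())
}
```

Run it with `kotlin poc.main.kts` and throw it away once the idea is validated.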
i-use-nixos-btw 669 days ago [-]
Absolutely. A PoC is supposed to be badly thought out, that’s the point. Get from A to B quickly, in whatever way possible, and then get feedback while building the real deal properly.
There has been some pushback against that idea on here, but I’m firm on it.
My approach these days is actually to get the PoC done with a lot of assistance from ChatGPT and copilot. It generally works great and fulfils the basic criteria, but isn’t production grade in the slightest. That’s fine - it means I can provide PoCs in the space of an hour or two, and it gives me something to aim for and refer to.
senel 670 days ago [-]
This also helps you think about how feasible the overall project would be in terms of coding.
t43562 670 days ago [-]
Usually one understands a project poorly at the start and much better at the end.
So to agree with the article I think it's unwise to make all your decisions at the point where you know the least.
By getting something working you improve your understanding and then you can choose optimisations and abstractions in a judicious manner - no point in optimising things that end up having no impact and no point in introducing abstractions that in practice never will be used.
There are those who imagine that you can completely plan work before lifting a finger, and it's sometimes a struggle dealing with them. Another problem is when some aspect of the outcome looms large in people's minds.
I was once on a project where we thought we'd be charging people based on their usage of the product. This made the reporting system very critical because if we messed anything up we'd be cheating our customers or giving them freebies. In the end we realised nobody wanted to pay that way so this huge design consideration which made everything much more complicated was gone. This sort of pattern happens often and it was a mistake to start that way. But that was a requirements mistake rather than a programming one and this is why your requirements are so critical. A single sentence in a document can double the cost of a project and your customers often don't realise that.
corry 670 days ago [-]
"Usually one understands a project poorly at the start and much better at the end" <--- this is 100% it, totally agree.
It also happens at the product level - building features too early or too deeply for a poorly understood workflow; abstracting things that don't need to be abstracted for probably YEARS; over-engineering for major scale despite having no users; knowing that ONE DAY you'll need, say, multi-language support, so on Day 1 you over-complicate everything by insisting on a language framework etc.
Beginners lack the foresight of 'what this will look like at scale' or maybe better said 'why this won't work at scale', but that's ironically why they are better early on - speed matters a lot more than anticipating potential scale issues years later.
OkayPhysicist 670 days ago [-]
> There are those who imagine that you can completely plan work before lifting a finger
My response to these types is always to point out that we already have tools that translate a complete plan of a project into software: They're called compilers, and that plan is source code.
kqr 670 days ago [-]
There's a fine line between shortest path first and most certain path first. It's tempting to jump in and do the things you know for sure how they will work – these are the things with the lowest need for exploration. Early-stage, you should focus on the things that are most uncertain, the things that have a chance of dooming the entire project if you don't understand them better.
You can still take the shortest path while focusing on the most uncertain first, but it is another concern that needs to be prioritised.
evanlh 670 days ago [-]
Yes, totally agree with this-- I perhaps should have emphasized that my point is to code the shortest path thru the hardest problem. If somehow you just code around the hard part & keep deferring that til later you haven't learned anything.
tuukkah 670 days ago [-]
I think the shortest/fastest path can have value even if you didn't learn much yet, because it can provide an end-to-end platform for learning: something to show and discuss with stakeholders and potential users.
After you have that platform, the next target can be the biggest uncertainty / hardest problem that you need to solve to achieve an MVP. "We have X, can we get to an MVP?"
(After you have an MVP, the hardest problems may still await you but you can prioritize based on what increases value.)
Kalanos 670 days ago [-]
right, you might get an "actually, how about X?"
tuukkah 670 days ago [-]
Yes, or even more likely Y ;-)
Tade0 670 days ago [-]
I think this advice is a little vague and therefore easy to get wrong.
My approach converged to something that can be understood as a form of progressive enhancement, so basically providing the simplest usable version of a given feature, bearing in mind that eventually you'll have to expand it to what was originally requested - but that's all in separate tickets.
Some examples:
Six different payment processors? Start with one or two.
SPA frontend? Start with server-rendered. The tech is there to smoothly transition from one to the other, but it's possible that this will never be required.
That colour picker shaped like a peacock, following the mouse with its gaze? Just use a regular colour picker, but make it easily swappable. Where's that in the requirements anyway?
What's interesting is that more often than not enhancements lose priority in favour of new features.
Meanwhile some universal techniques like preferring pure functions where reasonable, using immutable data structures, and actually having an architecture take as much time as doing sloppy work and go a long way toward ensuring maintainability.
remon 670 days ago [-]
Your examples are very reasonable, and somewhat at odds with what the author is advising, I think. As you say, the advice is vague, but I suspect it's also just fundamentally flawed.
There are very few real world scenarios where "just make it work" is a good approach to tackle engineering problems that require senior developers (read: developers with extensive experience in the related problem domain) in the first place.
My approach, which I suspect is similar to what you're describing, is to define functional contracts first. Making them work is actually pretty low on the priority list, since that's consistently also the easiest thing to get right.
The hard part is correctly defining expected behaviour, interfaces and so on. In your example the hard part is constructing an interface/API for a payment processor that satisfies your business needs on the consuming side and is reasonably implementable for the relevant major payment providers. Actually implementing one, the making it work bit, is just not where the senior expertise adds value.
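To make that concrete, a minimal sketch of such a contract for the payment example - every name and field here is invented for illustration:

```kotlin
// Contract-first: the consuming side codes against this interface; the concrete
// Stripe/Adyen/whatever adapters that "make it work" come later and are the easy part.
data class ChargeRequest(val amountCents: Long, val currency: String, val customerId: String)

sealed interface ChargeResult {
    data class Success(val transactionId: String) : ChargeResult
    data class Declined(val reason: String) : ChargeResult
    data class Failure(val cause: Throwable) : ChargeResult
}

interface PaymentProcessor {
    fun charge(request: ChargeRequest): ChargeResult
    fun refund(transactionId: String, amountCents: Long): ChargeResult
}
```

Getting this shape right for the business side and for the providers is where the senior time goes.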
Unless, of course, you enjoy shipping glorified PoCs to your customers because your stakeholders "saw it work" and blissfully ignored the metric tons of tech debt you just introduced, not to mention a skewed perception of the amount of effort needed for that deliverable.
Tade0 670 days ago [-]
The whole reasoning behind my approach is that stakeholders have needs and wants, but unfortunately these two get mixed up in communication very often, so to uncover which is which one must pit them against each other. To this end every task is split into a "need" part and a "possibly not a need, only a want" part, with the latter later weighed against the "need" part of the next task.
Sort of insidious, but stakeholders are happy because they get their needs catered to in a timely manner.
tuukkah 670 days ago [-]
I notice they link to The Pragmatic Programmer here:
> If it’s a true greenfield project you are “prototyping”, if it’s part of an existing project you are making a “tracer bullet”.
In the C2 wiki, someone paraphrases it like this:
> In PragmaticProgrammer, they talk about TracerBullets in the context of building an ArchitecturalPrototype - a bare-bones skeleton of your system that is complete enough to hang future pieces of functionality on. It's exploratory, but it's not really a prototype because you are not planning to throw it away - it will become the foundation of your real system.
https://wiki.c2.com/?TracerBullets
Does anyone remember more about what Pragmatic Programmer says about this topic?
evanlh 670 days ago [-]
Hi! Yes, I couldn't find a better source; here's an excerpt from the book--
"We once undertook a complex client-server database marketing project. Part of its requirement was the ability to specify and execute temporal queries. The servers were a range of relational and specialized databases. The client GUI, written in Object Pascal, used a set of C libraries to provide an interface to the servers. The user's query was stored on the server in a Lisp-like notation before being converted to optimized SQL just prior to execution. There were many unknowns and many different environments, and no one was too sure how the GUI should behave. This was a great opportunity to use tracer code. We developed the framework for the front end, libraries for representing the queries, and a structure for converting a stored query into a database-specific query. Then we put it all together and checked that it worked. For that initial build, all we could do was submit a query that listed all the rows in a table, but it proved that the UI could talk to the libraries, the libraries could serialize and unserialize a query, and the server could generate SQL from the result. Over the following months we gradually fleshed out this basic structure, adding new functionality by augmenting each component of the tracer code in parallel. When the UI added a new query type, the library grew and the SQL generation was made more sophisticated. Tracer code is not disposable: you write it for keeps. It contains all the error checking, structuring, documentation, and self-checking that any piece of production code has. It simply is not fully functional. However, once you have achieved an end-to-end connection among the components of your system, you can check how close to the target you are, adjusting if necessary. Once you're on target, adding functionality is easy."
They later list some of the advantages of this approach--
- Users get to see something working early
- Developers build a structure to work in.
And differentiate it from prototyping--
"The tracer code approach addresses a different problem. You need to know how the application as a whole hangs together. You want to show your users how the interactions will work in practice, and you want to give your developers an architectural skeleton on which to hang code. In this case, you might construct a tracer consisting of a trivial implementation of the container packing algorithm (maybe something like first-come, first-served) and a simple but working user interface. Once you have all the components in the application plumbed together, you have a framework to show your users and your developers. Over time, you add to this framework with new functionality, completing stubbed routines. But the framework stays intact, and you know the system will continue to behave the way it did when your first tracer code was completed."
It's still a great book 20 years later, I highly recommend picking up a copy.
tuukkah 670 days ago [-]
Great examples! We may have been doing it, but didn't have a name for it. Having more names is useful because when people say "prototype" or "PoC" they can mean many things.
agentultra 670 days ago [-]
Prototyping is a wonderful thing we should do more of.
However, in my experience, when you take this approach the majority of organizations will make the prototype the product. You will never throw out that code. It will simply be added on to, papered over, and mixed up with everything else. What started off as a fine prototype becomes an error-ridden ball of mud that nobody understands anymore, where working on the code takes longer and longer and carries a higher risk of introducing even more errors.
The key thing with prototypes is that you have to mercilessly rip that code out before people start extending it and relying on it otherwise it's going to stick around.
sodapopcan 670 days ago [-]
I worked at an org that did prototyping and it was fantastic. It was especially great for validating whether a customer actually wanted the feature they were asking for (they often didn't). We _always_ threw away the work and started fresh if they did want it. What really helped keep us honest here was that we did TDD and pair programmed. We also prioritized getting the prototype finished as quickly as possible. This meant writing really horrible code, calling variables and functions things like `foo`, and breaking other features if necessary.
pksebben 670 days ago [-]
> The key thing with prototypes is that you have to mercilessly rip that code out before people start extending it and relying on it otherwise it's going to stick around.
A way to avoid this is to start with a clearly defined data model between components, and then _within the context of each of those_ hit the gas towards an MVP, flesh that out, refactor, etc etc etc.
Not always a possibility, I'll grant, but a ton of headache can be saved by being religious about API-ifying those services which can be made into APIs. Stable I/O, chaotic move-fast-break-stuff for internals.
moffkalast 670 days ago [-]
I recently left a company that had mostly just wanted random stuff glued onto an early prototype for years. It gets progressively more and more infuriating as it becomes increasingly obvious that a full rewrite would take less time than adding the stupid <thing of the week> that management wants. Doubly so with AI acceleration for new projects.
After a while it becomes impossible to convince anyone to ditch the prototype because of all the testing hours that have been poured into it, making for a very strong sunk cost fallacy.
lhnz 670 days ago [-]
Maybe I'm not senior enough, but I think this misses the point. If I'm not prototyping, I don't try to "code the shortest path first", I try to code using the most popular libraries, and the most concrete, broad-strokes, fundamental abstractions, towards a goal of improving understandability.
At the end of the day, most code is deleted anyway. But even then people still need to understand it. I am attempting to write in the "lingua franca" and to make it clear to myself and others what I understand about the problem.
nickelpro 670 days ago [-]
It's contextual, and the author is presenting it as universal.
If you don't know how to build the thing, the author's advice is on the right track. If you're unfamiliar with the problem space, just bang away at it. Write that god object, that 500 line function, hardcode all the things. You wouldn't be able to come up with useful abstractions so don't bother trying. Get your stupid terrible code to work, the tiny demo operational, and then take a step back and understand your creation and refactor.
If you do know how to build the thing, if this is your 5th time writing a task scheduler and you know what the ground work for a new one should look like, trust your instincts.
lhnz 670 days ago [-]
> If you're unfamiliar with the problem space, just bang away at it.
> Write that god object, that 500 line function, hardcode all the things.
> You wouldn't be able to come up with useful abstractions so don't
> bother trying.
I take your point about this applying to the context of prototyping. But speaking universally, I'm trying to explain a third way in which you don't try to pick abstractions or apply hyped patterns or find code that you can make DRY, but instead try to write your code conservatively as if it were to be featured in a programming tutorial for newbie engineers. You try to make it easy to understand but you don't make choices about the solution that would require more understanding than you currently have.
In this case, it's not about applying "object-oriented design & algorithms & design patterns & frameworks & abstractions & higher-order functions & monoids & whatever else you found on Hacker News". It could be writing a god object or 500 line function, but only if these are easy to understand.
Basically, I think we should write programs as communication to ourselves and others and as a form of "theory-building". I think our artifacts should fit into this objective and communicate understanding. (I am heavily Naur-pilled, e.g. https://pages.cs.wisc.edu/~remzi/Naur.pdf)
nickelpro 670 days ago [-]
Then I disagree with you. If you don't know how to build a static asset server or a Vulkan renderer or an arena allocator at all, then you don't know how to build those things conservatively either.
The first pass in situations you have never been in before is always going to be completely worthless. You do not have any tacit knowledge of the problem space. Your prototype is strictly to gain that tacit knowledge, zero mental stamina should be spent on anyone that isn't between the chair and the keyboard understanding that code.
Once you've built that prototype, you use it as a reference for the ground up "conservative" or whatever other implementation strategy you want to take for the real thing.
lhnz 670 days ago [-]
I think perhaps conservative means something different to you than it does to me.
You can write code conservatively by avoiding higher-level abstractions and preferring boring technologies. If you look at the engineering indulgences discussed within the article, they are all things that you can conservatively avoid without any prior understanding of the problem.
I also disagree that zero mental stamina should be spent on anyone that isn't between the chair and the keyboard. If you're a software engineer working within a company, the approach should generally be understandable by your team at least.
viraptor 670 days ago [-]
> At the end of the day, most code is deleted anyway.
The timescale for that really depends on the project. I'm digging into git history from ~2013 every few weeks, in an app which definitely runs and handles lots of transactions today. That code is not getting deleted any time soon, and any explanations about the reasons in the PRs are extremely helpful. Understandability is great in this case.
lhnz 670 days ago [-]
Yes, we value understandability if an application is successful.
My point was that even if the logic isn't valuable, understandability helps the person reading it decide whether to delete or replace it. Code that can't be understood and sits within vast applications providing no value is often very difficult to remove and ends up being a long-term cost to a business.
joshdata 670 days ago [-]
This is a similar idea to how I understand the Elephant Carpaccio exercise by Henrik Kniberg & Alistair Cockburn (2013), from what I've been able to Google. The key idea is that work should be broken down into "vertical slices" where vertical means that the entire user story is captured, or as it's described at https://uploads-ssl.webflow.com/5e3bed81529ab12a517031ab/5ec..., "very thin slices, each one still elephant-shaped." The first vertical slice might be a mockup or very-low-fidelity prototype of the complete project and subsequent slices are enhancements following user stories. Horizontal slices might be, say, system components or other subtasks that leave you without something prototype-looking until all of the slices are complete. At least, this is how I've interpreted what I've read about it.
zrkrlc 670 days ago [-]
Mind giving a concrete example? I read through the entire thing but couldn't make heads or tails of it. Is the point that your user stories should touch upon every aspect of your app, while still being incremental?
joshdata 670 days ago [-]
I think the idea is that for a slice of an elephant to be "elephant shaped," it has a bit of all of the key parts of an elephant - a bit of the trunk, a bit of a heart, stubs for four legs, whatever else makes an elephant an elephant. But what do the elephant's organs map to? I agree that the information on Elephant Carpaccio that I've been able to find doesn't really answer this.
My best guess is the idea is that it maps to aspects of a user story like "get input from the user," "do some business logic," "show output to user." So even the first slice is a working prototype in some superficial sense. The elephant organs might be app components (UI, database, etc.), but in the first slice you don't have a complete UI (maybe you have text input) and you don't have a production database (maybe you just have an in-memory dictionary) and you don't have robust business logic. You have the whole stack, but each part of the stack is incomplete. That's what I think makes it a vertical slice.
A horizontal slice (what not to do) would be one complete elephant organ. Maybe that's a production transactional database. So in the first slice you have a complete database or you've written the final business logic, but none of the other things that you would need in a mockup/prototype/MVP or an integration test.
Anyway, this is my best guess.
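A toy sketch of what that first slice could look like in code, assuming a simple record-keeping app - everything here is a stand-in:

```kotlin
// First vertical slice: every layer exists, none of them is finished.
val db = mutableMapOf<String, Int>()  // stand-in for the production database

fun main() {
    print("Item name: ")                                   // stand-in for the real UI
    val name = readLine().orEmpty().trim()
    db[name] = (db[name] ?: 0) + 1                         // stand-in for real business logic
    println("Stored '$name' (seen ${db[name]} times).")    // stand-in for real output/reporting
}
```

Each later slice would replace one of these stand-ins with something closer to the real thing while keeping the end-to-end path working.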
joshdata 668 days ago [-]
Self-replying...
I found the comments really helpful so I wrote up some thoughts on different approaches to tackling projects. Hope this might be helpful to others.
You’re mixing product decisions with code decisions.
Product should make sure to create an MVP, aka the fastest solution from A to B.
Code should be done right no matter what, you're being paid as an expert to do that. If they wanted whatever crappy code gets it done, they would do it themselves with some nocode solution and test the hypothesis.
mschuster91 670 days ago [-]
> Code should be done right no matter what, you’re being paid as an expert to do that
As I've written in a recent thread... that may be the case in the academic world, but certainly not in the business world, where time-to-market and profitability always trump code quality if not explicitly required / audited by client contracts.
defrost 670 days ago [-]
Somewhere twixt the two is another domain; the hard equation engineering world.
Computations along novel curved beam configurations have to be correct, 400 m deep billion dollar / annum mine stope angles need to be both aggressive and safe, et al.
Often there's not as much competition as might be in other domains, and while profitability pays the bills the real onus is on the production of provably correct software (to the greatest degree practical).
smugglerFlynn 670 days ago [-]
The problem sounds more like "how do I deliver quality code when 95% of companies out there do not see quality as an objective, oversell poorly made PoCs, do not give space and resources for design and engineering, and push back on any refactoring?"
The answer is very simple: you don't.
pyrale 670 days ago [-]
> where time-to-market and profitability always trump code quality
What actually happens is more like :
"deliver as fast a possible, no matter what"
...
"The poc was delivered in a week, why are new features so slow? And can you explain what this refactoring item adds to the bottom line?"
mejutoco 670 days ago [-]
That seems like a failure of management.
pyrale 670 days ago [-]
Well, if one's management consistently fails in a predictable way that one can adapt to, one should probably adapt to it regardless of whether they believe it's a failure.
mejutoco 670 days ago [-]
That is one option, and it is fine. Adapting to people treating a PoC as a full product by making it a more full-featured product is basically to stop doing PoCs.
mejutoco 670 days ago [-]
I see code as engineering, where there is no "right". There is "right" for the features, or right for the safety, or right for the budget, in a balance of compromises. Sometimes "right" is crappy code, and sometimes it is formally verified code.
sktrdie 670 days ago [-]
Too bad "the right way" also differs between engineering experts.
sktrdie 670 days ago [-]
Like with everything in life, I'll answer this article with a strikingly "it depends".
What are your business requirements? How much budget do you have? Deadlines? Do you already have a clearly defined audience?
If you're a company like Figma then dedicating resources to crafting the hell out of the product & pushing the envelope in terms of maintainability, tests, performance & software craftsmanship is a must. Probably going directly from A -> B is not scalable.
If you're a company with 200 customers and 3 developers then I feel it's the opposite. Dedicating time & resources to all those premature optimizations might kill your company.
I remember seeing something along the lines of "Over-engineering cited as major cause of product failure. Because it never ships."
YuukiRey 670 days ago [-]
I 100% agree with this. Whenever I force myself to really, really do the simplest and dumbest thing first it leads to a better outcome.
I get to a working version of the feature quicker. From that, I get more insights. Sometimes I even realize that the simple and dumb version is good enough already.
I would say that this is the single most efficient rule/heuristic I have for making sure my productivity stays high.
gcanyon 670 days ago [-]
I fell deep into this trap just yesterday, solving a Project Euler problem in Python. It involved a 2-million+ digit number. I'm just starting with Python, and while I know it transparently handles large integers, out of an abundance of caution I spent an hour optimizing to avoid dealing with greater-than-64-bit values, since the result needed is modulo a <64 bit value.
My code ran in about 4 seconds.
Then I thought I'd try a slightly larger optimization that involved ~128 bit values. That ran in a second, so obviously the switch to large integers either doesn't happen at 64 bits, or Python just handles it really well.
Then I thought to just do the math and let Python sort the results. One line of Python. Took ~20 seconds to write. Calculated the 2-million digit number and then did the modulo. Ran in a small fraction of a second. <sigh>
a_c 670 days ago [-]
Want to add that the shortest path is only obvious in hindsight, and someone's shortest path may be shorter than yours. Incremental iteration seems all the rage nowadays, but it doesn't take one's taste and experience into account. If one's shortest path is long and winding, no amount of iteration will bring you to any sort of local optimum. The ultimate judgement is whether the thing you built is getting used. If a tree falls in a forest and no one heard it, it didn't make a sound. It doesn't matter whether your code is good or bad. Build a mental map that leads you to building useful things.
nirui 670 days ago [-]
Based on the context provided in the article, I think what the author actually wanted was a Minimum Viable Product; that is, instead of coding the shortest path first, you remove the need for unnecessary paths (without cutting corners). This should create a reasonably correct product that is safe to use and easy to build on top of.
But I do agree about the CI/CD part. In my case however, it's because many of these CI/CD services use their own proprietary config formats which are unfriendly to local testing. I can't remember how much time I've spent on trial-and-error runs against Travis CI just to get the build process right. I imagine things would feel a lot different if the services supported NixOS or just Dockerfile-based scripts, because then I could at least try the script locally before invoking the online service.
cosmiccatnap 670 days ago [-]
There are so many articles that float through here that can be summed up as "do this, unless you should do that" with a title equivalent to "why you should always do this"
Does this article present findings from other projects? Does it have a personal code story? Does it use any data or even anecdotal evidence to support its claims?
The answer to all of these is NO, it does not... it's just a half-hearted article talking about a fundamental problem in modern programming, with no real solutions other than an axe to grind whose origins they can't even really elaborate on.
dt3ft 670 days ago [-]
I built FlingUp following the quickest path, and compared to what we build at work, FlingUp is light years ahead. Not held down by mindless patterns, very easy and quick to extend and build upon. Adding one new db field at work requires half a day of work until it is available for use on the frontend. The applications at work are so ridiculously overengineered that I sometimes feel we had too much money and time to throw at the codebase; engineers were experimenting and playing with the pattern of the month. Maintainability is pretty much gone.
I get what he's saying, but there are definitely several gotchas. The main project I work on these days has code that definitely isn't the shortest path already checked in to main and running in prod. I'm that guy - set up CI/CD, APM, upgraded frameworks, and am working on a major refactor. I tell management we can move faster on features and bugfixes if we can reduce the complexity of existing code.
But I do struggle with how far to take that. I worry I’m getting too deep and need to focus on features.
koromak 670 days ago [-]
The problem for me is that the shortest path often has nothing to do with the "best" path. Committing to it means you're never actually going to get it working right. You're going to get a knot in your stomach a year later when edits come down to your crappy MVP feature that feels like shit to work on.
gjvc 670 days ago [-]
A by-product of coding up a working solution is one of better understanding the problem. Unfortunately, this valuable result is invisible to many, and its existence seldom acknowledged. Documentation can serve as a proxy for it, to make it somewhat tangible.
xiphias2 670 days ago [-]
This is great. One modification I would make is to add CI/CD and testing when there's a regression that was not expected, or when it makes development simpler. That way I don't have to think about the "right time" to introduce these.
nickelpro 670 days ago [-]
I agree with the sentiment but not the examples:
> spend days setting up a CI/CD pipeline
This should/does take minutes. It's like 15 lines of YAML for most CI providers. Ideally it's a part of the template you use for new code.
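For reference, a minimal pipeline really is about that size; a sketch of roughly what it looks like for GitHub Actions (the runner, JDK version, and build command are placeholders):

```yaml
# .github/workflows/ci.yml -- minimal build-and-test pipeline
name: CI
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: 21
      - run: ./gradlew build   # the build runs the test suite too
```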
> use a cool new library they just found
Integrating a useful lib that makes the code simpler should be done from day 1. Don't code your own platform lib and switch to SDL halfway through development. Don't code your "tracer bullet" on Win32 API calls when you're going to be using libuv. And like CI/CD, integrating libraries into the build should be painless
> if it’s software that’s going to ship, it needs tests
Oftentimes the tests are the only way you know if the code is even minimum viable, even manages to be the "tracer bullet". Ok you implemented a new feature, what says the code even runs and doesn't segfault immediately if there's not a test to build the new code into and run? Not comprehensive tests, but something
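Even a single smoke test in that spirit is enough; a sketch, assuming JUnit 5 is on the classpath (the function under test is a trivial invented stand-in for whatever you just wrote):

```kotlin
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Test

// The "feature" under test -- a stand-in for the new code.
fun schedule(tasks: List<String>): List<String> = tasks.sorted()

// Not a comprehensive suite -- just proof the new code builds and runs without blowing up.
class SchedulerSmokeTest {
    @Test
    fun `runs end to end on a trivial input`() {
        val result = schedule(listOf("b", "a"))
        assertTrue(result.isNotEmpty())
    }
}
```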
heisenbit 670 days ago [-]
>> spend days setting up a CI/CD pipeline
> This should/does take minutes
Should, if you discount tool selection, learning the tools, managing access control, dealing with technical and organizational constraints, documentation, and alignment across the team.
nickelpro 670 days ago [-]
All of those things are easy. If they're not easy you have organizational problems that are slowing down development unnecessarily.
Your team has a CI system of choice, plugging a new thing into that CI system should be trivial, if it's not you're doing CI very poorly.
Hard-to-use is a bug.
sumanthvepa 673 days ago [-]
This is excellent advice. I would only modify it to say that one should first focus on getting something working, not necessarily the shortest path. Then you can refactor and improve the code before you actually deploy it to production.
m3kw9 670 days ago [-]
Yeah, over-optimizing before getting the proof of concept working minimally should always be recognized as gambling, as then you may lose all that optimization if the stuff doesn't work out down the line.
heisenbit 670 days ago [-]
It reminds me of Go (not the language). Beginners play straight lines, then one learns fancy moves and complicates everything, while high-ranked players' patterns again exhibit straighter, simpler forms.
roflyear 670 days ago [-]
I like "throw away your first solution" or "be prepared to throw away your first solution."
And the difference between a senior dev and a junior dev is knowing when to stop.
uraura 670 days ago [-]
Recently I've been asking ChatGPT to do that. I become the one to improve it.
ankaAr 670 days ago [-]
While reading I was thinking of Dan Harmon's advice about writing.
It is the same.
Just write/code then, when the thing is done, start the reviewing stage.
Anyway, you never know everything about your project when you start it.
baseballpuck 673 days ago [-]
This is a balancing act. Spending the time later to undo all of the technical debt accumulated can take even longer.
evanlh 673 days ago [-]
Yeah 100%. I'm not suggesting land technical debt, it's more about the approach to solving the problem in the early stages while you're still seeking a solution-- don't get bogged down by perfectly conforming to a bunch of intermediary abstractions on your way towards the goal.
jaggederest 670 days ago [-]
And my cynical response is that, if you demonstrate something working too quickly in certain organizations, you will be voluntold to work on a new project before it has been factored at all.
hkon 670 days ago [-]
For sideprojects, definitely, there is simply not enough time otherwise.
maverwa 670 days ago [-]
For me, it's not just time, but also motivation. I cannot count how many side projects died in the early days, just because I came up with this impossibly perfect thing it should be, having all the things. Then implementing it becomes a chore. The amount of code/time I'd have to invest to make this imagination a reality becomes a mountain I cannot climb, whether because of a lack of skill/knowledge, time, motivation, dedication, or just all of them. Usually this ends in me giving up the whole thing for good, ending most of these hobby projects before they even began.
Starting with a "MVP", something that can be implemented quickly (relative to "the perfect thing") and provides some immediate benefit or feedback, pretty much always works better for me.
It's something I still struggle a lot with. It's hard for me to get things done, because whatever I build never holds up to what I want it to be. But I think I am getting better at accepting that, and just getting _something_ done.
imtringued 670 days ago [-]
Well-kept secret: if you want to get things done, keep the scope within something you can actually do.
If you are unhappy with the fact that you planned to do more, then congratulations, you can just get things done again!
Kalanos 670 days ago [-]
Be agile? Haha. I refer to this as punching a hole all the way through and then pulling the rest through. Punch + pull.
herval11 670 days ago [-]
Once you become super senior you actually realize what the author said here is not completely correct. This guy has experience, but he hasn't reached nirvana.
There is a singular high level design pattern/abstraction that you can use in actuality to start off your projects.
There is no name for this pattern but it is essentially this:
Segregate io and mutations away from pure functions. Write your code in modular components such that all your logic is in pure functions and all your io and mutations are in other modules.
Why does this style of organization work? Because delineation and organization of every form of application you can think of benefits from breaking out your program organization along this pattern.
Your pure functions will be the most modular, reusable, and testable. You will rarely need to rearchitect logic in pure functions... Instead typically you write new modules and rearrange core functions and recompose them in different ways with newly added pure functions to get from A to B.
The errors and organizational mistakes will happen at the io layer. Those functions likely need to be replaced/overhauled. It's inevitable. Exactly like the author says this section of your program is the most experimental because you are exploring a new technological space.
But the thing is you segregated this away from all your pure logic. So then you're good. You can modify this section of your project and it remains entirely separate from your pure logic.
This pattern has several side effects. One side effect is it automatically makes your code highly unit testable. All pure functions are easily unit tested.
The second side effect is that it maximizes the modularity of your program. This sort of programming nirvana - where you search for the right abstraction such that all your code reaches maximum reusability and refactors simply involve moving around and recomposing core logic modules - is reached with pure functions as your core abstraction primitive.
You're not going to find this pattern listed in a blog post or anything like that. It's not well known. A software engineer gains this knowledge through experience and luck. You have to stumble on this pattern in order to know it. Senior engineers as a result can spend years following the hack first philosophy in the blog post without ever knowing about a heavy abstraction that can be reused in every single context.
If you don't believe me, try it. Try some project that segregates logic away from io. You will indeed find that most of your edits and reorganization happen in the things that touch io. Your pure logic remains untouched and can even be reused in completely different projects as well!
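A minimal sketch of the separation being described (file name and logic invented, but the shape is the point):

```kotlin
import java.io.File

// Pure core: no IO, deterministic, trivially unit-testable, reusable as-is elsewhere.
fun countByFirstColumn(lines: List<String>): Map<String, Int> =
    lines.filter { it.isNotBlank() }
        .groupingBy { it.substringBefore(',') }
        .eachCount()

// Impure shell: all IO lives here -- and this is the part most likely to be rewritten.
fun main() {
    val lines = File("orders.csv").readLines()            // IO in
    val counts = countByFirstColumn(lines)                // pure logic
    counts.forEach { (key, n) -> println("$key: $n") }    // IO out
}
```

Swap the file for a socket or a database tomorrow and countByFirstColumn doesn't change.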
jstimpfle 669 days ago [-]
IME, separating I/O in the code works because it literally is separate -- it usually happens at the boundaries of the application. It rarely makes any sense to connect two code components (running in the same process) with a disk-backed information path. And where it does make sense -- well, your I/O code moves deeper inside the application.
The "boundary" argument works even better where it's about network I/O or video output, because applications are even less likely to connect code modules using such types of data.
Other than that I don't buy much into the I/O vs "pure" distinction at all; it's pretty much an arbitrary categorization, since there really isn't much of a difference between writing to disk or to memory. I think the idea that writing to memory is somehow purer comes mostly from Haskell and similar languages that somewhat enforce immutability at the language level. Given that it is still an artificial categorization I don't take this too seriously. The one benefit I see is that it often does improve clarity of architecture to have mostly construct-consume-discard data access patterns without any mutability after construction. Oh, and potentially you can handle errors from disk I/O, while you cannot really handle errors from memory I/O.
000ooo000 670 days ago [-]
tl;dr: long form version of "make it work, then make it pretty"
Cthulhu_ 670 days ago [-]
Which in itself is a short form of "make it work, make it pretty, make it fast, in that order", which tackles both over-engineering on an architectural and a performance level.
For the vast majority of any task that requires writing code, performance is the least of your concerns; your code is fast enough, your compiler and runtime are fast enough, the hardware is fast enough. Use decent algorithms / don't do anything stupid (e.g. N+1 queries if you use an ORM), but don't fret too much about whether it's fast enough either.
If your code is working and pretty, nine times out of ten it's fast enough. For the last 10%, measure before you make assumptions.
friendzis 670 days ago [-]
> For the vast majority of any task that requires writing code, performance is the least of your concerns;
I always have conflicting thoughts when faced with such statements. On one hand, you are right - in many cases any single code unit you write is not going to be the performance bottleneck anyway, and if it is you can optimize it later.
On the other hand, there are scaling characteristics, both algorithmic and architectural. Once you choose an architecture that is just bad at the scale you target, it is going to be increasingly difficult to change that.
I guess the takeaway here is that we often forget the distinction between code performance and product performance.
xyzzy123 670 days ago [-]
Also conflicted, but I find myself happier working with "the dumbest thing that barely works" for a first release of something. To me, this is "optimal engineering under uncertainty".
Obviously, there are limits. And maybe a difference in perspective here reflects our respective typical uncertainty.
I'd rather optimise based on user feedback and with production traces than "in a vacuum".
Very often I don't even know if the feature or product is a good idea or how much / whether anyone will actually use it.
Optimisations usually come at the cost of some flexibility and this can hurt when there's a need to evolve the product in a direction I didn't expect (and for some reason that happens way more often than it seems like it should).
imtringued 670 days ago [-]
>To me, this is "optimal engineering under uncertainty".
Information acquisition costs are the worst, aren't they? They are everywhere and they don't appear neatly on your bill.
sanitycheck 670 days ago [-]
> For the vast majority of any task that requires writing code, performance is the least of your concerns;
Maybe, but it's perhaps not as rare as you think.
My main current project exists because development of the previous version was aborted after a year due to fundamental technology/architecture choices which it turned out would never achieve sufficient performance on the target devices.
Yesterday I was handed someone else's Android app to debug; it turns out to take 8 minutes for each (incremental!) compile for some reason. That's a performance problem which hugely slows down development.
A couple of years ago I wrote a moderately complicated one-way sync script to take data from one system and feed it into another. I had to artificially limit the number of requests per minute to about 200 because apparently otherwise I was putting "massive load" on the target system, causing it to auto-scale up several times. This was mostly GETs with the occasional small POST/PUT. Something very wrong there!
Pannoniae 670 days ago [-]
This approach of "things being fast enough" leads to everything being slightly slow - we literally had better usability and latency on our devices in the 90s than we have now. It all adds up.
Shaanie 670 days ago [-]
Another way to think about it is that software that focuses on performance loses market share to software that focuses on other things.
Pannoniae 670 days ago [-]
And that's a perverse incentive, and we must evaluate why that is the case. It doesn't have to be that way.
evandale 670 days ago [-]
It probably has to do with the fact that a lot of people are patient enough to wait 10 seconds for something to finish if a) they understand it and b) it does the job.
The average person will put up with a lot of friction to use something they're familiar with and needs a lot of incentive to change. If your thumbnails fail to load one time in 10,000, or every 100,000th profile picture upload fails, most people will just retry and hope it works the second time, not find a different service or app to use.
Back then, I had my first crush on Lisp, which made me think a lot in terms of higher-order functions. When I noticed that thinking "in lispy ways" let me avoid mental blocks, I used this to finally figure out the solution for my work problem; I then implemented a quick and dirty prototype in Common Lisp, tested it, and proceeded to rewrite it in PHP.
Obviously, it was a slog, but half-way through crying and thinking things like "if only PHP had #'mapcan...", I finally figured that, with a little abuse of some more obscure (back then) PHP features, I could implement in PHP all the high-level constructs I used in my Lisp solution. So I did, and then I was able to trivially translate the whole solution to PHP.
End result: it worked, but it's good that the CEO didn't manage to hire those promised additional developers for that project, because I do not envy anyone who would have to read through my hacky solution implemented on top of a non-idiomatic Common Lisp emulation layer...
So sure, write the PoC in whatever obscure language you like, but be aware that past certain size/burnout level, someone (or you) may figure it's easier to port the language you used than just the PoC.
In the end I actually was able to use Haskell "in anger" for a work project ... that quickly cured me, and actually getting things done in Haskell became the same slog.
A "softer" way of handling this is to assign the prototyping / PoC development to a different team. The tribality / process worship / power dynamics between teams virtually guarantee that no code will be reused between the PoC and prod.
...And this is why PoCs should be small-scale but technically sound things, rather than a shortcut competition. People are rarely going to complain about a PoC delivered a week late, but they are definitely going to complain later on, after they ship the PoC against your opinion and development slows down.
I have one bit of code 'i do not claim any knowledge of' that I bashed out in under 1 day with promises that it will be re-written soon. 5 years on I still get questions about it. I get questions because it is a PoC that does one thing very well and everything else badly.
We build these things to flesh out ideas, to put something in front of PMs and ideally customers, to get feedback, and to make sure that we have a good bead on what the Most Important Problem is.
PoCs are disposable, sticks-and-duct-tape contraptions to demo features, and maybe to let a small subset of users play with them for a limited time. They are by no means starting points for actual development.
If a PoC is later extended, it will of course have limitations. We do not need to change the meaning of PoC to full product to preemptively solve that.
Instead, when a PoC is done everybody involved needs to understand the implications. If people insist on misinterpreting them, that is on them. Those are political problems, not technical ones. They can be solved by aligning incentives.
TLDR. A PoC is a PoC, not a full product.
Politics eat technological semantics for breakfast though. It's up to you to decide whether this is a hill you want to die on.
> They can be solved by aligning incentives.
Often enough, that means shipping the poc.
> Politics eat technological semantics for breakfast though.
I see where you are coming from, but I have to disagree. I am not debating semantics, but reality. A prototype will always have limitations. That is the definition of a prototype.
I understand that in reality it is sometimes worth pushing through (startup), and sometimes it is not needed but it is demanded. I am not starry-eyed, but it is useful to know what the reality is before one tries to bend it or adapt to it.
If you want, Nature/laws of physics eats politics for breakfast :)
Aligning incentives in this case could mean making the developer have a stake in the outcome (bonus), have competent people and clear deliverables, and bonuses for products that work well/convert. Not just for speed or busyness.
Basically you are assuming a sort of feudal relationship between someone handing down arbitrary deadlines (and whatever else you must do) and someone in charge of implementing it all, who has no say in anything and obeys blindly. That is not how the best engineering is done. Places exist (I have worked in some of them) where people can be professional, and a lot still gets done at the end of the day, even more than with the stick-and-harsh-words approach.
A certain amount of back and forth is healthy and can produce much better outcomes for the company.
It may be that politics eats tech, but culture eats politics.
And strengthening your PoCs is a predictable cultural consequence of any organisation where leadership forces PoCs into production.
For leadership, aligning incentives means preventing engineers from suffering the consequences of PoCs pushed to production. What you described may work in some situations.
> A certain amount of back and forth is healthy and can produce much better outcomes for the company.
The reaction I described is a kind of back and forth, if a bit conflictual. However, it's leadership's responsibility to ensure that communication happens in a non-conflictual way.
Alternately, a PoC should address the core business and usability questions. Everything else, including technical feasibility, is secondary and trivially solved with enough resources.
In the order of importance, the first question is "is this worth doing", and only then do you ask "how can we do this".
- Fred Brooks
There has been some pushback against that idea on here, but I’m firm on it.
My approach these days is actually to get the PoC done with a lot of assistance from ChatGPT and copilot. It generally works great and fulfils the basic criteria, but isn’t production grade in the slightest. That’s fine - it means I can provide PoCs in the space of an hour or two, and it gives me something to aim for and refer to.
So to agree with the article I think it's unwise to make all your decisions at the point where you know the least.
By getting something working you improve your understanding and then you can choose optimisations and abstractions in a judicious manner - no point in optimising things that end up having no impact and no point in introducing abstractions that in practice never will be used.
There are those who imagine that you can completely plan work before lifting a finger, and it's sometimes a struggle dealing with them. Another problem is when some aspect of the outcome looms large in people's minds.
I was once on a project where we thought we'd be charging people based on their usage of the product. This made the reporting system very critical, because if we messed anything up we'd be cheating our customers or giving them freebies. In the end we realised nobody wanted to pay that way, so this huge design consideration, which made everything much more complicated, was gone. This sort of pattern happens often, and it was a mistake to start that way. But that was a requirements mistake rather than a programming one, and this is why your requirements are so critical. A single sentence in a document can double the cost of a project, and your customers often don't realise that.
It also happens at the product level - building features too early or too deeply for a poorly understood workflow; abstracting things that don't need to be abstracted for probably YEARS; over-engineering for major scale despite having no users; knowing that ONE DAY you'll need, say, multi-language support, so on Day 1 you over-complicate everything by insisting on a language framework etc.
Beginners lack the foresight of 'what this will look like at scale' or maybe better said 'why this won't work at scale', but that's ironically why they are better early on - speed matters a lot more than anticipating potential scale issues years later.
My response to these types is always to point out that we already have tools that translate a complete plan of a project into software: They're called compilers, and that plan is source code.
You can still take the shortest path while focusing on the most uncertain parts first, but it is another concern that needs to be prioritised.
After you have that platform, the next target can be the biggest uncertainty / hardest problem that you need to solve to achieve an MVP. "We have X, can we get to an MVP?"
(After you have an MVP, the hardest problems may still await you but you can prioritize based on what increases value.)
My approach converged to something that can be understood as a form of progressive enhancement, so basically providing the simplest usable version of a given feature, bearing in mind that eventually you'll have to expand it to what was originally requested - but that's all in separate tickets.
Some examples:
Six different payment processors? Start with one or two.
SPA frontend? Start with server-rendered. The tech is there to smoothly transition from one to the other, but it's possible that this will never be required.
That colour picker shaped like a peacock, following the mouse with its gaze? Just use a regular colour picker, but make it easily swappable. Where's that in the requirements anyway?
What's interesting is that more often than not enhancements lose priority in favour of new features.
Meanwhile, some universal techniques, like preferring pure functions where reasonable, using immutable data structures, and actually having an architecture, take as much time as doing sloppy work and go a long way toward ensuring maintainability.
There are very few real-world scenarios where "just make it work" is a good approach to tackle engineering problems that require senior developers (read: developers with extensive experience in the related problem domain) in the first place.
My approach, which I suspect is similar to what you're describing, is to define functional contracts first. Making them work is actually pretty low on the priority list, since that's consistently also the easiest thing to get right.
The hard part is correctly defining expected behaviour, interfaces and so on. In your example the hard part is constructing an interface/API for a payment processor that satisfies your business needs on the consuming side and is reasonably implementable for the relevant major payment providers. Actually implementing one, the making it work bit, is just not where the senior expertise adds value.
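To make that concrete, here's a minimal sketch of what such a contract might look like (hypothetical names, Python purely for illustration): the consuming side codes against the interface, and each provider gets its own implementation behind it.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ChargeResult:
        success: bool
        provider_reference: str
        error_message: str = ""

    class PaymentProcessor(ABC):
        """The contract: what the business side needs, independent of any provider."""

        @abstractmethod
        def charge(self, amount_cents: int, currency: str, token: str) -> ChargeResult: ...

        @abstractmethod
        def refund(self, provider_reference: str, amount_cents: int) -> ChargeResult: ...

    class FakeProcessor(PaymentProcessor):
        """Trivial implementation, enough to build and test everything that consumes the contract."""

        def charge(self, amount_cents, currency, token):
            return ChargeResult(True, f"fake-{token}")

        def refund(self, provider_reference, amount_cents):
            return ChargeResult(True, provider_reference)

Whether the real implementations end up wrapping one provider or six is almost an afterthought once the interface survives contact with the business requirements.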
Unless, of course, you enjoy shipping glorified POCs to your customers because your stakeholders "saw it work" and blissfully ignored the metric tons of tech debt you just introduced, not to mention a skewed perception of the amount of effort needed for that deliverable.
Sort of insidious, but stakeholders are happy because they get their needs catered to in a timely manner.
> If it’s a true greenfield project you are “prototyping”, if it’s part of an existing project you are making a “tracer bullet”.
In the C2 wiki, someone paraphrases it like this:
> In PragmaticProgrammer, they talk about TracerBullets in the context of building an ArchitecturalPrototype - a bare-bones skeleton of your system that is complete enough to hang future pieces of functionality on. It's exploratory, but it's not really a prototype because you are not planning to throw it away - it will become the foundation of your real system. https://wiki.c2.com/?TracerBullets
Wikipedia has a description in the context of Scrum here: https://en.wikipedia.org/wiki/Scrum_(software_development)#T...
Does anyone remember more about what Pragmatic Programmer says about this topic?
"We once undertook a complex client-server database marketing project. Part of its requirement was the ability to specify and execute temporal queries. The servers were a range of relational and specialized databases. The client GUI, written in Object Pascal, used a set of C libraries to provide an interface to the servers. The user's query was stored on the server in a Lisp-like notation before being converted to optimized SQL just prior to execution. There were many unknowns and many different environments, and no one was too sure how the GUI should behave. This was a great opportunity to use tracer code. We developed the framework for the front end, libraries for representing the queries, and a structure for converting a stored query into a database-specific query. Then we put it all together and checked that it worked. For that initial build, all we could do was submit a query that listed all the rows in a table, but it proved that the UI could talk to the libraries, the libraries could serialize and unserialize a query, and the server could generate SQL from the result. Over the following months we gradually fleshed out this basic structure, adding new functionality by augmenting each component of the tracer code in parallel. When the UI added a new query type, the library grew and the SQL generation was made more sophisticated. Tracer code is not disposable: you write it for keeps. It contains all the error checking, structuring, documentation, and self-checking that any piece of production code has. It simply is not fully functional. However, once you have achieved an end-to-end connection among the components of your system, you can check how close to the target you are, adjusting if necessary. Once you're on target, adding functionality is easy."
They later list some of the advantages of this approach:
- Users get to see something working early
- Developers build a structure to work in
And differentiate it from prototyping--
"The tracer code approach addresses a different problem. You need to know how the application as a whole hangs together. You want to show your users how the interactions will work in practice, and you want to give your developers an architectural skeleton on which to hang code. In this case, you might construct a tracer consisting of a trivial implementation of the container packing algorithm (maybe something like first-come, first-served) and a simple but working user interface. Once you have all the components in the application plumbed together, you have a framework to show your users and your developers. Over time, you add to this framework with new functionality, completing stubbed routines. But the framework stays intact, and you know the system will continue to behave the way it did when your first tracer code was completed."
It's still a great book 20 years later, I highly recommend picking up a copy.
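To make the book's container-packing example a bit more concrete, a first tracer build might look roughly like this (hypothetical names, Python purely for illustration): a deliberately trivial first-come, first-served algorithm wired end to end through a bare-bones text UI, so the plumbing exists before the clever parts do.

    def pack_containers(items, capacity):
        """Trivial first-come, first-served packing; just enough to plumb end to end."""
        containers, current, used = [], [], 0
        for size in items:
            if current and used + size > capacity:
                containers.append(current)
                current, used = [], 0
            current.append(size)
            used += size
        if current:
            containers.append(current)
        return containers

    def main():
        # The "UI" is plain stdin/stdout for now; the real front end replaces this later.
        raw = input("Item sizes (comma separated): ")
        items = [int(s) for s in raw.split(",") if s.strip()]
        for i, container in enumerate(pack_containers(items, capacity=10), start=1):
            print(f"Container {i}: {container}")

    if __name__ == "__main__":
        main()

Every piece of that gets replaced or grown over time, but the structure (input, algorithm, output) stays.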
However, in my experience, when you take this approach the majority of organizations will make the prototype the product. You will never throw out that code. It will simply be added on to, papered over, and mixed up with everything else. What started off as a fine prototype becomes an error-ridden ball of mud that nobody understands anymore, where working on that code takes longer and longer and carries a higher risk of introducing even more errors.
The key thing with prototypes is that you have to mercilessly rip that code out before people start extending it and relying on it otherwise it's going to stick around.
A way to avoid this is to start with a clearly defined data model between components, and then _within the context of each of those_ hit the gas towards an MVP, flesh that out, refactor, etc etc etc.
Not always a possibility, I'll grant, but a ton of headache can be saved by being religious about API-ifying those services which can be made into APIs. Stable I/O, chaotic move-fast-break-stuff for internals.
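A minimal sketch of that kind of boundary, with hypothetical names and Python only for illustration: the shared record type is the stable part, and everything behind the fetching function can churn freely.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class OrderRecord:
        """Stable contract between components; changed deliberately and rarely."""
        order_id: str
        customer_id: str
        total_cents: int

    def fetch_open_orders(customer_id: str) -> list[OrderRecord]:
        """The internals (storage, caching, queries) are fair game for move-fast chaos,
        as long as this signature and OrderRecord stay put."""
        rows = _load_rows_somehow(customer_id)  # hypothetical internal helper
        return [OrderRecord(r["id"], customer_id, r["total"]) for r in rows]

    def _load_rows_somehow(customer_id):
        # Placeholder internals; swap in a real database query later.
        return [{"id": "ord-1", "total": 4200}]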
After a while it becomes impossible to convince anyone to ditch the prototype because of all the testing hours that have been poured into it, making for a very strong sunk cost fallacy.
At the end of the day, most code is deleted anyway. But even then people still need to understand it. I am attempting to write in the "lingua franca" and to make it clear to myself and others what I understand about the problem.
If you don't know how to build the thing, the author's advice is on the right track. If you're unfamiliar with the problem space, just bang away at it. Write that god object, that 500 line function, hardcode all the things. You wouldn't be able to come up with useful abstractions so don't bother trying. Get your stupid terrible code to work, the tiny demo operational, and then take a step back and understand your creation and refactor.
If you do know how to build the thing, if this is your 5th time writing a task scheduler and you know what the ground work for a new one should look like, trust your instincts.
In this case, it's not about applying "object-oriented design & algorithms & design patterns & frameworks & abstractions & higher-order functions & monoids & whatever else you found on Hacker News". It could be writing a god object or 500 line function, but only if these are easy to understand.
Basically, I think we should write programs as communication to ourselves and others and as a form of "theory-building". I think our artifacts should fit into this objective and communicate understanding. (I am heavily Naur-pilled, e.g. https://pages.cs.wisc.edu/~remzi/Naur.pdf)
The first pass in situations you have never been in before is always going to be completely worthless. You do not have any tacit knowledge of the problem space. Your prototype is strictly to gain that tacit knowledge; zero mental stamina should be spent on having anyone who isn't between the chair and the keyboard understand that code.
Once you've built that prototype, you use it as a reference for the ground up "conservative" or whatever other implementation strategy you want to take for the real thing.
You can write code conservatively by avoiding higher-level abstractions and preferring boring technologies. If you look at the engineering indulgences discussed within the article, they are all things that you can conservatively avoid without any prior understanding of the problem.
The intuition behind this approach is described in Sandi Metz's "The Wrong Abstraction" post (https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction).
I also disagree that zero mental stamina should be spent on anyone that isn't between the chair and the keyboard. If you're a software engineer working within a company, the approach should generally be understandable by your team at least.
The timescale for that really depends on the project. I'm digging into git history from ~2013 every few weeks, in an app which definitely runs and handles lots of transactions today. That code is not getting deleted any time soon, and any explanations about the reasons in the PRs are extremely helpful. Understandability is great in this case.
My point was that even if the logic isn't valuable, understandability helps the person reading it decide whether to delete it or replace it. Code that can't be understood and sits within vast applications providing no value is often very difficult to remove and ends up being a long-term cost to a business.
My best guess is the idea is that it maps to aspects of a user story like "get input from the user," "do some business logic," "show output to user." So even the first slice is a working prototype in some superficial sense. The elephant organs might be app components (UI, database, etc.), but in the first slice you don't have a complete UI (maybe you have text input) and you don't have a production database (maybe you just have an in-memory dictionary) and you don't have robust business logic. You have the whole stack, but each part of the stack is incomplete. That's what I think makes it a vertical slice.
A horizontal slice (what not to do) would be one complete elephant organ. Maybe that's a production transactional database. So in the first slice you have a complete database or you've written the final business logic, but none of the other things that you would need in a mockup/prototype/MVP or an integration test.
Anyway, this is my best guess.
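For what it's worth, a first vertical slice in that spirit might look something like this (entirely hypothetical, Python just for illustration): every layer is present, and every layer is the thinnest thing that could possibly work.

    _db: dict[str, list[str]] = {}  # the "database" is an in-memory dict for now

    def save_note(user: str, text: str) -> None:
        # Business logic layer: no validation, no error handling yet.
        _db.setdefault(user, []).append(text)

    def list_notes(user: str) -> list[str]:
        return _db.get(user, [])

    def main() -> None:
        # UI layer: plain text input/output stands in for the real interface.
        user = input("User: ")
        save_note(user, input("Note: "))
        print("Your notes:", list_notes(user))

    if __name__ == "__main__":
        main()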
I found the comments really helpful so I wrote up some thoughts on different approaches to tackling projects. Hope this might be helpful to others.
https://joshuatauberer.medium.com/bullet-time-and-elephant-h...
Product should make sure to create an MVP, aka the fastest solution for getting from A to B.
Code should be done right no matter what; you're being paid as an expert to do that. If they wanted whatever crappy code gets it done, they would do it themselves with some no-code solution and test the hypothesis.
As I've written in a recent thread... that may be the case in the academic world, but certainly not in the business world, where time-to-market and profitability always trump code quality if not explicitly required / audited by client contracts.
Computations along novel curved beam configurations have to be correct, 400 m deep, billion-dollar-per-annum mine stope angles need to be both aggressive and safe, etc.
Often there's not as much competition as might be in other domains, and while profitability pays the bills the real onus is on the production of provably correct software (to the greatest degree practical).
The answer is very simple: you don't.
What actually happens is more like:
"deliver as fast a possible, no matter what"
...
"The poc was delivered in a week, why are new features so slow? And can you explain what this refactoring item adds to the bottom line?"
What are your business requirements? How much budget do you have? Deadlines? Do you already have a clearly defined audience?
If you're a company like Figma then dedicating resources to crafting the hell out of the product & pushing the envelope in terms of maintainability, tests, performance & software craftsmanship is a must. Probably going directly from A -> B is not scalable.
If you're a company with 200 customers and 3 developers then I feel it's the opposite. Dedicating time & resources to all those premature optimizations might kill your company.
I remember seeing something along the lines of "Over-engineering cited as major cause of product failure. Because it never ships."
I get to a working version of the feature quicker. From that, I get more insights. Sometimes I even realize that the simple and dumb version is good enough already.
I would say that this is the single most efficient rule/heuristic I have for making sure my productivity stays high.
My code ran in about 4 seconds.
Then I thought I'd try a slightly larger optimization that involved ~128 bit values. That ran in a second, so obviously the switch to large integers either doesn't happen at 64 bits, or Python just handles it really well.
Then I thought to just do the math and let Python sort the results. One line of Python. Took ~20 seconds to write. Calculated the 2-million digit number and then did the modulo. Ran in a small fraction of a second. <sigh>
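The original puzzle isn't shown here, but the gist translates to something like this made-up stand-in: Python's arbitrary-precision integers will happily chew through a multi-million-digit intermediate, and pow() can skip it entirely.

    # Made-up stand-in: compute the huge number, then reduce it.
    result = 3 ** 4_000_000 % 1_000_000_007       # intermediate has roughly 1.9 million digits

    # If the intermediate ever did become a problem, three-argument pow() does
    # modular exponentiation without materialising the big number at all:
    result = pow(3, 4_000_000, 1_000_000_007)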
But I do agree about the CI/CD part. In my case however, it's because many of these CI/CD services use their own proprietary config formats which are unfriendly for local testing. I can't remember how much time I've spent on trial-and-error runs against Travis CI just to get the build process right. I imagine things could feel a lot different if the service supported NixOS or just a Dockerfile-based script, because I could at least try the script locally before invoking the online service.
Does this article present findings from other projects? Does it have a personal code story? Does it use any data or even anecdotal evidence to support its claims?
The answer to all of these is NO, it does not... it's just a half-hearted article talking about a fundamental problem in modern programming with no real solutions, other than an axe to grind that they can't even really elaborate on the origins of.
https://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast
But I do struggle with how far to take that. I worry I’m getting too deep and need to focus on features.
> spend days setting up a CI/CD pipeline
This should/does take minutes. It's like 15 lines of YAML for most CI providers. Ideally it's a part of the template you use for new code.
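For example, a bare-bones GitHub Actions workflow for a Python project with pytest (the file name and steps are just one plausible setup, not the only way) is about this much:

    # .github/workflows/ci.yml
    name: CI
    on: [push, pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.12"
          - run: pip install -r requirements.txt
          - run: pytest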
> use a cool new library they just found
Integrating a useful lib that makes the code simpler should be done from day 1. Don't code your own platform lib and switch to SDL halfway through development. Don't code your "tracer bullet" on Win32 API calls when you're going to be using libuv. And like CI/CD, integrating libraries into the build should be painless
> if it’s software that’s going to ship, it needs tests
Oftentimes the tests are the only way you know if the code is even minimum viable, even manages to be the "tracer bullet". Ok you implemented a new feature, what says the code even runs and doesn't segfault immediately if there's not a test to build the new code into and run? Not comprehensive tests, but something
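Even a single smoke test does the job at that stage; something like this hypothetical pytest sketch (module and function names made up) is enough to prove the code imports and runs:

    # Hypothetical smoke test: not comprehensive, just proof that the new code loads and runs.
    from myapp.scheduler import schedule  # made-up module under test

    def test_schedule_smoke():
        # If this import or call blows up, nothing else about the feature matters yet.
        result = schedule(tasks=[], workers=1)
        assert result is not None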
It should, if you discount tool selection, learning the tools, managing access control, dealing with technical and organization-imposed constraints, documentation, and alignment across the team.
Your team has a CI system of choice; plugging a new thing into that CI system should be trivial, and if it's not, you're doing CI very poorly.
Hard-to-use is a bug.
And the difference between a senior dev and a junior dev is knowing when to stop.
It is the same.
Just write/code, and then, when the thing is done, start the reviewing stage.
Anyway, you never know everything about your project when you start it.
Starting with a "MVP", something that can be implemented quickly (relative to "the perfect thing") and provides some immediate benefit or feedback, pretty much always works better for me.
It's something I still struggle a lot with. It's hard for me to get things done, because whatever I build never holds up to what I want it to be. But I think I am getting better at accepting that, and just getting _something_ done.
If you are unhappy with the fact that you planned to do more, then congratulations, you can just get things done again!
There is a single high-level design pattern/abstraction that you can actually use to start off your projects.
There is no name for this pattern but it is essentially this:
Segregate io and mutations away from pure functions. Write your code in modular components such that all your logic is in pure functions and all your io and mutations are in other modules.
Why does this style of organization work? Because every form of application you can think of benefits from having its organization broken out along this delineation.
Your pure functions will be the most modular, reusable, and testable. You will rarely need to rearchitect logic in pure functions... Instead typically you write new modules and rearrange core functions and recompose them in different ways with newly added pure functions to get from A to B.
The errors and organizational mistakes will happen at the io layer. Those functions likely need to be replaced/overhauled. It's inevitable. Exactly like the author says this section of your program is the most experimental because you are exploring a new technological space.
But the thing is you segregated this away from all your pure logic. So then you're good. You can modify this section of your project and it remains entirely separate from your pure logic.
This pattern has several side effects. One side effect is it automatically makes your code highly unit testable. All pure functions are easily unit tested.
The second side effect is that it maximizes the modularity of your program. This sort of programming nirvana, where you search for the right abstraction such that all your code reaches maximum reusability and refactors simply involve moving around and recomposing core logic modules, is reached with pure functions as your core abstraction primitive.
You're not going to find this pattern listed in a blog post or anything like that. It's not well known. A software engineer gains this knowledge through experience and luck. You have to stumble on this pattern in order to know it. Senior engineers as a result can spend years following the hack first philosophy in the blog post without ever knowing about a heavy abstraction that can be reused in every single context.
If you don't believe me, try it. Try some project that segregates logic away from io. You will indeed find that most of your edits and reorganization happen in the things that touch io. Your pure logic remains untouched and can even be reused in completely different projects as well!
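A minimal sketch of that separation, with hypothetical names and Python purely for illustration: the pure module knows nothing about files, networks or clocks, and the thin shell around it is the only part you expect to rip out and rewrite.

    # logic.py -- pure functions only: no files, no network, no globals, no clocks
    def total_by_customer(orders: list[dict]) -> dict[str, int]:
        totals: dict[str, int] = {}
        for order in orders:
            totals[order["customer"]] = totals.get(order["customer"], 0) + order["amount"]
        return totals

    # shell.py -- all io and mutation lives here, and only here
    import json, sys
    from logic import total_by_customer

    def main() -> None:
        orders = json.load(sys.stdin)             # read: the messy, replaceable part
        report = total_by_customer(orders)        # compute: the stable, testable part
        json.dump(report, sys.stdout, indent=2)   # write: also messy and replaceable

    if __name__ == "__main__":
        main()

The unit tests then target total_by_customer with plain data, no mocks, while the shell stays small enough that breaking and rebuilding it is cheap.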
The "boundary" argument works even better where it's about network I/O or video output, because applications are even less likely to connect code modules using such types of data.
Other than that I don't buy much into that I/O vs "pure" at all, it's pretty much an arbitrary categorization since there really isn't much of a difference between writing to disk or memory. I think the idea that writing to memory is somehow purer comes mostly from Haskell and similar languages that somewhat enforce immutability at the language level. Given that it is still an artificial categorization I don't take this too seriously. The one benefit I see is that it often does improve clarity of architecture to have mostly construct-consume-discard data access patterns without any mutability after construction. Oh, and potentially you can handle errors from disk I/O, while you cannot really handle errors from memory I/O.
For the vast majority of any task that requires writing code, performance is the least of your concerns; your code is fast enough, your compiler and runtime is fast enough, the hardware is fast enough. Use decent algorithms / don't do anything stupid (e.g. n+1 if you use an ORM), but don't fret too much whether it's fast enough either.
If your code is working and pretty, nine times out of ten it's fast enough. For the last 10%, measure before you make assumptions.