My understanding of Agile is fairly simple but, I’d like to think, profound, or at least deep.
Plan-driven approaches
If you can predict the future well (enough), then you make a plan and you follow it, maybe with minor adjustments. You optimize for good plans, and you optimize for the ability to follow the steps. And then…if you’re any good, you automate that nonsense, so it only has to be built 1x, and it’s automatic forever. This is why manual assembly lines have been taken over by substantially robotic assembly lines, and why kiosks are replacing workers at lots of counter-order restaurants. The path is: study the situation, conclude correctness, implement, automate.
Maybe you don’t automate — maybe you just perfect and repeat a million times.
Hypothesis-driven approaches (Agile)
IF, on the other hand, you can’t predict the future well (enough), there’s the other approach. Internally, in my head, I call it Tinker culture. I found it for the second time in my high school group-project physics “catapult contest”: with 1 kg of materials, how far can you launch an egg without it breaking? Doing this in ‘87, before the internet was much more than email and newsgroups (and I wasn’t using usenet much at the time)… what do you do? You do some theory, you try stuff, you learn from your attempt, and you try again. I ended up using a (dangerous) gopher trap, after breaking a fishing pole that my dad told me to be careful with…and some bubble wrap on the egg. The winners in my year were slingshots (that year, built from medical tubing), and slingshots have won every time I’ve seen this done with a weight limit. But it was a real problem, and I got to wrestle through it where theory (at least high-school calculus+physics theory) didn’t give me enough to finish solving the problem. But I didn’t recognize this approach yet. I was annoyed at doing group projects, even with the only other kid my age in the physics class.
After that, my third notice was in ‘90-91, in the college freshman engineering lab at Harvey Mudd. Building robots. Building preliminary computer-vision systems. When you build things… you get some theory, and you sketch what you’re going to do, and then you do it, and then you find out how you were wrong, and how wrong you were, about how it works; and then you fix it. Usually the fix doesn’t involve going back to do a redesign. The fix is… “These two pieces were supposed to be touching, and there’s a 1/4-inch gap. What can we use to bridge it?”
Really, though, my first time really encountering the problem (“plan, do, wrong, make a bunch of changes so it actually works”) was Computer Science 110 (still taught by the Math department) at Cal Poly San Luis Obispo, in Winter Quarter (January) ‘81. We were doing BASIC on an HP mainframe. Actually, before that, I’d done a bunch of tinkering and playing on (A) my dad’s Alpha Micro minicomputer at his office, (B) the school Apple II+, (C) the school TRS-80, and (D) that stupid Sinclair computer that didn’t require typing whole words, but where G would print GOTO rather than G under unclear circumstances.
The way you code — then as now — is: Look at the problem, understand the problem, sketch a design, code the design, find out the first of 100 ways you were wrong, and fix it (code+design intent). Repeat the fixing and design-changing until you find out your design was completely unworkable, or it works.
Having read a lot (sometimes 1000 pages/day of real content when younger), I didn’t really encounter this way of thinking except occasionally in some hard Science Fiction… and even then, the model we’re talking about, plan / try / fail / adjust / fail / adjust / fail / drawing board / fail / adjust / succeed, wasn’t very well pictured. I think it’s because the world has been getting richer at a phenomenal pace, and software is special. If you build a robot and have to rebuild it because your plan was wrong, it costs materials. If you build a rocket and have to rebuild it because your plan failed, or the rocket exploded on test, it costs a lot of materials. In the 70s, or in CSC 110 in Spring 1980 (6 months before I took it), even the cost of rebuilding the programs was too high.
Software is special
So … I blame the absence of this approach in engineering on it being “too expensive”, until Moore’s law kicked us in the butts. From a different essay of mine: the past almost-50 years of computer improvements have made the costs of executing computer instructions, using computer memory and storage, and sending data drop by a factor that’s hard to imagine. My best analogy … one that got better this year while I’m doing construction on my house … is that the drop in computer prices is equivalent to the drop in price if diamonds and gold fell to the price of dirt and gravel. A cubic centimeter of gold costs about $2000 right now. Fill dirt: one ton of fill dirt is about a cubic meter, and costs on the order of $20 (lots of rounding).
So … one MILLIONTH of the quantity sells for 100 times the price: gold vs. dirt. That’s a factor of about a hundred million; not quite a billion-times multiplier, but close. That’s the difference in the cost of computing between when I started coding in 1979 and now, 45 years later. Well, the computing costs may have dropped by even more than that.
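To make that arithmetic concrete, here’s the gold-vs-dirt comparison worked out, using only the rounded figures above:

```python
# Rough worked version of the gold-vs-dirt comparison, using the rounded
# numbers above (~$2000 per cubic centimeter of gold, ~$20 per cubic meter of fill dirt).
GOLD_PER_CM3 = 2000          # dollars per cubic centimeter
DIRT_PER_M3 = 20             # dollars per cubic meter (roughly one ton)
CM3_PER_M3 = 100 ** 3        # 1,000,000 cubic centimeters in a cubic meter

gold_per_m3 = GOLD_PER_CM3 * CM3_PER_M3   # about $2,000,000,000 per cubic meter
ratio = gold_per_m3 / DIRT_PER_M3         # about 100,000,000x

print(f"gold per cubic meter: ${gold_per_m3:,}")
print(f"price ratio, gold vs. dirt: about {ratio:,.0f} to 1")
```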
Software is different from everything else for ONE overwhelming reason: trying something is basically free. Literally anything we can do that uses computing time and energy instead of human time and energy is a savings; even if it takes the computer a billion steps to do the thing a human could do in 1 step … it’s still cheaper.
Aside: Yes, energy costs something. That objection is mostly nonsense, but occasionally relevant. Calculus teaches us that infinity times zero depends on which is more aggressive, the infinity or the zero. Even though the cost per operation is going to zero fast, usage on some topics is going to infinity faster.
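A toy version of that race between the zero and the infinity, with made-up rates purely for illustration: if cost per operation halves each step but usage quadruples, the total bill still grows.

```python
# Toy illustration of "zero times infinity": made-up rates, not real data.
cost_per_op = 1.0   # arbitrary units; halves every step
usage = 1.0         # operations demanded; quadruples every step

for step in range(6):
    total = cost_per_op * usage
    print(f"step {step}: cost/op={cost_per_op:.4f}  usage={usage:>6.0f}  total={total:.1f}")
    cost_per_op /= 2
    usage *= 4
```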
Computing approaches are an economics problem and the economic answer to an awful lot of computing is “try, fix — use more processing and less thinking”. It takes tinker culture and amplifies it.
Maintainability
Somewhere along the way — maybe in the early 80s…maybe some superheroes a bit before that, but I didn’t read those folks much — someone discovered that if you’re going to take advantage of the incredibly low cost of trying things in software, you have to change how you write code…or else your changes can literally cost hundreds or thousands of times more than they ought to.
Aside: That’s not what really happens 99% of the time. If a change to what the code does costs $100, we make the change. If a change to the code costs $100,000 … we tend not to make the change. So … it’s not that the code changes cost more — it’s that they never happen. And then we never make experiments.
So … a bunch of folks in the 80s — I’ve taken to calling them the OOPSLA crowd, because those are the ones I read — discovered that software needed to be easy to change. And so they built an entire software paradigm around “cost of change”. What’s super weird to me, though, as I was reading their work in the 90s … is that they never said that. They said all sorts of things … but the obvious, blatant, overwhelming #1 win of what they were doing was: making software easy to change.
Aside: According to me, there were 2 fundamental approaches in play here:
A: “understand the problem well enough that our initial design captures the directions in which the customer might change”. I call this Object Oriented Analysis, and I think the modern heir to this kingdom is Domain Driven Design.
B: “understanding the problem is nice, but the core win is making code easy to change. Even if I’m wrong about the problem, easy to change code will make everything work”. I call this Object Oriented Programming, and the modern heir to this kingdom is eXtreme Programming, now usually called Software Craftsmanship.
I’m a B-approach guy. My most common software-theory discussion partner (though it took me almost a decade to realize it) is an A-approach guy. The Coplien-Martin debate on TDD is the best place to see the conflict of approaches, and failure to communicate between them.
RUP was an A-Approach attempt to make software somewhat easy to change. Your primary goal is to understand the problem really well in order to make better software.
Agile, founded by eXtreme Programming, was a B-approach. Your primary goal is to make software easy to change, in order to make better software.
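To make the B-approach’s “optimize for cost of change” a little more concrete, here is a minimal, hypothetical sketch (every name here is invented, not from any real codebase): the discount rule is the part most likely to be wrong, so it sits behind a tiny seam, with a test pinning down today’s guess. When the guess turns out to be wrong, the change is one small function and one updated test, not a rewrite.

```python
# Hypothetical illustration of "optimize for cost of change", B-approach style.
# The discount rule is the part most likely to be wrong, so it lives behind a
# small seam (a plain function passed in), and tests pin down today's guess.
from typing import Callable

def no_discount(subtotal: float) -> float:
    return subtotal

def ten_percent_over_100(subtotal: float) -> float:
    # Today's hypothesis about what customers want. Cheap to replace.
    return subtotal * 0.9 if subtotal > 100 else subtotal

def checkout_total(prices: list[float],
                   discount: Callable[[float], float] = no_discount) -> float:
    return round(discount(sum(prices)), 2)

def test_ten_percent_discount_kicks_in_over_100():
    assert checkout_total([60, 60], discount=ten_percent_over_100) == 108.0

def test_no_discount_under_100():
    assert checkout_total([40, 40], discount=ten_percent_over_100) == 80.0

if __name__ == "__main__":
    test_ten_percent_discount_kicks_in_over_100()
    test_no_discount_under_100()
    print("ok")
```

When the rule changes (it will), you swap one small function, the failing test tells you what to update, and nothing upstream cares.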
Product
And then DotCom showed up at the turn of the millennium. And the world turned upside down.
You could have an idea, and try it and adjust along the way.
Google and Facebook and Twitter had great ideas and solid implementations.
Netflix, though…Netflix had a good idea and then changed it and changed it and changed it again. Do you remember when Netflix was a mail-order rental service for physical DVDs with movies and games on them? Have you seen the recent suite of Netflix mobile games they’re branching into?
What about Amazon? An online Bookstore became an online marketplace, an e-book creator, and a cloud provider.
The great product idea, championed first (in my reading) by Steve Blank, and then articulated a lot more clearly by Eric Ries, was … your first idea is probably wrong, and you don’t know that until you put it in front of customers. So how do you treat your product idea as a hypothesis, rather than as a truth?
My short answer: 2 parts.
1: You need a science or tinker approach: Miniplan, Try, Observe, Try, Repeat.
2: But that doesn’t matter unless your code is also easy to change. Because no one except Elon Musk tinkers if each experiment costs a million $.
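A minimal sketch of part 1’s loop, with every name, metric, and threshold invented for illustration; the shape (miniplan, try, observe, adjust, repeat) is the only point here:

```python
# Hypothetical tinker loop: miniplan -> try -> observe -> adjust -> repeat.
# Everything below (names, metric, threshold, fake measurement) is invented.
import random
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str       # what we believe, stated so it can fail
    metric: str           # what we will observe
    threshold: float      # decided before we try, so we can't fool ourselves

def run_small_try(exp: Experiment) -> float:
    # Stand-in for shipping a thin slice to real users and measuring.
    return random.uniform(0.0, 0.2)

exp = Experiment(
    hypothesis="One-click reorder will lift repeat purchases",
    metric="repeat purchase rate over the next 2 weeks",
    threshold=0.05,
)

for attempt in range(1, 4):
    observed = run_small_try(exp)
    print(f"attempt {attempt}: {exp.metric} = {observed:.3f}")
    if observed >= exp.threshold:
        print("hypothesis survived this round; invest a little more and re-test")
        break
    print("hypothesis missed; change something small before the next try")
else:
    print("three misses; maybe the idea, not the execution, is the problem")
```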
<end ramble>
Agile is, at its very core, that recognition.
Prediction is hard, always, especially about the future, and especially about new things.
Building new software (or additional functionality) is an attempt to predict what someone will need in the future (even when they told us what they wanted, it’s still a guess that’s often wrong).
So we need to optimize around that recognition.
How do you optimize around “These are hypotheses…we have to try, then learn, then change”? Agile is/was an attempt to answer that question.
Optimizing for effective rapid change that puts the SOFT in software:
S — Small Hypotheses over both Large Hypotheses and Small Steps. The bigger the hypothesis, the more work we are wasting when we (often) find out we’re wrong. Also, importantly, it’s not JUST small units of work, but small units of thinking.
O — Opportunity Cost over Completion. The single most important idea in economics is that if you’re currently eating an apple, you’re not eating the orange, eating the hamburger, or digging a hole. If you’re putting more energy into following the plan, you’re putting less energy into changing what we’re doing based on new information. Cost (all cost) is primarily what you aren’t doing because you are doing this thing; it’s not the time and $ you spent, it’s what ELSE you could have spent it on. In simpler terms, focus on the most important stuff first, sequence it 1, 2, 3, and expect some of the low-importance (not-first) stuff to NEVER get done. My favorite way of choosing what’s most important is RAT (Riskiest Assumption Test)…but there are others. CoD (Cost of Delay) is second on my list; there’s a small sequencing sketch after the T item below.
F — Feedback Systems over FOLLOWING Plans. If you’re pursuing a discovery process (“What should we build?”), the first question is “How will we know if it’s the right thing?” If you’re in a real tinker world, the second question is “How fast will I know?” or “How fast will I learn something?” Optimize around rapid feedback and the ability to adjust. Plans, in a hypothesis/tinker model, are important hypotheses about what we will do. Expecting them to work is hubris.
T — Together Work over Independence or Handoffs. Over the past 80 years of optimizing discovery processes, be they Car Design by Toyota, Nuclear Weapon invention in the Manhattan Project, Skunk Works Airplanes, Modern Military fast-response, or most software development, we’ve found that handoffs are bad and lose a lot of information. If I do step 1, and hand it to you, it’s substantially worse and not notably faster than if you and I do step 1+2 together. Same with steps 3,4,5,6. Tinker methods optimize working together to solve problems over working independently. When people can learn to use multiple brains together, it is just better than when folks or teams have to send work back and forth.
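For the “O” above, here’s the promised sequencing sketch: one common way to order work is Cost of Delay divided by duration (sometimes called CD3). The items and numbers below are invented for illustration.

```python
# Hypothetical sequencing sketch: order work by Cost of Delay divided by
# duration (CD3). All items and numbers are invented for illustration.
items = [
    # (name, cost of delay per week in dollars, estimated weeks of work)
    ("checkout-redesign", 8000, 4),
    ("fix-billing-bug",   5000, 1),
    ("new-report-page",   1000, 2),
]

def cd3(cost_of_delay_per_week: float, duration_weeks: float) -> float:
    return cost_of_delay_per_week / duration_weeks

ranked = sorted(items, key=lambda i: cd3(i[1], i[2]), reverse=True)

for rank, (name, cod, weeks) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: CD3 = {cd3(cod, weeks):,.0f} per week of work")
# Expect the cheap, high-urgency fix first; the low-value page may simply never get done.
```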
If you are building software to be used by someone, that activity is ALWAYS discovery. So you want to attend to the SOFT of the software, and, as a co-equal part, you want to attend to Software Maintainability so the SOFT can work.