24 October 2009

Rambo Learns Quickly

Develop software in a way that lets you learn and respond quickly

I cannot fathom working in a software shop where you gather requirements, disappear into your cubicle for a year, and come out with the product. Duh! That product was *so* last year!

Getting something out there and learning quickly is key to successful software development. "Good software today is better than great software tomorrow" almost sums it up. It is missing one part of the equation: learning.

If you have good software today and you want to have great software in the future, how do you get there? What is your road map? A clue: data. Whether that data is customer feedback or (as is often the case on our team) user data, you should make *as few decisions as possible* until you have data to back them up.

If you are able to deliver fast, you are able to begin to gather knowledge sooner than if you had waited to deliver. To gather knowledge, though, you have to:

  1. Have a general idea of what data you want to look at
  2. Have a way to find and analyze that data

If you were to deploy software without any logging, how would you know how it is being used? If you were to deploy software without sending error reports, how would you know when things are going incorrectly?
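
To make that concrete, here is a minimal sketch of the kind of thing I mean (the file name, function names, and log format are made up for illustration, not anything from our actual stack):

```python
import logging
import sys

# Basic usage logging: every feature hit gets a line we can analyze later.
# (The file name and format here are assumptions, not a real configuration.)
logging.basicConfig(
    filename="usage.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def log_feature_use(feature_name):
    """Record that a feature was used, so we have data instead of guesses."""
    logging.info("feature_used name=%s", feature_name)

def report_uncaught(exc_type, exc_value, exc_traceback):
    """Log uncaught exceptions somewhere we will actually see them."""
    logging.error("uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))
    # In a real deployment, this is where an error report would be sent home.

sys.excepthook = report_uncaught

if __name__ == "__main__":
    log_feature_use("export_to_csv")
```

Nothing fancy; the point is simply that the data collection has to be in place before there is any data to learn from.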

In "Journeyman to Master" they discuss "Tracer Software". Their point is that when shooting a machine gun you can spend your time analyzing environmental variables (which, they contend, will likely change over time) and calculating the trajectory of the bullets based on that information (which is alot of up-front work that has not yet helped you hit your target) ...

... or you can pseudo-aim and fire. Machine guns can be loaded with tracer ammunition [wikipedia.org], which lets the human eye see where the stream of bullets is hitting: "ready, fire, aim."

This of course does *not* mean that you should fire as soon as you grab the gun, then turn around until you see your target. What it means is that once you have a good idea of where your target is, you do not waste any more time on finding the perfect way to reach that target; you start firing, and correct your aim thereafter.

In software development, this means that you do not hold out for the perfect solution. You gather the basic requirements from the user, prioritize (find the general area of your target), and get something out that they can see, so it can help you correct your aim and hit the target. That "something" is what the Pragmatic Programmers call "tracer code".
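
To make the metaphor concrete, here is a rough sketch of what a tracer slice might look like (all of the layer and function names are invented for illustration): every layer exists and is wired end to end, but each one does only the simplest thing that lets the user see something real.

```python
# A thin, end-to-end "tracer" slice: every layer exists and is wired together,
# but each does only the simplest thing that puts something visible in front of
# the user. All names here are illustrative, not from an actual project.

def fetch_orders(customer_id):
    # Simplest possible "data layer": canned data instead of a real database.
    return [{"id": 1, "total": 9.99}, {"id": 2, "total": 24.50}]

def summarize_orders(orders):
    # Simplest possible "business layer": one number the user cares about.
    return sum(order["total"] for order in orders)

def render_summary(customer_id):
    # Simplest possible "presentation layer": plain text output.
    total = summarize_orders(fetch_orders(customer_id))
    return f"Customer {customer_id} has spent ${total:.2f}"

if __name__ == "__main__":
    print(render_summary(42))
```

Once the user reacts to that output, each stub gets replaced with the real thing -- that is the aim-correction.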

So here is to us spending less time acting like physicists and more time acting like Rambo: First Blood Part II.

Disclaimer: the Rambo method does not apply in all situations. Some situations may indeed require the services of a sniper. In those situations you should follow the correct protocol which (if I have learned anything from movies, which I obviously have) goes something like "ready, aim, aim some more, confirm your aim, confirm your target, confirm you have permission to fire, fire, confirm the target was hit, have someone else confirm the target was hit." Or something like that.

They think that your early ending
Was all wrong
For the most part they're right
But look how they all got strong
That's why I say hey man nice shot
What a good shot man
  -- "Hey Man, Nice Shot" by Filter

21 October 2009

DISS: Do It Simple, Stupid!

Do it simply, quickly, and learn. Then go back and do it again.

Reading the "Quality" chapter in the "Implementing Lean" book (our Tech Bible), I was trying to overlay their prescribed development cycle onto our recent projects and see what things would have been like if we had followed their prophecies to the letter.

Step One: Write Tests

We would be using TDD in a religious fashion. They begin each user story by writing test cases that the implementation must then make pass. We tend to write test cases as things start solidifying or, in the case of our Selenium tests (read: regression tests), after customer delivery. In my next projects, I will be using TDD to see how it really works in our environment. More on that later.

With black-box testing (which, let's face it, is what you are doing in TDD), you have to think of not only the cases where "it should work", but also the cases where "it should not break". Very different. That is a challenge I have had with black-boxing -- you have to be very careful to think through (read: pre-imagine) the different kinds of harsh treatment that your software will be enduring down the road. Suffice it to say, it is *very* important that your TDD takes into account what you *don't* want your system to do as much as what you *do* want it to do.
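
A toy illustration of the difference, using Python's unittest (the parse_quantity function is invented for the example): the first test covers what the code *should* do, the others cover the abuse it should *not* break under.

```python
import unittest

def parse_quantity(text):
    """Parse a user-supplied quantity. Invented for illustration."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

class ParseQuantityTests(unittest.TestCase):
    # The "it should work" case.
    def test_parses_a_normal_quantity(self):
        self.assertEqual(parse_quantity(" 12 "), 12)

    # The "it should not break" cases -- the harsh treatment down the road.
    def test_rejects_negative_quantities(self):
        with self.assertRaises(ValueError):
            parse_quantity("-3")

    def test_rejects_garbage_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("twelve-ish")

if __name__ == "__main__":
    unittest.main()
```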

Step Two: Solve in the Simplest Way

We would do the simplest thing possible to get the test cases passing. The idea is, if you write your test cases correctly, then when they pass you are theoretically done.

A main component of Lean Development is "Delivering Fast". The faster your code is out there, the faster you can learn from it. The more knowledge you have, the better decisions you can make. Delivering Fast does not necessarily mean making reckless decisions. Martin Fowler's Technical Debt Quadrant should help illustrate the concept that:

It is OK to Incur Technical Debt, *IF* it is Prudent

Delivering Fast means that you incur Deliberate, Prudent debt in order to gain the most knowledge the most quickly. Which leads to...

Step Three: Refactor

After "going into the hole", technically, you need to clean up after yourself so that it is obvious what the code is meant to do, and how it is meant to do it *and* so you do not have to pay interest on that debt down the road.

Martin Fowler's idea of Prudent, Inadvertent debt is a long-term concept, where you do not necessarily even notice the interest you are paying until long after you took out the loan. That is not the situation that warrants immediate refactoring. Rather, it is the Prudent, Deliberate debt that you need to pay down. Let me say this, though:

Do not attempt Step Three without having done Step One

If you do not have good testing in place, refactoring can be both tedious and dangerous. And it is not supposed to be! Refactoring is supposed to be a no-sweat, "now I see that this will be much easier to work with if we do it like this", post-learning experience. It should not be an "I hope I did not break something" experience.
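
Here is a tiny, invented sketch of what a no-sweat refactor feels like when the tests are already there: the tests pin the behavior, so reshaping the code underneath them is boring rather than scary.

```python
import unittest

# Before the refactor, the logic was repeated inline (invented example):
#   return 5.0 + weight_kg * 2.0 + 10.0 if express else 5.0 + weight_kg * 2.0
# After: the same behavior, stated once, with the magic numbers named.

BASE_FEE = 5.0
PER_KG = 2.0
EXPRESS_SURCHARGE = 10.0

def shipping_cost(weight_kg, express=False):
    cost = BASE_FEE + weight_kg * PER_KG
    return cost + EXPRESS_SURCHARGE if express else cost

class ShippingCostTests(unittest.TestCase):
    # These tests passed before the refactor and still pass after it.
    def test_standard_shipping(self):
        self.assertAlmostEqual(shipping_cost(3), 11.0)

    def test_express_shipping(self):
        self.assertAlmostEqual(shipping_cost(3, express=True), 21.0)

if __name__ == "__main__":
    unittest.main()
```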

The reason that The R Word is shunned in so many shops is that they have legacy (read: un-tested) code. I know: I currently work in one!

Personal Take-Aways

I personally am admittedly, chronically, and (perhaps?) notoriously infamous for over-engineering situations. Some might call it a lack of common sense. Some might call it a dull Occam's Razor. Some might call it not "Seeing the Whole".

This means that I often fall into a trap that could be avoided by printing out this concept from the Implementing Lean book and taping it to my monitor:

Do it Simple *Then* Refactor!

How many times have you gone in to implement a user requirement and seen an opportunity for refactoring? So as you implement, you refactor with the "future implementation" in mind. And how many times have you realized, partway through, that you did not foresee a technical difficulty? And what does that mean? That neither your implementation nor your refactoring can proceed. Everything has to be reverted, and you have to go down the one-at-a-time path anyway. So start one-at-a-time from the very beginning!

So here's to doing the simple thing the first time. And to learning from it. And to cleaning things up when the situation warrants it.

Do it Simple, Stupid!

Hey what would it mean to you?
To know that it'll
Come back around again
Hey whatever it means to you
Know that everything
Moves in circles
  -- "Circles" by Incubus

19 October 2009

Broken Windows

In considering our codebase, we must realize that we *always* have to be proactive in guarding against code rot.

The more we touch a certain piece of code, the more susceptible it is to degradation. For new developers coming into our codebase -- or indeed, for existing developers simply seeing a new section of the code for the first time -- the feeling can sometimes be overwhelming. Legacy paths are so brittle and so entwined that you can spend a good chunk of time stepping through just to make sure you understand correctly what is going on. By the end of this process, you have seen enough to realize that these objects are in very poor shape, but (since they are legacy and not well tested), you feel almost powerless to fix them.

Surely it was not always this way. Eight years ago, when these legacy objects were being created, I believe there must have been a sense of design and of "seeing the whole" that was lost as:

  1. the original developers moved on -- either to different opportunities or to managerial roles -- so there were fewer people seeing the whole
  2. the "whole" got a lot bigger and became more difficult to see
  3. n-th generation developers did not take the time to try to see the whole

The "Broken Window Theory" of software development proposes that once we start letting certain aspects of our codebase sit around with obvious flaws or we let Technical Debt build up, we will see an exponential decay of quality as developers make changes. This is based off the premise that if you do not keep a clean house, others are less likely to clean up after themselves.

With Broken Windows, the keyword is "care". Once you hit a certain level of disrepair, people kind of say "Fuck it" and just make their h4x changes. Hooligans. That is the equivalent of throwing another rock through the building's few remaining windows.

Instead, we should be like a frog dropped into hot water: we should immediately say "wait a minute, that's really, really shitty" and raise that flag to Stop the Line. We not only have the authority, we have the responsibility. Besides, if you have legacy objects, you need to realize that your ability to respond quickly will be inversely related to how important those objects are -- so test them! And if you need to make changes to a tested object, you're covered! (No pun intended.)
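
One low-ceremony way to start chipping away at that feeling of powerlessness (a sketch with invented names, not our actual code): pin down what a legacy object does *today* with a characterization test before you touch it. The test records current behavior, warts and all, so later changes are covered.

```python
import unittest

# Stand-in for a brittle legacy object; the real thing would live elsewhere.
class LegacyInvoiceFormatter:
    def format(self, name, amount):
        # Odd spacing and rounding preserved exactly as the legacy code does it.
        return "INVOICE:%s  $%d" % (name.upper(), round(amount))

class LegacyInvoiceFormatterCharacterization(unittest.TestCase):
    """Pin the current behavior -- even the ugly parts -- before changing it."""

    def test_current_formatting_is_preserved(self):
        formatter = LegacyInvoiceFormatter()
        # This asserts what the code does today, not what we wish it did.
        self.assertEqual(formatter.format("Acme", 19.6), "INVOICE:ACME  $20")

if __name__ == "__main__":
    unittest.main()
```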

So here's to us being less like hooligans -- who contribute to the problem because they do not feel the responsibility to contribute to the solution -- and more like ... Japanese Auto Workers? Raise the issue, make the changes, and hopefully you will save:

  1. future developers from massive confusion
  2. future code aborts from undetected defects
  3. and future broken windows, since developers are less likely to get mud on the carpet if it has obviously been cleaned recently

Once again, we are hungry for a lynching
That's a strange mistake to make
You should turn the other cheek
Living in a glass house
  -- "Life in a Glass House" by Radiohead

12 October 2009

Black Listing

I'm not just saying this because modern society might say this is wrong -- I'm saying this because I think that there is a better way!

So I am currently the lead of this API. Recently we have had an influx of mailinator.com accounts signing up. The fact that 100% of these have occurred in the last two weeks (out of multiple years of operations) strongly suggests that these are all the same person.

Our API has daily limits ... and by signing up for multiple accounts, this developer (these developers? can't be too sure) is essentially bypassing those checks.
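
For context, a per-account daily limit is roughly this simple (a sketch, not our actual implementation), which is exactly why a handful of throw-away accounts defeats it:

```python
from collections import defaultdict
from datetime import date

DEFAULT_DAILY_LIMIT = 1000  # arbitrary, as discussed below

# Request counts keyed by (account, day); a sketch, not our real storage.
_request_counts = defaultdict(int)

def allow_request(account_id):
    """Return True if this account is still under its daily limit."""
    key = (account_id, date.today())
    if _request_counts[key] >= DEFAULT_DAILY_LIMIT:
        return False
    _request_counts[key] += 1
    return True
```

Because the count is keyed by account, ten mailinator accounts simply mean ten separate counters.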

But I do not care about the limits! They are entirely arbitrary! I could just as soon multiply them by a factor of ten. They exist to differentiate the Developers from the Partners. If you are using our API in such a way that you require more requests per day than the default, we would like to talk with you.

And most of the time this happens. We make it clear that we have limits, and that we are very willing to discuss raising those limits with you (do not read: razing). We encourage this interaction because it benefits both parties (hell, we often set up paying partnership deals if we think it is in our best interest to incentivize you to drive traffic to our site).

So why do people sign up for multiple accounts with throw-away email addresses?

Well, from their perspective, they might see this signup as just another source of spam -- a burden rather than a communication channel (does anyone else besides me have a ".trash" or ".spam" version of their email address?). Or they believe we, as an API team, are detached from our product.

And that is our fault. If we made it apparent to the Developers that 1) we do not spam this address, 2) we will only use this address for important communications, and 3) we are available ... then I do not think we would have this issue of throw-away addresses.

So we could blacklist mailinator.com addresses. But then how hard is it to register for a new Yahoo! address? Where does it end?
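
For what it is worth, the blacklist itself would be trivial to write (a sketch, not anything we have deployed), which is part of the point: the cheap technical fix is one more throw-away signup away from being useless.

```python
BLOCKED_DOMAINS = {"mailinator.com"}  # and then what -- every disposable-mail domain ever?

def is_blocked(email_address):
    """Reject signups from blacklisted domains. Trivial to write, trivial to evade."""
    domain = email_address.rsplit("@", 1)[-1].lower()
    return domain in BLOCKED_DOMAINS
```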

No, the correct approach (in our situation) is better communication. A higher emphasis on the interaction between Developers and our API team.

This is how it works
You're young until you're not
You love until you don't
You try until you can't
You laugh until you cry
You cry until you laugh
And everyone must breathe
Until their dying breath
-- "On the Radio" by Regina Spektor

07 October 2009

Google Barcode

Unless they were extremely slow before, the top Google results for "barcode generator" must be experiencing an insane amount of traffic right now (http://twitpic.com/kkilx).

Hopefully Google thought ahead to help them with this influx (http://twitpic.com/kkjmz).

Talk about the Slashdot Effect!

Why man, he doth bestride the narrow world
Like a Colossus, and we petty men
Walk under his huge legs and peep about
To find ourselves dishonorable graves
  -- Cassius, Julius Caesar, Act 1, Scene 2