05 September 2010

In Boris Gloger's "Estimation in Depth" deep dive at the South African Scrum Gathering he introduced us to Magic estimation.


Boris introduced it a little something like this:
So you've got a backlog of about 100-200 items and you have a backlog estimation meeting. First thing to do is put a numerical estimate on everything in the backlog. Using magic estimation this should take 10-15 minutes.
At which most of the room burst into laughter.
I mean, he's got to be kidding, right? Estimation is a slog of negotiation and explanation preceding multiple rounds of planning poker. 1-3 minutes an item if you're cooking on gas.

Well, we left the deep dive room for a patch of floor and the results were... astounding. Okay, I'll say it: magic.

Function of Estimation

At the bar that evening, I found myself explaining it as follows.
Estimation is a function that maps stories to complexity. Poker is one algorithm for the mapping, magic is another.
estimation ( story ) = complexity
Poker is an algorithm in which the whole team engages with each item sequentially, having an up-front discussion to build a shared understanding and thus reach agreement when the cards are revealed.
Magic estimation is then a kind of parallel sort, each actor applying his internal evaluation function to the set.
And it happens in silence.
It starts out with each actor sorting his subset into complexity-point bins; then each actor evaluates the full set (already estimated by at least one team member) and moves only those that appear to be anomalies. As the process iterates, convergence is reached as we find stable points that all the internal models can agree on.
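To make the parallel-sort framing concrete, here is a toy Python simulation of the game (entirely my own sketch, not something from the session): each team member holds a fixed internal estimate per story, the first holder places each story on a card, and members then silently move only the clear anomalies until things settle. Stories that keep moving are flagged as fall-outs, like the oscillators described below.

```python
import random

# The estimation cards laid out on the floor.
CARDS = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_card(value):
    # Snap a raw complexity guess to the nearest card.
    return min(CARDS, key=lambda c: abs(c - value))

def magic_estimation(true_sizes, n_members=5, noise=0.3, max_rounds=10, seed=1):
    rng = random.Random(seed)
    # Each member's fixed internal evaluation of each story:
    # the "true" size distorted by personal noise, snapped to a card.
    views = [
        {item: nearest_card(size * (1 + rng.uniform(-noise, noise)))
         for item, size in true_sizes.items()}
        for _ in range(n_members)
    ]
    # Stories are dealt out; here the first member makes every initial placement.
    placement = dict(views[0])
    moves = {item: 0 for item in true_sizes}
    for _ in range(max_rounds):
        moved = False
        for view in views:
            for item, own in view.items():
                # Silently move only clear anomalies: off by more than one card.
                if abs(CARDS.index(own) - CARDS.index(placement[item])) > 1:
                    placement[item] = own
                    moves[item] += 1
                    moved = True
        if not moved:
            break  # convergence: no member wants to move anything
    # Fall-outs: stories that kept bouncing, for the product owner to pull out.
    fallouts = sorted(item for item, n in moves.items() if n >= 3)
    return placement, fallouts
```

Two members whose internal models disagree by more than one card will fight over a story, incrementing its move count until it surfaces as a fall-out; everything else settles in a round or two.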


The "magic estimation check list" was put together by Gennine and Alister in our output session and is a good summary of the rules of the game.
The image to the right is their flip-chart poster. I've repeated their points with some elaboration below.
  1. Start with the Product Backlog of user stories
  2. Team will play, product owner will watch (and learn)
  3. Lay the estimation cards down on the floor, spaced out as per their values (as in the perspective picture above) e.g.
    1  2  3   5    8      13       20                   40
  4. Hand out user stories to team
  5. Explain rules: no talking, no non-verbal communication
  6. Each team member estimates, place stories at points
  7. Each team member checks the estimates, re-estimating and moving cards if desired (once all their own cards are down)
  8. Product owner marks fall-outs (too large or keeps bouncing)
  9. Discuss fall-outs until agreement is reached
Estimation is done!! It's surprising when you get to the end, and that's it.
A final check if anyone has a burning need to move any items helps to get everyone to realise they're happy with the result.


A bit more on the fall-outs from point 8.

Some stories start to bounce around like oscillators from Conway's Game of Life; the product owner must watch for these and pull them out for more explanation. Once the game is done, the team can investigate what the differences of opinion were and get clarification from the product owner. I've tried switching to poker at this point, which has worked quite well.
Some stories make their way out to the 100+ boundary. When stories end up here, the product owner pulls these out too. These need either explanation from the product owner to clear up the confusion that led to the high complexity, or breaking down into estimable chunks.

vs Poker

Before trying it, I'd thought it would be okay, but not as good as poker at getting to good estimates.
Now, I feel it's as good as, if not better than, poker at getting to estimates. Poker does foster communication, but that communication can happen independently of estimation if you're doing magic.
Avoiding the conversations of poker allows us to avoid arguments and, in the words of Oscar Wilde,
"Arguments are to be avoided; they are always vulgar and often convincing."
Without the convincing influence, magic estimation is able to capture each team member's instant conclusions that Malcolm Gladwell discusses in Blink, then get all of those to agree. Or disagree.
Being part of a complex dynamic system feels like a kind of magic as it converges. Getting to this picture with around 80 items in about 5 minutes... was magic!

Posted on Sunday, September 05, 2010 by David Campey


02 September 2010

The open space marketplace in action this morning at the Cape Town Scrum Gathering. Fun being part of a complex dynamic system.

Sessions I convened today:

  • Agile contracting
  • Job swapping (bus 2 bus) for cross pollination

Sessions I attended, paraphrased:

  • Extreme public openness; what (code, practice) should we share?
  • Team psychology
  • Wildcard ad-hoc group
  • Trust
  • Working environments
  • Feeling oppressed? Let's make a play. w/ Alan Cyment

Will be attempting to blog my notes and impressions from these "soon". Looking forward to getting all of the scribe's notes from the day.

Second half to an excellent gathering. Well done sugsa!

Posted on Thursday, September 02, 2010 by David Campey

1 comment

Cape Town gathering day 1 concluded in a networking event where I caught a few moments of Henrik's improv on the baby grand at the Westin Grand.

Before I plunge into day 2, a sketch of what stood out on day 1.

Henrik's Keynote

Henrik Kniberg's keynote was excellent, setting the tone for the gathering. He revisited the core scrum values (not the agile values) with frequent references to the "black book" and closed by stoking the scrumban fire a bit.

Deep dive: Boris Gloger

Estimation in depth. Wow, Boris broke my brain in three fabulous ways:

  1. Magic estimation — parallel sorting algorithm to achieve estimation of hundreds of items in minutes.
  2. Contracting on business value, not time and materials — with an exit for the client if they're satisfied before the fixed cost is reached.
  3. Changing Level of Done — choose the set of constraints (definition of done) that allow you to start delivering features, then change the constraints and get everything to done again.

More to come. Off to the wildcard that is open spaces!

Posted on Thursday, September 02, 2010 by David Campey

1 comment

31 August 2010

Mohamed Bray Sugsa August 2010

This month's sugsa was a cross-talk from the Business Analysis community.

The Talk

Mohamed described an iterative process that skimps on requirements as just multiple quick failures (wagile, thanks Karen). His stat was that 70% of defects are injected during the requirements phase. After gathering, the challenge becomes requirements management and communication. This communication is difficult because the audience is broad and diverse.

Requirements communication should have a few characteristics.

  • connect: we need collaboration and traceability
  • actionable: when is it done? get commitment (intention ≠ commitment)
  • units: time & money

Communication is not just representation.


An interesting (and new to me) fact is the existence of the Business Analysis Body of Knowledge (BABOK), although the $30 PDF ($60 for print) price has put it onto my library backlog, and not too near the top.



Peter Hundermark probed a bit on whether traceability was really important.

To me, as a developer, traceability had always meant the lengthy and painful process at the end of waterfall where we point to screens and show where each feature can be found.

Mohamed extended this backwards to linking requirements to stakeholders and the origin of the item, so that we can get feedback on the intention when required. "Where it came from and where it went." Which is actually quite appealing.

BA Smells

Marius de Beer was looking for the "smells" or indicators that BA has gone bad.

Requirements that are not quick to apprehend are smelly. They should be easy to interpret upfront. In that context, 90% of requirements are not clear. As an example, when we talk about oranges, you should get a vivid picture of an orange in your head.


Asking context-free questions is a technique to probe further while attempting not to listen with a solution already in mind.

Parting thought

The difference between regular analysts and BBND analysts (Mohamed's employer) is that the latter are 100% responsible for delivery.

Posted on Tuesday, August 31, 2010 by David Campey

No comments

07 June 2010

Complete Word

A long while back I included the "Complete Word" shortcut in my coding practice, avoiding the tricks I had developed to prompt the Visual Studio IntelliSense back into giving me suggestions for word completion.

This saves a lot of "micro-time" and improves flow.

Parameter Info

Another time I found myself coaching IntelliSense was when looking at the parameter info. This would be achieved by, for example, re-typing a comma or the opening ( of a method call.

On Friday I realised there had to be a keyboard short-cut; tonight I hunted, and found the treasure.

Ahhh, the joy!

Posted on Monday, June 07, 2010 by David Campey

No comments

12 May 2010

The problem

Stored in a varchar column, payload_xml, we have xml that looks something like this:
<?xml version="1.0" encoding="utf-16"?>  
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">    
I need to return the content of one element (the PaymentArrangementExternalReference) as a varchar.

The solution

With the help of Tim Chapman's article Shred XML data with XQuery, I've arrived at the following code:
select cast(cast(cast(payload_xml as nvarchar(max)) as xml)
    as varchar(max)) as PaymentArrangementExternalReference
Which is, admittedly, a bit of a mess, but each part has its purpose, and it works!


Interesting observations, from the inside out:
  • cast as nvarchar so that the encoding matches "utf-16"; otherwise SQL Server fails with the cryptic 'XML parsing: ... unable to switch the encoding'
  • the data() function returns the actual content of the node, not the whole node
  • I didn't need to use [] indexers because there is only one element.
Lesson learned: use the xml datatype for the column.


To handle different namespaces, e.g.

  <?xml version="1.0" encoding="utf-16"?>

add a namespace declaration to the query as follows:

select @x.query(
  'declare namespace ap="http://il.net.za/Agreement/Payment";

Posted on Wednesday, May 12, 2010 by David Campey

No comments

18 February 2010

Last night I ran through a basic tutorial on FitNesse and .NET. It went a little something like this.

Download and run Fitnesse

Download, double-click. And then...

Huh? I double clicked the jar file, got a question about enabling network comms, and nothing else happened.

What had actually happened was some files were extracted and the server started running and listening for connections.

Best plan for getting started is to run it from the command line using "[path to java bin]java.exe -jar fitnesse.jar".

Create a test page

The tutorial pointed to FitNesse.DotNet.DotNetFitServer, which is an empty wiki page. Throwing caution to the wind, I bravely soldiered on.

All looked fine until I clicked the test button and was presented with a nice red X in the upper right corner, clicking on which yielded the detail: java.io.IOException: Cannot run program "fit.FitServer": CreateProcess error=2, The system cannot find the file specified, which I was pretty much expecting, so off I went to find a suitable program.

I downloaded both FitSharp and FitnesseDotNet because I couldn't tell which would be right for me. Updating the TEST_RUNNER to the full path to the FitServer.exe from the FitnesseDotNet distribution did the trick (FitSharp is for slim testing and has a different COMMAND_PATTERN and TEST_RUNNER).

Create & Hook FitNesse to the .NET Class

Initially FitNesse couldn't find the type in any of the loaded assemblies. Removing the namespaces (as per a useful comment) got it to hook up, and I did a little dance, or a jig.

Get to Green

Well, not quite green, because one of the quotients in the table is incorrect. But the test shows this up as an incorrect result.


Next step is figuring out what fixture to use to get complex types into fitnesse.

Posted on Thursday, February 18, 2010 by David Campey

No comments

21 January 2010

Just learned something new about the DOS copy command from this comment on a keyboard ninja article.

copy /a *.txt aggregate.txt

Will aggregate the contents of all the .txt files into aggregate.txt. This is useful to me for .csv files.

(to disambiguate: 1.txt contains "one one one" &c.)
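If you're not at a DOS prompt, the same aggregation is a couple of lines of Python (my own sketch, not from the original comment):

```python
from pathlib import Path

# Rough equivalent of: copy /a *.csv aggregate.csv
# Concatenates every .csv in the current directory into one output file,
# skipping the output file itself in case it already exists.
parts = sorted(p for p in Path(".").glob("*.csv") if p.name != "aggregate.csv")
Path("aggregate.csv").write_text("".join(p.read_text() for p in parts))
```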

So cool; not often I find out something new about DOS.

Posted on Thursday, January 21, 2010 by David Campey

No comments