Tasty bits
December 14, 2008

An interesting presentation on agile design; a rerun of a good talk on Design Studios in the agile process; and an article about getting real with agile and design.
One of the things I was quite surprised about at XP Day was how little talk I found around integrating design and development. I suspected at first that this might be because (a) there are no hard-and-fast rules or (b) everyone was sick of talking about it and going round in circles, but Joh corrected me. I should've proposed an open space about it, but a combination of being fascinated by everything else there and feeling a little ill on the morning of day two put a stop to that - ah well.
I'm doing a talk at UX Matters in January, and I think I might try and draw together a few lessons we've learned over the last few years at FP. In the interests of massaging these into a coherent form, a few thoughts:
- I don't think that designers and developers are as far apart in aspirations as they're sometimes presented. I don't see a love of documentation, or producing documentation, from either side. Good people from both disciplines relish communication, create models and prototypes, and accept change (often managed through iteration).
- My own output gets better when I work collaboratively and with folks from different disciplines (usually pigeonholed as design, development and business). I don't believe I'm atypical here.
- The terms "design" and "development" are each placeholders for a set of activities, some of which are more easily estimated and managed than others.
- Design, development and the business are heavily intertwingled: decisions made in one area frequently impact on the others. Reducing the cultural or geographic distance between them speeds decision-making.
Oh, and if you get a chance, avail yourself of a copy of HCI Remixed - it's a series of essays from top-notch HCI types (Bill Buxton, Scott Jenson and Terry Winograd all stood out for me) on works that influenced them. Very dip-innable, and a few gems in here.
Mobile and green
December 14, 2008

Completely unformed and probably original thought: but is there something innately more environmentally friendly about a medium which is forced to deal with the limited power provided by batteries, and is therefore efficient by design; and which is deliberately minimal in its use of bandwidth and networked resources?
Agile and Corporate Strategy
December 13, 2008

Is uncertainty really unmanageable?
All our plans are based on assumptions. Market changes break assumptions.
Strategic planning tries to plan for the long term. Someone does some inspection to model possible futures and construct scenarios. It's the assumptions that are critical, but they get forgotten in travelling along the lines set out by the main scenario.
Sensitivity analysis is another method: create a model, change key variables, and work out which variable affects outcomes the most. It's a good means of discarding some scenarios, but in general only looks at local events, rather than causes: "this set of customers stops buying", not why they've stopped.
Then there's emergent strategy, with folks on the shop floor making decisions. In practice this usually only allows for minor adjustments, anything major needs to go through planning: so it's flexible administration of rigid policy.
Three disabling assumptions:
- We predicate the configuration of our business on stable market conditions, as we can't respond to chaos;
- To change is to lose face;
- Change is difficult and expensive;
The result:
- All uncertainty must be treated as a risk;
- Change is only made in fits and starts as poor alignment between company configuration and market realities becomes unbearable;
Risk management isn't enough; the clash between technology and social change is accelerating: look at the holiday and music industries already. Organisations can't cope with sudden changes because they can't anticipate. They're reactive.
Action is preferred to inaction because management likes to be seen to act. Decisions are made before uncertainty is resolved. Changing strategies means losing face.
What's missing? We need to make our assumptions explicit: what people will pay, what the market is, etc. Financial ones tend to be more explicit than non-financial ones.
How can we model assumptions? Look at all of them and their knock-on effects. (Shows a very complicated model of technical, political, competitive etc. assumptions.)
It's tough to model P&L, though you can model likely demand and likely costs of supply.
Use the model to reduce response times. You need to react before the outcome of a change in conditions occurs: the trigger point needs to be earlier. You need to plan systemic responses to predicted outcomes and assign or acquire enablers and resources for critical response capability - which is a cost.
What do we need to understand about markets? Complexity, competitors, substitutes and complements. Cumulative impact of remote events on local variables: several factors combining. Positive feedback loops and negative (damping) ones, abrupt changes caused by positive feedback.
Currently we work with a Bayesian network model: embed the dynamics of the system into a cause/effect structure. We have a pile of indicators for each variable: confidence, valid range, etc.
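(A rough sketch of the bookkeeping being described - not a real Bayesian network, and all the names here are mine rather than from the talk: each assumption becomes a variable in a cause/effect graph, carrying indicators such as confidence and valid range alongside its value.)

```java
// Hypothetical sketch: an assumption modelled as a node in a cause/effect
// structure, carrying indicators (confidence, valid range) alongside its value.
import java.util.ArrayList;
import java.util.List;

class AssumptionVariable {
    final String name;            // e.g. "average customer spend"
    double value;                 // current observed or estimated value
    double confidence;            // 0.0 - 1.0: how much we trust that value
    double validMin, validMax;    // the range over which the plan's assumption holds
    final List<AssumptionVariable> effects = new ArrayList<>();  // downstream variables

    AssumptionVariable(String name, double value, double confidence,
                       double validMin, double validMax) {
        this.name = name;
        this.value = value;
        this.confidence = confidence;
        this.validMin = validMin;
        this.validMax = validMax;
    }

    /** True once the observed value drifts outside the range the plan assumed. */
    boolean assumptionBroken() {
        return value < validMin || value > validMax;
    }
}
```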
To plan:
- create systems to gather information about each variable
- establish a timebox, matching to the clock rate of market change
- automate data collection
- view current status through a dashboard
- review and prioritise
Deploy options according to trigger conditions.
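(As a rough illustration of that loop - names are hypothetical, not from the talk: gather data for each variable on a fixed timebox, review it via the dashboard, and run the pre-planned response when a trigger condition fires.)

```java
// Hypothetical sketch of the review loop: each timebox, check every watched
// indicator against its trigger condition and run the pre-planned response.
import java.util.Map;
import java.util.function.DoublePredicate;

class TriggeredResponse {
    final String variable;          // which market indicator to watch
    final DoublePredicate trigger;  // when to act, e.g. v -> v < 0.8
    final Runnable response;        // pre-planned, pre-resourced response

    TriggeredResponse(String variable, DoublePredicate trigger, Runnable response) {
        this.variable = variable;
        this.trigger = trigger;
        this.response = response;
    }
}

class MarketReview {
    /** One pass per timebox over the dashboard's current indicator values. */
    static void review(Map<String, Double> indicators, Iterable<TriggeredResponse> plans) {
        for (TriggeredResponse plan : plans) {
            Double current = indicators.get(plan.variable);
            if (current != null && plan.trigger.test(current)) {
                plan.response.run();  // act on the trigger, before the full outcome arrives
            }
        }
    }
}
```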
This implies redundancy of effort and assets, which doesn't fit well with minimising costs. Audience member points out that maximising utilisation suboptimises for throughput (which I think is the thesis behind Slack).
Is it all worth it? The value of systems is enhanced. A lot of this is about protecting old assets rather than creating new ones. This is an agile system: incremental, evolutionary, frequent delivery.
There's prejudice remaining: redundancy is seen as expensive... when we know agile is cheaper.
XP Day: Nat Pryce, TDD and asynchronous systems
December 11, 2008
Case studies of three different systems, dealing with asynchrony in system development and TDD.
Symptoms this can lead to in tests:
- Flickering tests: tests mostly succeed, but occasionally fail and you don't know why;
- False positive: tests run ahead of the system, you think you're testing something but your tests aren't exercising behaviour properly;
- Slow tests: thanks to use of timeouts to detect failures;
- Messy tests: sleeps, looping polls, synchronisation nonsense;
Example: system for administering loans. For regulatory reasons certain transactions had to be conducted by fax. Agent watches system, posts events to a JMS queue, consumer picks up events, triggers client to take actions.
Couldn't test many components automatically, had to do unit tests and manual QA. System uses multiple processes, loosely joined.
They built their own system: a framework for testing Swing GUIs, using probes sent into Swing, running within Swing threads, taking data out of GUI data structures and back onto test threads. Probes hide behind an API based on assertions.
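(My own sketch of how a probe-based assertion like that might look - not the framework's actual API: the probe samples state on the Swing event thread, and the test thread polls it until it's satisfied or a timeout expires.)

```java
// Hypothetical sketch of a probe-based assertion: sample GUI state on the
// Swing event thread, poll from the test thread, fail after a timeout.
import javax.swing.SwingUtilities;

interface Probe {
    void sample();          // runs on the Swing thread; copies state out of the GUI
    boolean isSatisfied();  // checked on the test thread against the copied state
}

class SwingAssertions {
    static void assertEventually(Probe probe, long timeoutMillis) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            SwingUtilities.invokeAndWait(probe::sample);  // sample inside the GUI thread
            if (probe.isSatisfied()) return;
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("probe not satisfied within " + timeoutMillis + "ms");
            }
            Thread.sleep(100);  // brief pause, then poll again
        }
    }
}
```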
Second case study: device receiving data from a GPS, doing something with this info, translating it into a semantically richer form and using it to get, e.g. weather data from a web service.
System structured around an event message bus. Poke an event in, you expect to get an event out: lots of concurrency between producers and consumers.
Tested with a single process running the entire message bus (different from the deployed architecture); tests sent events onto the message bus, the testbed captured events in a buffer, and the test could make assertions based on these captured events. Web services were faked out. Again, all synchronisation was hidden behind an API of assertions, with timeouts to detect test failure.
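(A minimal sketch of that capture-and-assert idea, using my own names rather than theirs: events off the bus land in a buffer, and assertions poll the buffer until a matching event arrives or a timeout expires.)

```java
// Hypothetical sketch of the testbed: captured events go into a buffer, and
// assertions poll that buffer until a matching event arrives or time runs out.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Predicate;

class EventTrace<E> {
    private final BlockingQueue<E> captured = new LinkedBlockingQueue<>();

    /** Subscribed to the message bus under test; every published event lands here. */
    void onEvent(E event) {
        captured.add(event);
    }

    /** Wait for a matching event; fail if none arrives before the timeout. */
    E assertEventReceived(Predicate<E> matcher, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            long remaining = deadline - System.currentTimeMillis();
            E event = captured.poll(Math.max(remaining, 0), TimeUnit.MILLISECONDS);
            if (event == null) {
                throw new AssertionError("no matching event within " + timeoutMillis + "ms");
            }
            if (matcher.test(event)) return event;  // keep consuming until a match
        }
    }
}
```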
Third case study: grid computing system for a bank. Instead of probing a Swing app, they used WebDriver to probe a web browser running out-of-process. Probes repeat, time out, etc. Slow tests only occur when failures happen, which should be rare. The assertion-based API hides these timeouts, and stops accidental race conditions caused by data being queried while it's being changed.
Question: the fact that you use a DSL to hide the nasties of synchronisation doesn't help solve the symptoms in the first slide, does it?
Polling can miss updates in state changes. Event traces effectively let you log all events so you don't miss anything. Assertions need to be sure that they're testing the up-to-date state of the system. You need to check for state changes.
Question: what about tests when states don't change? Tests pass immediately.
You need to use an idiom to test that nothing happens.
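(One possible shape for that idiom - my sketch, not the code from the talk: give the system a quiet period and fail if anything turns up in the capture buffer during it. It necessarily waits out the whole period, which is part of why such tests are slow.)

```java
// Hypothetical "assert nothing happens" idiom: wait out a quiet period and
// fail if any event is captured during it.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

class QuietAssertions {
    static <E> void assertNothingHappensFor(BlockingQueue<E> captured, long quietMillis)
            throws InterruptedException {
        E event = captured.poll(quietMillis, TimeUnit.MILLISECONDS);
        if (event != null) {
            throw new AssertionError("expected nothing to happen, but received: " + event);
        }
    }
}
```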
It's difficult to test timer-based stuff ("do X in a second") reliably. Pull these parts out into third-party services: they pulled out the scheduler, tested it carefully, gave it a simple API, and developed a fake scheduler for tests. To test timer-based events you need to fake out the scheduler.
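(A bare-bones sketch of what such a fake scheduler could look like - my own names and shape, not the code from the talk: production code depends only on the scheduler interface, and the test advances time explicitly so timer-based behaviour becomes deterministic.)

```java
// Hypothetical fake scheduler: production code schedules tasks through the
// interface; the test advances time explicitly and due tasks run synchronously.
import java.util.ArrayList;
import java.util.List;

interface Scheduler {
    void schedule(long delayMillis, Runnable task);
}

class FakeScheduler implements Scheduler {
    private static class Pending {
        final long dueAt;
        final Runnable task;
        Pending(long dueAt, Runnable task) { this.dueAt = dueAt; this.task = task; }
    }

    private long now = 0;
    private final List<Pending> pending = new ArrayList<>();

    @Override
    public void schedule(long delayMillis, Runnable task) {
        pending.add(new Pending(now + delayMillis, task));
    }

    /** The test calls this to move time forward; anything now due runs immediately. */
    void advanceTimeBy(long millis) {
        now += millis;
        List<Pending> due = new ArrayList<>();
        for (Pending p : pending) {
            if (p.dueAt <= now) due.add(p);
        }
        pending.removeAll(due);
        for (Pending p : due) p.task.run();
    }
}
```

In deployment the same interface would delegate to a real timer; the tests never have to sleep or race against it.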
XP Day: Coaching self-organising teams, Joseph Pelrine
December 11, 2008
AKA "zenzen wakari-masen"
AKA "how to be a manipulative bastard without anyone knowing"
But first... slime mold. Japanese scientists recently got one to solve a small maze, through cells self-organising. They communicate by secreting pheromones.
But we are not slime mold!
We don't think rationally, unless we're autistic. Our subconscious follows a first-fit (not best-fit) pattern matching algorithm based on past experience, which the conscious mind rationalises according to the dominant discourse. Our ancestors saw danger and ran; to do this fast, evolution optimised to bypass the conscious mind.
What is self-organisation? Amongst primates, it's the fight for alpha-male dominance: probably not the kind of self-organisation we want unless you're the alpha. What are the models to understand how teams work?
(Discussion exercise around how far we let a team self-organise)
There are 2 general directions to the questions Joseph placed on the board: type X personality (believes "most people are lazy, leave them alone and they'll do nothing"), and type Y (believes "working can be like learning, and fun, people left to their own devices will achieve great things").
One issue in getting a team to self-organise is letting them do this. One direction to take is making small changes and seeing how they go.
RUP: "the sound a project makes when it crashes against a wall".
RUP has a built-in mechanism: if it goes wrong, you got the process wrong. There's a similar problem with self-organisation: an organisation that doesn't work right is the fault of bad management. There is a theory: what if the organisation is dysfunctional but still doing what it needs to do? Most companies say "the customer comes first"; rubbish, the CEO does in practice!
The theory is that the main purpose of any organisation is to provide for the needs and desires of a group of people in that organisation. Look at AIG: got government bailout then sent its managers on a half-million dollar retreat. We need to get support at a high level when bringing agile in.
I propose to take this idea a step further: if you have a dysfunctional team, what would they be if they were doing exactly what they should be doing? Kurt Lewin, the father of social psychology, theorises that
B = f(P,E)
i.e. behaviour is a function of a person and their environment. He tried to explain psychological interactions in terms of the mathematics of topology. We talk about self-organisation, but who or what is the self? Self-organisation of a system is the interactions between the agents of a system.
For example, at Christmas our 25-year-old son is visiting. My wife treats him like a child, but tries not to. The system has defined a set of roles that need to be played; we will gravitate to acting out these roles.
In a team, we play roles. Remove the pessimist from a team, and someone else takes up the role. It goes further: there are ghost roles. Ever get the feeling in a meeting that people are being careful about what they say, that there's someone else in the room?
These roles get set up by a self-organisational process. How do we get to a point where the fight for alpha-maleness doesn't dominate this process?
The answer lies in chicken soup: made from all sorts of ingredients. You need more than ingredients though, you need heat. For most people who aren't chefs, heat is boolean. A good cook learns to play with heat. To make people a good team, you need to do the same, to effect a change from outside - without working directly with them.
Consider these stages in the cooking analogy:
- Burning: feel threatened, panic, burn out.
- Cooking: the target level, where heat is high enough that ingredients blend but retain individual identities. You can still taste the carrot in good chicken soup.
- Stagnating: what was once soup is now a substrate for bacterial growth. In a team, this is where discipline stops: documentation isn't being written, meetings aren't being attended, etc.
- Congealing: where norms are established ("this is how we do it around here") and change is difficult.
- Solidifying: where change isn't possible.
The trick with self-organisation: determining where the team is now, what models can we use for them, what can we do with these?
So, looking at models, the equation for gas in a confined space:
PV = mRT
T: temperature
R: a constant, forget it
m: molecules of gas in the enclosed area
P: pressure
V: volume
Compress the gas and the temperature rises. Increase the amount of gas and the same happens. This gives us two easily understood variables: the size of our timebox and the number of tasks we have to do. With a large timebox and one task, there's not much stress. With 30-40 tasks there's more stress. Similarly, if the number of tasks is fixed and the size of the timebox increased, people get more relaxed. Parkinson's law: tasks grow to fill the time available.
The temperature is inversely proportional to the number of people in your team: it's easier to stress out one person than a team of 20.
Teams will naturally cool down and stagnate. There's a need for constant "heating" (coaching). Capacity-based planning is great, but even if you only commit to a subset of overall work in a given sprint, there's more heat: the team are aware there's more to do.
Creative use of conflict can create heat. In a football team you have 26 on the roster and 11 on the field. Only those on the field get lots of the benefits of being on the team: coaches know this and use this to motivate.
(Exercise on how we view conflict)
Another model comes from the psychologist who coined the term "flow" (Mihaly Csikszentmihalyi): being in a state where you're at one with yourself and your work. Every person has a set of skills, and meets challenges. When skills and challenges are in balance, you're in flow. It's not an absolute model; wherever you are in skill, there's a level of challenge appropriate relative to you. If challenges are more difficult than skills, you feel anxious. If they're too easy, you get bored.
Challenge: your developers find the daily stand-ups boring. To motivate them, raise the challenge level.
The passive version of this is the Peter Principle: people are promoted beyond their level of competence.
(part 2)
Not all individuals are at the same point. We work on plotting activities and people on the skills/challenge matrix. Joseph theorises that people need to move around the matrix.
Shows the classic Gorilla/basketball video.
Types of power: reward, coercive, legitimate, expert, referent.
Lots of complexity here, we're playing with many values. So we can't predict the outcome of changes. We need to implement small changes and make decisions based on empirical data. Probe, sense, respond. Inspect and adapt. We can't have fail-safe... we need safe-fail!