12 Tuesday Nov 2019
Posted in Agile
Flow Metrics show the state of the product, how quickly our work flows through the system, whether it’s likely that features will be completed by a date, and what kind of work is being prioritized.
They are tied to business value and provide us with the feedback we need to make decisions about the next steps for our product.
Perhaps more importantly, Flow Metrics are based on measured data rather than estimates, and have consistently been shown to be more accurate than estimation-based forecasting techniques.
The key metrics are: Work in Progress (WIP), Cycle Time, Throughput, Flow Distribution and Flow Efficiency.
Work in Progress (WIP):
Why it matters: “The single most important factor that affects wait time is capacity utilization.” – Dominica DeGrandis
If the amount of Work in Progress is greater than your team size (taking pairs & mobs into account), then you won’t be able to predict the delivery date of the work. Look for bottlenecks where work is getting stuck, and focus on creating flow there.
How to see it: You can either simply count the items in progress per week, and chart this over time, or use the Cumulative Flow Diagram that is generated automatically by Azure DevOps and Jira.
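If you want to do the count yourself, here’s a minimal Python sketch of the idea, assuming you’ve exported each item’s start and finish dates (the items and dates below are invented):

```python
from datetime import date, timedelta

# Hypothetical export of work items as (started, finished) dates; None = still in progress.
items = [
    (date(2019, 10, 1), date(2019, 10, 8)),
    (date(2019, 10, 3), date(2019, 10, 21)),
    (date(2019, 10, 14), None),
]

def wip_on(day):
    """Count items started on or before `day` that were not yet finished on that day."""
    return sum(1 for started, finished in items
               if started <= day and (finished is None or finished > day))

# One data point per week over a six-week window - chart these over time.
start = date(2019, 10, 1)
for week in range(6):
    day = start + timedelta(weeks=week)
    print(day.isoformat(), wip_on(day))
```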
Some ways to improve: We want our WIP to be just a bit smaller than the team size (including Pairs). If work is queuing in one area, this is a Bottleneck; get the whole team to look at the best way to unblock items in Wait status.
The most effective way to clear excess WIP is to move to a pull-based approach: only take in the top-priority stories and work on those until they are complete, before picking up new items.
This flow-based approach means more work gets completed at a higher quality.
If our stories per sprint are higher than our throughput, this is usually an indicator of high WIP as well. Even with the best of intentions, teams discover new requirements during the sprint, or hit unexpected bugs etc. One way to accommodate this is to set aside a (historically based) percentage for unforeseen work at the start of the sprint; the other is to ensure a level of clarity (e.g. using BDD) during the refinement ceremony.
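To make the historically based percentage concrete, here’s a small sketch with invented numbers: it works out what share of recent sprints’ work arrived unplanned, and reserves roughly that share of the next sprint’s capacity.

```python
# Hypothetical history per sprint: (points planned at the start, unplanned points that arrived mid-sprint).
history = [(30, 6), (28, 4), (32, 9), (30, 5)]

planned = sum(p for p, _ in history)
unplanned = sum(u for _, u in history)
buffer_pct = unplanned / (planned + unplanned)

capacity = 30  # next sprint's expected capacity
commitment = round(capacity * (1 - buffer_pct))
print(f"Reserve {buffer_pct:.0%} for unforeseen work; plan roughly {commitment} of {capacity} points")
```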
Cycle Time:
Why it matters: Cycle time uses our historic performance to forecast average delivery dates of stories. It’s more reliable than velocity because it is objective data based on actual durations, including the wait times, dependencies and rework that we tend to leave out of our forecasts, rather than guesses about the future, which are seldom accurate.
How to see it: Azure DevOps and Jira calculate this automatically; you just need to define the parameters.
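If you’d rather sanity-check the numbers yourself, a minimal sketch of the calculation (with invented items and dates) looks like this; cycle time is simply the finish date minus the start date, and a high percentile gives a more conservative forecast than the average:

```python
from datetime import date
from statistics import mean, quantiles

# Hypothetical completed items as (work started, delivered) dates.
completed = [
    (date(2019, 9, 2), date(2019, 9, 10)),
    (date(2019, 9, 4), date(2019, 9, 25)),
    (date(2019, 9, 9), date(2019, 9, 16)),
    (date(2019, 9, 12), date(2019, 10, 1)),
]

cycle_times = [(done - started).days for started, done in completed]
print("Average cycle time:", mean(cycle_times), "days")

# 85th percentile: "85% of items finish within this many days".
p85 = quantiles(cycle_times, n=100, method='inclusive')[84]
print("85th percentile:", round(p85, 1), "days")
```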
Some ways to improve: The biggest challenge to Cycle Time is unaccounted waiting time.
Wait time increases wherever we have bottlenecks. A Value Stream Map exercise comparing Value Adding time (when an item is actively being worked on) with Wait time (together giving you Total Time) will show where your longest delays are, i.e. the bottlenecks to reduce. Both Jira & Azure DevOps can be queried to show when items are being worked on, and the total time.
Also, if you’re not using ‘Production’ as the end date, you may be hiding other bottlenecks in the overall flow that could be improved.
Throughput:
Why it matters: If the number of stories per sprint is greater than our Throughput, we are creating a bottleneck in the system which will actually slow down our capacity to deliver work, so it decreases productivity instead of increasing it.
If our sprint stories match our historic Throughput, we should be confident of completing work within the defined time.
How to see it: Throughput is simply an average of the number of items completed in the defined time. If you use Sprints as the defined time, Azure DevOps & Jira should show this automatically. For other time periods, you can write a quick query to calculate this using the Sprint definition, and show it in a widget.
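For reference, the calculation itself is nothing more than an average, as in this small sketch with invented counts:

```python
from statistics import mean

# Hypothetical number of items completed in each of the last six sprints.
completed_per_sprint = [7, 5, 8, 6, 7, 6]

throughput = mean(completed_per_sprint)
print(f"Average throughput: {throughput:.1f} items per sprint")

# Quick check of the next sprint's plan against historic throughput.
planned_next_sprint = 9
if planned_next_sprint > throughput:
    print("Planned stories exceed historic throughput - expect queues to build up.")
```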
Some ways to improve: The fastest and most effective way to improve the Throughput:WIP ratio is to move to a pull-based system: only take on work when the system has capacity, don’t allow queues to build up, and get all ‘waiting’ work moving and complete. This increases predictability and allows us to respond effectively to change. You can also reduce queues by reducing bottlenecks; these two often work hand in hand. The most common approach is simply to increase capacity; while this can bring an overall improvement, it is the least predictable.
Flow Distribution:
Why it matters: This metric helps us to prioritize upcoming work. Flow Distribution is an indicator of the health / state of the product. It helps us understand what kinds of priorities are currently being focused on, and what we should focus on next to maximise customer happiness: delivering the features they need while mitigating risk.
This metric is very context dependent, for example:
How to see it: This is a count of the different kinds of work completed during your time period (e.g. a sprint / quarter), shown as a percentage (a 100% stacked bar chart is very effective).
For this you need to categorise the work items, and then add a widget (both Azure DevOps and Jira have stacked bar charts) displaying the data over time.
Typical Categories include Revenue Generation, Revenue Protection and Failure Demand.
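As a rough sketch of the underlying numbers (using those example categories and invented data), the distribution is just a percentage breakdown of completed items per category:

```python
from collections import Counter

# Hypothetical completed items for one quarter, tagged by work type.
completed = [
    "Revenue Generation", "Revenue Generation", "Revenue Protection",
    "Failure Demand", "Revenue Generation", "Failure Demand",
    "Revenue Protection", "Revenue Generation",
]

counts = Counter(completed)
total = sum(counts.values())
for category, count in counts.most_common():
    print(f"{category}: {count / total:.0%}")
```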
Some ways to improve: Each product will have a different balance of these work types at different levels of product maturity, and when facing external demands (new regulations; product competition; technical changes etc.).
Since this is a very context-specific metric, reviewing the Flow Distribution for the last few quarters, along with priorities for the next quarter, can introduce healthy conversations about the upcoming work.
Reviewing Flow Distribution during a quarter is a useful way to see whether your aims for the quarter are being met, and to adjust near-term priorities accordingly.
Flow Efficiency:
Why it matters: This metric tells us how much we could improve our delivery speed. Calculating it by co-creating a Value Stream Map reveals where we could best focus our attention to improve our efficiency.
The Theory of Constraints tells us that any system can only deliver at the throughput rate of its weakest link.
How to see it: Flow Efficiency is Value Added time (the time that an item is actively being worked on) divided by the Total Time. The average flow efficiency in organizations is around 15%; it can go as low as 5% or up to about 40%.
To calculate yours:
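As a minimal sketch, assuming you already have the active-work time and total elapsed time per item (the figures below are invented):

```python
# Hypothetical items as (days actively worked on, total calendar days from start to done).
items = [
    (2, 14),
    (3, 10),
    (1, 12),
    (4, 16),
]

value_added = sum(active for active, _ in items)
total = sum(total_days for _, total_days in items)
print(f"Flow efficiency: {value_added / total:.0%}")  # typical organizations sit around 15%
```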
This 12-minute video explains Flow Metrics really well – what they tell us, and how we get to them: https://www.youtube.com/watch?v=uBEZoXc4A5w
Slides from Dominica DeGrandis: Agile2019_Dominica_Flow_metrics
23 Friday Oct 2015
This past Tuesday I gave my first Keynote address at the Scrum Gathering South Africa.
It’s also my first public talk about agile education and the work we’re doing at codeX :D
The slides are up on slideshare, and Lukasz Machowski did a great writeup of the talk: Can Agility Change the World? – Notes from Scrum Gathering.
Overview:
At codeX we’re developing a breakthrough education model to address the skills shortage and the digital divide, using our experience training agile teams.
We believe in changing the future, and this is a story about what we’ve learnt about agility, diversity and making real change.
01 Saturday Mar 2014
Posted in Agile, Creativity, Facilitation, Retrospectives, Scrum
Quick post for the slides from my talk at AgileIndia2014.
The talk is in three sections:
– How do we generate new ideas?
– Motivation at Work
– Rethinking the Retro as a creative tool
Slideshare link: Building Creative Teams: Motivation, Engagement & Retrospectives
06 Sunday Jan 2013
Posted in Agile, Facilitation, Games
What does neuroscience tell us about facilitation?
In Methods & Tools’ Winter 2012 magazine I explore how neuroscience supports facilitation methods, and use this to make a stab at categorizing facilitation activities according to different levels of interaction.
The journey into neuroscience was prompted by trying to explain why agile facilitation methods work so well. While there is much evidence that they do, there is very little rationale as to why they should. Looking at how our brains process information provides fascinating insight into the great results they generate.
Update: here’s a direct link to the article: Agile Facilitation & Neuroscience: Transforming Information into Action
16 Monday Jul 2012
Posted in Agile, Books, Facilitation
The collaboration that underpins agile development depends on strong facilitation methods that ensure that all aspects of development are approached in an informed, focused and inclusive manner.
That’s a bit of a mouthful, but it’s still easier said than done. The titles below are my “go to” books for understanding the breadth, possibilities and challenges of facilitation. Together they cover a wide variety of tools for both the practices and (rather badly named) ‘soft skills’ of facilitation:
The more I work with facilitation techniques and practices, the more I think of them as a management style for collaborative organizations. The books below have been instrumental in shaping my perspective, and have been invaluable when facilitating conversations beyond the standard agile meetings.
17 Sunday Jun 2012
Posted in Agile, Facilitation, Games, Links, Retrospectives, Scrum
This is the last of four posts covering facilitation games for the different phases of meetings – Check In, Opening, Exploring, Closing.
Closing activities form the “future” section of the agenda. Following the Exploring phase, they are focused around the question ‘How do our new insights help us move into the future?’
As with Opening and ‘Exploring-Divergent’ activities, there is a lot of overlap between ‘Exploring-Convergent’ and Closing activities. For me the distinction is the move to a planning phase: establishing a goal to move forward with, and the activities to support it. If the focus of the session is Planning, this could use up to half the allotted time; for other sessions, around a third to a quarter.
It’s also important to be aware of the time span available to implement change, and have the group select the most valuable area of focus within that context.
Finding the right focus is much more reliable when multiple interests are represented – it’s easier to avoid personal agendas and generates more discussion around what really is valuable and possible. Selection criteria, such as ‘what we are able to do now,’ ‘what fits best with our team objectives’ and ‘what do we have most passion for’, play a significant part in identifying an achievable goal the whole team is committed to.
While it’s generally agreed that we should create SMART goals, it’s hard to find activities that support goal clarification. Esther Derby’s article on Double Loop Learning provides some excellent questions to interrogate the goals for validity; and I use this format in retrospectives:
Physical interaction tends to be a more effective way to indicate the level of commitment or agreement than a purely verbal response, and is more likely to surface any hesitation, making it easier to clarify the boundaries of what can be achieved.
This nuts-and-bolts section identifies how to take a new possibility to a new reality, and could feel tiring or exciting – it helps to get this pace right. Again, it’s important to limit the actions to a realistic number.
I try to close all facilitated sessions with a quick feedback format that allows participants to review the experience, helps me to get to know the teams better, and helps me improve as a facilitator. The higher the trust relationship, the better the feedback, the more trust is built … and so on.
Another wrap-up mechanism is sharing individual perspectives; I do these in call-out fashion:
A strong closing session helps to build confidence that the way forward is relevant and attainable. Following thorough Opening and Exploring sections, this creates a reliable process for implementing beneficial action … and repeated consistently in retrospective format puts us well on the journey of effective, directed Continuous Improvement.
Most of these activities come from books, blogs or training sessions I’ve been part of; some I’ve created to meet specific needs. Where I can find attributions they are noted; if you see any I’ve missed, or know of links I haven’t found, please let me know in the comments below.
23 Monday Apr 2012
Posted in Agile, Facilitation, Games, Links, Retrospectives, Scrum
This is the third of four posts covering facilitation games for the different phases of meetings – Check In, Opening, Exploring, Closing.
Exploring is essentially the ‘Present’ phase of facilitation, with two major sections within it: Exploring: Divergent and Exploring: Convergent.
Divergent games feel a lot like ‘Part 2’ to Data Gathering – and I think it is a bit of a grey area: I often find them so closely linked that the two sections can be combined, but sometimes there is value in having them both. Then Convergent exercises consolidate our findings in preparation for moving to the “Future” phase.
As I understand complexity theory in software development, the Exploring section relates to managing emergence, and ‘sensing’ in Cynefin’s Probe – Sense – Respond model. It’s this level of investigation that helps us to see what effects our actions are really having, identify positive and negative patterns that may be developing, as well as highlight unexpected areas of potential.
Here, we want to delve further into issues that are important, extending our understanding by looking through a different lens – of brainstorming, understanding risk, or in-depth analysis.
Generate Ideas / Breakthrough thinking:
I was fortunate to attend a session with Darian Rashied called “Facilitating Creativity for Breakthrough Problem Solving” at the London Scrum Gathering last year. In it, Darian explained how unexpected connections work to generate ideas: things that make no sense keep us occupied; we can’t walk away from them. This means we reach deeper and cross boundaries we would usually stay well within, in order to resolve the senselessness. According to John Medina in Brain Rules, this can even carry through to our sleep, hence the term “sleep on it”.
Using de Bono’s framework, Darian reinterpreted the game phases as follows:
Opening > Exploring (divergent) > Exploring (convergent) > Closing
Provocation > Movement > Harvesting > Treatment
Provocation: ridiculous, fun, laughing – getting out of the serious mode activates a different part of the brain which frees up our imagination
Movement: activities that stimulate mental leaps help us escape our normal, tried-and-tested thought patterns
Harvesting: reaping the benefit of our slightly altered viewpoints by creating space for the ridiculous, accepting and investigating all ideas
Treatment: taking ridiculous ideas and reshaping them back to practical applications
While some of the ideas below may seem whacky, they really do generate at the very least some interesting new viewpoints.
Risk detection:
Traditional Risk Matrixes and Risk Mitigation Strategies tend to fall far short of the mark for the complex work that makes up most of software development. The activities below work well as collaborative approaches for surfacing risks and assumptions, and are really valuable at the start of a project or in a planning phase for resolving rocky ground.
Avoiding failure is apparently a better evolutionary tactic than building on successes [1], and this may be why we find it easier to visualize disaster than success. Whatever the reason, once we’re given permission to identify things that can go wrong, these activities can unleash a wealth of information. Be sure to create safety first and follow on with identifying mitigating actions, so that no-one is left with a sense of impending doom…
[1] Mostly from Dave Snowden’s podcasts discussing resilience and exaptation.
Root Cause Analysis [2]:
Sometimes we encounter issues that are really symptoms of deeply rooted organizational impediments. This is especially valid for recurring issues, as well as catastrophic events. Here we need to dig deep to unpack the root cause of the problem.
[2] I owe this section to Carlo Kruger’s A3 Thinking session which he presented to SUGSA at the beginning of this month – Thanks Carlo!
Other formats:
These two formats are complete facilitation plans for generating insight from opposite standpoints: a strength-based, imagi-planning approach, and analytic problem analysis:
Once we’ve expanded our view, we need to start the converging process, making sense of what we have uncovered. These sorting exercises help to clarify where ideas are overlapping and identify dominant themes and needs. I typically do all of these in a session, with more or less detail as time allows.
Grouping > Clarifying > Interpreting
Through exploring our situation we seem to be answering the question ‘What does what we know about the Past tell us about the Present?’. By uncovering underlying themes and discovering experiments yet to be tried, we put ourselves in a position of strength – able to apply our insights in a way that can shape our future.
We take this information into the Closing section to identify specific probes to set up and actions to take that will help us get there.
Most of these activities come from books, blogs or training sessions I’ve been part of; some I’ve created to meet specific needs. Where I can find attributions they are noted; if you see any I’ve missed, or know of links I haven’t found, please let me know in the comments below.
02 Monday Apr 2012
Posted in Agile, Facilitation, Games, Links, Retrospectives, Scrum
This is the second of four posts covering facilitation games for the different phases of meetings: Check In, Opening, Exploring, Closing.
The Opening phase of facilitation is the space in which the group starts to unpack the topic at hand. This is usually the “past” phase – looking at what has led us to this point in time.
It’s surprisingly difficult for us to look back at events that have passed and generate insight and understanding from them. Aside from struggling to remember everything that’s valuable, there are often deeply held beliefs or other organizational messages that sway our view of events until we look at them closely from an unbiased perspective.
The Gathering Data section of the meeting gives us the opportunity to ‘get back to the facts’ of what we’re dealing with and, along with the Exploring section, has the most scope for variety, providing a multitude of ways to unpack the status of a team, project or company. Doing this in a group format allows us to combine individual memories to build up a reasonably comprehensive picture of what is happening in the environment.
In Gamestorming, the authors refer to ‘Meaningful Space’ as the use of visual space to sort our experiences, knowledge and feelings into comparative or relative areas. This is particularly valuable both for prodding our memories and clarifying areas of strong agreement, disagreement, and alternate perspectives.
This opening format is a great way to have team members tell their story; the narrative format grabs everyone’s attention and highlights the human side of the sprint / release etc.
A nice way to dissect information and prompt the group’s memory is to categorize experiences along a theme. The Learning Matrix is the most well known of these formats, and there are a variety of others below. When I can’t find something that fits, I often make these up. *Tip: Alliteration is an unexpectedly handy tool for maintaining overall cohesion.
The aim of the Opening phase is to establish the foundation from which we are building. It’s important not to start drawing conclusions directly from this data, but simply to help the group as a whole to remember as much detail as possible.
Once we have this, we move on to Exploring, where we delve deeper into extending and interpreting the data we’ve gathered.
Most of these activities come from books, blogs or training sessions I’ve been part of; some I’ve created to meet specific needs. Where I can find attributions they are noted; if you see any I’ve missed, or know of links I haven’t found, please let me know in the comments below.