
Search Results


  • Analyzing Throughput in ActionableAgile

    Throughput is a flow metric that tells us the rate at which work items are finished in a given process. ActionableAgile has multiple charts that can give you information about your Throughput: the Throughput Histogram, the Throughput Run Chart, the Cycle Time Scatterplot, and the Cumulative Flow Diagram. The first two are specifically made to relay information about the Throughput of your process; the last two happen to tell us Throughput as a byproduct! Want to learn more about Throughput in general? Check out our "What is Throughput?" blog post. To learn more about these four charts in ActionableAgile, keep reading.

    Histogram
    The Throughput Histogram is a bar chart that displays how often you experience certain daily Throughput values - in other words, the frequency of Throughput values. You can use the histogram to see which throughputs are most likely for ONE given instance of your time unit - one day, one week, etc. This is often not sufficient for forecasting across multiple instances of your time unit (multiple days, weeks, etc.). Read more about our Throughput Histogram in our product documentation.

    Run Chart
    The Throughput Run Chart is a line chart that shows you the variation in your Throughput data over time. This is, hands down, the best chart to use for straight Throughput analysis because of the time axis. We believe that all time-based metrics are best analyzed on a time-based chart. Time-based charts allow you to see patterns in your data over time and ask questions to learn more about how your team worked and why. You cannot discern this pattern-based information in a histogram. Read more about our Throughput Run Chart in our product documentation.

    Cycle Time Scatterplot
    The purpose of the Cycle Time Scatterplot is to tell us all about a different flow metric called Cycle Time. However, as the Cycle Time Scatterplot has data points representing all finished work across a time axis, we can look at those points and indirectly calculate Throughput values. In the Scatterplot, you'll toggle on the Summary Statistics box via the Chart Controls. In the example above, you can see that 305 work items were completed in 106 days. As you use other chart controls, including the date or item filters, the summary statistics will update, so at any given time you see the total Throughput for a set number of days. You do not see how the Throughput values change over time as you do in the Run Chart. Read more about our Cycle Time Scatterplot in our product documentation.

    Cumulative Flow Diagram
    The Cumulative Flow Diagram is a stacked area chart that is built by adding information from a daily snapshot of your process. One of the things you can see in the Cumulative Flow Diagram is how many items left one part of the process and entered the next. Because Throughput is defined as the number of items that finish in a given unit of time, you can get Throughput information by looking at how the area band that denotes your "finished" state changes over time. However, the CFD doesn't provide this information at a glance - that's what the Throughput Run Chart is for. The other related information you can get from the CFD is the average Throughput, also known as the average departure rate. You see this by turning on the rate lines. Read more about our Cumulative Flow Diagram in our product documentation.

    In summary...
    There are many ways to learn about the Throughput of your process in ActionableAgile.
    So, here are our suggestions: use the Throughput Run Chart to see how your Throughput changes over time; use the Cumulative Flow Diagram to see how Throughput interacts with other flow metrics; and, finally, use Monte Carlo simulations that work with your Throughput data to forecast efforts containing multiple work items. Excited to explore flow with your team? Try ActionableAgile for free today and reach out if you need any help via our support portal.
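    If you want to sanity-check these charts against your own raw data, here is a minimal Python sketch (not ActionableAgile's implementation) showing how run-chart and histogram data can be derived from a list of finish dates; the dates are hypothetical placeholders.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical finish dates for completed work items.
finished = [
    date(2024, 1, 2), date(2024, 1, 2), date(2024, 1, 4),
    date(2024, 1, 5), date(2024, 1, 5), date(2024, 1, 5),
]

# Run chart data: throughput per calendar day (zero-filled for days
# with no finishes, so the time axis stays continuous).
per_day = Counter(finished)
start, end = min(finished), max(finished)
run_chart = [
    (start + timedelta(days=i), per_day[start + timedelta(days=i)])
    for i in range((end - start).days + 1)
]

# Histogram data: how often each daily throughput value occurred.
histogram = Counter(count for _, count in run_chart)

for day, count in run_chart:
    print(day, count)      # the Throughput Run Chart series
print(dict(histogram))     # the Throughput Histogram frequencies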

  • Analyzing WIP in ActionableAgile

    WIP (or Work In Progress) is a flow metric that tells us how many work items are in progress at any given time in a process - that is, items that have started but not yet finished. Once you know how to measure WIP, you will want to start analyzing the data. There are three charts in ActionableAgile that provide insights into current and past WIP levels: the WIP Run Chart, the Aging Work in Progress chart, and the Cumulative Flow Diagram.

    WIP Run Chart
    The WIP Run Chart is a line chart that shows the number of items in progress per day across time. With this ability to clearly see how WIP levels change over time, you can get early signals of changes in Cycle Time and Throughput - for better or worse! This allows you to have better conversations about the impact of WIP on your process. Learn more about the WIP Run Chart in our product documentation.

    Aging Work in Progress Chart
    Another chart where WIP can be seen is the Aging Work in Progress chart. The primary purpose of this chart is to analyze another flow metric, Work Item Age, but you can also calculate WIP for the day being viewed. While you can click on a dot in the WIP Run Chart to see which items were in progress on a given day, this chart lets you see the WIP from any given day in greater detail: the workflow status each work item is in as well as the age of each work item. On this chart you can use the Aging Replay control to see this information about WIP for any day reflected in your data. Learn more about the Aging Work In Progress Chart in our product documentation.

    Cumulative Flow Diagram
    The final chart that provides insight into WIP within ActionableAgile is the Cumulative Flow Diagram. This chart provides a visualization of the interplay between WIP, Cycle Time, and Throughput. The height of the color bands in the CFD shows you an actual count of items in each workflow stage on any given day. You can use the chart's WIP Tooltips control to show WIP by stage, or collectively as a system, as your cursor moves through the timeline. By looking at the thickness of the color band(s) over time, you can see how WIP changes and the correlated change in Approximate Average Cycle Time and Average Throughput. You may even be able to determine good WIP limits by looking at how much WIP you had when Throughput and Cycle Time were ideal. Learn more about the Cumulative Flow Diagram in our product documentation.

    In Summary...
    There are many ways to learn about the WIP in your process with ActionableAgile. So, here are our suggestions: use the WIP Run Chart to see how your WIP changes over time; use the Aging Work in Progress chart to learn more about the WIP from any given day; and use the Cumulative Flow Diagram to see how WIP interacts with other flow metrics and decide on any adjustments you might need to make to your WIP levels. Excited to explore flow with your team? Try ActionableAgile for free today and reach out if you need any help via our support portal.
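    For readers who want to compute WIP from raw data themselves, here is a minimal sketch (not ActionableAgile's implementation) that counts in-progress items per day from hypothetical start/finish pairs; the counting convention is an assumption you should adapt to your own definition of "in progress".

```python
from datetime import date, timedelta

# Hypothetical work items as (start, finish) pairs; finish=None means
# the item is still in progress.
items = [
    (date(2024, 1, 1), date(2024, 1, 5)),
    (date(2024, 1, 2), None),
    (date(2024, 1, 3), date(2024, 1, 4)),
]

def wip_on(day):
    """Items started on or before `day` and not finished by end of `day`.

    Convention (an assumption): an item stops counting as WIP on the
    day it finishes.
    """
    return sum(
        1 for started, finished in items
        if started <= day and (finished is None or finished > day)
    )

first = min(started for started, _ in items)
today = date(2024, 1, 6)
for offset in range((today - first).days + 1):
    day = first + timedelta(days=offset)
    print(day, wip_on(day))  # the WIP Run Chart series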

  • Managing Work Item Age in ActionableAgile

    Work Item Age is the elapsed time since a work item started. It is one of four key flow metrics alongside Cycle Time, Throughput, and WIP. Of the four, it is arguably the most important because controlling age is a key way to improve process predictability. ActionableAgile provides a feature-rich Aging Work in Progress chart to help you measure and control Work Item Age.

    The Aging Work In Progress Chart
    The Aging Work in Progress chart is a lot like a visual board: the columns reflect your workflow stages, and the items show as dots in the appropriate column. The vertical placement of a dot reflects the item's Work Item Age. A dot may represent more than one work item if they are in the same workflow stage and have the same Work Item Age.

    How to use the chart to manage Work Item Age
    Only while an item appears on this chart can you exert any control over where it will end up in your Cycle Time data. If you look at the last column of this chart, you will notice that there are no work items represented. When an item reaches this workflow stage, it is complete and appears as historical data on your Cycle Time Scatterplot instead. Nothing you do now can change how long it took to complete that item. Because you use Cycle Time data to answer "How long will it take?" for a single work item, Work Item Age should be a key consideration when making your plan for the day. But knowing the age of a work item isn't enough information on its own. To know whether the age of a work item is bad, good, or indifferent, you need context. ActionableAgile overlays percentile lines from the Cycle Time data to add this context right where you need it. In the image above, you can see that 85% of past items have finished in 16 days or less. Now you can keep that in mind as you track work items and make daily plans. If you want to maintain that level of predictability, you'll need to continue to finish 85% of work items in 16 days or less.

    Getting early signals of slow work
    It's easy to know if an item near the end of the workflow is in danger of finishing beyond the desired age. Knowing that about items early in the workflow is more difficult. ActionableAgile's pace percentiles help provide early signals that work is moving at a slower pace than past work. Learn more about the Aging Work in Progress chart and the various chart controls in our product documentation.

    In Summary...
    If you can only measure and manage one thing, make it Work Item Age. At its core, Work Item Age is a process improvement metric. When you see items aging more than expected, you can experiment with tactics to see if they help. There is no single fix, but common tactics include limiting WIP, controlling work item size, reducing dependencies, and more. Once you manage Work Item Age, your Cycle Time data should stabilize and make forecasting easier! Excited to explore flow with your team? Try ActionableAgile for free today and reach out if you need any help via our support portal.
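    To make the percentile idea concrete, here is a rough Python sketch that compares the age of hypothetical in-progress items against an 85th percentile computed from historical cycle times. The ceiling-rank percentile method and all item names and dates are assumptions for illustration, not ActionableAgile's internals.

```python
from datetime import date

# Hypothetical cycle times (in days) of previously finished items.
cycle_times = [3, 5, 6, 8, 9, 11, 12, 14, 16, 21]

def percentile(values, pct):
    """Smallest historical value such that `pct` percent of items
    finished in that many days or less (ceiling-rank method)."""
    ordered = sorted(values)
    rank = -(-len(ordered) * pct // 100)  # ceiling division
    return ordered[int(rank) - 1]

# Hypothetical in-progress items mapped to their start dates.
in_progress = {"ITEM-101": date(2024, 2, 25), "ITEM-102": date(2024, 3, 10)}
today = date(2024, 3, 14)

threshold = percentile(cycle_times, 85)  # 16 days for this data
for key, started in in_progress.items():
    age = (today - started).days
    status = "at risk" if age >= threshold else "on pace"
    print(f"{key}: age {age}d, 85th percentile {threshold}d -> {status}")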

  • When an Equation Isn't Equal

    This is post 1 of 9 in our Little's Law series. Try an experiment for me. Assuming you are tracking flow metrics for your process -- which, if you are reading this blog, you probably are -- calculate your average Cycle Time, your average Work in Progress (WIP), and your average Throughput for the past 60-ish days. [Note: what data to collect and how to turn that data into the four basic metrics of flow is covered in a previous blog post.] The exact number of days doesn't really matter as long as it is arbitrarily long enough for your context. That is, if you have the data, you could even try this experiment for longer or shorter periods of time. Now take your historical average WIP and divide it by your historical average Throughput. When you do that, do you get your historical average Cycle Time exactly?

    Another quick disclaimer: for the purposes of this experiment, it is best if you don't pick a time period that starts with zero WIP and ends with zero WIP. For example, if you are one of the very few lucky Scrum teams that starts all of your Sprints with no PBIs already in progress, and all PBIs that you start within a Sprint finish by the end of the Sprint, then please don't choose the first day of the Sprint and the last day of the Sprint as the start and endpoint for your calculation. That's technically cheating, and we'll explain why in a later post.

    You've probably realized by now that we are testing the equation commonly referred to as Little's Law (LL): CT = WIP / TH, where CT is the average Cycle Time of your process over a given time period, WIP is the average Work In Progress of your process for the same time period, and TH is the average Throughput of your process for the same time period. It may seem obvious, but LL is an equation that relates three basic metrics of flow. Yes, you read that right. LL is an equation. As in equal. Not approximate. Equal.

    In your above experiment, was your calculation equal? My guess is not. Here's an example of metrics from a team that I worked with recently (60 days of historical data): WIP: 19.54, TH: 1.15, CT: 10.3. In this example, WIP / TH is 16.99, not 10.3. For a different 60-day period, the numbers are: WIP: 13.18, TH: 1.03, CT: 9.1. This time, WIP / TH is 12.80, not 9.1. And one last example: WIP: 27.10, TH: 3.55, CT: 8.83. WIP / TH is 7.63, not 8.83. Better, but still not equal.

    If you are currently using the ActionableAgile tool, then doing these calculations is relatively easy. Simply load your data, bring up the Cumulative Flow Diagram (not that I normally recommend you use the CFD), and select "Summary Statistics" from the right sidebar. Here is a screenshot from an arbitrary date range I chose using AA's preloaded example data: From the above image, you'll see that WIP: 26.40, TH: 3.04, CT: 9.48. However, 26.40 / 3.04 is 8.68, not 9.48. As evidence that I didn't purposefully select a date range that proved my point, here's another screenshot, where 28.11 / 3.51 equals 8.01, not 8.86. In fact, I'd be willing to bet that in this example data -- which is from a real team, by the way -- it would be difficult to find an arbitrarily long time period where Average Cycle Time actually equals Average WIP divided by Average Throughput. Just look at the summary stats for the whole date range of pre-loaded data to see what I'm talking about: 21.21 / 2.31 equals 9.18, not 9.37 -- still close, but no cigar. I'd be willing to bet that you had (or will have) similar results with your own data.
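    If you'd rather run the experiment with a script than by hand, here is a minimal sketch, assuming you already have daily WIP counts, daily throughput counts, and per-item cycle times for your window; all of the numbers below are hypothetical.

```python
# Hypothetical daily records over a short observation window: the WIP
# on each day, the items finished on each day, and the cycle times of
# the items that finished within the window.
daily_wip = [18, 20, 21, 19, 22, 20, 19]       # items in progress per day
daily_throughput = [1, 0, 2, 1, 0, 3, 1]       # items finished per day
cycle_times = [9, 12, 8, 15, 10, 7, 11, 13]    # days, one per finished item

avg_wip = sum(daily_wip) / len(daily_wip)
avg_th = sum(daily_throughput) / len(daily_throughput)
avg_ct = sum(cycle_times) / len(cycle_times)

print(f"Average WIP:        {avg_wip:.2f}")
print(f"Average Throughput: {avg_th:.2f} items/day")
print(f"Measured avg CT:    {avg_ct:.2f} days")
print(f"WIP / TH:           {avg_wip / avg_th:.2f} days")
# On real data the last two numbers rarely match exactly -- which is
# the whole point of the experiment above.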
    If you tried even shorter historical time periods, the results might be even more dramatic. So what's going on here? How can something that professes to be an equation be anything but equal? We'll explore the exact reason why LL doesn't "work" with your data in an upcoming blog post, but for now, we'll actually need to take a step back and explore how we got into this mess to begin with. After all, it is very difficult to know where we are going if we don't even know where we came from...

    Explore all entries in this series:
    When an Equation Isn't Equal (this article)
    A (Very) Brief History of Little's Law
    The Two Faces of Little's Law
    One Law. Two Equations
    It's Always the Assumptions
    The Most Important Metric of Little's Law Isn't In the Equation
    How NOT to use Little's Law
    Other Myths About Little's Law
    Little's Law - Why You Should Care

    About Daniel Vacanti, Guest Writer
    Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.

  • Probabilistic vs. deterministic forecasting

    Do you hear people throwing around words like probabilistic and deterministic forecasting, and you aren't sure exactly what they mean? Well, I'm writing this blog post specifically for you. Spoiler alert: it has to do with uncertainty vs. certainty. Forecasting is the process of making predictions based on past and present data (Wikipedia). Historically, the type of forecasting used for business planning was deterministic (or point) forecasting. Increasingly, however, companies are embracing probabilistic forecasting as a way to help understand risk.

    What is deterministic forecasting?
    Just like fight club, people don't really talk about deterministic forecasting. It is just what they do, and they don't question it - at least until recently. I mean, if it is all someone knows, why would they even think to question it or explore the pros and cons? But what is it really? Deterministic forecasting is when only one possible outcome is given, without any context around the likelihood of that outcome occurring. Statements like these are deterministic forecasts: It will rain at 1 P.M. Seventy people will cross this intersection today. My team will finish ten work items this week. This project will be done on June 3rd. For each of those statements, we know that something else could happen, but we have picked one possible outcome to communicate. Now, when someone hears or reads these statements, they do what comes naturally to humans... they fill in the gaps of information with what they want to be true. Usually, what they see or hear is that these statements are absolutely certain to happen. It makes sense. We've given them no alternative information. So, the problem with giving a deterministic forecast when more than one possible outcome really exists is that we aren't giving anyone, including ourselves, any information about the risk associated with the forecast we provided. How likely is it truly to happen? Deterministic forecasts communicate a single outcome with no information about risk. If there are factors that could change the outcome, say external risks or sick employees, then deterministic forecasting doesn't work for us. It doesn't allow us to give that information to others. Fortunately, there's an alternative - probabilistic forecasting.

    What is probabilistic forecasting?
    A probabilistic forecast is one that acknowledges the range of possible outcomes and assigns a probability, or likelihood of happening, to each. The image above is a histogram showing the range of possible outcomes from a Monte Carlo simulation I ran. The question I effectively asked it was "How many items can we complete in 13 days?" Now, there are a lot of possible answers to that question. In fact, each bar on the histogram represents a different option - anywhere from 1 to 75 or more. We can, and probably should, work to make that range tighter. But, in the meantime, we can create a forecast by understanding the risk we are willing to take on. In the image above, we see that in approximately 85% of the 10,000 trials we finished at least 19 items in 13 days. This means we can say that, if our conditions stay roughly similar, there's an 85% chance that we can finish at least 19 items in 13 days. That also means there's a 15% chance we finish 18 or fewer. Now I can discuss that with my team and my stakeholders and make decisions to move forward or to see what we can do to improve the likelihood of the answer we'd rather have.
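    For the curious, a simulation along those lines can be sketched in a few lines of Python. This is a simplified illustration, not the tool's actual implementation: it samples a hypothetical daily throughput history with replacement for each future day.

```python
import random

# Hypothetical history of daily throughput (items finished per day).
history = [0, 1, 3, 0, 2, 1, 0, 4, 2, 1, 0, 2, 3, 1, 0]

TRIALS = 10_000
HORIZON_DAYS = 13

# Each trial: sample a throughput from history for each future day
# (sampling with replacement) and total the items finished.
totals = sorted(
    sum(random.choice(history) for _ in range(HORIZON_DAYS))
    for _ in range(TRIALS)
)

# 85% of trials finished at least as many items as the value sitting
# at the 15th percentile of the sorted totals.
at_least_85 = totals[int(TRIALS * 0.15)]
print(f"~85% chance of finishing at least {at_least_85} items "
      f"in {HORIZON_DAYS} days")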
    Here are some more probabilistic forecasts: There is a 70% chance of rain between now and 1 P.M. There's an 85% chance that at least seventy people will cross this intersection today. There's a 90% chance that my team will finish ten or more work items this week. There's only a 50% chance that this project will be done on or before June 3rd. Every probabilistic forecast has two components: a range and a probability, allowing you to make informed decisions. Learn more about probabilistic forecasts.

    Which should I use?
    To answer this question you have to answer another: can you be sure that there's a single possible outcome, or are there factors that could cause other possibilities? In other words, do you have certainty or uncertainty? If the answer is certainty, then deterministic forecasts are right for you. However, that is rarely, if ever, the case. It is easy to give in to the allure of the single answer provided by a deterministic forecast. It feels confident. Safe. Easy. Unfortunately, those feelings are an illusion. Deterministic forecasts are often created using qualitative information and estimates, but, historically, humans are really bad at estimating. Our brains just can't account for all the possible factors. Even if you were to use data to create a deterministic forecast, you still have to pick an outcome to use, and often people choose the average. Is it OK to be wrong half the time? "It is better to be vaguely right than exactly wrong." - Carveth Read (1920). If the answer is uncertainty (like the rest of us), then probabilistic forecasts are the smart choice. By providing the range of outcomes and the probability of each (or a set) happening, you give significantly more information about the risk involved with any forecast, allowing people to make more informed decisions. Yes, it's not the tidy single answer that people want, but it's the truth. As Carveth Read said, it is better to be vaguely right than exactly wrong. Remember that the point of forecasting is to manage risk. So, use the technique that provides as much information about risk as possible.

    How can I get started?
    First, gather data about when work items start and finish. If you're using work management tools like Jira or Azure DevOps, then you are already capturing that data. With that information, you can use charts and simulations to forecast how long it takes to finish a single work item, how many work items you can finish in a fixed time period, or even how long it will take to finish a fixed scope of work. These are things we get asked to do all the time. You don't even need a lot of data: if you have at least 10 work items, preferably a representative mix, then you have enough to create probabilistic forecasts. Once you have the data you need, tools like ActionableAgile™️ and Portfolio Forecaster from 55 Degrees help you determine the forecast that matches your risk tolerance with ease. You can also use our tools to improve the predictability of your process. When you do that, you are happier with your forecasts because you get higher probability with a narrower range of outcomes. If you're interested in chatting with us or other users on this topic, join us in our community and create a post! See you there!

  • Is your workflow hiding key signals?

    There are lots of signals that you can get from visualizing your work - especially on a Kanban board. You can see bottlenecks, blockers, and excess work-in-progress, but one signal you don't often get to see is the answer to the question, "How much longer from here?" To get that signal, you have to have a process that models flow. By flow, I mean the movement of potential value through a system. Your workflow is intended to be a model of that system. When built in that way, your workflow allows you to visualize and manage how your potential value moves through your system.

    Managing flow is managing liability and risk
    A tip is to look at your workflow from a financial perspective. Work items you haven't started are options that, when exercised, could deliver value. Work items you have finished are (hopefully) assets delivering value. The remainder - all the work items that you've spent time and money on but haven't received any value in return for yet (work-in-progress) - are your liabilities. What this helps us clearly demonstrate is that our work-in-progress is where most of our risk lies. Yes, we could have delivered things that don't add value (and hopefully there are feedback loops to help identify those situations and learn from them). You can also have options that you really should be working on to maximize the long-term value they can provide. But, by far, the biggest risk we face is taking on too much liability and not managing that liability effectively - causing us to spend more time and money than we should to turn work items into assets.

    Expectations versus reality
    We humans have a tendency to look at things with rose-colored glasses (ok, most of us do). So, when we start a piece of work, we think it will have a nice, straight, and effective trip through the workflow with no u-turns or roadblocks. More often than not, that's not the case, and there are many reasons for that. One of the biggest is how we build our workflow. When you build your workflow to model the linear progression of work as it moves from an option to an asset, you're more likely to have that straight path. If you build your workflow to model anything else - especially the different groups of people that will work on it - then you end up with an erratic path. You can get a picture of how work moves between people (if you use tools like Inspekt). But what you don't get is a picture of how work moves through a lifecycle from option to asset. This is a problem if you think you're using your workflow to help optimize flow, because you aren't seeing the signals you think you are. In a situation like this, what you have is a people flow -- not a work flow. That's great if you want to focus purely on managing resource efficiency (keeping people busy) but poor if you want to optimize flow and control your liabilities.

    The signal you can only get from a true workflow
    Once you can truly say that you have modeled the life cycle of turning options into assets, you can say that a card's position in the workflow reflects how close or far away it is from realizing its potential value. This means that when you move a card to the right in your workflow, you're signaling that you're closer to turning the liability into an asset, and when you move it to the left (backward), you're moving farther away from that outcome. (Does it make more sense now why we handle backward movement the way we do in ActionableAgile?)
    Model your workflow so that how you move a work item is a signal of movement toward or away from realizing its potential value. When you can say this, then you can start signaling how long an item is likely to take to become an asset. With tools like ActionableAgile's Cycle Time Scatterplot, you can see how long an item is likely to take to be completed from any workflow stage. It's like when you go to Disney World or someplace like it, and you're in line for a ride, and you see a sign that says your wait is 1 hour from this point. Each column of your workflow can have that metaphorical sign - except you can also know the likelihood associated with that information.

    Want to make a change?
    Don't stress if you just learned that your workflow isn't all it's cracked up to be. You can make a change! It's all about board design and policies. If you want tips on how to change your board or process, check out my blog post on how to design your board to focus on flow, or watch my talk below on this topic from Lean Agile London 2022!

  • The Deviance of Standard Deviation

    Before getting too far into this post, there are two references that do a far better job than I ever will at explaining the deficiency of the standard deviation statistic: "The Flaw of Averages" by Dr. Sam Savage (https://www.flawofaverages.com/) and pretty much anything written by Dr. Donald Wheeler (spcpress.com).

    Why is the standard deviation so popular?
    Because that's what students are taught. It's that simple. Not because it is correct. Not because it is applicable in all circumstances. It is just what everyone learns. Even if you haven't taken a formal statistics class, somewhere along the line you were taught that when presented with a set of data, the first thing you do is calculate an average (arithmetic mean) and a standard deviation. Why were we taught that? It turns out there's not a really good answer. An unsatisfactory answer, however, would involve the history of the normal (Gaussian) distribution and how, over the past century or so, it has come to dominate statistical analysis (its applicability--or, rather, inapplicability--for this purpose would be a good topic for another blog, so please leave a comment letting us know your interest). To whet your appetite on that topic, please see Bernoulli's Fallacy by Aubrey Clayton.

    Arithmetic means and standard deviations are what are known as descriptive statistics. An arithmetic mean describes the location of the center of a given dataset, while the standard deviation describes the data's dispersion. For example, say we are looking at Cycle Time data and we find that it has a mean of 12 and a standard deviation of 4.7. What does that really tell you? Well, actually, it tells you almost nothing--at least almost nothing that you really care about. The problem is that in our world, we are not concerned so much with describing our data as we are with doing proper analysis on it. Specifically, what we really care about is being able to identify possible process changes (signal) that may require action on our part. The standard deviation statistic is wholly unsuited to this pursuit. Why?

    First and foremost, the nature of how the standard deviation statistic is calculated makes it very susceptible to extreme outliers. A classic joke I use all the time is: imagine that the world's richest person walks into a pub. The average wealth of everyone in the pub is somewhere in the billions, and the standard deviation of wealth in the pub is somewhere in the billions. However, you know that if you were to walk up to any other person in the pub, that person would not be a billionaire. So what have you really learned from those descriptive statistics? This leads us to the second deficiency of the standard deviation statistic. Whenever you calculate a standard deviation, you are making a big assumption about your data (recall my earlier post about assumptions when applying theory?). Namely, you are assuming that all of your data has come from a single population. This assumption is not talked about much in statistical circles. According to Dr. Wheeler, "The descriptive statistics taught in introductory classes are appropriate summaries for homogeneous collections of data. But the real world has many ways of creating non-homogeneous data sets." (https://spcpress.com/pdf/DJW377.pdf). In our pub example above, is it reasonable to assume that we are talking about a single population of people's wealth that shares the same characteristics?
    Or is it reasonable that some signal exists as evidence that one particular data point isn't routine? Take the cliched probability example of selecting marbles from an urn. The setup usually concerns a single urn that contains two different coloured marbles--say red and white--in a given ratio. Then some question is asked, like: if you select a single marble, what is the probability it will be red? The problem is that in the "real world," your data is not generated by choosing different coloured marbles from an urn. Most likely, you don't know if you are selecting from one urn or several urns. You don't know if your urns contain red marbles, white marbles, blue marbles, bicycles, or tennis racquets. Your data is generated by a process where things can--and do--change, go wrong, encompass multiple systems, etc. It is generated by potentially different influences under different circumstances with different impacts. In those situations, you don't need a set of descriptive statistics that assume a single population. What you need to do is analyze your data to find evidence of multiple (or changing) populations. In Wheeler's nomenclature, what you need to do is first determine whether your data is homogeneous or not.

    Now, here's where proponents of the standard deviation statistic will say that to find signal, all you do is take your arithmetic mean and start adding or subtracting standard deviations. For example, they will say that roughly 99.7% of all data should fall within your mean plus or minus 3 standard deviations; thus, if you get a point outside of that, you have found signal. Putting aside for a minute the fact that this type of analysis ignores the assumptions I just outlined, this example brings into play yet another dangerous assumption of the standard deviation. When you start to couple percentages with standard deviations (like 68.2%, 95.5%, 99.7%, etc.), you are making another big assumption: that your data is normally distributed. I'm here to tell you that most real-world process data is NOT normally distributed.

    So what's the alternative? As a good first approximation, a great place to start is with the percentile approach that we utilize with ActionableAgile Analytics (see, for example, this blog post). This approach makes no assumptions about single populations, underlying distributions, etc. If you want to be a little more statistically rigorous (which at some point you will want to be), then you will need the Process Behaviour Chart advocated in Dr. Donald Wheeler's continuation of Dr. Walter Shewhart's work. A deeper discussion of the Shewhart/Wheeler approach is a whole blog series on its own that, if you are lucky, may be coming to a blog site near you soon.

    So, to sum up, the standard deviation statistic is an inadequate tool for data analysis because it: is easily influenced by outliers (which your data probably has); often assumes a normal distribution (which your data doesn't follow); and assumes a single population (which your data likely doesn't possess). Any analysis performed on top of these flaws is almost guaranteed to be invalid.
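    To make the outlier problem concrete, here is a small illustrative sketch with hypothetical cycle times; the dataset and the ceiling-rank percentile method are assumptions for demonstration only.

```python
import math
import statistics

# Hypothetical cycle times (days) with one extreme outlier -- the
# "world's richest person walks into the pub" of this dataset.
cycle_times = [4, 5, 5, 6, 7, 7, 8, 9, 11, 120]

mean = statistics.mean(cycle_times)
stdev = statistics.stdev(cycle_times)
print(f"mean = {mean:.1f}, stdev = {stdev:.1f}")
# The outlier inflates the stdev so much that mean + 3*stdev exceeds
# even the outlier itself -- no point would ever register as signal.
print(f"mean + 3*stdev = {mean + 3 * stdev:.1f}")

# A percentile makes no single-population or normality assumption:
# 85% of these items finished in this many days or fewer.
p85 = sorted(cycle_times)[math.ceil(len(cycle_times) * 0.85) - 1]
print(f"85th percentile = {p85} days")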
    One last thing. Here's a quote from Atlassian's own website: "The standard deviation gives you an indication of the level of confidence that you can have in the data. For example, if there is a narrow blue band (low standard deviation), you can be confident that the cycle time of future issues will be close to the rolling average." There are so many things wrong with this statement that I don't even know where to begin. So please help me out by leaving some of your own comments about this on the 55 Degrees community site. Happy Analysis!

    About Daniel Vacanti, Guest Writer
    Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.

  • How do you use pace percentiles on ActionableAgile's aging chart?

    It is inevitable that there are ways the creator of a piece of software intends a feature to be used, and ways it actually ends up being used. 🤓 Sometimes these unintended uses can be even better than the initial idea, but other times they can end up causing harm. In a recent chat with Daniel Vacanti, we discussed this very thing about ActionableAgile™️ Analytics. I can say I was more than mildly surprised when one of my favorite features came up: the pace percentiles on ActionableAgile's Aging Work in Progress chart. I love this feature because it helps you get early signals of slow work. However, after talking to and training many people, Dan saw that people very often misinterpret what this particular signal really tells us. How did he come to that conclusion? He talked to them about the decisions they would make because of the signals and saw that they weren't necessarily picking up what was intended. Instead, the decisions people were likely to make could lead to worse outcomes than the ones the chart was already showing. What do you think? Are you interpreting the signals correctly? Head over to our user community to discuss!

  • Schrodinger's Work Item and the Quest for Value

    This article is a guest contribution from Julie Starling, ActionableAgile customer, and was originally posted on her blog. Jump down to read more about Julie. We're all familiar with Schrodinger's cat, right? The cat in a box that is in a state of both dead and alive whilst the box is closed... when the box is opened, it is one or the other. I can't help but see the parallels to work items in our system.

    Schrodinger's Work Item
    An active item in our system represents both potential value and waste... until we deliver it, we do not know which it is. Potentially valuable - In most instances, we engage with our customers to understand what is valuable to them. Even in cases where direct customer communication is limited, we often hold a genuine belief in the value of what we're delivering. However, complete certainty about its value remains elusive until we actually deliver the item and receive feedback. Only when our work item is in the hands of our customers can we truly determine whether the time invested has indeed been valuable. Waste - Until we deliver the item, the time we are spending on it can also be considered waste: until it's delivered, there is always a risk it won't be delivered, and the time spent up until now will have been for nothing. These situations happen all the time and can be for a number of reasons, be it a change in strategy due to a global pandemic, a change of requirement from our customers, or anything else in between. It can also have been waste if we deliver it and no one uses it, it doesn't deliver the expected outcome, or we don't get any valuable feedback.

    Let the Cat Out of the Box
    Whilst we understand that work in our system is potentially not valuable, we shouldn't be using this as a reason not to be experimental with what we deliver! Instead, we should think about getting work items out of our system as efficiently as we can. This way we can find out if it was actually valuable as quickly as possible, learn from this answer, and move on with this new knowledge. Compromising quality is also not the answer! Two ways to get the cat out of the box... 1. Don't Start! If you haven't started working on an item, then you haven't started potentially wasting time. You can then put your efforts into keeping work that has started active and flowing. 2. Finish It! If you've started... then finish! One way to get an item out of a system is to finish it. On their own, these may seem like two obvious and probably unhelpful points. However, if we look at the bigger picture, we shouldn't start items until we know they have the best chance of flowing through our system. When we do start, we should be managing that work in progress, always with a goal of finishing. We want to keep our work flowing and keep the work as busy and active as possible. If we start items before they can flow, there can be a lot of sitting around in the system. The longer an item is in the system, the more the possibility of it being waste increases, as the world around us changes or items become stale.

    Don't Put the Cat in the Box, But If You Do, Don't Keep It in There Longer Than Necessary
    In essence, we shouldn't start work until it's the right time for our system, and when we do start it, we should be managing the work in progress with the goal of finishing. There are a number of ways in which we can manage work in progress, including:
    1. Limit the amount of Work In Progress - By not having too much in our system, we are able to focus on what is active, reduce context switching, and spend our efforts on keeping our work busy (keep work busy before people). If you are in a situation where you have a team of busy people and a number of work items that aren't actively being worked on, then you probably need to start controlling your WIP.
    2. Make items small - The smaller work items are, the easier they will flow through your system. We need to make sure our items are right-sized and represent the smallest possible chunk of potential value. This will help flow, but it will also help us get the feedback we need to know whether we need to pivot in the quest for value. With this approach, if the world around us changes and what we were delivering is no longer relevant, we've also minimized the amount of waste.
    3. Take action on items that are unnecessarily aging - Any item that is staying in the system unnecessarily long needs action taken on it. This could range from splitting the work item down, to resolving blockers, to even kicking it out of the system! But how do we know if an item is unnecessarily aging? I'll be covering that in my next post.

    Similar to the state of Schrodinger's cat being unknown until perceived, our work items exist in a superposition of potential value and waste - that is, until they are delivered and observed by our customers. Actively managing the work in the system shortens the time to understand its fate!

    TL;DR - We can't assume all work will be as valuable as we expect when we decide to do it. Unfinished work has a dual nature of being both potentially valuable and waste until we deliver it and get feedback. To get the answer to "was it valuable?" as quickly as possible, we should focus on flow: keep items in our system for as short a time as possible, and keep inactive time to a minimum. Whilst work is in our system, we should actively manage it with a goal of getting it out (at high quality) as soon as we can. Techniques such as managing WIP, right-sizing items, and taking action on aging items help us do this.

    About Julie Starling, Guest Writer
    Julie is passionate about the efficient delivery of value to customers and avoiding the illusion of certainty. In recent years she has specialized in how data can be used to drive the right conversations to do this. She encourages teams to use data in actionable ways and adjust ways of working to maximize their potential. She has spent over 15 years working in and alongside software delivery teams. In her spare time, she loves to travel, snowboard, and is obsessed with houseplants!

  • Stop, by Toutatis, with your "story points"!

    This article is a guest contribution from José Coignard, ActionableAgile customer, Professional Kanban Trainer, and Agile Coach in Europe's largest financial institution. It was originally posted on his blog (in French). Jump down to read more about José.

    What are story points? A bit of history...
    This widely spread and widely misused concept (to my great dismay, as you'll understand if you read to the end of this article) was originally created by Ron Jeffries within Extreme Programming (XP). Story points (or effort or complexity points) are a relative estimate of a work item. The concept bundles together notions of effort, complexity, time, risk, skill level, the number of coffees required to finish the work (oops, I'm drifting...), and so on. Originally it was a time estimate for implementing a user story (the elements that carried value for an end user in XP). Ron quickly described it as the ideal time for two people pair programming to complete a story, if the world around them would just leave them alone. Only that was never the case: there were risks, hidden complexity, and so on. So he introduced a multiplying factor (in his experience, around 3). That eventually led him to turn this ideal-time estimate into a notion of points, because stakeholders couldn't understand how one ideal day of work could end up taking three real days. (Thank you, stakeholders who don't understand that in this world of knowledge work we cannot look at things deterministically... and that it is genuinely dangerous to do so!) Anyway, end of digression, back to the matter at hand... The idea behind Ron's (and his team's) use of these story points was simply to have conversations within the team and to judge whether the team was challenging itself with something feasible for the iteration or not. So Ron turned this notion into points... into story points! ("Nants ingonyama bagithi baba"... to a famous tune... the lion is born!) And that is where the drama begins.

    What drama? What happened?
    Seriously?! You dare ask? Fine, since it's you, I'll explain... A famous framework appeared and gained more and more traction in the market: Scrum. It must be said that the framework is rather seductive and comes with interesting potential benefits (which many, in my view, took as promises). For some reason peculiar to ever-inventive human beings, Scrum practitioners began to use Ron's story points. Except that... the use of these story points began to drift from their original purpose. People started using them to make projections and plans beyond a single iteration, to compare the performance of multiple teams, to pressure teams into always doing more story points per iteration (that famous velocity)... Let's dwell on that last point. Is there, in Ron's notion of story points, any notion of value for the end user, for the business? Bingo! You're right... NONE!
    So why would managers and stakeholders, who are logically interested in the business doing well and getting better, pressure teams to increase their velocity?! Well? Well? Yes, I can faintly hear the answer from the back of the room... "Because they never understood what story points were! And they think that more velocity = more value." Thank you! That saves me from saying it myself. Unfortunately, this nonsense spreads; it is a highly contagious and viral disease... The world is caught in this pandemic of story point usage, a thousand miles from Ron's original idea and the purpose that led him to create them. Poor Ron... because he is, unfortunately, firmly identified as the creator of story points. He even issued a mea culpa: "I like to say that I may have invented story points, and if I did, I'm sorry now."

    OK, that's all well and good, but how do we get out of this?
    I'm glad you asked. What you want to know is whether you are challenging yourselves too much in an iteration, with a goal that is unreachable before you've even started - and potentially with risky communication to stakeholders, who will take that goal as a promise for the end of the iteration and will be waiting for you if you don't honor it. Good news: it is possible, and without story points! I would even say I have something better and more predictable than what you could get with story points (under certain conditions). To be honest, what I'm about to give you is not my invention; it comes from the people behind the (real) Kanban strategy, such as Daniel S. Vacanti. (Since I know he cares a lot about honoring the people who were with him, here is my translation of his book, where he tells the story and pays that tribute: https://drive.google.com/file/d/1QJu4FQdBG1iFwzn4H4wTtPT0DuMPfweM/view?usp=sharing) I have, however, used this approach quite a lot with different teams, and I can assure you that it works very, very well (and I'm not the only one who will confirm it). Here we go; I'll try to summarize it for you in fewer than 200 pages...

    What you want is a reliable estimate, as reliable as possible, even though we live in a world of uncertainty. Can we do these 10 items matching our goal, or only 8, or perhaps 15? And, as we'll see a bit later, you also want to communicate and answer questions like "When do you think you can complete this release?" What if you simply counted the number of items? Of stories? What if you looked at what you have historically managed to do? "No, that won't work - the items aren't all the same size!" Exactly! Some topics are bigger than others, and that's precisely why we estimate in story points! OK... OK... Calm down! Let me finish... I don't completely disagree with you. But! Do you think your story point estimates are right? "Well, yes, of course - we're well practiced!" Hold on - "estimate"... the word itself carries an element of doubt; an estimate can be wrong. Yes, exactly! I would even say you should assume it is wrong (sometimes, though rather rarely, you'll get it right). Why?
    Because you are not living in a deterministic world. Look: when was the last time you discovered something you hadn't imagined would land on you while working on a story? I'd bet it was a day ago, no more! That's normal... As soon as you start working on a topic, you get information that brings you into reality, the true reality of things, and then there may well be an interesting change to implement to better satisfy the user. All sorts of unplanned things will happen; the universe will conspire to make sure things don't go the way you imagined. "Yeah, OK, fair enough!" Didn't you understand something? (Apologies to the non-fans of Kaamelott ;-))

    So, how do we manage by counting only items, when they aren't all the same "size"? Well, they don't have to be the same size - just the right size. By that I mean that, as a team, you should agree on a size, in number of days (being in that unit will serve other purposes), that will act as a point of comparison and an upper bound for the items you work on. To truly stay in a non-deterministic mindset (yes, it's important, so I'll say it a thousand times in this article), you will attach to this timebox a probability of staying within it (or, if you prefer to see it the other way, a probability of exceeding it). For example: "Our items take no more than 10 days, 85% of the time." In the Kanban strategy, this is called a Service Level Expectation (SLE). With this SLE you can define right-sized items, i.e. items that fit within the SLE. Careful - the goal is not to land exactly on the number. It is "no more than" 10 days... a timebox... If an item finishes earlier, great! Users may get something useful sooner.

    So, instead of estimating in story points, effort points, or complexity points (as we renamed them in France) via planning poker on a Fibonacci sequence or some other unit that helps with relative estimation... you will play a slightly modified planning poker, with only 4 cards: one card for "I have absolutely no idea!"; one card for "definitely fits within our SLE"; one card for "no way it fits within the SLE"; and one card for "I didn't understand the topic - can we go over it again?" Here are the possible scenarios: Lots of "I have absolutely no idea!" - you almost certainly need more discussion to better understand what the work is about... and if, after a while, the discussions don't clear things up, you need to dive in and do the work; it's the only way to know concretely whether it is hard or not and whether it will exceed the SLE or not. Lots of "I didn't understand the topic - can we go over it again?" - same thing: discuss, clarify... and potentially dive in. "It is by walking that we learn to walk!" All "definitely fits within our SLE" - fine, let's move on to the next topic. Quite a few, or lots of, "no way it fits within the SLE" - discuss again and find ways to split the topic so you can carve out the most important piece that fits within the SLE and defer the rest to future discussions (or immediate ones, if it is very important, to make sure those pieces fit within the SLE too). A mix of all of the above...
    Then you need to talk it through again: not everyone understood the same thing, there may be interesting splitting ideas, and so on. OK! Once right-sized items are flowing through your workflow and being completed in your iterations, you can take the throughput (the number of items finished per day) and use it to make a probabilistic projection of what can be done in an iteration (within the iteration's timebox). An excellent way to make this probabilistic projection is to use a Monte Carlo simulation that takes this daily throughput as input. This will give you something far more accurate than what you were doing before, plus the choice of seeing what the outcome looks like at 50% probability, 70%, 85%, 95%... What level of risk are you willing to take? Those are also conversations this approach lets you have. An example Monte Carlo simulation is shown below: in this example, there is an 85% chance that the team completes 6 items or more in the coming (3-week) iteration, a 70% chance of 7 or more, and a 95% chance of 4 or more.

    Another advantage of this approach is being able to give a better answer to "When will it be done?" Have you ever managed to predict a release date exactly based on story points (or velocity)? If so, congratulations! You should play the lottery ;-) I presume - and I'm sure I'm not wrong - that, like me, you have never seen it happen, or once by miracle and not much more. Mind you, when I say that, I am of course assuming you were honest and hadn't fiddled with the scope, or trampled on quality, or both, to force the release to fit the agreed and communicated date. Well, do you know why? Because there is no correlation between story points and the time you will take to finish the items. Well, no correlation... a very, very weak degree of correlation (0.2... 0.3... 0.4 at most). So does it make sense to make date projections based on story points, which correlate very weakly with the time items actually take to complete? Not really - or not at all! Here, as a gift, is an example of the non-relationship between story points and cycle time (the time for an item to cross the workflow). It is a real example, and the same pattern has appeared to me more than a dozen times. A fellow ProKanban Trainer has done this exercise with more than 100 teams - always the same result, and always with a correlation coefficient that never exceeds 0.4 and sits closer to 0.2. The chart shows the story point estimate on the X (horizontal) axis and the cycle time to finish the item on the Y (vertical) axis. You can see that items with many points finish much faster than some with fewer estimated points and, conversely, items with few estimated points finish much more slowly than some with many estimated points.

    Answering the question for a single item
    If the question concerns only one item, then, depending on where the item sits in the workflow, you can answer using your historical cycle time. Example: the item has not yet started, and your historical cycle time is 12 days or less for 85% of items.
    You could then answer: "If we start today, there is an 85% chance that we deliver this item within the next 12 days." (By the way, I am talking in calendar days... Why? That will certainly be the subject of another article.) Example 2: the item has started. The workflow is "Refinement > Dev > Test > Done", and the item is in the "Dev" state with an age in the flow of 5 days (i.e. it has already spent 5 days across "Refinement" and "Dev"). You could then answer: "We have already worked on this for 5 days; there is now less than an 85% chance that we finish within the next 7 days." (Yes, because the probability of finishing an already-started item within the SLE timebox decreases from the very first day... That, too, will certainly be the subject of another article; for now, simplifying a little, just know that it is less than an 85% chance.)

    Answering the question for multiple items (for a release, for example)
    If the probabilistic projection covers several items, you will need the throughput in items per day to answer. It is really the same technique as when you look at how many items you can take into your iteration; instead of looking at the number of items within a given timebox, you look at where completing a certain number of items takes you. So: daily throughput and a Monte Carlo simulation. Be careful to pick the right period of historical throughput: take a period that (a priori) best matches the future you are projecting. Finally, don't hesitate to offer several levels of risk in your answer. You could very well reply: "We have a 70% chance of finishing this release by February 15, an 85% chance by February 24, and a 95% chance by March 1." Your counterpart will certainly appreciate that, and you can agree on the level of risk acceptable to them and keep monitoring the projection as new historical throughput data arrives. With the Monte Carlo simulation above, for a release of 40 items to complete, the simulation gives us a 70% chance of completing it by May 2, 2024, an 85% chance by May 13, 2024, and a 95% chance by May 21, 2024.

    How do you convince people to drop story points and switch to right-sizing?
    From my experience, I'd say there is no miracle recipe. But one approach that has often worked for me is to show the difference between the two projection methods (even without a right-sizing policy in place). You almost certainly have one or more historical releases, so you know your actual delivery date. Look at what you had predicted with a velocity-based projection (and again, be honest: take the first projection, from before you fiddled with scope or quality to shoehorn the release into the desired date). Now take the historical daily throughput of items and run a Monte Carlo simulation with that history, going back to the start date and using the number of items you had envisioned at the beginning of the release. I would bet my hand in the fire that the Monte Carlo simulation gives you a better result, closer to reality.
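    As an illustration of the release-level projection just described, here is a minimal Monte Carlo sketch in Python. The throughput history, item count, and start date are hypothetical, and real tools refine this basic resampling approach in various ways.

```python
import random
from datetime import date, timedelta

# Hypothetical daily throughput history (items finished per day).
history = [0, 1, 0, 2, 1, 3, 0, 1, 2, 0, 1, 1, 0, 2]
REMAINING_ITEMS = 40
TRIALS = 10_000
start = date(2024, 3, 18)  # hypothetical "today"

def days_to_finish(remaining):
    """One trial: replay randomly sampled past days until done."""
    days = done = 0
    while done < remaining:
        done += random.choice(history)
        days += 1
    return days

results = sorted(days_to_finish(REMAINING_ITEMS) for _ in range(TRIALS))
for pct in (70, 85, 95):
    days = results[int(TRIALS * pct / 100) - 1]
    print(f"{pct}% chance of finishing by {start + timedelta(days=days)}")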
How do you convince people to let go of story points and switch to right-sizing?

In my experience, there is no miracle recipe. But one that has often worked for me is to show the difference between the two projection approaches (even before adopting right-sizing). You certainly have one or more past releases, so you know, factually, your actual delivery date. Look at what you had predicted with a velocity-based projection (and once again, be honest: take the first projection, from before you started fiddling with scope or quality to shoehorn the release into the desired date). Now take the historical daily throughput of items and run a Monte Carlo simulation with that history, putting yourself back at the start date and using the number of items you had envisioned at the beginning of that release. I would bet anything that the Monte Carlo simulation gives you a better result, closer to reality. And mind you, that is without even right-sizing your items. You will increase predictability further by right-sizing and by keeping the age of items in your flow under control against the SLE you choose.

Conclusion

Well, it didn't turn into a 200-page book, but it is still quite a long article. If you made it this far (having read everything), congratulations! I hope it helps you set aside story points or, at the very least, the very poor use that (too) many people make of them. I truly and very seriously invite you to look into the Kanban strategy as a whole if you want to succeed in doing without story points and reap all of the strategy's benefits to optimize your effectiveness, efficiency, and predictability. If you implement only what I have described in this article, you will certainly get interesting results, but less reliably than if you implement the three key Kanban practices backed by the four essential flow metrics. In any case, at some point you will hit a ceiling on the level of predictability you can reach. So if you want to dig deeper, I can only recommend joining me for training on the Kanban strategy, reading my other articles, following me on LinkedIn (https://www.linkedin.com/in/jose-coignard/), reading the free resources at https://prokanban.org, and joining the ProKanban.org Slack: https://join.slack.com/t/prokanban/shared_invite/zt-2a4ofpd9g-7PvTd5RiV5h17tCmUdVxuA

Sources: The history of story points, told by Ron himself: https://ronjeffries.com/articles/019-01ff/story-points/Index.html

About José Coignard, Guest Writer

José Coignard is a French Professional Kanban Trainer and Agile Coach at Europe's largest financial institution. As a user and advocate of ActionableAgile Analytics, he is on a quest to spread and develop the Kanban strategy in his company and beyond for French and French-speaking audiences.

  • Customer Story: John Lewis Teams Connect Around Outcomes with ActionableAgile™️ Analytics

This customer story was originally published on March 19, 2021. We asked Ben Parry, Partner & Integration Delivery Lead at John Lewis, about his mission to reduce integration delivery lead time by 25% and how the improved metrics and reporting from ActionableAgile Analytics help make that possible. Here's what he had to say!

ActionableAgile Analytics connects my team more to outcomes and helps us to respond to trends over time. Outside my team, it's now possible to have a common language around charts and metrics. The scope for cross-team learning is increasing.

What was going on in your business that made you look for flow metrics and then eventually purchase ActionableAgile Analytics?

The last five years have seen rapid growth in both our food and GM websites. Do more with less! Be disruptive! Be efficient! This push brought a focus on flow, and many people have been introduced to ActionableAgile Analytics to visualize it. I became involved as I was aware of the opportunity to act on insight from data in non-digital contexts. I believed that waiting time could be reduced or eliminated on one of the strategic projects I was involved with. I wondered if planning could be driven more by data; one big project was reissuing plans every six weeks - was there a lighter-weight way to forecast delivery? I'd taken customer feedback that a 'consistent sense of urgency' would be appreciated - there seemed to be months in refinement but days for build. Soon after, I started a Lean Six Sigma Green Belt project to see if I could improve my team's delivery. There were suspicions we were slow; could we become more efficient? To track progress over time, I invested time in getting the most from ActionableAgile Analytics.

What did success look like for you at that time?

Success was making waste visible and having better conversations about how to reduce it. We needed to work out which types of waste were worth tackling first. I used ActionableAgile Analytics and SigmaXL to look at trends in queuing and activity cycle times. The initial goal was to concentrate on a key metric and increase awareness of it within my team. This was lead time - a customer outcome, not an activity output. The second was to communicate this upwards to my sponsor, which helped me complete my Green Belt project.

How has ActionableAgile helped to achieve that success?

My approach has definitely evolved over time. I'm having better conversations about aging work. I intervene on the top ten oldest tickets or the oldest in a status. Last year, I could also justify recruitment decisions based on the arrival and departure rates on Cumulative Flow Diagrams (CFDs).

Which features or benefits do you like best about ActionableAgile Analytics?

Speed. The time between loading a board from JIRA and inspecting the related CFD is less than 5 minutes. There is no data refinement required - data is simply pulled from JIRA. I like zooming into a CFD and hiding statuses, etc. - something I can't do with the native Jira CFD. ActionableAgile Analytics is great for detective work!

What was the most valuable thing using ActionableAgile Analytics has brought, and why?

My team is now more connected to outcomes (lead time to deliver value to system test) and our delivery trends over time. Outside my team, it's now possible to have a common language around charts and metrics. The scope for cross-team learning is increasing.

What results (qualitative or quantitative) have you seen because of using ActionableAgile Analytics?
I've inspected how the digital agile teams manage flow and conducted an experiment to see what I could make of their data blindfolded. In doing that, I developed a repeatable process where I could build a report covering WIP, lead time, throughput, and age in 90 minutes and then play it back to the team in 30 minutes. Reception has been positive, and I've had good conversations about why my perception of the work through data may differ from the reality on the ground. E.g., a dip in throughput - "Yes, that's where we lost a developer!" I champion the use of flow metrics and ActionableAgile Analytics as lead of our Flow Optimisation Community (>100 members).

What's next in your journey?

I'm keen to look at flow beyond and within individual teams, perhaps tracking Epics across teams. At the organisation level, I've also been consulted on both whether we have the right mix of work (Feature vs. Compliance work) and how to get a consistent way to train on metrics so Partners can 'self-serve'. The worsening economy makes the hunt for waste increasingly urgent. It's great to have the insight from ActionableAgile Analytics in my back pocket.

  • Discover the New ActionableAgile™️ Analytics

If you're using a Cloud version of ActionableAgile Analytics, you've probably seen frequent updates throughout the year. I'm happy to share that we have another significant update scheduled for later this month, which will mark the completion of our major 5.0 release. For Jira Data Center users looking forward to it, rest assured - version 5.0 will be ready for download around the end of June. ActionableAgile Analytics 5.0 introduces a range of new features and improvements designed to enhance your user experience and make analyzing your data easier than ever. Keep reading to find out more.

What's New in ActionableAgile Analytics 5.0

Re-imagined Dashboard

We've made the dashboard your starting point for all data sets. It offers a comprehensive view of key metrics, making it easier for you to track progress and make informed decisions quickly. Our goal is to eventually enhance the dashboard further by adding more details and allowing you to customize what appears there. Certain elements from the previous dashboard have been carried over to the new version, such as tracking work in progress (WIP), cycle time expectations, and simulation results for how many items can likely be completed within 30 days. We've taken it a step further by including information not only for the 85th percentile but also for additional percentiles like the 50th, 70th, and 95th. To address user confusion around the previous stability insights, we've introduced a Pace chart that analyzes work started versus completed each month, alongside a combined WIP and Age chart. This chart presents three interconnected metrics on a single graph - showing, for any date, the number of items in progress, the total combined work item age (a valuable risk indicator), and the average age of individual work items. If you're familiar with Little's Law, you'll see that monitoring these charts will help you establish and maintain your system's stability! (A quick worked example of Little's Law follows below.)

New Help Center and Newsfeed

Communicating with users within the app itself is hands-down the best way to share vital information. Therefore, we've introduced a brand-new Help Center accessible from the question mark icon located in the bottom-right corner of the app. You can find easy links to our support portal, community site, roadmap, and more here. The Help Center features a newsfeed where we share updates on new features or other essential information. A red dot on the Help Center icon indicates unread newsfeed updates. We're thrilled to start this journey of discovering how we can connect with you, our valued users. Whether you are a seasoned professional or new to ActionableAgile Analytics, these tools are designed to help you get the most value from the application.

Easily Collapsible Sidebar

Let's be real here - diving into the app settings never really made sense just to collapse or expand the sidebar. We're stepping up our game with a sidebar that you can conveniently hide and show whenever you want, giving you more screen space for what truly matters - your data.

Intuitive Workflow Stages Control

Long-time users of our app know that the first checked stage in the workflow stages control marks where work is considered started, and the last checked stage marks where work is first considered finished. But that's not good enough for us! We want users to intuitively know how the app works. So, we've updated the control with visual signals so you can see how the stages are classified based on your selections.
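As a quick aside on the Little's Law mention in the dashboard section above: for a reasonably stable system, average cycle time is approximately average WIP divided by average throughput. Here is a tiny illustrative calculation with made-up numbers (not data from the app):

```python
# Little's Law (flow version), which holds for a reasonably stable system:
#   average cycle time ≈ average WIP / average throughput
# Illustrative, made-up numbers:
avg_wip = 12.0         # average number of items in progress on a given day
avg_throughput = 1.5   # average number of items finished per day

avg_cycle_time = avg_wip / avg_throughput
print(f"Expected average cycle time: about {avg_cycle_time:.0f} days")  # ~8 days
```

This is also why a combined WIP and Age chart works as a stability signal: if WIP climbs while throughput does not, average cycle time (and therefore work item age) must eventually climb too.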
Painless Data Selection for Charts

Have you had a chance to check out our new and improved date control, where you can select a specific portion of your data to analyze? In addition to dragging to select a subset of your data, you can now use the inputs located just above. This reduces the pain users have previously experienced when trying to pick a specific date range. And don't forget: once created, you can still easily drag, drop, or even resize your selection.

Enhanced Chart and Simulation UI

The Flow Efficiency Chart and the Monte Carlo simulations (How Many and When) have undergone significant improvements! The updated design makes it easier for everyone to grasp the information at a glance. The purpose of these charts is now more evident, with essential details no longer tucked away in the sidebar. This marks just the start of our mission to simplify new user onboarding. We are committed to further enhancing these and the rest of our charts as we move forward.

Zoom Capability

Would you like to examine a specific section of your chart more closely? You can now zoom in without affecting any of the calculations triggered when modifying the selected data for the chart. These small enhancements can greatly improve your analysis of complex data sets!

Improved Source Data Readability

We've also focused on making it easier to view the source data that ActionableAgile Analytics uses to render the charts and run the simulations. We now have frozen header rows and columns that allow sorting, so you can easily find what you're looking for.

And Much, Much More...

These are just a few highlights of what's new in ActionableAgile Analytics 5.0. It is impossible to list every single difference, as we've overhauled every feature and touched every line of code to bring you a more powerful and user-friendly experience.

Why the Big Update?

You might be curious about the reasoning behind our decision to make such significant changes. The answer is straightforward: moving off legacy code empowers us to comprehensively grasp every facet of our codebase and to leverage modern, supported libraries such as Highcharts, facilitating the swift addition of new features and charts. However, these updates do come with a small task for our users: you'll need to reconfigure the settings in the chart controls for each data set you visit, but only once per data set. In return, this change allows us to have configurations that are compatible with features we'd like to add, such as deep-linking to a specific, fully configured chart for a data set.

Join Our Live Webinar

To help you get acquainted with all the new features and improvements, we're excited to announce that we're hosting a live webinar on June 25th at 15:30 (GMT+2). During this session, we'll walk you through the changes and answer any questions you might have. Please don't miss this opportunity to learn more about ActionableAgile Analytics 5.0 and get your questions answered. If you can't make it, don't worry - the recording will be available after the event. RSVP now!

Stay tuned!

Are you excited to dive in? You'll be pleased to know that these updates are just around the corner, likely by the end of June - or maybe even sooner, depending on where you use ActionableAgile Analytics. We'll ensure all users see an announcement pop-up in the app once we've launched version 5.0. We're eager to hear what you think! Head on over to the 55 Degrees community and let us know!
